query_id (string, length 32) | query (string, length 6 to 5.38k) | positive_passages (list, length 1 to 17) | negative_passages (list, length 9 to 100) | subset (string, 7 classes)
---|---|---|---|---|
112d0046149714377e8fd3082ff72064 | A Unified Framework for Multi-Modal Isolated Gesture Recognition |
[
{
"docid": "92da117d31574246744173b339b0d055",
"text": "We present a method for gesture detection and localization based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at two temporal scales. Key to our technique is a training strategy which exploits i) careful initialization of individual modalities; and ii) gradual fusion of modalities from strongest to weakest cross-modality structure. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams.",
"title": ""
},
{
"docid": "128ea037369e69aefa90ec37ae1f9625",
"text": "The deep two-stream architecture [23] exhibited excellent performance on video based action recognition. The most computationally expensive step in this approach comes from the calculation of optical flow which prevents it to be real-time. This paper accelerates this architecture by replacing optical flow with motion vector which can be obtained directly from compressed videos without extra calculation. However, motion vector lacks fine structures, and contains noisy and inaccurate motion patterns, leading to the evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and motion vector are inherent correlated. Transferring the knowledge learned with optical flow CNN to motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this, initialization transfer, supervision transfer and their combination. Experimental results show that our method achieves comparable recognition performance to the state-of-the-art, while our method can process 390.7 frames per second, which is 27 times faster than the original two-stream method.",
"title": ""
}
] |
[
{
"docid": "f765a0c29c6d553ae1c7937b48416e9c",
"text": "Although the topic of psychological well-being has generated considerable research, few studies have investigated how adults themselves define positive functioning. To probe their conceptions of well-being, interviews were conducted with a community sample of 171 middle-aged (M = 52.5 years, SD = 8.7) and older (M = 73.5 years, SD = 6.1) men and women. Questions pertained to general life evaluations, past life experiences, conceptions of well-being, and views of the aging process. Responses indicated that both age groups and sexes emphasized an \"others orientation\" (being a caring, compassionate person, and having good relationships) in defining well-being. Middle-aged respondents stressed self-confidence, self-acceptance, and self-knowledge, whereas older persons cited accepting change as an important quality of positive functioning. In addition to attention to positive relations with others as an index of well-being, lay views pointed to a sense of humor, enjoying life, and accepting change as criteria of successful aging.",
"title": ""
},
{
"docid": "23aa04378f4eed573d1290c6bb9d3670",
"text": "The ability to compare systems from the same domain is of central importance for their introduction into complex applications. In the domains of named entity recognition and entity linking, the large number of systems and their orthogonal evaluation w.r.t. measures and datasets has led to an unclear landscape regarding the abilities and weaknesses of the different approaches. We present GERBIL—an improved platform for repeatable, storable and citable semantic annotation experiments— and its extension since being release. GERBIL has narrowed this evaluation gap by generating concise, archivable, humanand machine-readable experiments, analytics and diagnostics. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools on multiple datasets. By these means, we aim to ensure that both tool developers and end users can derive meaningful insights into the extension, integration and use of annotation applications. In particular, GERBIL provides comparable results to tool developers, simplifying the discovery of strengths and weaknesses of their implementations with respect to the state-of-the-art. With the permanent experiment URIs provided by our framework, we ensure the reproducibility and archiving of evaluation results. Moreover, the framework generates data in a machine-processable format, allowing for the efficient querying and postprocessing of evaluation results. Additionally, the tool diagnostics provided by GERBIL provide insights into the areas where tools need further refinement, thus allowing developers to create an informed agenda for extensions and end users to detect the right tools for their purposes. Finally, we implemented additional types of experiments including entity typing. GERBIL aims to become a focal point for the state-of-the-art, driving the research agenda of the community by presenting comparable objective evaluation results. Furthermore, we tackle the central problem of the evaluation of entity linking, i.e., we answer the question of how an evaluation algorithm can compare two URIs to each other without being bound to a specific knowledge base. Our approach to this problem opens a way to address the deprecation of URIs of existing gold standards for named entity recognition and entity linking, a feature which is currently not supported by the state-of-the-art. We derived the importance of this feature from usage and dataset requirements collected from the GERBIL user community, which has already carried out more than 24.000 single evaluations using our framework. Through the resulting updates, GERBIL now supports 8 tasks, 46 datasets and 20 systems.",
"title": ""
},
{
"docid": "f49864c2f892bf4058d953b6439bfdd1",
"text": "Dropout-based regularization methods can be regarded as injecting random noise with pre-defined magnitude to different parts of the neural network during training. It was recently shown that Bayesian dropout procedure not only improves generalization but also leads to extremely sparse neural architectures by automatically setting the individual noise magnitude per weight. However, this sparsity can hardly be used for acceleration since it is unstructured. In the paper, we propose a new Bayesian model that takes into account the computational structure of neural networks and provides structured sparsity, e.g. removes neurons and/or convolutional channels in CNNs. To do this we inject noise to the neurons outputs while keeping the weights unregularized. We establish the probabilistic model with a proper truncated log-uniform prior over the noise and truncated log-normal variational approximation that ensures that the KL-term in the evidence lower bound is computed in closed-form. The model leads to structured sparsity by removing elements with a low SNR from the computation graph and provides significant acceleration on a number of deep neural architectures. The model is easy to implement as it can be formulated as a separate dropout-like layer.",
"title": ""
},
{
"docid": "123760f70d7f609dfe3cf3158a5cc23f",
"text": "We investigate national dialect identification, the task of classifying English documents according to their country of origin. We use corpora of known national origin as a proxy for national dialect. In order to identify general (as opposed to corpus-specific) characteristics of national dialects of English, we make use of a variety of corpora of different sources, with inter-corpus variation in length, topic and register. The central intuition is that features that are predictive of national origin across different data sources are features that characterize a national dialect. We examine a number of classification approaches motivated by different areas of research, and evaluate the performance of each method across 3 national dialects: Australian, British, and Canadian English. Our results demonstrate that there are lexical and syntactic characteristics of each national dialect that are consistent across data sources.",
"title": ""
},
{
"docid": "7cad8fccadff2d8faa8a372c6237469e",
"text": "In the spirit of the tremendous success of deep Convolutional Neural Networks as generic feature extractors from images, we propose Timenet : a multilayered recurrent neural network (RNN) trained in an unsupervised manner to extract features from time series. Fixed-dimensional vector representations or embeddings of variable-length sentences have been shown to be useful for a variety of document classification tasks. Timenet is the encoder network of an auto-encoder based on sequence-to-sequence models that transforms varying length time series to fixed-dimensional vector representations. Once Timenet is trained on diverse sets of time series, it can then be used as a generic off-the-shelf feature extractor for time series. We train Timenet on time series from 24 datasets belonging to various domains from the UCR Time Series Classification Archive, and then evaluate embeddings from Timenet for classification on 30 other datasets not used for training the Timenet. We observe that a classifier learnt over the embeddings obtained from a pre-trained Timenet yields significantly better performance compared to (i) a classifier learnt over the embeddings obtained from the encoder network of a domain-specific auto-encoder, as well as (ii) a nearest neighbor classifier based on the well-known and effective Dynamic Time Warping (DTW) distance measure. We also observe that a classifier trained on embeddings from Timenet give competitive results in comparison to a DTW-based classifier even when using significantly smaller set of labeled training data, providing further evidence that Timenet embeddings are robust. Finally, t-SNE visualizations of Timenet embeddings show that time series from different classes form well-separated clusters.",
"title": ""
},
{
"docid": "b3dcbd8a41e42ae6e748b07c18dbe511",
"text": "There is inconclusive evidence whether practicing tasks with computer agents improves people’s performance on these tasks. This paper studies this question empirically using extensive experiments involving bilateral negotiation and threeplayer coordination tasks played by hundreds of human subjects. We used different training methods for subjects, including practice interactions with other human participants, interacting with agents from the literature, and asking participants to design an automated agent to serve as their proxy in the task. Following training, we compared the performance of subjects when playing state-of-the-art agents from the literature. The results revealed that in the negotiation settings, in most cases, training with computer agents increased people’s performance as compared to interacting with people. In the three player coordination game, training with computer agents increased people’s performance when matched with the state-of-the-art agent. These results demonstrate the efficacy of using computer agents as tools for improving people’s skills when interacting in strategic settings, saving considerable effort and providing better performance than when interacting with human counterparts.",
"title": ""
},
{
"docid": "8bdd071cf5ff246fb02b986be05012df",
"text": "RNA-seq, has recently become an attractive method of choice in the studies of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the Illumina platform, and to perform a cross-platform comparison based on the results obtained through Affymetrix microarray. As a case study for our work we, used the Saccharomyces cerevisiae strain CEN.PK 113-7D, grown under two different conditions (batch and chemostat). Here, we asses the influence of genetic variation on the estimation of gene expression level using three different aligners for read-mapping (Gsnap, Stampy and TopHat) on S288c genome, the capabilities of five different statistical methods to detect differential gene expression (baySeq, Cuffdiff, DESeq, edgeR and NOISeq) and we explored the consistency between RNA-seq analysis using reference genome and de novo assembly approach. High reproducibility among biological replicates (correlation≥0.99) and high consistency between the two platforms for analysis of gene expression levels (correlation≥0.91) are reported. The results from differential gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microrrays) for gene expression analysis and addresses the contribution of the different steps involved in the analysis of RNA-seq data.",
"title": ""
},
{
"docid": "cc5815edf96596a1540fa1fca53da0d3",
"text": "INTRODUCTION\nSevere motion sickness is easily identifiable with sufferers showing obvious behavioral signs, including emesis (vomiting). Mild motion sickness and sopite syndrome lack such clear and objective behavioral markers. We postulate that yawning may have the potential to be used in operational settings as such a marker. This study assesses the utility of yawning as a behavioral marker for the identification of soporific effects by investigating the association between yawning and mild motion sickness/sopite syndrome in a controlled environment.\n\n\nMETHODS\nUsing a randomized motion-counterbalanced design, we collected yawning and motion sickness data from 39 healthy individuals (34 men and 5 women, ages 27-59 yr) in static and motion conditions. Each individual participated in two 1-h sessions. Each session consisted of six 10-min blocks. Subjects performed a multitasking battery on a head mounted display while seated on the moving platform. The occurrence and severity of symptoms were assessed with the Motion Sickness Assessment Questionnaire (MSAQ).\n\n\nRESULTS\nYawning occurred predominantly in the motion condition. All yawners in motion (N = 5) were symptomatic. Compared to nonyawners (MSAQ indices: Total = 14.0, Sopite = 15.0), subjects who yawned in motion demonstrated increased severity of motion sickness and soporific symptoms (MSAQ indices: Total = 17.2, Sopite = 22.4), and reduced multitasking cognitive performance (Composite score: nonyawners = 1348; yawners = 1145).\n\n\nDISCUSSION\nThese results provide evidence that yawning may be a viable behavioral marker to recognize the onset of soporific effects and their concomitant reduction in cognitive performance.",
"title": ""
},
{
"docid": "b1c0fb9a020d8bc85b23f696586dd9d3",
"text": "Most instances of real-life language use involve discourses in which several sentences or utterances are coherently linked through the use of repeated references. Repeated reference can take many forms, and the choice of referential form has been the focus of much research in several related fields. In this article we distinguish between three main approaches: one that addresses the ‘why’ question – why are certain forms used in certain contexts; one that addresses the ‘how’ question – how are different forms processed; and one that aims to answer both questions by seriously considering both the discourse function of referential expressions, and the cognitive mechanisms that underlie their processing cost. We argue that only the latter approach is capable of providing a complete view of referential processing, and that in so doing it may also answer a more profound ‘why’ question – why does language offer multiple referential forms. Coherent discourse typically involves repeated references to previously mentioned referents, and these references can be made with different forms. For example, a person mentioned in discourse can be referred to by a proper name (e.g., Bill), a definite description (e.g., the waiter), or a pronoun (e.g., he). When repeated reference is made to a referent that was mentioned in the same sentence, the choice and processing of referential form may be governed by syntactic constraints such as binding principles (Chomsky 1981). However, in many cases of repeated reference to a referent that was mentioned in the same sentence, and in all cases of repeated reference across sentences, the choice and processing of referential form reflects regular patterns and preferences rather than strong syntactic constraints. The present article focuses on the factors that underlie these patterns. Considerable research in several disciplines has aimed to explain how speakers and writers choose which form they should use to refer to objects and events in discourse, and how listeners and readers process different referential forms (e.g., Chafe 1976; Clark & Wilkes 1986; Kintsch 1988; Gernsbacher 1989; Ariel 1990; Gordon, Grosz & Gilliom 1993; Gundel, Hedberg & Zacharski 1993; Garrod & Sanford 1994; Gordon & Hendrick 1998; Almor 1999; Cowles & Garnham 2005). One of the central observations in this research is that there exists an inverse relation between the specificity of the referential",
"title": ""
},
{
"docid": "37f3c127bb575fde94b650063c3a3799",
"text": "This article presents the preliminary results of an exploratory experiment with BilliArT, an interactive installation for music-making. The aim is to extract useful information from the combination of different ways to approach to the art work, namely that of conservation, of the aesthetic experience, and of the artistic creativity. The long-term goal is to achieve a better understanding of how people engage with interactive installations, and ultimately derive an ontology for interactive art.",
"title": ""
},
{
"docid": "96db5cbe83ce9fbee781b8cc26d97fc8",
"text": "We present a novel method to obtain a 3D Euclidean reconstruction of both the background and moving objects in a video sequence. We assume that, multiple objects are moving rigidly on a ground plane observed by a moving camera. The video sequence is first segmented into static background and motion blobs by a homography-based motion segmentation method. Then classical \"Structure from Motion\" (SfM) techniques are applied to obtain a Euclidean reconstruction of the static background. The motion blob corresponding to each moving object is treated as if there were a static object observed by a hypothetical moving camera, called a \"virtual camera\". This virtual camera shares the same intrinsic parameters with the real camera but moves differently due to object motion. The same SfM techniques are applied to estimate the 3D shape of each moving object and the pose of the virtual camera. We show that the unknown scale of moving objects can be approximately determined by the ground plane, which is a key contribution of this paper. Another key contribution is that we prove that the 3D motion of moving objects can be solved from the virtual camera motion with a linear constraint imposed on the object translation. In our approach, a planartranslation constraint is formulated: \"the 3D instantaneous translation of moving objects must be parallel to the ground plane\". Results on real-world video sequences demonstrate the effectiveness and robustness of our approach.",
"title": ""
},
{
"docid": "d4f10c400f187092c19fbb81df0f2bc5",
"text": "The use of resin composite materials to restore the complete occlusion of worn teeth is controversial and data are scarce. In this case series, the authors report on seven cases of progressive mixed erosive/abrasive worn dentition (85 posterior teeth) that have been reconstructed with direct resin composite restorations. In all patients, either one or both tooth arches was completely restored using direct resin composite restorations. All patients were treated with standardized materials and protocols. In five patients, a wax-up-based template was used to avoid freehand build-up techniques and to ensure optimal anatomy and function. All patients were re-assessed after a mean service time of three years (mean 35 +/5 months) using USPHS criteria. Subjective patient satisfaction was measured using visual analogue scales (VAS). The overall quality of the restorations was good, with predominantly determined \"Alpha\"-scores. Only the marginal quality showed small deteriorations, with \"Beta\" scores of 37% and 45% for marginal discoloration and integrity, respectively. In general, the composite showed signs of wear facets that resulted in 46% \"Beta\" scores within the anatomy scores. Small restoration fractures were only seen in two restorations, which were reparable. Two teeth were excluded from the evaluation, as they have been previously repaired due to fracture after biting on a nut. The results were very favorable, and the patients were satisfied with this non-invasive and economic treatment option, which still has the characteristic of a medium-term rehabilitation. The outcomes were comparable to other direct composite restorations successfully applied in adhesive dentistry.",
"title": ""
},
{
"docid": "c1d75b9a71f373a6e44526adf3694f37",
"text": "Segmentation means segregating area of interest from the image. The aim of image segmentation is to cluster the pixels into salient image regions i.e. regions corresponding to individual surfaces, objects, or natural parts of objects. Automatic Brain tumour segmentation is a sensitive step in medical field. A significant medical informatics task is to perform the indexing of the patient databases according to image location, size and other characteristics of brain tumours based on magnetic resonance (MR) imagery. This requires segmenting tumours from different MR imaging modalities. Automated brain tumour segmentation from MR modalities is a challenging, computationally intensive task.Image segmentation plays an important role in image processing. MRI is generally more useful for brain tumour detection because it provides more detailed information about its type, position and size. For this reason, MRI imaging is the choice of study for the diagnostic purpose and, thereafter, for surgery and monitoring treatment outcomes. This paper presents a review of the various methods used in brain MRI image segmentation. The review covers imaging modalities, magnetic resonance imaging and methods for segmentation approaches. The paper concludes with a discussion on the upcoming trend of advanced researches in brain image segmentation. Keywords-Region growing, Level set method, Split and merge algorithm, MRI images",
"title": ""
},
{
"docid": "2331098bd8099a8dba7bab10c9322b5f",
"text": "Aggregating extra features has been considered as an effective approach to boost traditional pedestrian detection methods. However, there is still a lack of studies on whether and how CNN-based pedestrian detectors can benefit from these extra features. The first contribution of this paper is exploring this issue by aggregating extra features into CNN-based pedestrian detection framework. Through extensive experiments, we evaluate the effects of different kinds of extra features quantitatively. Moreover, we propose a novel network architecture, namely HyperLearner, to jointly learn pedestrian detection as well as the given extra feature. By multi-task training, HyperLearner is able to utilize the information of given features and improve detection performance without extra inputs in inference. The experimental results on multiple pedestrian benchmarks validate the effectiveness of the proposed HyperLearner.",
"title": ""
},
{
"docid": "b16300212b0b73b3fcd402d86a7c6d51",
"text": "A 34-year old man was admitted to us with a mild painful cord-like induration on the left dorsal side of the penis extending from corona of the glans to the base of the penis. The patient was treated with anti-inflammatory and anticoagulan agents. Cord-like-induration resolved and recanalized in 4th week, and venous flow was detected in Doppler examination.",
"title": ""
},
{
"docid": "ff418efbdd2381692f01b5cdc94143d5",
"text": "The U.S. legislation at both the federal and state levels mandates certain organizations to inform customers about information uses and disclosures. Such disclosures are typically accomplished through privacy policies, both online and offline. Unfortunately, the policies are not easy to comprehend, and, as a result, online consumers frequently do not read the policies provided at healthcare Web sites. Because these policies are often required by law, they should be clear so that consumers are likely to read them and to ensure that consumers can comprehend these policies. This, in turn, may increase consumer trust and encourage consumers to feel more comfortable when interacting with online organizations. In this paper, we present results of an empirical study, involving 993 Internet users, which compared various ways to present privacy policy information to online consumers. Our findings suggest that users perceive typical, paragraph-form policies to be more secure than other forms of policy representation, yet user comprehension of such paragraph-form policies is poor as compared to other policy representations. The results of this study can help managers create more trustworthy policies, aid compliance officers in detecting deceptive organizations, and serve legislative bodies by providing tangible evidence as to the ineffectiveness of current privacy policies.",
"title": ""
},
{
"docid": "ef3ec9af6f5fe3ff71f5c54a1de262d8",
"text": "This paper proposes an information theoretic criterion for comparing two partitions, or clusterings, of the same data set. The criterion, called variation of information (VI), measures the amount of information lost and gained in changing from clustering C to clustering C′. The basic properties of VI are presented and discussed. We focus on two kinds of properties: (1) those that help one build intuition about the new criterion (in particular, it is shown the VI is a true metric on the space of clusterings), and (2) those that pertain to the comparability of VI values over different experimental conditions. As the latter properties have rarely been discussed explicitly before, other existing comparison criteria are also examined in their light. Finally we present the VI from an axiomatic point of view, showing that it is the only “sensible” criterion for comparing partitions that is both aligned to the lattice and convexely additive. As a consequence, we prove an impossibility result for comparing partitions: there is no criterion for comparing partitions that simultaneoulsly satisfies the above two desirable properties and is bounded.",
"title": ""
},
{
"docid": "1772d22c19635b6636e42f8bb1b1a674",
"text": "• MacArthur Fellowship, 2010 • Guggenheim Fellowship, 2010 • Li Ka Shing Foundation Women in Science Distinguished Lectu re Series Award, 2010 • MIT Technology Review TR-35 Award (recognizing the world’s top innovators under the age of 35), 2009. • Okawa Foundation Research Award, 2008. • Sloan Research Fellow, 2007. • Best Paper Award, 2007 USENIX Security Symposium. • George Tallman Ladd Research Award, Carnegie Mellon Univer sity, 2007. • Highest ranked paper, 2006 IEEE Security and Privacy Sympos ium; paper invited to a special issue of the IEEE Transactions on Dependable and Secure Computing. • NSF CAREER Award on “Exterminating Large Scale Internet Att acks”, 2005. • IBM Faculty Award, 2005. • Highest ranked paper, 1999 IEEE Computer Security Foundati on Workshop; paper invited to a special issue of Journal of Computer Security.",
"title": ""
},
{
"docid": "28b3d7fbcb20f5548d22dbf71b882a05",
"text": "In this paper, we propose a novel abnormal event detection method with spatio-temporal adversarial networks (STAN). We devise a spatio-temporal generator which synthesizes an inter- frame by considering spatio-temporal characteristics with bidirectional ConvLSTM. A proposed spatio-temporal discriminator determines whether an input sequence is real-normal or not with 3D convolutional layers. These two networks are trained in an adversarial way to effectively encode spatio-temporal features of normal patterns. After the learning, the generator and the discriminator can be independently used as detectors, and deviations from the learned normal patterns are detected as abnormalities. Experimental results show that the proposed method achieved competitive performance compared to the state-of-the-art methods. Further, for the interpretation, we visualize the location of abnormal events detected by the proposed networks using a generator loss and discriminator gradients.",
"title": ""
}
] |
scidocsrr |
6cfb525dd9aea2373510da35eecb78fb | ERMS: An Elastic Replication Management System for HDFS |
[
{
"docid": "41a16f3eb3ff59d34e04ffa77bf1ae86",
"text": "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems.",
"title": ""
},
{
"docid": "7add673c4f72e6a7586109ac3bdab2ec",
"text": "Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this article, we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.",
"title": ""
}
] |
[
{
"docid": "2ab32a04c2d0af4a76ad29ce5a3b2748",
"text": "The future of solid-state lighting relies on how the performance parameters will be improved further for developing high-brightness light-emitting diodes. Eventually, heat removal is becoming a crucial issue because the requirement of high brightness necessitates high-operating current densities that would trigger more joule heating. Here we demonstrate that the embedded graphene oxide in a gallium nitride light-emitting diode alleviates the self-heating issues by virtue of its heat-spreading ability and reducing the thermal boundary resistance. The fabrication process involves the generation of scalable graphene oxide microscale patterns on a sapphire substrate, followed by its thermal reduction and epitaxial lateral overgrowth of gallium nitride in a metal-organic chemical vapour deposition system under one-step process. The device with embedded graphene oxide outperforms its conventional counterpart by emitting bright light with relatively low-junction temperature and thermal resistance. This facile strategy may enable integration of large-scale graphene into practical devices for effective heat removal.",
"title": ""
},
{
"docid": "387c2b51fcac3c4f822ae337cf2d3f8d",
"text": "This paper directly follows and extends, where a novel method for measurement of extreme impedances is described theoretically. In this paper experiments proving that the method can significantly improve stability of a measurement system are described. Using Agilent PNA E8364A vector network analyzer (VNA) the method is able to measure reflection coefficient with stability improved 36-times in magnitude and 354-times in phase compared to the classical method of reflection coefficient measurement. Further, validity of the error model and related equations stated in are verified by real measurement of SMD resistors (size 0603) in microwave test fixture. Values of the measured SMD resistors range from 12 kOmega up to 330 kOmega. A novel calibration technique using three different resistors as calibration standards is used. The measured values of impedances reasonably agree with assumed values.",
"title": ""
},
{
"docid": "c0857e5fdb18c32848f317a2fdec2ab3",
"text": "In recent years, there has been an increasing interest in applying Augmented Reality (AR) to create unique educational settings. So far, however, there is a lack of review studies with focus on investigating factors such as: the uses, advantages, limitations, effectiveness, challenges and features of augmented reality in educational settings. Personalization for promoting an inclusive learning using AR is also a growing area of interest. This paper reports a systematic review of literature on augmented reality in educational settings considering the factors mentioned before. In total, 32 studies published between 2003 and 2013 in 6 indexed journals were analyzed. The main findings from this review provide the current state of the art on research in AR in education. Furthermore, the paper discusses trends and the vision towards the future and opportunities for further research in augmented reality for educational settings.",
"title": ""
},
{
"docid": "d9c7549c2fe3541c49d59d7dc6395050",
"text": "In this chapter, we will review the underlying mechanisms for the evolution of wireless communication networks. We will first discuss macro-cellular technologies used in traditional telecommunication systems, and then introduce some micro-cellular technologies as a recent advance in the telecommunications industry. Finally, we will describe existing interworking techniques available in literature and in standardization, including loosely and tightly coupled, I-WLAN and IEEE 802.21. The term macro-cell is used to describe cells with larger sizes. A macro-cell is a cell in mobile phone networks that provide radio coverage served by a high power cellular base station. The antennas for macro-cells are mounted on ground-based masts and other existing structures, at a height that provides a clear view over the surrounding buildings and terrain. Macro-cell base stations have power outputs of typically tens of watts [18]. Most wireless communication systems maintained by traditional mobile network operators are powered by macro-cellular technologies. In the 1980s, the 1G wireless communication system came to the mobile communication environment, which provided a data speed of 2.4 Kbps to support data communication with mobile phones. An example is Nordic Mobile Telephone (NMT). However, this generation still worked in analog system and there were tight limitations in terms of the system capacity and data rate.",
"title": ""
},
{
"docid": "9760e3676a7df5e185ec35089d06525e",
"text": "This paper examines the sufficiency of existing e-Learning standards for facilitating and supporting the introduction of adaptive techniques in computer-based learning systems. To that end, the main representational and operational requirements of adaptive learning environments are examined and contrasted against current eLearning standards. The motivation behind this preliminary analysis is attainment of: interoperability between adaptive learning systems; reuse of adaptive learning materials; and, the facilitation of adaptively supported, distributed learning activities.",
"title": ""
},
{
"docid": "7563f6e6d8b4a3ffa40ace9380c6288f",
"text": "In this paper, we describe the results of source code personality identification from Team BESUMich. We used a set of simple, robust, scalable, and language-independent features on the PR-SOCO dataset. Using leave-one-coder-out strategy, we obtained minimum RMSE on the test data for extroversion, and competitive results for other personality traits.",
"title": ""
},
{
"docid": "c600408fdadd9ae0316577a1aa565bd7",
"text": "Minutiae extraction is of critical importance in automated fingerprint recognition. Previous works on rolled/slap fingerprints failed on latent fingerprints due to noisy ridge patterns and complex background noises. In this paper, we propose a new way to design deep convolutional network combining domain knowledge and the representation ability of deep learning. In terms of orientation estimation, segmentation, enhancement and minutiae extraction, several typical traditional methods performed well on rolled/slap fingerprints are transformed into convolutional manners and integrated as an unified plain network. We demonstrate that this pipeline is equivalent to a shallow network with fixed weights. The network is then expanded to enhance its representation ability and the weights are released to learn complex background variance from data, while preserving end-to-end differentiability. Experimental results on NIST SD27 latent database and FVC 2004 slap database demonstrate that the proposed algorithm outperforms the state-of-the-art minutiae extraction algorithms. Code is made publicly available at: https://github.com/felixTY/FingerNet.",
"title": ""
},
{
"docid": "7700a97c65a9e6d9e0fe9abea543b1b3",
"text": "Opinionated social media such as product reviews are now widely used by individuals and organizations for their decision making. However, due to the reason of profit or fame, people try to game the system by opinion spamming (e.g., writing fake reviews) to promote or to demote some target products. In recent years, fake review detection has attracted significant attention from both the business and research communities. However, due to the difficulty of human labeling needed for supervised learning and evaluation, the problem remains to be highly challenging. This work proposes a novel angle to the problem by modeling spamicity as latent. An unsupervised model, called Author Spamicity Model (ASM), is proposed. It works in the Bayesian setting, which facilitates modeling spamicity of authors as latent and allows us to exploit various observed behavioral footprints of reviewers. The intuition is that opinion spammers have different behavioral distributions than non-spammers. This creates a distributional divergence between the latent population distributions of two clusters: spammers and non-spammers. Model inference results in learning the population distributions of the two clusters. Several extensions of ASM are also considered leveraging from different priors. Experiments on a real-life Amazon review dataset demonstrate the effectiveness of the proposed models which significantly outperform the state-of-the-art competitors.",
"title": ""
},
{
"docid": "d6bfcd2977db76b0463024d261ffc7d6",
"text": "Efforts to understand and mitigate thehealth effects of particulate matter (PM) air pollutionhave a rich and interesting history. This review focuseson six substantial lines of research that have been pursued since 1997 that have helped elucidate our understanding about the effects of PM on human health. There hasbeen substantial progress in the evaluation of PM health effects at different time-scales of exposure and in the exploration of the shape of the concentration-response function. There has also been emerging evidence of PM-related cardiovascular health effects and growing knowledge regarding interconnected general pathophysiological pathways that link PM exposure with cardiopulmonary morbidiity and mortality. Despite important gaps in scientific knowledge and continued reasons for some skepticism, a comprehensive evaluation of the research findings provides persuasive evidence that exposure to fine particulate air pollution has adverse effects on cardiopulmonaryhealth. Although much of this research has been motivated by environmental public health policy, these results have important scientific, medical, and public health implications that are broader than debates over legally mandated air quality standards.",
"title": ""
},
{
"docid": "7e1df3fd563009c356c8a1620b96a232",
"text": "This research investigates the large hype surrounding big data (BD) and Analytics (BDA) in both academia and the business world. Initial insights pointed to large and complex amalgamations of different fields, techniques and tools. Above all, BD as a research field and as a business tool found to be under developing and is fraught with many challenges. The intention here in this research is to develop an adoption model of BD that could detect key success predictors. The research finds a great interest and optimism about BD value that fueled this current buzz behind this novel phenomenon. Like any disruptive innovation, its assimilation in organizations oppressed with many challenges at various contextual levels. BD would provide different advantages to organizations that would seriously consider all its perspectives alongside its lifecycle in the pre-adoption or adoption or implementation phases. The research attempts to delineate the different facets of BD as a technology and as a management tool highlighting different contributions, implications and recommendations. This is of great interest to researchers, professional and policy makers.",
"title": ""
},
{
"docid": "edf41dbd01d4060982c2c75469bbac6b",
"text": "In this paper, we develop a design method for inclined and displaced (compound) slotted waveguide array antennas. The characteristics of a compound slot element and the design results by using an equivalent circuit are shown. The effectiveness of the designed antennas is verified through experiments.",
"title": ""
},
{
"docid": "ac62d57dac1a363275ddf989881d194a",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.08.010 ⇑ Corresponding author. Address: College of De University, 1239 Siping Road, Shanghai 200092, PR 6598 3432. E-mail addresses: [email protected] (H.-C (L. Liu), [email protected] (N. Liu). Failure mode and effects analysis (FMEA) is a risk assessment tool that mitigates potential failures in systems, processes, designs or services and has been used in a wide range of industries. The conventional risk priority number (RPN) method has been criticized to have many deficiencies and various risk priority models have been proposed in the literature to enhance the performance of FMEA. However, there has been no literature review on this topic. In this study, we reviewed 75 FMEA papers published between 1992 and 2012 in the international journals and categorized them according to the approaches used to overcome the limitations of the conventional RPN method. The intention of this review is to address the following three questions: (i) Which shortcomings attract the most attention? (ii) Which approaches are the most popular? (iii) Is there any inadequacy of the approaches? The answers to these questions will give an indication of current trends in research and the best direction for future research in order to further address the known deficiencies associated with the traditional FMEA. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ba2d02d8c3e389b9b7659287eb406b16",
"text": "We propose and consolidate a definition of the discrete fractional Fourier transform that generalizes the discrete Fourier transform (DFT) in the same sense that the continuous fractional Fourier transform generalizes the continuous ordinary Fourier transform. This definition is based on a particular set of eigenvectors of the DFT matrix, which constitutes the discrete counterpart of the set of Hermite–Gaussian functions. The definition is exactlyunitary, index additive, and reduces to the DFT for unit order. The fact that this definition satisfies all the desirable properties expected of the discrete fractional Fourier transform supports our confidence that it will be accepted as the definitive definition of this transform.",
"title": ""
},
{
"docid": "89e88b92adc44176f0112a66ec92515a",
"text": "Computer programming is being introduced in schools worldwide as part of a movement that promotes Computational Thinking (CT) skills among young learners. In general, learners use visual, block-based programming languages to acquire these skills, with Scratch being one of the most popular ones. Similar to professional developers, learners also copy and paste their code, resulting in duplication. In this paper we present the findings of correlating the assessment of the CT skills of learners with the presence of software clones in over 230,000 projects obtained from the Scratch platform. Specifically, we investigate i) if software cloning is an extended practice in Scratch projects, ii) if the presence of code cloning is independent of the programming mastery of learners, iii) if code cloning can be found more frequently in Scratch projects that require specific skills (as parallelism or logical thinking), and iv) if learners who have the skills to avoid software cloning really do so. The results show that i) software cloning can be commonly found in Scratch projects, that ii) it becomes more frequent as learners work on projects that require advanced skills, that iii) no CT dimension is to be found more related to the absence of software clones than others, and iv) that learners -even if they potentially know how to avoid cloning- still copy and paste frequently. The insights from this paper could be used by educators and learners to determine when it is pedagogically more effective to address software cloning, by educational programming platform developers to adapt their systems, and by learning assessment tools to provide better evaluations.",
"title": ""
},
{
"docid": "52a01a3bb4122e313c3146363b3fb954",
"text": "We demonstrate how movements of multiple people or objects within a building can be displayed on a network representation of the building, where nodes are rooms and edges are doors. Our representation shows the direction of movements between rooms and the order in which rooms are visited, while avoiding occlusion or overplotting when there are repeated visits or multiple moving people or objects. We further propose the use of a hybrid visualization that mixes geospatial and topological (network-based) representations, enabling focus-in-context and multi-focal visualizations. An experimental comparison found that the topological representation was significantly faster than the purely geospatial representation for three out of four tasks.",
"title": ""
},
{
"docid": "2b8318a73fdf5a2f2f26ededf29da958",
"text": "With the development of 3-D applications, such as 3-D reconstruction and object recognition, accurate and high-quality depth map is urgently required. Recently, depth cameras have been affordable and widely used in daily life. However, the captured depth map always owns low resolution and poor quality, which limits its practical application. This paper proposes a color-guided depth map super resolution method using convolutional neural network. First, a dual-stream convolutional neural network, which integrates the color and depth information simultaneously, is proposed for depth map super resolution. Then, the optimized edge map generated by the high resolution color image and low resolution depth map is used as additional information to refine the object boundary in the depth map. Experimental results demonstrate the effectiveness of the proposed method compared with the state-of-the-art methods.",
"title": ""
},
{
"docid": "b05a72a6fa5e381b341ba8c9107a690c",
"text": "Acknowledgments are widely used in scientific articles to express gratitude and credit collaborators. Despite suggestions that indexing acknowledgments automatically will give interesting insights, there is currently, to the best of our knowledge, no such system to track acknowledgments and index them. In this paper we introduce AckSeer, a search engine and a repository for automatically extracted acknowledgments in digital libraries. AckSeer is a fully automated system that scans items in digital libraries including conference papers, journals, and books extracting acknowledgment sections and identifying acknowledged entities mentioned within. We describe the architecture of AckSeer and discuss the extraction algorithms that achieve a F1 measure above 83%. We use multiple Named Entity Recognition (NER) tools and propose a method for merging the outcome from different recognizers. The resulting entities are stored in a database then made searchable by adding them to the AckSeer index along with the metadata of the containing paper/book.\n We build AckSeer on top of the documents in CiteSeerx digital library yielding more than 500,000 acknowledgments and more than 4 million mentioned entities.",
"title": ""
},
{
"docid": "f9298a0e17098a436cdfd8968de3ece6",
"text": "Precision Viticulture is experiencing substantial growth thanks to the availability of improved and cost-effective instruments and methodologies for data acquisition and analysis, such as Unmanned Aerial Vehicles (UAV), that demonstrated to compete with traditional acquisition platforms, such as satellite and aircraft, due to low operational costs, high operational flexibility and high spatial resolution of imagery. In order to optimize the use of these technologies for precision viticulture, their technical, scientific and economic performances need to be assessed. The aim of this work is to compare NDVI surveys performed with UAV, aircraft and satellite, to assess the capability of each platform to represent the intra-vineyard vegetation spatial variability. NDVI images of two Italian vineyards were acquired simultaneously from different multi-spectral sensors onboard the OPEN ACCESS Remote Sens. 2015, 7 2972 three platforms, and a spatial statistical framework was used to assess their degree of similarity. Moreover, the pros and cons of each technique were also assessed performing a cost analysis as a function of the scale of application. Results indicate that the different platforms provide comparable results in vineyards characterized by coarse vegetation gradients and large vegetation clusters. On the contrary, in more heterogeneous vineyards, low-resolution images fail in representing part of the intra-vineyard variability. The cost analysis showed that the adoption of UAV platform is advantageous for small areas and that a break-even point exists above five hectares; above such threshold, airborne and then satellite have lower imagery cost.",
"title": ""
},
{
"docid": "119ca30e07356ba6bb06ec2fd9b95811",
"text": "Bioactive compounds from vegetal sources are a potential source of natural antifungic. An ethanol extraction was used to obtain bioactive compounds from Carica papaya L. cv. Maradol leaves and seeds of discarded ripe and unripe fruit. Both, extraction time and the papaya tissue flour:organic solvent ratio significantly affected yield, with the longest time and highest flour:solvent ratio producing the highest yield. The effect of time on extraction efficiency was confirmed by qualitative identification of the compounds present in the lowest and highest yield extracts. Analysis of the leaf extract with phytochemical tests showed the presence of alkaloids, flavonoids and terpenes. Antifungal effectiveness was determined by challenging the extracts (LE, SRE, SUE) from the best extraction treatment against three phytopathogenic fungi: Rhizopus stolonifer, Fusarium spp. and Colletotrichum gloeosporioides. The leaf extract exhibited the broadest action spectrum. The MIC50 for the leaf extract was 0.625 mg ml−1 for Fusarium spp. and >10 mg ml−1 for C. gloeosporioides, both equal to approximately 20% mycelial growth inhibition. Ethanolic extracts from Carica papaya L. cv. Maradol leaves are a potential source of secondary metabolites with antifungal properties.",
"title": ""
}
] |
scidocsrr |
5c57221cb9d1f2d18c708ade85df5610 | Twitter financial community modeling using agent based simulation |
[
{
"docid": "0ee70b75cdcf22b8a22a1810227d401f",
"text": "Traditionally, consumers used the Internet to simply expend content: they read it, they watched it, and they used it to buy products and services. Increasingly, however, consumers are utilizing platforms–—such as content sharing sites, blogs, social networking, and wikis–—to create, modify, share, and discuss Internet content. This represents the social media phenomenon, which can now significantly impact a firm’s reputation, sales, and even survival. Yet, many executives eschew or ignore this form of media because they don’t understand what it is, the various forms it can take, and how to engage with it and learn. In response, we present a framework that defines social media by using seven functional building blocks: identity, conversations, sharing, presence, relationships, reputation, and groups. As different social media activities are defined by the extent to which they focus on some or all of these blocks, we explain the implications that each block can have for how firms should engage with social media. To conclude, we present a number of recommendations regarding how firms should develop strategies for monitoring, understanding, and responding to different social media activities. final version published in Business Horizons (2011) v. 54 pp. 241-251. doi: 10.106/j.bushor.2011.01.005 1. Welcome to the jungle: The social media ecology Social media employ mobile and web-based technologies to create highly interactive platforms via which individuals and communities share, co-",
"title": ""
}
] |
[
{
"docid": "364b82bf3334cf7534088ad63743422e",
"text": "Rigid origami is a class of origami whose entire surface remains rigid during folding except at crease lines. Rigid origami finds applications in manufacturing and packaging, such as map folding and solar panel packing. Advances in material science and robotics engineering also enable the realization of self-folding rigid origami and have fueled the interests in computational origami, in particular the issues of foldability, i.e., finding folding steps from a flat sheet of crease patterns to desired folded state. For example, recent computational methods allow rapid simulation of folding process of certain rigid origamis. However, these methods can fail even when the input crease pattern is extremely simple. This paper attempts to address this problem by modeling rigid origami as a kinematic system with closure constraints and solve the foldability problem through a randomized method. Our experimental results show that the proposed method successfully fold several types of rigid origamis that the existing methods fail to fold.",
"title": ""
},
{
"docid": "e9408e07cae42790c23322467778e409",
"text": "We present an atomic-scale teleoperation system that uses a head-mounted display and force-feedback manipulator arm for a user interface and a Scanning Tunneling Microscope (STM) as a sensor and effector. The system approximates presence at the atomic scale, placing the scientist on the surface, in control, w h i l e the experiment is happening. A scientist using the Nanomanipulator can view incoming STM data, feel the surface, and modify the surface (using voltage pulses) in real time. The Nanomanipulator has been used to study the effects of bias pulse duration on the creation of gold mounds. We intend to use the system to make controlled modifications to silicon surfaces. CR Categories: C.3 (Special-purpose and application-based systems), 1.3.7 (Virtual reality), J.2 (Computer Applications Physical Sciences)",
"title": ""
},
{
"docid": "26cedddd8a5a5f3a947fd6c85b8c41ad",
"text": "In today's world, online social media plays a vital role during real world events, especially crisis events. There are both positive and negative effects of social media coverage of events, it can be used by authorities for effective disaster management or by malicious entities to spread rumors and fake news. The aim of this paper, is to highlight the role of Twitter, during Hurricane Sandy (2012) to spread fake images about the disaster. We identified 10,350 unique tweets containing fake images that were circulated on Twitter, during Hurricane Sandy. We performed a characterization analysis, to understand the temporal, social reputation and influence patterns for the spread of fake images. Eighty six percent of tweets spreading the fake images were retweets, hence very few were original tweets. Our results showed that top thirty users out of 10,215 users (0.3%) resulted in 90% of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very less (only 11%) to the spread of these fake photos URLs. Next, we used classification models, to distinguish fake images from real images of Hurricane Sandy. Best results were obtained from Decision Tree classifier, we got 97% accuracy in predicting fake images from real. Also, tweet based features were very effective in distinguishing fake images tweets from real, while the performance of user based features was very poor. Our results, showed that, automated techniques can be used in identifying real images from fake images posted on Twitter.",
"title": ""
},
{
"docid": "99fd1d5111b9d58f8d370be7de5b003d",
"text": "Molecular approaches to understanding the functional circuitry of the nervous system promise new insights into the relationship between genes, brain and behaviour. The cellular diversity of the brain necessitates a cellular resolution approach towards understanding the functional genomics of the nervous system. We describe here an anatomically comprehensive digital atlas containing the expression patterns of ∼20,000 genes in the adult mouse brain. Data were generated using automated high-throughput procedures for in situ hybridization and data acquisition, and are publicly accessible online. Newly developed image-based informatics tools allow global genome-scale structural analysis and cross-correlation, as well as identification of regionally enriched genes. Unbiased fine-resolution analysis has identified highly specific cellular markers as well as extensive evidence of cellular heterogeneity not evident in classical neuroanatomical atlases. This highly standardized atlas provides an open, primary data resource for a wide variety of further studies concerning brain organization and function.",
"title": ""
},
{
"docid": "a7e7d4232bd5c923746a1ecd7b5d4a27",
"text": "OBJECTIVE\nThe goal of this project was to determine whether screening different groups of elderly individuals in a general or specialty practice would be beneficial in detecting dementia.\n\n\nBACKGROUND\nEpidemiologic studies of aging and dementia have demonstrated that the use of research criteria for the classification of dementia has yielded three groups of subjects: those who are demented, those who are not demented, and a third group of individuals who cannot be classified as normal or demented but who are cognitively (usually memory) impaired.\n\n\nMETHODS\nThe authors conducted computerized literature searches and generated a set of abstracts based on text and index words selected to reflect the key issues to be addressed. Articles were abstracted to determine whether there were sufficient data to recommend the screening of asymptomatic individuals. Other research studies were evaluated to determine whether there was value in identifying individuals who were memory-impaired beyond what one would expect for age but who were not demented. Finally, screening instruments and evaluation techniques for the identification of cognitive impairment were reviewed.\n\n\nRESULTS\nThere were insufficient data to make any recommendations regarding cognitive screening of asymptomatic individuals. Persons with memory impairment who were not demented were characterized in the literature as having mild cognitive impairment. These subjects were at increased risk for developing dementia or AD when compared with similarly aged individuals in the general population.\n\n\nRECOMMENDATIONS\nThere were sufficient data to recommend the evaluation and clinical monitoring of persons with mild cognitive impairment due to their increased risk for developing dementia (Guideline). Screening instruments, e.g., Mini-Mental State Examination, were found to be useful to the clinician for assessing the degree of cognitive impairment (Guideline), as were neuropsychologic batteries (Guideline), brief focused cognitive instruments (Option), and certain structured informant interviews (Option). Increasing attention is being paid to persons with mild cognitive impairment for whom treatment options are being evaluated that may alter the rate of progression to dementia.",
"title": ""
},
{
"docid": "286fc2c4342a9269f40aa2701271f33a",
"text": "While Blockchain network brings tremendous benefits, there are concerns whether their performance would match up with the mainstream IT systems. This paper aims to investigate whether the consensus process using Practical Byzantine Fault Tolerance (PBFT) could be a performance bottleneck for networks with a large number of peers. We model the PBFT consensus process using Stochastic Reward Nets (SRN) to compute the mean time to complete consensus for networks up to 100 peers. We create a blockchain network using IBM Bluemix service, running a production-grade IoT application and use the data to parameterize and validate our models. We also conduct sensitivity analysis over a variety of system parameters and examine the performance of larger networks",
"title": ""
},
{
"docid": "2d91a3dead0aec251e086a3ae90b63d4",
"text": "Experimenting with a new dataset of 1.6M user comments from a Greek news portal and existing datasets of English Wikipedia comments, we show that an RNN outperforms the previous state of the art in moderation. A deep, classification-specific attention mechanism improves further the overall performance of the RNN. We also compare against a CNN and a word-list baseline, considering both fully automatic and semi-automatic moderation.",
"title": ""
},
{
"docid": "1352bb015fea7badea4e9d15f3af4030",
"text": "We present an overview of the QUT plant classification system submitted to LifeCLEF 2014. This system uses generic features extracted from a convolutional neural network previously used to perform general object classification. We examine the effectiveness of these features to perform plant classification when used in combination with an extremely randomised forest. Using this system, with minimal tuning, we obtained relatively good results with a score of 0.249 on the test set of LifeCLEF 2014.",
"title": ""
},
{
"docid": "231365d1de30f3529752510ec718dd38",
"text": "The lack of reliability of gliding contacts in highly constrained environments induces manufacturers to develop contactless transmission power systems such as rotary transformers. The following paper proposes an optimal design methodology for rotary transformers supplied from a low-voltage source at high temperatures. The method is based on an accurate multidisciplinary analysis model divided into magnetic, thermal and electrical parts, optimized thanks to a sequential quadratic programming method. The technique is used to discuss the design particularities of rotary transformers. Two optimally designed structures of rotary transformers : an iron silicon coaxial one and a ferrite pot core one, are compared.",
"title": ""
},
{
"docid": "f926984412481cf7653ed255a0f6db72",
"text": "One cornerstone of computer security is hardware-based isolation mechanisms, among which an emerging technology named Intel Software Guard Extensions (SGX) offers arguably the strongest security on x86 architecture. Intel SGX enables user-level code to create trusted memory regions named enclaves, which are isolated from the rest of the system, including privileged system software. This strong isolation of SGX, however, forbids sharing any trusted memory between enclaves, making it difficult to implement any features or techniques that must share code or data between enclaves. This dilemma between isolation and sharing is especially challenging to system software for SGX (e.g., library OSes), to which both properties are highly desirable.\n To resolve the tension between isolation and sharing in system software for SGX, especially library OSes, we propose a single-address-space approach, which runs all (user-level) processes and the library OS in a single enclave. This single-enclave architecture enables various memory-sharing features or techniques, thus improving both performance and usability. To enforce inter-process isolation and user-privilege isolation inside the enclave, we design a multi-domain software fault isolation (SFI) scheme, which is unique in its support for two types of domains: 1) data domains, which enable process isolation, and 2) code domains, which enable shared libraries. Our SFI is implemented efficiently by leveraging Intel Memory Protection Extensions (MPX). Experimental results show an average overhead of 10%, thus demonstrating the practicality of our approach.",
"title": ""
},
{
"docid": "a33147bd85b4ecf4f2292e4406abfc26",
"text": "Accident detection systems help reduce fatalities stemming from car accidents by decreasing the response time of emergency responders. Smartphones and their onboard sensors (such as GPS receivers and accelerometers) are promising platforms for constructing such systems. This paper provides three contributions to the study of using smartphone-based accident detection systems. First, we describe solutions to key issues associated with detecting traffic accidents, such as preventing false positives by utilizing mobile context information and polling onboard sensors to detect large accelerations. Second, we present the architecture of our prototype smartphone-based accident detection system and empirically analyze its ability to resist false positives as well as its capabilities for accident reconstruction. Third, we discuss how smartphone-based accident detection can reduce overall traffic congestion and increase the preparedness of emergency responders.",
"title": ""
},
{
"docid": "d5378436042ce2e7913d9071669732b6",
"text": "We propose data profiles as a tool for analyzing the performance of derivative-free optimization solvers when there are constraints on the computational budget. We use performance and data profiles, together with a convergence test that measures the decrease in function value, to analyze the performance of three solvers on sets of smooth, noisy, and piecewise-smooth problems. Our results provide estimates for the performance difference between these solvers, and show that on these problems, the model-based solver tested performs better than the two direct search solvers tested.",
"title": ""
},
{
"docid": "d67e2f13f83e69a9162a6730dede6e9d",
"text": "Sparse coding has been widely applied to learning-based single image super-resolution (SR) and has obtained promising performance by jointly learning effective representations for low-resolution (LR) and high-resolution (HR) image patch pairs. However, the resulting HR images often suffer from ringing, jaggy, and blurring artifacts due to the strong yet ad hoc assumptions that the LR image patch representation is equal to, is linear with, lies on a manifold similar to, or has the same support set as the corresponding HR image patch representation. Motivated by the success of deep learning, we develop a data-driven model coupled deep autoencoder (CDA) for single image SR. CDA is based on a new deep architecture and has high representational capability. CDA simultaneously learns the intrinsic representations of LR and HR image patches and a big-data-driven function that precisely maps these LR representations to their corresponding HR representations. Extensive experimentation demonstrates the superior effectiveness and efficiency of CDA for single image SR compared to other state-of-the-art methods on Set5 and Set14 datasets.",
"title": ""
},
{
"docid": "a8fa56dcb8524cc31feb946cf6d88e02",
"text": "We propose a fraud detection method based on the user accounts visualization and threshold-type detection. The visualization technique employed in our approach is the Self-Organizing Map (SOM). Since the SOM technique in its original form visualizes only the vectors, and the user accounts are represented in our work as the matrices storing a collection of records reflecting the user sequential activities, we propose a method of the matrices visualization on the SOM grid, which constitutes the main contribution of this paper. Furthermore, we propose a method of the detection threshold setting on the basis of the SOM U-matrix. The results of the conducted experimental study on real data in three different research fields confirm the advantages and effectiveness of the proposed approach. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "b25e35dd703d19860bbbd8f92d80bd26",
"text": "Business analytics (BA) systems are an important strategic investment for many organisations and can potentially contribute significantly to firm performance. Establishing strong BA capabilities is currently one of the major concerns of chief information officers. This research project aims to develop a BA capability maturity model (BACMM). The BACMM will help organisations to scope and evaluate their BA initiatives. This research-in-progress paper describes the current BACMM, relates it to existing capability maturity models and explains its theoretical base. It also discusses the design science research approach being used to develop the BACMM and provides details of further work within the research project. Finally, the paper concludes with a discussion of how the BACMM might be used in practice.",
"title": ""
},
{
"docid": "b8e90e97e8522ed45788025ca97ec720",
"text": "The use of Business Intelligence (BI) and Business Analytics for supporting decision-making is widespread in the world of praxis and their relevance for Management Accounting (MA) has been outlined in non-academic literature. Nonetheless, current research on Business Intelligence systems’ implications for the Management Accounting System is still limited. The purpose of this study is to contribute to understanding how BI system implementation and use affect MA techniques and Management Accountants’ role. An explorative field study, which involved BI consultants from Italian consulting companies, was carried out. We used the qualitative field study method since it permits dealing with complex “how” questions and, at the same time, taking into consideration multiple sites thus offering a comprehensive picture of the phenomenon. We found that BI implementation can affect Management Accountants’ expertise and can bring about not only incremental changes in existing Management Accounting techniques but also more relevant ones, by supporting the introduction of new and advanced MA techniques. By identifying changes in the Management Accounting System as well as factors which can prevent or favor a virtuous relationship between BI and Management Accounting Systems this research can be useful both for consultants and for client-companies in effectively managing BI projects.",
"title": ""
},
{
"docid": "1c6cfc9f0be38619ccf91dc0c47ac4d2",
"text": "Multi-label classification, where each instance is assigned to multiple categories, is a prevalent problem in data analysis. However, annotations of multi-label instances are typically more timeconsuming or expensive to obtain than annotations of single-label instances. Though active learning has been widely studied on reducing labeling effort for single-label problems, current research on multi-label active learning remains in a preliminary state. In this paper, we first propose two novel multi-label active learning strategies, a max-margin prediction uncertainty strategy and a label cardinality inconsistency strategy, and then integrate them into an adaptive framework of multi-label active learning. Our empirical results on multiple multilabel data sets demonstrate the efficacy of the proposed active instance selection strategies and the integrated active learning approach.",
"title": ""
},
{
"docid": "e41e5221116a7b63c2238fc4541c1d4c",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii CHAPTER",
"title": ""
},
{
"docid": "3768b0373b9c2c38ad30987fbce92915",
"text": "Image super-resolution (SR) aims to recover high-resolution images from their low-resolution counterparts for improving image analysis and visualization. Interpolation methods, widely used for this purpose, often result in images with blurred edges and blocking effects. More advanced methods such as total variation (TV) retain edge sharpness during image recovery. However, these methods only utilize information from local neighborhoods, neglecting useful information from remote voxels. In this paper, we propose a novel image SR method that integrates both local and global information for effective image recovery. This is achieved by, in addition to TV, low-rank regularization that enables utilization of information throughout the image. The optimization problem can be solved effectively via alternating direction method of multipliers (ADMM). Experiments on MR images of both adult and pediatric subjects demonstrate that the proposed method enhances the details in the recovered high-resolution images, and outperforms methods such as the nearest-neighbor interpolation, cubic interpolation, iterative back projection (IBP), non-local means (NLM), and TV-based up-sampling.",
"title": ""
},
{
"docid": "069636576cbf6c5dd8cead8fff32ea4b",
"text": "Sleep-disordered breathing-comprising obstructive sleep apnoea (OSA), central sleep apnoea (CSA), or a combination of the two-is found in over half of heart failure (HF) patients and may have harmful effects on cardiac function, with swings in intrathoracic pressure (and therefore preload and afterload), blood pressure, sympathetic activity, and repetitive hypoxaemia. It is associated with reduced health-related quality of life, higher healthcare utilization, and a poor prognosis. Whilst continuous positive airway pressure (CPAP) is the treatment of choice for patients with daytime sleepiness due to OSA, the optimal management of CSA remains uncertain. There is much circumstantial evidence that the treatment of OSA in HF patients with CPAP can improve symptoms, cardiac function, biomarkers of cardiovascular disease, and quality of life, but the quality of evidence for an improvement in mortality is weak. For systolic HF patients with CSA, the CANPAP trial did not demonstrate an overall survival or hospitalization advantage for CPAP. A minute ventilation-targeted positive airway therapy, adaptive servoventilation (ASV), can control CSA and improves several surrogate markers of cardiovascular outcome, but in the recently published SERVE-HF randomized trial, ASV was associated with significantly increased mortality and no improvement in HF hospitalization or quality of life. Further research is needed to clarify the therapeutic rationale for the treatment of CSA in HF. Cardiologists should have a high index of suspicion for sleep-disordered breathing in those with HF, and work closely with sleep physicians to optimize patient management.",
"title": ""
}
] |
scidocsrr
|
01e3c13a2c164c03f2ba6091bcd7e390
|
A Model for Anomalies Detection in Internet of Things (IoT) Using Inverse Weight Clustering and Decision Tree
|
[
{
"docid": "a02882240114791b555392f5adda76aa",
"text": "This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint. Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying , microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers. 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe. Data clustering : algorithms and applications / [edited by] Charu C. Aggarwal, Chandan K. Reddy. pages cm.-(Chapman & Hall/CRC data mining and knowledge discovery series) Includes bibliographical references and index.",
"title": ""
}
] |
[
{
"docid": "6d6e3b9ae698aca9981dc3b6dfb11985",
"text": "Several recent papers have tried to address the genetic determination of eye colour via microsatellite linkage, testing of pigmentation candidate gene polymorphisms and the genome wide analysis of SNP markers that are informative for ancestry. These studies show that the OCA2 gene on chromosome 15 is the major determinant of brown and/or blue eye colour but also indicate that other loci will be involved in the broad range of hues seen in this trait in Europeans.",
"title": ""
},
{
"docid": "4bd2cffcd5022ff8cd4067442bcd53e2",
"text": "Over the last years, stream data processing has been gaining atten— tion both in industry and in academia due to its wide range of appli— cations. To fulfill the need for scalable and efficient stream analyt— ics, numerous open source stream data processing systems (SDPSs) have been developed, with high throughput and low latency being their key performance targets. In this paper, we propose a frame— work to evaluate the performance of three SDPSs, namely Apache Storm, Apache Spark, and Apache Flink. Our evaluation focuses in particular on measuring the throughput and latency of windowed operations. For this benchmark, we design workloads based on real—life, industrial use—cases. The main contribution of this work is threefold. First, we give a definition of latency and throughput for stateful operators. Second, we completely separate the system under test and driver, so that the measurement results are closer to actual system performance under real conditions. Third, we build the first driver to test the actual sustainable performance of a system under test. Our detailed evaluation highlights that there is no single winner, but rather, each system excels in individual use—cases.",
"title": ""
},
{
"docid": "e9621784df5009b241c563a54583bab9",
"text": "CONTEXT\nPsychopathic antisocial individuals have previously been characterized by abnormal interhemispheric processing and callosal functioning, but there have been no studies on the structural characteristics of the corpus callosum in this group.\n\n\nOBJECTIVES\nTo assess whether (1) psychopathic individuals with antisocial personality disorder show structural and functional impairments in the corpus callosum, (2) group differences are mirrored by correlations between dimensional measures of callosal structure and psychopathy, (3) callosal abnormalities are associated with affective deficits, and (4) callosal abnormalities are independent of psychosocial deficits.\n\n\nDESIGN\nCase-control study.\n\n\nSETTING\nCommunity sample.\n\n\nPARTICIPANTS\nFifteen men with antisocial personality disorder and high psychopathy scores and 25 matched controls, all from a larger sample of 83 community volunteers.\n\n\nMAIN OUTCOME MEASURES\nStructural magnetic resonance imaging measures of the corpus callosum (volume estimate of callosal white matter, thickness, length, and genu and splenium area), functional callosal measures (2 divided visual field tasks), electrodermal and cardiovascular activity during a social stressor, personality measures of affective and interpersonal deficits, and verbal and spatial ability.\n\n\nRESULTS\nPsychopathic antisocial individuals compared with controls showed a 22.6% increase in estimated callosal white matter volume (P<.001), a 6.9% increase in callosal length (P =.002), a 15.3% reduction in callosal thickness (P =.04), and increased functional interhemispheric connectivity (P =.02). Correlational analyses in the larger unselected sample confirmed the association between antisocial personality and callosal structural abnormalities. Larger callosal volumes were associated with affective and interpersonal deficits, low autonomic stress reactivity, and low spatial ability. Callosal abnormalities were independent of psychosocial deficits.\n\n\nCONCLUSIONS\nCorpus callosum abnormalities in psychopathic antisocial individuals may reflect atypical neurodevelopmental processes involving an arrest of early axonal pruning or increased white matter myelination. These findings may help explain affective deficits and previous findings of abnormal interhemispheric transfer in psychopathic individuals.",
"title": ""
},
{
"docid": "a411780d406e8b720303d18cd6c9df68",
"text": "Functional organization of the lateral temporal cortex in humans is not well understood. We recorded blood oxygenation signals from the temporal lobes of normal volunteers using functional magnetic resonance imaging during stimulation with unstructured noise, frequency-modulated (FM) tones, reversed speech, pseudowords and words. For all conditions, subjects performed a material-nonspecific detection response when a train of stimuli began or ceased. Dorsal areas surrounding Heschl's gyrus bilaterally, particularly the planum temporale and dorsolateral superior temporal gyrus, were more strongly activated by FM tones than by noise, suggesting a role in processing simple temporally encoded auditory information. Distinct from these dorsolateral areas, regions centered in the superior temporal sulcus bilaterally were more activated by speech stimuli than by FM tones. Identical results were obtained in this region using words, pseudowords and reversed speech, suggesting that the speech-tones activation difference is due to acoustic rather than linguistic factors. In contrast, previous comparisons between word and nonword speech sounds showed left-lateralized activation differences in more ventral temporal and temporoparietal regions that are likely involved in processing lexical-semantic or syntactic information associated with words. The results indicate functional subdivision of the human lateral temporal cortex and provide a preliminary framework for understanding the cortical processing of speech sounds.",
"title": ""
},
{
"docid": "700eb7f86bc3b815cddb460ba1e0c92b",
"text": "Information centric network (ICN) is progressively becoming the revolutionary paradigm to the traditional Internet with improving data (content) distribution on the Internet along with global unique names. Some ICN-based architecture, such as named data network (NDN) and content centric network (CCN) has recently been developed to deal with prominent advantages to implement the basic idea of ICN. To improve the Internet services, its architecture design is shifting from host-centric (end-to-end) communication to receive-driven content retrieval. A prominent advantage of this novel architecture is that networks are equipped with transparent in-network caching to accelerate the content dissemination and improve the utilization of network resources. The gigantic increase of global network traffic poses new challenges to CCN caching technologies. It requires extensive flexibility for consumers to get information. One of the most imperative commonalities of CCN design is ubiquitous caching. It is broadly accepted that the in-network caching would improve the performance. ICN cache receives on several new characteristics: cache is ubiquitous, cache is transparent to application, and content to be cached is more significant. This paper presents a complete survey of state-of-art CCN-based probabilistic caching schemes aiming to address the caching issues, with certain focus on minimizing cache redundancy and improving the accessibility of cached content.",
"title": ""
},
{
"docid": "ceb02e24964c29ef1bf03f2fe1ef8e3e",
"text": "In this paper we present initial research to develop a conceptual model for describing data quality effects in the context of Big Data. Despite the importance of data quality for modern businesses, current research on Big Data Quality is limited. It is particularly unknown how to apply previous data quality models to Big Data. Therefore in this paper we review data quality research from several perspectives and apply the data quality model developed by Helfert & Heinrich with its elements of quality of conformance and quality of design to the context of Big Data. We extend this model by analyzing the effect of three Big Data characteristics (Volume, Velocity and Variety) and discuss its application to the context of Smart Cities, as one interesting example in which Big Data is increasingly important. Although this paper provides only propositions and a first conceptual discussion, we believe that the paper can build a foundation for further empirical research to understand Big Data Quality and its implications in practice.",
"title": ""
},
{
"docid": "7c1691fd1140b3975b61f8e2ce3dcd9b",
"text": "In this paper, we consider the evolution of structure within large online social networks. We present a series of measurements of two such networks, together comprising in excess of five million people and ten million friendship links, annotated with metadata capturing the time of every event in the life of the network. Our measurements expose a surprising segmentation of these networks into three regions: singletons who do not participate in the network; isolated communities which overwhelmingly display star structure; and a giant component anchored by a well-connected core region which persists even in the absence of stars.We present a simple model of network growth which captures these aspects of component structure. The model follows our experimental results, characterizing users as either passive members of the network; inviters who encourage offline friends and acquaintances to migrate online; and linkers who fully participate in the social evolution of the network.",
"title": ""
},
{
"docid": "772fc1cf2dd2837227facd31f897dba3",
"text": "Eighty-three brains obtained at autopsy from nondemented and demented individuals were examined for extracellular amyloid deposits and intraneuronal neurofibrillary changes. The distribution pattern and packing density of amyloid deposits turned out to be of limited significance for differentiation of neuropathological stages. Neurofibrillary changes occurred in the form of neuritic plaques, neurofibrillary tangles and neuropil threads. The distribution of neuritic plaques varied widely not only within architectonic units but also from one individual to another. Neurofibrillary tangles and neuropil threads, in contrast, exhibited a characteristic distribution pattern permitting the differentiation of six stages. The first two stages were characterized by an either mild or severe alteration of the transentorhinal layer Pre-α (transentorhinal stages I–II). The two forms of limbic stages (stages III–IV) were marked by a conspicuous affection of layer Pre-α in both transentorhinal region and proper entorhinal cortex. In addition, there was mild involvement of the first Ammon's horn sector. The hallmark of the two isocortical stages (stages V–VI) was the destruction of virtually all isocortical association areas. The investigation showed that recognition of the six stages required qualitative evaluation of only a few key preparations.",
"title": ""
},
{
"docid": "d612aeb7f7572345bab8609571f4030d",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "d9ef259a2a2997a8b447b7c711f7da32",
"text": "Wireless Sensor Networks (WSNs) have attracted much attention in recent years. The potential applications of WSNs are immense. They are used for collecting, storing and sharing sensed data. WSNs have been used for various applications including habitat monitoring, agriculture, nuclear reactor control, security and tactical surveillance. The WSN system developed in this paper is for use in precision agriculture applications, where real time data of climatologically and other environmental properties are sensed and control decisions are taken based on it to modify them. The architecture of a WSN system comprises of a set of sensor nodes and a base station that communicate with each other and gather local information to make global decisions about the physical environment. The sensor network is based on the IEEE 802.15.4 standard and two topologies for this application.",
"title": ""
},
{
"docid": "84c06c5f9a295a136cc2ae0b4471ef8c",
"text": "SENSORS—MULTI-AGENT APPROACH by YUKI ONO (Under the Direction of Walter D. Potter) ABSTRACT This thesis addresses two issues in robotic application: an issue concerned with the verification of how well the existing heuristic methods compensate for uncertainty caused by sensing the unstructured environment, and an issue focusing on the design and implementation of a control system that is easily expandable and portable to another robotic platform aiming to future research and application. Using a robot equipped with a minimal set of sensors such as a camera and infrared sensors, our multi-agent based control system is built to tackle various problems encountered during corridor navigation. The control system consists of four agents: an agent responsible for handling sensors, an agent which identifies a corridor using machine vision techniques, an agent which avoids collisions applying fuzzy logic to proximity data, and an agent responsible for locomotion. In the experiments, the robot’s performance demonstrates the feasibility of a multi-agent approach.",
"title": ""
},
{
"docid": "76e407bc17d0317eae8ff004dc200095",
"text": "Major advances have recently been made in merging language and vision representations. But most tasks considered so far have confined themselves to the processing of objects and lexicalised relations amongst objects (content words). We know, however, that humans (even preschool children) can abstract over raw data to perform certain types of higher-level reasoning, expressed in natural language by function words. A case in point is given by their ability to learn quantifiers, i.e. expressions like few, some and all. From formal semantics and cognitive linguistics, we know that quantifiers are relations over sets which, as a simplification, we can see as proportions. For instance, in most fish are red, most encodes the proportion of fish which are red fish. In this paper, we study how well current language and vision strategies model such relations. We show that state-of-the-art attention mechanisms coupled with a traditional linguistic formalisation of quantifiers gives best performance on the task. Additionally, we provide insights on the role of 'gist' representations in quantification. A 'logical' strategy to tackle the task would be to first obtain a numerosity estimation for the two involved sets and then compare their cardinalities. We however argue that precisely identifying the composition of the sets is not only beyond current state-of-the-art models but perhaps even detrimental to a task that is most efficiently performed by refining the approximate numerosity estimator of the system.",
"title": ""
},
{
"docid": "caac2672c444172f866e5568bbaee251",
"text": "In the setting of secure multiparty computation, a set of parties with private inputs wish to compute some function of their inputs without revealing anything but their output. Over the last decade, the efficiency of secure two-party computation has advanced in leaps and bounds, with speedups of some orders of magnitude, making it fast enough to be of use in practice. In contrast, progress on the case of multiparty computation (with more than two parties) has been much slower, with very little work being done. Currently, the only implemented efficient multiparty protocol has many rounds of communication (linear in the depth of the circuit being computed) and thus is not suited for Internet-like settings where latency is not very low. In this paper, we construct highly efficient constant-round protocols for the setting of multiparty computation for semi-honest adversaries. Our protocols work by constructing a multiparty garbled circuit, as proposed in BMR (Beaver et al., STOC 1990). Our first protocol uses oblivious transfer and constitutes the first concretely-efficient constant-round multiparty protocol for the case of no honest majority. Our second protocol uses BGW, and is significantly more efficient than the FairplayMP protocol (Ben-David et al., CCS 2008) that also uses BGW.\n We ran extensive experimentation comparing our different protocols with each other and with a highly-optimized implementation of semi-honest GMW. Due to our protocol being constant round, it significantly outperforms GMW in Internet-like settings. For example, with 13 parties situated in the Virginia and Ireland Amazon regions and the SHA256 circuit with 90,000 gates and of depth 4000, the overall running time of our protocol is 25 seconds compared to 335 seconds for GMW. Furthermore, our online time is under half a second compared to 330 seconds for GMW.",
"title": ""
},
{
"docid": "ff94a36f6a1420cd0d732976a9a7d10f",
"text": "A basic idea of Dirichlet is to study a collection of interesting quantities {an}n≥1 by means of its Dirichlet series in a complex variable w: ∑ n≥1 ann −w. In this paper we examine this construction when the quantities an are themselves infinite series in a second complex variable s, arising from number theory or representation theory. We survey a body of recent work on such series and present a new conjecture concerning them.",
"title": ""
},
{
"docid": "c05d94b354b1d3a024a87e64d06245f1",
"text": "This paper outlines an innovative game model for learning computational thinking (CT) skills through digital game-play. We have designed a game framework where students can practice and develop their skills in CT with little or no programming knowledge. We analyze how this game supports various CT concepts and how these concepts can be mapped to programming constructs to facilitate learning introductory computer programming. Moreover, we discuss the potential benefits of our approach as a support tool to foster student motivation and abilities in problem solving. As initial evaluation, we provide some analysis of feedback from a survey response group of 25 students who have played our game as a voluntary exercise. Structured empirical evaluation will follow, and the plan for that is briefly described.",
"title": ""
},
{
"docid": "b9c62bd3aa5e6690df15e13d4b007348",
"text": "We introduce a new framework for training deep generative models for high-dimensional conditional density estimation. The Bottleneck Conditional Density Estimator (BCDE) is a variant of the conditional variational autoencoder (CVAE) that employs layer(s) of stochastic variables as the bottleneck between the input x and target y, where both are high-dimensional. Crucially, we propose a new hybrid training method that blends the conditional generative model with a joint generative model. Hybrid blending is the key to effective training of the BCDE, which avoids overfitting and provides a novel mechanism for leveraging unlabeled data. We show that our hybrid training procedure enables models to achieve competitive results in the MNIST quadrant prediction task in the fullysupervised setting, and sets new benchmarks in the semi-supervised regime for MNIST, SVHN, and CelebA.",
"title": ""
},
{
"docid": "1d562cc5517fa367a0f807ce7bb1c897",
"text": "Wireless sensor networks for environmental monitoring and agricultural applications often face long-range requirements at low bit-rates together with large numbers of nodes. This paper presents the design and test of a novel wireless sensor network that combines a large radio range with very low power consumption and cost. Our asymmetric sensor network uses ultralow-cost 40 MHz transmitters and a sensitive software defined radio receiver with multichannel capability. Experimental radio range measurements in two different outdoor environments demonstrate a single-hop range of up to 1.8 km. A theoretical model for radio propagation at 40 MHz in outdoor environments is proposed and validated with the experimental measurements. The reliability and fidelity of network communication over longer time periods is evaluated with a deployment for distributed temperature measurements. Our results demonstrate the feasibility of the transmit-only low-frequency system design approach for future environmental sensor networks. Although there have been several papers proposing the theoretical benefits of this approach, to the best of our knowledge this is the first paper to provide experimental validation of such claims.",
"title": ""
},
{
"docid": "1a7eed6c41824906f947aecbfb4a4a19",
"text": "QoS routing is an important research issue in wireless sensor networks (WSNs), especially for mission-critical monitoring and surveillance systems which requires timely and reliable data delivery. Existing work exploits multipath routing to guarantee both reliability and delay QoS constraints in WSNs. However, the multipath routing approach suffers from a significant energy cost. In this work, we exploit the geographic opportunistic routing (GOR) for QoS provisioning with both end-to-end reliability and delay constraints in WSNs. Existing GOR protocols are not efficient for QoS provisioning in WSNs, in terms of the energy efficiency and computation delay at each hop. To improve the efficiency of QoS routing in WSNs, we define the problem of efficient GOR for multiconstrained QoS provisioning in WSNs, which can be formulated as a multiobjective multiconstraint optimization problem. Based on the analysis and observations of different routing metrics in GOR, we then propose an Efficient QoS-aware GOR (EQGOR) protocol for QoS provisioning in WSNs. EQGOR selects and prioritizes the forwarding candidate set in an efficient manner, which is suitable for WSNs in respect of energy efficiency, latency, and time complexity. We comprehensively evaluate EQGOR by comparing it with the multipath routing approach and other baseline protocols through ns-2 simulation and evaluate its time complexity through measurement on the MicaZ node. Evaluation results demonstrate the effectiveness of the GOR approach for QoS provisioning in WSNs. EQGOR significantly improves both the end-to-end energy efficiency and latency, and it is characterized by the low time complexity.",
"title": ""
},
{
"docid": "509075d64990cf7258c13dd0dfd5e282",
"text": "In recent years we have seen a tremendous growth in applications of passive sensor-enabled RFID technology by researchers; however, their usability in applications such as activity recognition is limited by a key issue associated with their incapability to handle unintentional brownout events leading to missing significant sensed events such as a fall from a chair. Furthermore, due to the need to power and sample a sensor the practical operating range of passive-sensor enabled RFID tags are also limited with respect to passive RFID tags. Although using active or semi-passive tags can provide alternative solutions, they are not without the often undesirable maintenance and limited lifespan issues due to the need for batteries. In this article we propose a new hybrid powered sensor-enabled RFID tag concept which can sustain the supply voltage to the tag circuitry during brownouts and increase the operating range of the tag by combining the concepts from passive RFID tags and semipassive RFID tags, while potentially eliminating shortcomings of electric batteries. We have designed and built our concept, evaluated its desirable properties through extensive experiments and demonstrate its significance in the context of a human activity recognition application.",
"title": ""
},
{
"docid": "786540fad61e862657b778eb57fe1b24",
"text": "OBJECTIVE\nTo compare pharmacokinetics (PK) and pharmacodynamics (PD) of insulin glargine in type 2 diabetes mellitus (T2DM) after evening versus morning administration.\n\n\nRESEARCH DESIGN AND METHODS\nTen T2DM insulin-treated persons were studied during 24-h euglycemic glucose clamp, after glargine injection (0.4 units/kg s.c.), either in the evening (2200 h) or the morning (1000 h).\n\n\nRESULTS\nThe 24-h glucose infusion rate area under the curve (AUC0-24h) was similar in the evening and morning studies (1,058 ± 571 and 995 ± 691 mg/kg × 24 h, P = 0.503), but the first 12 h (AUC0-12h) was lower with evening versus morning glargine (357 ± 244 vs. 593 ± 374 mg/kg × 12 h, P = 0.004), whereas the opposite occurred for the second 12 h (AUC12-24h 700 ± 396 vs. 403 ± 343 mg/kg × 24 h, P = 0.002). The glucose infusion rate differences were totally accounted for by different rates of endogenous glucose production, not utilization. Plasma insulin and C-peptide levels did not differ in evening versus morning studies. Plasma glucagon levels (AUC0-24h 1,533 ± 656 vs. 1,120 ± 344 ng/L/h, P = 0.027) and lipolysis (free fatty acid AUC0-24h 7.5 ± 1.6 vs. 8.9 ± 1.9 mmol/L/h, P = 0.005; β-OH-butyrate AUC0-24h 6.8 ± 4.7 vs. 17.0 ± 11.9 mmol/L/h, P = 0.005; glycerol, P < 0.020) were overall more suppressed after evening versus morning glargine administration.\n\n\nCONCLUSIONS\nThe PD of insulin glargine differs depending on time of administration. With morning administration insulin activity is greater in the first 0-12 h, while with evening administration the activity is greater in the 12-24 h period following dosing. However, glargine PK and plasma C-peptide levels were similar, as well as glargine PD when analyzed by 24-h clock time independent of the time of administration. Thus, the results reflect the impact of circadian changes in insulin sensitivity in T2DM (lower in the night-early morning vs. afternoon hours) rather than glargine per se.",
"title": ""
}
] |
scidocsrr
|
cf95e5d4d89e43ff6629f4e155c8a49b
|
RN/15/07 Causal Impact Analysis Applied to App Releases in Google Play and Windows Phone Store, December 16, 2015
|
[
{
"docid": "054ed84aa377673d1327dedf26c06c59",
"text": "App Stores, such as Google Play or the Apple Store, allow users to provide feedback on apps by posting review comments and giving star ratings. These platforms constitute a useful electronic mean in which application developers and users can productively exchange information about apps. Previous research showed that users feedback contains usage scenarios, bug reports and feature requests, that can help app developers to accomplish software maintenance and evolution tasks. However, in the case of the most popular apps, the large amount of received feedback, its unstructured nature and varying quality can make the identification of useful user feedback a very challenging task. In this paper we present a taxonomy to classify app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques: (1) Natural Language Processing, (2) Text Analysis and (3) Sentiment Analysis to automatically classify app reviews into the proposed categories. We show that the combined use of these techniques allows to achieve better results (a precision of 75% and a recall of 74%) than results obtained using each technique individually (precision of 70% and a recall of 67%).",
"title": ""
}
] |
[
{
"docid": "c54b240ae85efb774152272e63718ea9",
"text": "An effective methodology using satellite high-resolution polarized information to interpret and quantitatively assess various surface ocean phenomena is suggested. Using a sample RADARSAT-2 quad-polarization ocean synthetic aperture radar (SAR) scene, the dual co-polarization (VV and HH) radar data are combined into polarization difference, polarization ratio, and nonpolarized components. As demonstrated, these field quantities provide means to distinguish Bragg scattering mechanism and radar returns from breaking waves. As shown, quantitative characteristics of the surface manifestation of ocean currents, slicks, and wind field features in these dual co-polarization properties are very different and may be effectively used in the development of new SAR detection and discrimination algorithms.",
"title": ""
},
{
"docid": "5f21a1348ad836ded2fd3d3264455139",
"text": "To date, brain imaging has largely relied on X-ray computed tomography and magnetic resonance angiography with limited spatial resolution and long scanning times. Fluorescence-based brain imaging in the visible and traditional near-infrared regions (400-900 nm) is an alternative but currently requires craniotomy, cranial windows and skull thinning techniques, and the penetration depth is limited to 1-2 mm due to light scattering. Here, we report through-scalp and through-skull fluorescence imaging of mouse cerebral vasculature without craniotomy utilizing the intrinsic photoluminescence of single-walled carbon nanotubes in the 1.3-1.4 micrometre near-infrared window. Reduced photon scattering in this spectral region allows fluorescence imaging reaching a depth of >2 mm in mouse brain with sub-10 micrometre resolution. An imaging rate of ~5.3 frames/s allows for dynamic recording of blood perfusion in the cerebral vessels with sufficient temporal resolution, providing real-time assessment of blood flow anomaly in a mouse middle cerebral artery occlusion stroke model.",
"title": ""
},
{
"docid": "4f5ee37e5cd795de1a4a7c01a611e737",
"text": "In our electronically inter-connected society, reliable and user-friendly recognition and verification system is essential in many sectors of our life. The person’s physiological or behavioral characteristics, known as biometrics, are important and vital methods that can be used for identification and verification. Fingerprint recognition is one of the most popular biometric techniques used in automatic personal identification and verification. Many researchers have addressed the fingerprint classification problem and many approaches to automatic fingerprint classification have been presented in the literature; nevertheless, the research on this topic is still very active. Although significant progress has been made in designing automatic fingerprint identification systems over the past two decades, a number of design factors (lack of reliable minutia extraction algorithms, difficulty in quantitatively defining a reliable match between fingerprint images, poor image acquisition, low contrast images, the difficulty of reading the fingerprint for manual workers, etc.) create bottlenecks in achieving the desired performance. Nowadays, investigating the influence of the fingerprint quality on recognition performances also gains more and more attention. A fingerprint is the pattern of ridges and valleys on the surface of a fingertip. Each individual has unique fingerprints. Most fingerprint matching systems are based on four types of fingerprint representation schemes (Fig. 1): grayscale image (Bazen et al., 2000), phase image (Thebaud, 1999), skeleton image (Feng, 2006; Hara & Toyama, 2007), and minutiae (Ratha et al., 2000; Bazen & Gerez, 2003). Due to its distinctiveness, compactness, and compatibility with features used by human fingerprint experts, minutiae-based representation has become the most widely adopted fingerprint representation scheme. The uniqueness of a fingerprint is exclusively determined by the local ridge characteristics and their relationships. The ridges and valleys in a fingerprint alternate, flowing in a local constant direction. The two most prominent local ridge characteristics are: 1) ridge ending and, 2) ridge bifurcation. A ridge ending is defined as the point where a ridge ends abruptly. A ridge bifurcation is defined as the point where a ridge forks or diverges into branch ridges. Collectively, these features are called minutiae. Detailed description of fingerprint minutiae will be given in the next section. The widespread deployment of fingerprint recognition systems in various applications has caused concerns that compromised fingerprint templates may be used to make fake fingers, which could then be used to deceive all fingerprint systems the same person is enrolled in.",
"title": ""
},
{
"docid": "8d4891ac73cdd4cd76e25438634118b2",
"text": "Although software measurement plays an increasingly important role in Software Engineering, there is no consensus yet on many of the concepts and terminology used in this field. Even worse, vocabulary conflicts and inconsistencies can be frequently found amongst the many sources and references commonly used by software measurement researchers and practitioners. This article presents an analysis of the current situation, and provides a comparison framework that can be used to identify and address the discrepancies, gaps, and terminology conflicts that current software measurement proposals present. A basic software measurement ontology is introduced, that aims at contributing to the harmonization of the different software measurement proposals and standards, by providing a coherent set of common concepts used in software measurement. The ontology is also aligned with the metrology vocabulary used in other more mature measurement engineering disciplines. q 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "358588881317bb68c224dd77045b07b8",
"text": "While going deeper has been witnessed to improve the performance of convolutional neural networks (CNN), going smaller for CNN has received increasing attention recently due to its attractiveness for mobile/embedded applications. It remains an active and important topic how to design a small network while retaining the performance of large and deep CNNs (e.g., Inception Nets, ResNets). Albeit there are already intensive studies on compressing the size of CNNs, the considerable drop of performance is still a key concern in many designs. This paper addresses this concern with several new contributions. First, we propose a simple yet powerful method for compressing the size of deep CNNs based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies at different treatments of 1× 1 convolutions and k×k convolutions (k > 1), where we only binarize k × k convolutions into binary patterns. The resulting networks are referred to as pattern networks. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type Nets can be compressed dramatically with marginal drop in performance. Second, in light of the different functionalities of 1×1 (data projection/transformation) and k × k convolutions (pattern extraction), we propose a new block structure codenamed the pattern residual block that adds transformed feature maps generated by 1×1 convolutions to the pattern feature maps generated by k × k convolutions, based on which we design a small network with ∼ 1 million parameters. Combining with our parameter binarization, we achieve better performance on ImageNet than using similar sized networks including recently released Google MobileNets.",
"title": ""
},
{
"docid": "e8207f548a8daac1d8ae261796943f7f",
"text": "OBJECTIVE\nAccurate endoscopic differentiation would enable to resect and discard small and diminutive colonic lesions, thereby increasing cost-efficiency. Current classification systems based on narrow band imaging (NBI), however, do not include neoplastic sessile serrated adenomas/polyps (SSA/Ps). We aimed to develop and validate a new classification system for endoscopic differentiation of adenomas, hyperplastic polyps and SSA/Ps <10 mm.\n\n\nDESIGN\nWe developed the Workgroup serrAted polypS and Polyposis (WASP) classification, combining the NBI International Colorectal Endoscopic classification and criteria for differentiation of SSA/Ps in a stepwise approach. Ten consultant gastroenterologists predicted polyp histology, including levels of confidence, based on the endoscopic aspect of 45 polyps, before and after participation in training in the WASP classification. After 6 months, the same endoscopists predicted polyp histology of a new set of 50 polyps, with a ratio of lesions comparable to daily practice.\n\n\nRESULTS\nThe accuracy of optical diagnosis was 0.63 (95% CI 0.54 to 0.71) at baseline, which improved to 0.79 (95% CI 0.72 to 0.86, p<0.001) after training. For polyps diagnosed with high confidence the accuracy was 0.73 (95% CI 0.64 to 0.82), which improved to 0.87 (95% CI 0.80 to 0.95, p<0.01). The accuracy of optical diagnosis after 6 months was 0.76 (95% CI 0.72 to 0.80), increasing to 0.84 (95% CI 0.81 to 0.88) considering high confidence diagnosis. The combined negative predictive value with high confidence of diminutive neoplastic lesions (adenomas and SSA/Ps together) was 0.91 (95% CI 0.83 to 0.96).\n\n\nCONCLUSIONS\nWe developed and validated the first integrative classification method for endoscopic differentiation of small and diminutive adenomas, hyperplastic polyps and SSA/Ps. In a still image evaluation setting, introduction of the WASP classification significantly improved the accuracy of optical diagnosis overall as well as SSA/P in particular, which proved to be sustainable after 6 months.",
"title": ""
},
{
"docid": "53d41fb8e188add204ba96669715b49a",
"text": "A nationwide survey was conducted to investigate the prevalence of video game addiction and problematic video game use and their association with physical and mental health. An initial sample comprising 2,500 individuals was randomly selected from the Norwegian National Registry. A total of 816 (34.0 percent) individuals completed and returned the questionnaire. The majority (56.3 percent) of respondents used video games on a regular basis. The prevalence of video game addiction was estimated to be 0.6 percent, with problematic use of video games reported by 4.1 percent of the sample. Gender (male) and age group (young) were strong predictors for problematic use of video games. A higher proportion of high frequency compared with low frequency players preferred massively multiplayer online role-playing games, although the majority of high frequency players preferred other game types. Problematic use of video games was associated with lower scores on life satisfaction and with elevated levels of anxiety and depression. Video game use was not associated with reported amount of physical exercise.",
"title": ""
},
{
"docid": "4283c9b6b679913648f758abeba2ab93",
"text": "A significant goal of natural language processing (NLP) is to devise a system capable of machine understanding of text. A typical system can be tested on its ability to answer questions based on a given context document. One appropriate dataset for such a system is the Stanford Question Answering Dataset (SQuAD), a crowdsourced dataset of over 100k (question, context, answer) triplets. In this work, we focused on creating such a question answering system through a neural net architecture modeled after the attentive reader and sequence attention mix models.",
"title": ""
},
{
"docid": "bd95c693f27fc28575fee7224092582f",
"text": "Corresponding author: Sarfraz Fayaz Khan Department of Management Information Systems, College of Commerce and Business Administration, Dhofar University, Salalah, Sultanate of Oman Email: [email protected] [email protected] Abstract: Internet of things has acquired attention all over the globe. It has transformed the agricultural field and allowed farmers to compete with massive issues they face. The aim of this paper is to review the various challenges and opportunities associated with the applications of internet of things in agricultural sector. This research makes use of secondary sources that have been gathered from existing academic literature such as journals, books, articles, magazines, internet, newsletter, company publications and whitepapers. Applications reviewed in this research are about crop sensing, mapping and monitoring the croplands pattern, managing and controlling with the help of radio frequency identification and real-time monitoring of environment. Some of the challenges that were taken into consideration for reviewing the applications of internet of things are software complexity, security, lack of supporting infrastructure and technical skill requirement. Complexity in the software has to be rectified in order to support the IoT network. Therefore software must be developed as user-friendly for improving the farming, production and quality of the crop. Security is the major threat in the IoT applications. Security has to be enhanced through proper access control, data confidentiality and user authentication. Technical skill is required for farming to enhance the organizational abilities and to perform the farming functions, solving problems and more. Proper supporting infrastructure can be developed with proper internet availability and connectivity. Some of the opportunities were taken for reviewing the applications of internet of things are low power wireless sensor, better connectivity, operational efficiency and remote management.",
"title": ""
},
{
"docid": "09d6fc2e332cf611d911310b0d49e3bf",
"text": "KEA is a Diffie-Hellman based key-exchange protocol developed by NSA which provides mutual authentication for the parties. It became publicly available in 1998 and since then it was neither attacked nor proved to be secure. We analyze the security of KEA and find that the original protocol is susceptible to a class of attacks. On the positive side, we present a simple modification of the protocol which makes KEA secure. We prove that the modified protocol, called KEA+, satisfies the strongest security requirements for authenticated key-exchange and that it retains some security even if a secret key of a party is leaked. Our security proof is in the random oracle model and uses the Gap Diffie-Hellman assumption. Finally, we show how to add a key confirmation feature to KEA+ (we call the version with key confirmation KEA+C) and discuss the security properties of KEA+C.",
"title": ""
},
{
"docid": "e3913c904630d23b7133978a1116bc57",
"text": "A novel self-substrate-triggered (SST) technique is proposed to solve the nonuniform turn-on issue of the multi-finger GGNMOS for ESD protection. The first turned-on center finger is used to trigger on all fingers in the GGNMOS structure with self-substrate-triggered technique. So, the turn-on uniformity and ESD robustness of GGNMOS can be greatly improved by the new proposed self-substrate-triggered technique.",
"title": ""
},
{
"docid": "e9f9a7c506221bacf966808f54c4f056",
"text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.",
"title": ""
},
{
"docid": "3165b876e7e1bcdccc261593235078f8",
"text": "The next challenge of game AI lies in Real Time Strategy (RTS) games. RTS games provide partially observable gaming environments, where agents interact with one another in an action space much larger than that of GO. Mastering RTS games requires both strong macro strategies and delicate micro level execution. Recently, great progress has been made in micro level execution, while complete solutions for macro strategies are still lacking. In this paper, we propose a novel learning-based Hierarchical Macro Strategy model for mastering MOBA games, a sub-genre of RTS games. Trained by the Hierarchical Macro Strategy model, agents explicitly make macro strategy decisions and further guide their micro level execution. Moreover, each of the agents makes independent strategy decisions, while simultaneously communicating with the allies through leveraging a novel imitated crossagent communication mechanism. We perform comprehensive evaluations on a popular 5v5 Multiplayer Online Battle Arena (MOBA) game. Our 5-AI team achieves a 48% winning rate against human player teams which are ranked top 1% in the player ranking system.",
"title": ""
},
{
"docid": "5718c733a80805c5dbb4333c2d298980",
"text": "{Portions reprinted, with permission from Keim et al. #2001 IEEE Abstract Simple presentation graphics are intuitive and easy-to-use, but show only highly aggregated data presenting only a very small number of data values (as in the case of bar charts) and may have a high degree of overlap occluding a significant portion of the data values (as in the case of the x-y plots). In this article, the authors therefore propose a generalization of traditional bar charts and x-y plots, which allows the visualization of large amounts of data. The basic idea is to use the pixels within the bars to present detailed information of the data records. The so-called pixel bar charts retain the intuitiveness of traditional bar charts while allowing very large data sets to be visualized in an effective way. It is shown that, for an effective pixel placement, a complex optimization problem has to be solved. The authors then present an algorithm which efficiently solves the problem. The application to a number of real-world ecommerce data sets shows the wide applicability and usefulness of this new idea, and a comparison to other well-known visualization techniques (parallel coordinates and spiral techniques) shows a number of clear advantages. Information Visualization (2002) 1, 20 – 34. DOI: 10.1057/palgrave/ivs/9500003",
"title": ""
},
{
"docid": "52d4f95b6dc6da7d5dd54003b0bc5fbf",
"text": "Leadership is a process directing to a target of which followers, the participators are shared. For this reason leadership has an important effect on succeeding organizational targets. More importance is given to the leadership studies in order to increase organizational success each day. One of the leadership researches that attracts attention recently is spiritual leadership. Spiritual leadership (SL) is important for imposing ideal to the followers and giving meaning to the works they do. Focusing on SL that has recently taken its place in leadership literature, this study looks into what extend faculty members teaching at Faculty of Education display SL qualities. The study is in descriptive scanning model. 1819 students studying at Kocaeli University Faculty of Education in 2009-2010 academic year constitute the universe of the study. Observing leadership qualities takes long time. Therefore, the sample of the study is determined by deliberate sampling method and includes 432 students studying at the last year of the faculty. Data regarding faculty members' SL qualities were collected using a questionnaire adapted from Fry's (2003) 'Spiritual Leadership Scale'. Consequently, university students think that academic stuff shows the features of SL and its sub dimensions in a medium level. According to students, academicians show attitudes related to altruistic love rather than faith and vision. It is found that faculty members couldn't display leadership qualities enough according to the students at the end of the study. © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "413c7d8931ade3b5257616ccba7f3b94",
"text": "New and unseen polymorphic malware, zero-day attacks, or other types of advanced persistent threats are usually not detected by signature-based security devices, firewalls, or anti-viruses. This represents a challenge to the network security industry as the amount and variability of incidents has been increasing. Consequently, this complicates the design of learning-based detection systems relying on features extracted from network data. The problem is caused by different joint distribution of observation (features) and labels in the training and testing data sets. This paper proposes a classification system designed to detect both known as well as previouslyunseen security threats. The classifiers use statistical feature representation computed from the network traffic and learn to recognize malicious behavior. The representation is designed and optimized to be invariant to the most common changes of malware behaviors. This is achieved in part by a feature histogram constructed for each group of HTTP flows (proxy log records) of a user visiting a particular hostname and in part by a feature self-similarity matrix computed for each group. The parameters of the representation (histogram bins) are optimized and learned based on the training samples along with the classifiers. The proposed classification system was deployed on large corporate networks, where it detected 2,090 new and unseen variants of malware samples with 90% precision (9 of 10 alerts were malicious), which is a considerable improvement when compared to the current flow-based approaches or existing signaturebased web security devices.",
"title": ""
},
{
"docid": "facf85be0ae23eacb7e7b65dd5c45b33",
"text": "We review evidence for partially segregated networks of brain areas that carry out different attentional functions. One system, which includes parts of the intraparietal cortex and superior frontal cortex, is involved in preparing and applying goal-directed (top-down) selection for stimuli and responses. This system is also modulated by the detection of stimuli. The other system, which includes the temporoparietal cortex and inferior frontal cortex, and is largely lateralized to the right hemisphere, is not involved in top-down selection. Instead, this system is specialized for the detection of behaviourally relevant stimuli, particularly when they are salient or unexpected. This ventral frontoparietal network works as a 'circuit breaker' for the dorsal system, directing attention to salient events. Both attentional systems interact during normal vision, and both are disrupted in unilateral spatial neglect.",
"title": ""
},
{
"docid": "db179cdd0e928b5c2a6848d3aca35a53",
"text": "The Brain Storm Optimization (BSO) algorithm is a powerful optimization approach that has been proposed in the past few years. A number of improvements have been previously proposed for BSO with successful application for real-world problems. In this paper, the implementation of a Cooperative Co-evolutionary BSO (CCBSO) algorithm that is based on the explicit space decomposition approach is investigated. The improved performance of CCBSO is illustrated based on experimental comparisons with other BSO variants on a library of 20 well-known classical benchmark functions.",
"title": ""
},
{
"docid": "f4319cf0c9632343edf6754968a6a1f7",
"text": "Distantly supervised relation extraction has been widely used to find novel relational facts from plain text. To predict the relation between a pair of two target entities, existing methods solely rely on those direct sentences containing both entities. In fact, there are also many sentences containing only one of the target entities, which also provide rich useful information but not yet employed by relation extraction. To address this issue, we build inference chains between two target entities via intermediate entities, and propose a path-based neural relation extraction model to encode the relational semantics from both direct sentences and inference chains. Experimental results on realworld datasets show that, our model can make full use of those sentences containing only one target entity, and achieves significant and consistent improvements on relation extraction as compared with strong baselines. The source code of this paper can be obtained from https:// github.com/thunlp/PathNRE.",
"title": ""
},
{
"docid": "525cd643153305af852f2df7b3f48ffb",
"text": "3D modeling of building architecture from mobile scanning is a rapidly advancing field. These models are used in virtual reality, gaming, navigation, and simulation applications. State-of-the-art scanning produces accurate point-clouds of building interiors containing hundreds of millions of points. This paper presents several scalable surface reconstruction techniques to generate watertight meshes that preserve sharp features in the geometry common to buildings. Our techniques can automatically produce high-resolution meshes that preserve the fine detail of the environment by performing a ray-carving volumetric approach to surface reconstruction. We present methods to automatically generate 2D floor plans of scanned building environments by detecting walls and room separations. These floor plans can be used to generate simplified 3D meshes that remove furniture and other temporary objects. We propose a method to texture-map these models from captured camera imagery to produce photo-realistic models. We apply these techniques to several data sets of building interiors, including multi-story datasets.",
"title": ""
}
] |
scidocsrr
|
1d186de2dc4c07167a93a4ee60069cdb
|
Boosting bottom-up and top-down visual features for saliency estimation
|
[
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
},
{
"docid": "825b567c1a08d769aa334b707176f607",
"text": "A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.",
"title": ""
},
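The combined BU*TD model described above multiplies bottom-up saliency with learned top-down relevance. The sketch below shows only a generic pointwise combination of two normalized maps; the actual feature channels, the learning of the top-down component, and the normalization scheme of the paper are not reproduced, and the map sizes are made up.

```python
import numpy as np

def normalize(m):
    """Scale a map to [0, 1]; a constant map becomes all zeros."""
    m = m.astype(float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def combine_bu_td(bu_map, td_map):
    """Pointwise product of normalized bottom-up and top-down maps,
    one simple way to realize a BU*TD combination."""
    return normalize(normalize(bu_map) * normalize(td_map))

bu = np.random.rand(48, 64)   # stand-in for a feature-based saliency map
td = np.random.rand(48, 64)   # stand-in for a task-relevance map
sal = combine_bu_td(bu, td)
print(sal.shape, sal.min(), sal.max())
```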
{
"docid": "bf9ed2160f4f3132206c1651dadb592e",
"text": "In this paper, we present a probabilistic multi-task learning approach for visual saliency estimation in video. In our approach, the problem of visual saliency estimation is modeled by simultaneously considering the stimulus-driven and task-related factors in a probabilistic framework. In this framework, a stimulus-driven component simulates the low-level processes in human vision system using multi-scale wavelet decomposition and unbiased feature competition; while a task-related component simulates the high-level processes to bias the competition of the input features. Different from existing approaches, we propose a multi-task learning algorithm to learn the task-related “stimulus-saliency” mapping functions for each scene. The algorithm also learns various fusion strategies, which are used to integrate the stimulus-driven and task-related components to obtain the visual saliency. Extensive experiments were carried out on two public eye-fixation datasets and one regional saliency dataset. Experimental results show that our approach outperforms eight state-of-the-art approaches remarkably.",
"title": ""
}
] |
[
{
"docid": "8fcc9f13f34b03d68f59409b2e3b007a",
"text": "Despite defensive advances, malicious software (malware) remains an ever present cyber-security threat. Cloud environments are far from malware immune, in that: i) they innately support the execution of remotely supplied code, and ii) escaping their virtual machine (VM) confines has proven relatively easy to achieve in practice. The growing interest in clouds by industries and governments is also creating a core need to be able to formally address cloud security and privacy issues. VM introspection provides one of the core cyber-security tools for analyzing the run-time behaviors of code. Traditionally, introspection approaches have required close integration with the underlying hypervisors and substantial re-engineering when OS updates and patches are applied. Such heavy-weight introspection techniques, therefore, are too invasive to fit well within modern commercial clouds. Instead, lighter-weight introspection techniques are required that provide the same levels of within-VM observability but without the tight hypervisor and OS patch-level integration. This work introduces Maitland as a prototype proof-of-concept implementation a lighter-weight introspection tool, which exploits paravirtualization to meet these end-goals. The work assesses Maitland's performance, highlights its use to perform packer-independent malware detection, and assesses whether, with further optimizations, Maitland could provide a viable approach for introspection in commercial clouds.",
"title": ""
},
{
"docid": "64de73be55c4b594934b0d1bd6f47183",
"text": "Smart grid has emerged as the next-generation power grid via the convergence of power system engineering and information and communication technology. In this article, we describe smart grid goals and tactics, and present a threelayer smart grid network architecture. Following a brief discussion about major challenges in smart grid development, we elaborate on smart grid cyber security issues. We define a taxonomy of basic cyber attacks, upon which sophisticated attack behaviors may be built. We then introduce fundamental security techniques, whose integration is essential for achieving full protection against existing and future sophisticated security attacks. By discussing some interesting open problems, we finally expect to trigger more research efforts in this emerging area.",
"title": ""
},
{
"docid": "54c8a8669b133e23035d93aabdc01a54",
"text": "The proposed antenna topology is an interesting radiating element, characterized by broadband or multiband capabilities. The exponential and soft/tapered design of the edge transitions and feeding makes it a challenging item to design and tune, leading though to impressive results. The antenna is build on Rogers RO3010 material. The bands in which the antenna works are GPS and Galileo (1.57 GHz), UMTS (1.8–2.17 GHz) and ISM 2.4 GHz (Bluetooth WiFi). The purpose of such an antenna is to be embedded in an Assisted GPS (A-GPS) reference station. Such a device serves as a fix GPS reference distributing the positioning information to mobile device users and delivering at the same time services via GSM network standards or via Wi-Fi / Bluetooth connections.",
"title": ""
},
{
"docid": "aca12127e8223126af8a2b9f7cd08bbf",
"text": "OBJECTIVES\nTo review research published before and after the passage of the Patient Protection and Affordable Care Act (2010) examining barriers in seeking or accessing health care in rural populations in the USA.\n\n\nSTUDY DESIGN\nThis literature review was based on a comprehensive search for all literature researching rural health care provision and access in the USA.\n\n\nMETHODS\nPubmed, Proquest Allied Nursing and Health Literature, National Rural Health Association (NRHA) Resource Center and Google Scholar databases were searched using the Medical Subject Headings (MeSH) 'Rural Health Services' and 'Rural Health.' MeSH subtitle headings used were 'USA,' 'utilization,' 'trends' and 'supply and distribution.' Keywords added to the search parameters were 'access,' 'rural' and 'health care.' Searches in Google Scholar employed the phrases 'health care disparities in the USA,' inequalities in 'health care in the USA,' 'health care in rural USA' and 'access to health care in rural USA.' After eliminating non-relevant articles, 34 articles were included.\n\n\nRESULTS\nSignificant differences in health care access between rural and urban areas exist. Reluctance to seek health care in rural areas was based on cultural and financial constraints, often compounded by a scarcity of services, a lack of trained physicians, insufficient public transport, and poor availability of broadband internet services. Rural residents were found to have poorer health, with rural areas having difficulty in attracting and retaining physicians, and maintaining health services on a par with their urban counterparts.\n\n\nCONCLUSIONS\nRural and urban health care disparities require an ongoing program of reform with the aim to improve the provision of services, promote recruitment, training and career development of rural health care professionals, increase comprehensive health insurance coverage and engage rural residents and healthcare providers in health promotion.",
"title": ""
},
{
"docid": "509f71d704e5e721642cc18eebd240c0",
"text": "This paper presents an approach to the lane recognition using on-vehicle LIDAR. It detests the objects by 2D scanning and collects the range and reflectivity data in each scanning direction. We developed the lane recognition algorithm with these data, in which the lane curvature, yaw angle and offset are calculated by using the Hough transformation, and the lane width is calculated by statistical procedure. Next the lane marks are tracked by the extended Kalman filter. Then we test the performance of the lane recognition and the good results are achieved. Finally, we show the result of the road environment recognition applying the lane recognition by LIDAR",
"title": ""
},
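The lane recognition described above fits lane parameters with a Hough transformation and then tracks the lane marks with an extended Kalman filter. As a rough illustration of the Hough step only, the sketch below votes 2D scan points into a (theta, rho) accumulator and returns the dominant line; the reflectivity handling, curvature/offset estimation and EKF tracking are omitted, and all sizes and thresholds are invented for the example.

```python
import numpy as np

def hough_line_fit(points, n_theta=180, n_rho=200, rho_max=20.0):
    """Vote each 2D point into a (theta, rho) accumulator and return the
    dominant line (theta, rho) with x*cos(theta) + y*sin(theta) = rho.
    A toy stand-in for the Hough step of a lane-mark detector."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(((rhos + rho_max) / (2 * rho_max) * n_rho).astype(int), 0, n_rho - 1)
        acc[np.arange(n_theta), idx] += 1
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    rho = -rho_max + (ri + 0.5) * (2 * rho_max) / n_rho
    return thetas[ti], rho

# Synthetic "lane mark" points roughly along x = 3 (theta ~ 0, rho ~ 3).
pts = [(3.0 + np.random.normal(0, 0.05), y) for y in np.linspace(0, 15, 60)]
theta, rho = hough_line_fit(pts)
print(round(theta, 2), round(rho, 2))
```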
{
"docid": "d79125db077fdde79653feaf987eb6a0",
"text": "This paper focuses on the overall task of recommending to the chemist candidate molecules (reactants) necessary to synthesize a given target molecule (product), which is a novel application as well as an important step for the chemist to find a synthesis route to generate the product. We formulate this task as a link-prediction problem over a so-called Network of Organic Chemistry (NOC) that we have constructed from 8 million chemical reactions described in the US patent literature between 1976 and 2013. We leverage state-of-the-art factorization algorithms for recommender systems to solve this task. Our empirical evaluation demonstrates that Factorization Machines, trained with chemistry-specific knowledge, outperforms current methods based on similarity of chemical structures.",
"title": ""
},
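The passage above scores candidate reactant-product links with a factorization machine. The sketch below only evaluates the standard second-order FM scoring function, y(x) = w0 + Σ_i w_i x_i + Σ_{i<j} ⟨v_i, v_j⟩ x_i x_j, via the usual O(kn) reformulation; the chemistry-specific features, the Network of Organic Chemistry construction and the training procedure are not shown, and the dimensions are placeholders.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order factorization machine score for one feature vector x.
    V holds one k-dimensional latent vector per feature; the pairwise term
    uses the identity
        sum_{i<j} <v_i, v_j> x_i x_j
          = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]."""
    linear = w0 + w @ x
    s = V.T @ x                     # shape (k,)
    s2 = (V ** 2).T @ (x ** 2)      # shape (k,)
    pairwise = 0.5 * np.sum(s ** 2 - s2)
    return linear + pairwise

rng = np.random.default_rng(1)
n_features, k = 10, 4                                     # hypothetical sizes
x = rng.integers(0, 2, size=n_features).astype(float)     # e.g. indicator features for a link
w0, w, V = 0.1, rng.normal(size=n_features), rng.normal(size=(n_features, k))
print(round(fm_score(x, w0, w, V), 3))
```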
{
"docid": "313ded9d63967fd0c8bc6ca164ce064a",
"text": "This paper presents a 0.35-mum SiGe BiCMOS VCO IC exhibiting a linear VCO gain (Kvco) for 5-GHz band application. To realize a linear Kvco, a novel resonant circuit is proposed. The measured Kvco changes from 224 MHz/V to 341 MHz/V. The ratio of the maximum Kvco to the minimum one is 1.5 which is less than one-half of that of a conventional VCO. The VCO oscillation frequency range is from 5.45 GHz to 5.95 GHz, the tuning range is 8.8 %, and the dc current consumption is 3.4 mA at a supply voltage of 3.0 V. The measured phase noise is -116 dBc/Hz at 1MHz offset, which is similar to the conventional VCO",
"title": ""
},
{
"docid": "5c48c8a2a20408775f5eaf4f575d5031",
"text": "In this paper we present a computational cognitive model of task interruption and resumption, focusing on the effects of the problem state bottleneck. Previous studies have shown that the disruptiveness of interruptions is for an important part determined by three factors: interruption duration, interrupting-task complexity, and moment of interruption. However, an integrated theory of these effects is still missing. Based on previous research into multitasking, we propose a first step towards such a theory in the form of a process model that attributes these effects to problem state requirements of both the interrupted and the interrupting task. Subsequently, we tested two predictions of this model in two experiments. The experiments confirmed that problem state requirements are an important predictor for the disruptiveness of interruptions. This suggests that interfaces should be designed to a) interrupt users at low-problem state moments and b) maintain the problem state for the user when interrupted.",
"title": ""
},
{
"docid": "2ba69997f51aa61ffeccce33b2e69054",
"text": "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at https: //sites.google.com/view/simopt.",
"title": ""
},
{
"docid": "708c9b97f4a393ac49688d913b1d2cc6",
"text": "Cognitive NLP systemsi.e., NLP systems that make use of behavioral data augment traditional text-based features with cognitive features extracted from eye-movement patterns, EEG signals, brain-imaging etc.. Such extraction of features is typically manual. We contend that manual extraction of features may not be the best way to tackle text subtleties that characteristically prevail in complex classification tasks like sentiment analysis and sarcasm detection, and that even the extraction and choice of features should be delegated to the learning system. We introduce a framework to automatically extract cognitive features from the eye-movement / gaze data of human readers reading the text and use them as features along with textual features for the tasks of sentiment polarity and sarcasm detection. Our proposed framework is based on Convolutional Neural Network (CNN). The CNN learns features from both gaze and text and uses them to classify the input text. We test our technique on published sentiment and sarcasm labeled datasets, enriched with gaze information, to show that using a combination of automatically learned text and gaze features often yields better classification performance over (i) CNN based systems that rely on text input alone and (ii) existing systems that rely on handcrafted gaze and textual features.",
"title": ""
},
{
"docid": "237345020161bab7ce0b0bba26c5cc98",
"text": "This paper addresses the difficulty of designing 1-V capable analog circuits in standard digital complementary metal–oxide–semiconductor (CMOS) technology. Design techniques for facilitating 1-V operation are discussed and 1-V analog building block circuits are presented. Most of these circuits use the bulk-driving technique to circumvent the metal– oxide–semiconductor field-effect transistor turn-on (threshold) voltage requirement. Finally, techniques are combined within a 1-V CMOS operational amplifier with rail-to-rail input and output ranges. While consuming 300 W, the 1-V rail-to-rail CMOS op amp achieves 1.3-MHz unity-gain frequency and 57 phase margin for a 22-pF load capacitance.",
"title": ""
},
{
"docid": "ec237c01100bf6afa26f3b01a62577f3",
"text": "Polyphenols are secondary metabolites of plants and are generally involved in defense against ultraviolet radiation or aggression by pathogens. In the last decade, there has been much interest in the potential health benefits of dietary plant polyphenols as antioxidant. Epidemiological studies and associated meta-analyses strongly suggest that long term consumption of diets rich in plant polyphenols offer protection against development of cancers, cardiovascular diseases, diabetes, osteoporosis and neurodegenerative diseases. Here we present knowledge about the biological effects of plant polyphenols in the context of relevance to human health.",
"title": ""
},
{
"docid": "9d4b97f66055979079940b267257758f",
"text": "A model that predicts the static friction for elastic-plastic contact of rough surface presented. The model incorporates the results of accurate finite element analyses elastic-plastic contact, adhesion and sliding inception of a single asperity in a statis representation of surface roughness. The model shows strong effect of the externa and nominal contact area on the static friction coefficient in contrast to the classical of friction. It also shows that the main dimensionless parameters affecting the s friction coefficient are the plasticity index and adhesion parameter. The effect of adh on the static friction is discussed and found to be negligible at plasticity index va larger than 2. It is shown that the classical laws of friction are a limiting case of present more general solution and are adequate only for high plasticity index and n gible adhesion. Some potential limitations of the present model are also discussed ing to possible improvements. A comparison of the present results with those obt from an approximate CEB friction model shows substantial differences, with the l severely underestimating the static friction coefficient. @DOI: 10.1115/1.1609488 #",
"title": ""
},
{
"docid": "0b22d7708437c47d5e83ea9fc5f24406",
"text": "The American Association for Respiratory Care has declared a benchmark for competency in mechanical ventilation that includes the ability to \"apply to practice all ventilation modes currently available on all invasive and noninvasive mechanical ventilators.\" This level of competency presupposes the ability to identify, classify, compare, and contrast all modes of ventilation. Unfortunately, current educational paradigms do not supply the tools to achieve such goals. To fill this gap, we expand and refine a previously described taxonomy for classifying modes of ventilation and explain how it can be understood in terms of 10 fundamental constructs of ventilator technology: (1) defining a breath, (2) defining an assisted breath, (3) specifying the means of assisting breaths based on control variables specified by the equation of motion, (4) classifying breaths in terms of how inspiration is started and stopped, (5) identifying ventilator-initiated versus patient-initiated start and stop events, (6) defining spontaneous and mandatory breaths, (7) defining breath sequences (8), combining control variables and breath sequences into ventilatory patterns, (9) describing targeting schemes, and (10) constructing a formal taxonomy for modes of ventilation composed of control variable, breath sequence, and targeting schemes. Having established the theoretical basis of the taxonomy, we demonstrate a step-by-step procedure to classify any mode on any mechanical ventilator.",
"title": ""
},
{
"docid": "0ce82ead0954b99d811b9f50eee76abc",
"text": "Convolutional Neural Networks (CNNs) dominate various computer vision tasks since Alex Krizhevsky showed that they can be trained effectively and reduced the top-5 error from 26.2 % to 15.3 % on the ImageNet large scale visual recognition challenge. Many aspects of CNNs are examined in various publications, but literature about the analysis and construction of neural network architectures is rare. This work is one step to close this gap. A comprehensive overview over existing techniques for CNN analysis and topology construction is provided. A novel way to visualize classification errors with confusion matrices was developed. Based on this method, hierarchical classifiers are described and evaluated. Additionally, some results are confirmed and quantified for CIFAR-100. For example, the positive impact of smaller batch sizes, averaging ensembles, data augmentation and test-time transformations on the accuracy. Other results, such as the positive impact of learned color transformation on the test accuracy could not be confirmed. A model which has only one million learned parameters for an input size of 32× 32× 3 and 100 classes and which beats the state of the art on the benchmark dataset Asirra, GTSRB, HASYv2 and STL-10 was developed.",
"title": ""
},
{
"docid": "32d79366936e301c44ae4ac11784e9d8",
"text": "A vast literature describes transformational leadership in terms of leader having charismatic and inspiring personality, stimulating followers, and providing them with individualized consideration. A considerable empirical support exists for transformation leadership in terms of its positive effect on followers with respect to criteria like effectiveness, extra role behaviour and organizational learning. This study aims to explore the effect of transformational leadership characteristics on followers’ job satisfaction. Survey method was utilized to collect the data from the respondents. The study reveals that individualized consideration and intellectual stimulation affect followers’ job satisfaction. However, intellectual stimulation is positively related with job satisfaction and individualized consideration is negatively related with job satisfaction. Leader’s charisma or inspiration was found to be having no affect on the job satisfaction. The three aspects of transformational leadership were tested against job satisfaction through structural equation modeling using Amos.",
"title": ""
},
{
"docid": "5dfd057e7abc9eda57d031fc0f922505",
"text": "Collective behaviour is often characterised by the so-called “coordination paradox” : Looking at individual ants, for example, they do not seem to cooperate or communicate explicitly, but nevertheless at the social level cooperative behaviour, such as nest building, emerges, apparently without any central coordination. In the case of social insects such emergent coordination has been explained by the theory of stigmergy, which describes how individuals can effect the behaviour of others (and their own) through artefacts, i.e. the product of their own activity (e.g., building material in the ants’ case). Artefacts clearly also play a strong role in human collective behaviour, which has been emphasised, for example, by proponents of activity theory and distributed cognition. However, the relation between theories of situated/social cognition and theories of social insect behaviour has so far received relatively li ttle attention in the cognitive science literature. This paper aims to take a step in this direction by comparing three theoretical frameworks for the study of cognition in the context of agent-environment interaction (activity theory, situated action, and distributed cognition) to each other and to the theory of stigmergy as a possible minimal common ground. The comparison focuses on what each of the four theories has to say about the role/nature of (a) the agents involved in collective behaviour, (b) their environment, (c) the collective activities addressed, and (d) the role that artefacts play in the interaction between agents and their environments, and in particular in the coordination",
"title": ""
},
{
"docid": "b03df3dbdac7279e4fe73ef5388b570b",
"text": "In this paper, we formulate the fuzzy perceptive model for discounted Markov decision processes in which the perception for transition probabilities is described by fuzzy sets. The optimal expected reward, called a fuzzy perceptive value, is characterized and calculated by a new fuzzy relation. As a numerical example, a machine maintenance problem is considered.",
"title": ""
},
{
"docid": "538ad3f32bbf333d73e619efc8ab4e9c",
"text": "In order to learn effective control policies for dynamical systems, policy search methods must be able to discover successful executions of the desired task. While random exploration can work well in simple domains, complex and highdimensional tasks present a serious challenge, particularly when combined with high-dimensional policies that make parameter-space exploration infeasible. We present a method that uses trajectory optimization as a powerful exploration strategy that guides the policy search. A variational decomposition of a maximum likelihood policy objective allows us to use standard trajectory optimization algorithms such as differential dynamic programming, interleaved with standard supervised learning for the policy itself. We demonstrate that the resulting algorithm can outperform prior methods on two challenging locomotion tasks.",
"title": ""
}
] |
scidocsrr
|
0784300c79a359b89e10152a18c26782
|
A DC source of current-fed Cockcroft-Walton multiplier with high gain DC-DC converter
|
[
{
"docid": "095fa44019b071dc842779a7f22a2f8a",
"text": "The high-voltage gain converter is widely employed in many industry applications, such as photovoltaic systems, fuel cell systems, electric vehicles, and high-intensity discharge lamps. This paper presents a novel single-switch high step-up nonisolated dc-dc converter integrating coupled inductor with extended voltage doubler cell and diode-capacitor techniques. The proposed converter achieves extremely large voltage conversion ratio with appropriate duty cycle and reduction of voltage stress on the power devices. Moreover, the energy stored in leakage inductance of coupled inductor is efficiently recycled to the output, and the voltage doubler cell also operates as a regenerative clamping circuit, alleviating the problem of potential resonance between the leakage inductance and the junction capacitor of output diode. These characteristics make it possible to design a compact circuit with high static gain and high efficiency for industry applications. In addition, the unexpected high-pulsed input current in the converter with coupled inductor is decreased. The operating principles and the steady-state analyses of the proposed converter are discussed in detail. Finally, a prototype circuit is implemented in the laboratory to verify the performance of the proposed converter.",
"title": ""
}
] |
[
{
"docid": "4315cbfa13e9a32288c1857f231c6410",
"text": "The likelihood of soft errors increase with system complexity, reduction in operational voltages, exponential growth in transistors per chip, increases in clock frequencies and device shrinking. As the memory bit-cell area is condensed, single event upset that would have formerly despoiled only a single bit-cell are now proficient of upsetting multiple contiguous memory bit-cells per particle strike. While these error types are beyond the error handling capabilities of the frequently used error correction codes (ECCs) for single bit, the overhead associated with moving to more sophisticated codes for multi-bit errors is considered to be too costly. To address this issue, this paper presents a new approach to detect and correct multi-bit soft error by using Horizontal-Vertical-Double-Bit-Diagonal (HVDD) parity bits with a comparatively low overhead.",
"title": ""
},
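The passage above detects and corrects multi-bit upsets with horizontal, vertical and double-bit-diagonal parity. The exact HVDD encoding is not reproduced here; the sketch below computes only horizontal, vertical and a single set of diagonal parities over a bit matrix to illustrate the kind of redundancy involved, with an invented word size.

```python
import numpy as np

def hvd_parities(bits):
    """Compute horizontal (per-row), vertical (per-column) and one set of
    diagonal parity bits (grouped by (i + j) mod rows) for a 0/1 matrix.
    This is an illustrative reduced scheme, not the full HVDD code."""
    bits = np.asarray(bits, dtype=int)
    rows, cols = bits.shape
    horizontal = bits.sum(axis=1) % 2
    vertical = bits.sum(axis=0) % 2
    diag = np.zeros(rows, dtype=int)
    for i in range(rows):
        for j in range(cols):
            diag[(i + j) % rows] ^= bits[i, j]
    return horizontal, vertical, diag

word = np.random.randint(0, 2, size=(4, 8))
h, v, d = hvd_parities(word)
# A single flipped bit changes exactly one parity in each group,
# which is what allows it to be located.
flipped = word.copy()
flipped[2, 5] ^= 1
h2, v2, d2 = hvd_parities(flipped)
print((h != h2).sum(), (v != v2).sum(), (d != d2).sum())   # -> 1 1 1
```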
{
"docid": "4d0b163e7c4c308696fa5fd4d93af894",
"text": "Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by handengineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a handful of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging highdimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.",
"title": ""
},
{
"docid": "82acea4dad8976d36f99ce76430f40c8",
"text": "Evaluation has always been a key challenge in the development of artificial intelligence (AI) based software, due to the technical complexity of the software artifact and, often, its embedding in complex sociotechnical processes. Recent advances in machine learning (ML) enabled by deep neural networks has exacerbated the challenge of evaluating such software due to the opaque nature of these ML-based artifacts. A key related issue is the (in)ability of such systems to generate useful explanations of their outputs, and we argue that the explanation and evaluation problems are closely linked. The paper models the elements of a ML-based AI system in the context of public sector decision (PSD) applications involving both artificial and human intelligence, and maps these elements against issues in both evaluation and explanation, showing how the two are related. We consider a number of common PSD application patterns in the light of our model, and identify a set of key issues connected to explanation and evaluation in each case. Finally, we propose multiple strategies to promote wider adoption of AI/ML technologies in PSD, where each is distinguished by a focus on different elements of our model, allowing PSD policy makers to adopt an approach that best fits their context and concerns.",
"title": ""
},
{
"docid": "f041a02b565ca9100d20b479fb6951c8",
"text": "Linear blending is a very popular skinning technique for virtual characters, even though it does not always generate realistic deformations. Recently, nonlinear blending techniques (such as dual quaternions) have been proposed in order to improve upon the deformation quality of linear skinning. The trade-off consists of the increased vertex deformation time and the necessity to redesign parts of the 3D engine. In this paper, we demonstrate that any nonlinear skinning technique can be approximated to an arbitrary degree of accuracy by linear skinning, using just a few samples of the nonlinear blending function (virtual bones). We propose an algorithm to compute this linear approximation in an automatic fashion, requiring little or no interaction with the user. This enables us to retain linear skinning at the core of our 3D engine without compromising the visual quality or character setup costs.",
"title": ""
},
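The paper above approximates nonlinear skinning with linear blending over additional virtual bones. The sketch below shows only plain linear blend skinning, v' = Σ_i w_i T_i v, which is the linear model being fitted; the sampling of the nonlinear blending function and the fitting of virtual-bone weights are not shown, and the toy bones and weights are invented.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """vertices: (n, 3); weights: (n, b) with rows summing to 1;
    transforms: (b, 4, 4), one homogeneous matrix per bone.
    Returns the blended positions v' = sum_i w_i * (T_i v)."""
    n = vertices.shape[0]
    homog = np.hstack([vertices, np.ones((n, 1))])            # (n, 4)
    per_bone = np.einsum('bij,nj->nbi', transforms, homog)    # (n, b, 4)
    blended = np.einsum('nb,nbi->ni', weights, per_bone)      # (n, 4)
    return blended[:, :3]

# Two bones: identity and a translation by (1, 0, 0); half/half weights.
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 1.0
verts = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
w = np.full((2, 2), 0.5)
print(linear_blend_skinning(verts, w, T))   # each vertex moves by (0.5, 0, 0)
```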
{
"docid": "dfea162685fb032ddbe7fd2a6ae0f427",
"text": "Cancer cells exhibit metabolic dependencies that distinguish them from their normal counterparts. Among these addictions is an increased utilization of the amino acid glutamine (Gln) to fuel anabolic processes. Recently, we reported the identification of a non-canonical pathway of Gln utilization in human pancreatic cancer cells that is required for tumor growth. While most cells utilize glutamate dehydrogenase (GLUD1) to convert Gln-derived glutamate (Glu) into a-ketoglutarate (aKG) in the mitochondria to fuel the tricarboxylic acid cycle, pancreatic cancer cells rely on a distinct pathway that integrates the mitochondrial and cytosolic aspartate aminotransferases GOT2 and GOT1. By generating aKG from Glu (in conjunction with the conversion of oxaloacetate into aspartate), GOT2 fuels anaplerosis in place of GLUD1. The Asp created is released into the cytosol and acted on by GOT1. This is subsequently used through a series of reactions to yield cytosolic NADPH from malic enzyme. Importantly, we have demonstrated that pancreatic cancers are strongly dependent on this series of reactions to maintain redox homeostasis which enables proliferation. Herein, we detail the subcellular compartmentalization and consequences of the aforementioned reactions on pancreatic cancer metabolism. We have also investigated the essentiality of this pathway in other contexts and find that pancreatic cancers have a uniform and unique reliance on this pathway, which may provide novel therapeutic approaches to treat these refractory tumors.",
"title": ""
},
{
"docid": "638ae99d6a233ab7f7394acec8da083c",
"text": "Rotating disk and blade fatigue failures are usually a low percentage of failures in most machinery types, but other than coupling / shaft end failures remain some of the most problematic for extensive repairs. High-cycle fatigue failures of rotating disks and blades are not common in most machinery types, but when they occur, they require extensive repairs and resolution can be problematic. This paper is an update of the tutorial given at the 2004 Turbomachinery Symposium focusing on high-cycle fatigue failures in steam turbines, centrifugal and axial gas compressors in refineries and process plants. The failure theories and many of the descriptions for cases given in 2004 have been updated to include blade resonance concerns for potential flow as well as vane and blade wake effects. Disk vibratory modes can be of concern in many machines, but of little concern in others as will be explained. In addition, vibratory modes are included where blades are coupled via communication with the main disk. Over the past decade, fluid-structure-interaction computational methods and modal testing have improved and have been applied to failure theories and problem resolution in the given cases. There is also added information on the effects of mistuning blades and disks, some beneficial and some with serious concerns for increased resonant amplification. Finally, knowledge about acoustic pressure pulsation excitation, particularly for centrifugal impellers at rotating blade passing frequency, has been greatly expanded. A review of acoustics calculations for failure prevention, mainly for high-pressure applications is covered here.",
"title": ""
},
{
"docid": "4f7c309f9a495faa53f2bb11e5885aa4",
"text": "Three different RF chain architectures operating in the FSS (Fixed Satellite Services) + BSS (Broadcast Satellite) spectrum are presented and discussed. The RF chains are based on a common wideband corrugated feed horn, but differ on the approach used for bands and polarizations separation. A breadboard of a novel self-diplexed configuration has been designed, manufactured and tested. It proves to be the preferred candidate for bandwidth, losses and power handling. Very good correlation of the RF performance to the theoretical design is found.",
"title": ""
},
{
"docid": "e8880b633c3f4b9646a7f6e9c9273f6f",
"text": "A) CTMC states. Since we assume that c, d and Xmax are integers, while the premiums that the customers pay are worth 1, every integer between 0 and Xmax is achievable. Accordingly, given our assumptions every cash flow consists of an integer-valued amount of money. Thus, the CTMC cannot reach any non-integer state. We are obviously assuming that the initial amount of cash X(0) is also an integer. Consequently, the state space of the CTMC consists of every nonnegative integer number between 0 and Xmax.",
"title": ""
},
{
"docid": "75821b0aaf9c35490858d2f17d8fcb3e",
"text": "Heretofore the concept of \" blockchain \" has not been precisely defined. Accordingly the potential useful applications of this technology have been largely inflated. This work sidesteps the question of what constitutes a blockchain as such and focuses on the architectural components of the Bitcoin cryptocurrency, insofar as possible, in isolation. We consider common problems inherent in the design of effective supply chain management systems. With each identified problem we propose a solution that utilizes one or more component aspects of Bitcoin. This culminates in five design principles for increased efficiency in supply chain management systems through the application of incentive mechanisms and data structures native to the Bitcoin cryptocurrency protocol.",
"title": ""
},
{
"docid": "8e849b08ff4b33418940c436b34df472",
"text": "In this paper we examine the average running times of Batcher's bitonic merge and Batcher's odd-even merge when they are used as parallel merging algorithms. It has been shown previously that the running time of odd-even merge can be upper bounded by a function of the maximal rank diierence for elements in the two input sequences. Here we give an almost matching lower bound for odd-even merge as well as a similar upper bound for (a special version of) bitonic merge. From this follows that the average running time of odd-even merge (bitonic merge) is ((n=p)(1+log(1+p 2 =n))) (O((n=p)(1+log(1+p 2 =n))), resp.) where n is the size of the input and p is the number of processors used. Using these results we then show that the average running times of odd-even merge sort and bitonic merge sort are O((n=p)(logn + (log(1 + p 2 =n)) 2)), that is, the two algorithms are optimal on the average if n p 2 =2 p log p. The derived bounds do not allow to compare the two sorting algorithms directly, thus we also present experimental results, obtained by a simulation program, for various sizes of input and numbers of processors.",
"title": ""
},
{
"docid": "9af2061be407902c02a20afef2d5a0bc",
"text": "Electrocardiographic (ECG) signals often consist of unwanted noises and speckles. In order to remove the noises, various image processing filters are used in various studies. In this paper, FIR and IIR filters are initially used to remove the linear and nonlinear delay present in the input ECG signal. In addition, filters are used to remove unwanted frequency components from the input ECG signal. Linear Discriminant Analysis (LDA) is used to reduce the features present in the input ECG signal. Support Vector Machines (SVM) is widely used for pattern recognition. However, traditional SVM method does not applicable to compute different characteristics of the features of data sets. In this paper, we use SVM model with a weighted kernel function method to classify more features from the input ECG signal. SVM model with a weighted kernel function method is significantly identifies the Q wave, R wave and S wave in the input ECG signal to classify the heartbeat level such as Left Bundle Branch Block (LBBB), Right Bundle Branch Block (RBBB), Premature Ventricular Contraction (PVC) and Premature Atrial Contractions (PACs). The performance of the proposed Linear Discriminant Analysis (LDA) with enhanced kernel based Support Vector Machine (SVM) method is comparatively analyzed with other machine learning approaches such as Linear Discriminant Analysis (LDA) with multilayer perceptron (MLP), Linear Discriminant Analysis (LDA) with Support Vector Machine (SVM), and Principal Component Analysis (PCA) with Support Vector Machine (SVM). The calculated RMSE, MAPE, MAE, R2 and Q2 for the proposed Linear Discriminant Analysis (LDA) with enhanced kernel based Support Vector Machine (SVM) method is low when compared with other approaches such as LDA with MLP, and PCA with SVM and LDA with SVM. Finally, Sensitivity, Specificity and Mean Square Error (MSE) are calculated to prove the effectiveness of the proposed Linear Discriminant Analysis (LDA) with an enhanced kernel based Support Vector Machine (SVM) method.",
"title": ""
},
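The passage above reduces ECG features with LDA and classifies beats with a weighted-kernel SVM. The exact kernel weighting is not specified there, so the sketch below merely wires an LDA-then-SVM pipeline together on synthetic stand-in features with scikit-learn, using a standard RBF kernel in place of the weighted kernel; all sizes and parameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-beat ECG feature vectors with 4 beat classes
# (e.g. LBBB / RBBB / PVC / PAC); real features would come from QRS analysis.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=3),  # at most n_classes - 1 components
    SVC(kernel='rbf', C=1.0, gamma='scale'),     # stand-in for the weighted kernel
)
clf.fit(X_tr, y_tr)
print('test accuracy:', round(clf.score(X_te, y_te), 3))
```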
{
"docid": "3fe30c4d898ec34b83a36efbba8019ff",
"text": "Find the secret to improve the quality of life by reading this introduction to pattern recognition statistical structural neural and fuzzy logic approaches. This is a kind of book that you need now. Besides, it can be your favorite book to read after having this book. Do you ask why? Well, this is a book that has different characteristic with others. You may not need to know who the author is, how well-known the work is. As wise word, never judge the words from who speaks, but make the words as your good value to your life.",
"title": ""
},
{
"docid": "4560e1b7318013be0688b8e73692fda4",
"text": "This paper introduces a new real-time object detection approach named Yes-Net. It realizes the prediction of bounding boxes and class via single neural network like YOLOv2 and SSD, but owns more efficient and outstanding features. It combines local information with global information by adding the RNN architecture as a packed unit in CNN model to form the basic feature extractor. Independent anchor boxes coming from full-dimension kmeans is also applied in Yes-Net, it brings better average IOU than grid anchor box. In addition, instead of NMS, YesNet uses RNN as a filter to get the final boxes, which is more efficient. For 416 × 416 input, Yes-Net achieves 74.3% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal.",
"title": ""
},
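The record above mentions anchor boxes obtained from full-dimension k-means. One plausible reading, sketched below, is the widely used IoU-based k-means clustering of ground-truth box shapes; the exact clustering used by Yes-Net may differ.

```python
import numpy as np

def iou_wh(boxes, centers):
    """IoU between (w, h) pairs, with all boxes aligned at the origin."""
    inter = (np.minimum(boxes[:, None, 0], centers[None, :, 0])
             * np.minimum(boxes[:, None, 1], centers[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (centers[:, 0] * centers[:, 1])[None, :] - inter)
    return inter / union

def anchor_kmeans(boxes, k=5, iters=100, seed=0):
    """Cluster ground-truth (w, h) pairs with distance = 1 - IoU."""
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centers), axis=1)
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers
```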
{
"docid": "52212ff3e1c85b5f5c3fcf0ec71f6f8b",
"text": "Embodied cognition theory proposes that individuals' abstract concepts can be associated with sensorimotor processes. The authors examined the effects of teaching participants novel embodied metaphors, not based in prior physical experience, and found evidence suggesting that they lead to embodied simulation, suggesting refinements to current models of embodied cognition. Creating novel embodiments of abstract concepts in the laboratory may be a useful method for examining mechanisms of embodied cognition.",
"title": ""
},
{
"docid": "a7c3eda27ff129915a59bde0f56069cf",
"text": "Recent proliferation of Unmanned Aerial Vehicles (UAVs) into the commercial space has been accompanied by a similar growth in aerial imagery . While useful in many applications, the utility of this visual data is limited in comparison with the total range of desired airborne missions. In this work, we extract depth of field information from monocular images from UAV on-board cameras using a single frame of data per-mapping. Several methods have been previously used with varying degrees of success for similar spatial inferencing tasks, however we sought to take a different approach by framing this as an augmented style-transfer problem. In this work, we sought to adapt two of the state-of-theart style transfer methods to the problem of depth mapping. The first method adapted was based on the unsupervised Pix2Pix approach. The second was developed using a cyclic generative adversarial network (cycle GAN). In addition to these two approaches, we also implemented a baseline algorithm previously used for depth map extraction on indoor scenes, the multi-scale deep network. Using the insights gained from these implementations, we then developed a new methodology to overcome the shortcomings observed that was inspired by recent work in perceptual feature-based style transfer. These networks were trained on matched UAV perspective visual image, depth-map pairs generated using Microsoft’s AirSim high-fidelity UAV simulation engine and environment. The performance of each network was tested using a reserved test set at the end of training and the effectiveness evaluated using against three metrics. While our new network was not able to outperform any of the other approaches but cycle GANs, we believe that the intuition behind the approach was demonstrated to be valid and that it may be successfully refined with future work.",
"title": ""
},
{
"docid": "0f807d62d491fd24ff7b8c207f468784",
"text": "Domain names play a critical role in cybercrime, because they identify hosts that serve malicious content (such as malware, Trojan binaries, or malicious scripts), operate as command-and-control servers, or carry out some other role in the malicious network infrastructure. To defend against Internet attacks and scams, operators widely use blacklisting to detect and block malicious domain names and IP addresses. Existing blacklists are typically generated by crawling suspicious domains, manually or automatically analyzing malware, and collecting information from honeypots and intrusion detection systems. Unfortunately, such blacklists are difficult to maintain and are often slow to respond to new attacks. Security experts set up and join mailing lists to discuss and share intelligence information, which provides a better chance to identify emerging malicious activities. In this paper, we design Gossip, a novel approach to automatically detect malicious domains based on the analysis of discussions in technical mailing lists (particularly on security-related topics) by using natural language processing and machine learning techniques. We identify a set of effective features extracted from email threads, users participating in the discussions, and content keywords, to infer malicious domains from mailing lists, without the need to actually crawl the suspect websites. Our result shows that Gossip achieves high detection accuracy. Moreover, the detection from our system is often days or weeks earlier than existing public blacklists.",
"title": ""
},
{
"docid": "a1f5de69c61363a7122732e78a7adc7a",
"text": "We investigate data driven natural language generation under the constraints that all words must come from a fixed vocabulary and a specified word must appear in the generated sentence, motivated by the possibility for automatic generation of language education exercises. We present fast and accurate approximations to the ideal rejection samplers for these constraints and compare various sentence level generative language models. Our best systems produce output that is with high frequency both novel and error free, which we validate with human and automatic evaluations.",
"title": ""
},
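The ideal rejection sampler that the preceding record approximates is easy to state, if slow in practice. In the sketch below, sample_sentence is a placeholder for whatever generative language model is used; the constraint checks are the two described in the abstract.

```python
def constrained_samples(sample_sentence, vocabulary, required_word, n=10, max_tries=10000):
    """Rejection sampling: keep sentences that use only the fixed vocabulary
    and contain the required word.

    sample_sentence: callable returning a list of tokens drawn from a language model.
    """
    vocabulary = set(vocabulary)
    accepted = []
    for _ in range(max_tries):
        tokens = sample_sentence()
        if required_word in tokens and all(t in vocabulary for t in tokens):
            accepted.append(tokens)
            if len(accepted) == n:
                break
    return accepted
```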
{
"docid": "2746acb7d620802e949bef7fb855bfa7",
"text": "Our research approach is to design and develop reliable, efficient, flexible, economical, real-time and realistic wellness sensor networks for smart home systems. The heterogeneous sensor and actuator nodes based on wireless networking technologies are deployed into the home environment. These nodes generate real-time data related to the object usage and movement inside the home, to forecast the wellness of an individual. Here, wellness stands for how efficiently someone stays fit in the home environment and performs his or her daily routine in order to live a long and healthy life. We initiate the research with the development of the smart home approach and implement it in different home conditions (different houses) to monitor the activity of an inhabitant for wellness detection. Additionally, our research extends the smart home system to smart buildings and models the design issues related to the smart building environment; these design issues are linked with system performance and reliability. This research paper also discusses and illustrates the possible mitigation to handle the ISM band interference and attenuation losses without compromising optimum system performance.",
"title": ""
},
{
"docid": "81352cec06fb5c0a81c3c55801f36b55",
"text": "Recent research in molecular evolution has raised awareness of the importance of selective neutrality. Several different models of neutrality have been proposed based on Kauffman’s well-known NK landscape model. Two of these models, NKp and NKq, are investigated and found to display significantly different structural proper ties. The fitness distr ibutions of these neutral landscapes reveal that their levels of cor relation with non-neutral landscapes are significantly different, as are the distr ibutions of neutral mutations. In this paper we descr ibe a ser ies of simulations of a hill climbing search algor ithm on NK, NKp and NKq landscapes with varying levels of epistatic interaction. These simulations demonstrate differences in the way that epistatic interaction affects the ‘searchability’ of neutral landscapes. We conclude that the method used to implement neutrality has an impact on both the structure of the resulting landscapes and on the per for mance of evolutionary search algor ithms on these landscapes. These model-dependent effects must be taken into consideration when modelling biological phenomena.",
"title": ""
},
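To make the setup in the preceding record concrete, here is a small NK-landscape generator with a simple hill climber. The NKp and NKq variants would change how the per-locus contribution tables are drawn (zeroing a fraction of entries, or quantising them); that is only indicated in a comment, not implemented.

```python
import random

def nk_landscape(N, K, seed=0):
    """Fitness function over length-N bit strings, each locus depending on K others.

    Contributions are uniform in [0, 1). NKp would set a fraction of entries to zero
    and NKq would quantise them to q levels (not implemented in this sketch).
    """
    rng = random.Random(seed)
    neighbours = [[i] + rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{} for _ in range(N)]
    def fitness(bits):
        total = 0.0
        for i in range(N):
            key = tuple(bits[j] for j in neighbours[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()   # contributions drawn lazily
            total += tables[i][key]
        return total / N
    return fitness

def hill_climb(fitness, N, steps=1000, seed=1):
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(N)]
    best = fitness(current)
    for _ in range(steps):
        candidate = current.copy()
        candidate[rng.randrange(N)] ^= 1
        f = fitness(candidate)
        if f >= best:   # accepting equal-fitness moves lets the search drift on neutral networks
            current, best = candidate, f
    return current, best
```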
{
"docid": "6b0b0483cf5eeba1bcee560835651a0e",
"text": "Four experiments were carried out to investigate an early- versus late-selection explanation for the attentional blink (AB). In both Experiments 1 and 2, 3 groups of participants were required to identify a noun (Experiment 1) or a name (Experiment 2) target (experimental conditions) and then to identify the presence or absence of a 2nd target (probe), which was their own name, another name, or a specified noun from among a noun distractor stream (Experiment 1) or a name distractor stream (Experiment 2). The conclusions drawn are that individuals do not experience an AB for their own names but do for either other names or nouns. In Experiments 3 and 4, either the participant's own name or another name was presented, as the target and as the item that immediately followed the target, respectively. An AB effect was revealed in both experimental conditions. The results of these experiments are interpreted as support for a late-selection interference account of the AB.",
"title": ""
}
] |
scidocsrr
|
324bdd6f2360607998dc961c25304955
|
Named entity recognition on Indonesian microblog messages
|
[
{
"docid": "ab25d07bd7f1daa44bb3dcb5401756a2",
"text": "Models for many natural language tasks benefit from the flexibility to use overlapping, non-independent features. For example, the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists, part-of-speech tags, character ngrams, and capitalization patterns. While it is difficult to capture such inter-dependent features with a generative probabilistic model, conditionally-trained models, such as conditional maximum entropy models, handle them well. There has been significant work with such models for greedy sequence modeling in NLP (Ratnaparkhi, 1996; Borthwick et al., 1998). Conditional Random Fields (CRFs) (Lafferty et al., 2001) are undirected graphical models, a special case of which correspond to conditionally-trained finite state machines. While based on the same exponential form as maximum entropy models, they have efficient procedures for complete, non-greedy finite-state inference and training. CRFs have shown empirical successes recently in POS tagging (Lafferty et al., 2001), noun phrase segmentation (Sha and Pereira, 2003) and Chinese word segmentation (McCallum and Feng, 2003). Given these models’ great flexibility to include a wide array of features, an important question that remains is what features should be used? For example, in some cases capturing a word tri-gram is important, however, there is not sufficient memory or computation to include all word tri-grams. As the number of overlapping atomic features increases, the difficulty and importance of constructing only certain feature combinations grows. This paper presents a feature induction method for CRFs. Founded on the principle of constructing only those feature conjunctions that significantly increase loglikelihood, the approach builds on that of Della Pietra et al (1997), but is altered to work with conditional rather than joint probabilities, and with a mean-field approximation and other additional modifications that improve efficiency specifically for a sequence model. In comparison with traditional approaches, automated feature induction offers both improved accuracy and significant reduction in feature count; it enables the use of richer, higherorder Markov models, and offers more freedom to liberally guess about which atomic features may be relevant to a task.",
"title": ""
}
] |
[
{
"docid": "c2d8c3d6bf74a792707bcaab69cbc510",
"text": "Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep AutoEncoder (AE) network with state-of-the-art reconstruction quality and generalization ability. The learned representations outperform existing methods on 3D recognition tasks and enable shape editing via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation, as well as shape completion. We perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian Mixture Models (GMMs). To quantitatively evaluate generative models we introduce measures of sample fidelity and diversity based on matchings between sets of point clouds. Interestingly, our evaluation of generalization, fidelity and diversity reveals that GMMs trained in the latent space of our AEs yield the best results overall.",
"title": ""
},
{
"docid": "9ac81079d4e957a87cfec465a4a69a7c",
"text": "AIMS\nThe UK has one of the largest systems of immigration detention in Europe.. Those detained include asylum-seekers and foreign national prisoners, groups with a higher prevalence of mental health vulnerabilities compared with the general population. In light of little published research on the mental health status of detainees in immigration removal centres (IRCs), the primary aim of this study was to explore whether it was feasible to conduct psychiatric research in such a setting. A secondary aim was to compare the mental health of those seeking asylum with the rest of the detainees.\n\n\nMETHODS\nCross-sectional study with simple random sampling followed by opportunistic sampling. Exclusion criteria included inadequate knowledge of English and European Union nationality. Six validated tools were used to screen for mental health disorders including developmental disorders like Personality Disorder, Attention Deficit Hyperactivity Disorder (ADHD), Autistic Spectrum Disorder (ASD) and Intellectual Disability, as well as for needs assessment. These were the MINI v6, SAPAS, AQ-10, ASRS, LDSQ and CANFOR. Demographic data were obtained using a participant demographic sheet. Researchers were trained in the use of the screening battery and inter-rater reliability assessed by joint ratings.\n\n\nRESULTS\nA total of 101 subjects were interviewed. Overall response rate was 39%. The most prevalent screened mental disorder was depression (52.5%), followed by personality disorder (34.7%) and post-traumatic stress disorder (20.8%). 21.8% were at moderate to high suicidal risk. 14.9 and 13.9% screened positive for ASD and ADHD, respectively. The greatest unmet needs were in the areas of intimate relationships (76.2%), psychological distress (72.3%) and sexual expression (71.3%). Overall presence of mental disorder was comparable with levels found in prisons. The numbers in each group were too small to carry out any further analysis.\n\n\nCONCLUSION\nIt is feasible to undertake a psychiatric morbidity survey in an IRC. Limitations of the study include potential selection bias, use of screening tools, use of single-site study, high refusal rates, the lack of interpreters and lack of women and children in study sample. Future studies should involve the in-reach team to recruit participants and should be run by a steering group consisting of clinicians from the IRC as well as academics.",
"title": ""
},
{
"docid": "75a01a7891b480aa480a57c1ab7d2c87",
"text": "Increasing population has posed insurmountable challenges to agriculture in the provision of future food security, particularly in the Middle East and North Africa (MENA) region where biophysical conditions are not well-suited for agriculture. Iran, as a major agricultural country in the MENA region, has long been in the quest for food self-sufficiency, however, the capability of its land and water resources to realize this goal is largely unknown. Using very high-resolution spatial data sets, we evaluated the capacity of Iran’s land for sustainable crop production based on the soil properties, topography, and climate conditions. We classified Iran’s land suitability for cropping as (million ha): very good 0.4% (0.6), good 2.2% (3.6), medium 7.9% (12.8), poor 11.4% (18.5), very poor 6.3% (10.2), unsuitable 60.0% (97.4), and excluded areas 11.9% (19.3). In addition to overarching limitations caused by low precipitation, low soil organic carbon, steep slope, and high soil sodium content were the predominant soil and terrain factors limiting the agricultural land suitability in Iran. About 50% of the Iran’s existing croplands are located in low-quality lands, representing an unsustainable practice. There is little room for cropland expansion to increase production but redistribution of cropland to more suitable areas may improve sustainability and reduce pressure on water resources, land, and ecosystem in Iran.",
"title": ""
},
{
"docid": "8955c715c0341057b471eeed90c9c82d",
"text": "The letter presents an exact small-signal discrete-time model for digitally controlled pulsewidth modulated (PWM) dc-dc converters operating in constant frequency continuous conduction mode (CCM) with a single effective A/D sampling instant per switching period. The model, which is based on well-known approaches to discrete-time modeling and the standard Z-transform, takes into account sampling, modulator effects and delays in the control loop, and is well suited for direct digital design of digital compensators. The letter presents general results valid for any CCM converter with leading or trailing edge PWM. Specific examples, including approximate closed-form expressions for control-to-output transfer functions are given for buck and boost converters. The model is verified in simulation using an independent system identification approach.",
"title": ""
},
{
"docid": "07c34b068cc1217de2e623122a22d2b0",
"text": "Rheumatoid arthritis (RA) is a bone destructive autoimmune disease. Many patients with RA recognize fluctuations of their joint synovitis according to changes of air pressure, but the correlations between them have never been addressed in large-scale association studies. To address this point we recruited large-scale assessments of RA activity in a Japanese population, and performed an association analysis. Here, a total of 23,064 assessments of RA activity from 2,131 patients were obtained from the KURAMA (Kyoto University Rheumatoid Arthritis Management Alliance) database. Detailed correlations between air pressure and joint swelling or tenderness were analyzed separately for each of the 326 patients with more than 20 assessments to regulate intra-patient correlations. Association studies were also performed for seven consecutive days to identify the strongest correlations. Standardized multiple linear regression analysis was performed to evaluate independent influences from other meteorological factors. As a result, components of composite measures for RA disease activity revealed suggestive negative associations with air pressure. The 326 patients displayed significant negative mean correlations between air pressure and swellings or the sum of swellings and tenderness (p = 0.00068 and 0.00011, respectively). Among the seven consecutive days, the most significant mean negative correlations were observed for air pressure three days before evaluations of RA synovitis (p = 1.7 × 10(-7), 0.00027, and 8.3 × 10(-8), for swellings, tenderness and the sum of them, respectively). Standardized multiple linear regression analysis revealed these associations were independent from humidity and temperature. Our findings suggest that air pressure is inversely associated with synovitis in patients with RA.",
"title": ""
},
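The per-patient lagged correlation analysis described above can be outlined as follows; the column names and the three-day lag are assumptions for illustration, and the lagged pressure values are assumed to have been joined onto the assessment table beforehand.

```python
import numpy as np
import pandas as pd

def lagged_patient_correlations(df, lag_days=3, min_visits=20):
    """Correlate air pressure lag_days before each assessment with joint swelling, per patient.

    df is assumed to have columns: patient_id, swollen_joints, and pressure_lag3
    (air pressure three days before the assessment, pre-joined from weather data).
    """
    results = {}
    pressure_col = "pressure_lag%d" % lag_days
    for pid, g in df.groupby("patient_id"):
        if len(g) < min_visits:
            continue   # mirror the restriction to patients with enough assessments
        results[pid] = np.corrcoef(g[pressure_col], g["swollen_joints"])[0, 1]
    return pd.Series(results)   # one correlation coefficient per patient
```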
{
"docid": "ef5769145c4c1ebe06af0c8b5f67e70e",
"text": "Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.",
"title": ""
},
{
"docid": "d4e5a5aa65017360db9a87590a728892",
"text": "This work presents a chaotic path planning generator which is used in autonomous mobile robots, in order to cover a terrain. The proposed generator is based on a nonlinear circuit, which shows chaotic behavior. The bit sequence, produced by the chaotic generator, is converted to a sequence of planned positions, which satisfies the requirements for unpredictability and fast scanning of the entire terrain. The nonlinear circuit and the trajectory-planner are described thoroughly. Simulation tests confirm that with the proposed path planning generator better results can be obtained with regard to previous works. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
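The record above does not give the circuit equations, so the sketch below substitutes a logistic map as the chaotic bit generator and converts the bit stream into grid waypoints. It illustrates the idea of unpredictable terrain coverage, not the authors' specific nonlinear circuit.

```python
def chaotic_waypoints(n_points, grid=16, x0=0.3, r=3.99):
    """Turn a logistic-map bit sequence into coverage waypoints on a grid x grid map."""
    x, bits = x0, []
    while len(bits) < 16 * n_points:        # 8 bits per coordinate, 2 coordinates per point
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    points = []
    for k in range(n_points):
        chunk = bits[16 * k: 16 * (k + 1)]
        cx = int("".join(map(str, chunk[:8])), 2) % grid
        cy = int("".join(map(str, chunk[8:])), 2) % grid
        points.append((cx, cy))
    return points

print(chaotic_waypoints(5))   # e.g. five pseudo-random cells to visit
```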
{
"docid": "3861e3655de5593526184df4b17f1493",
"text": "A new approach to Image Quality Assessment (IQA) is presented. The idea is based on the fact that two images are similar if their structural relationship within their blocks is preserved. To this end, a transition matrix is defined which exploits structural transitions between corresponding blocks of two images. The matrix contains valuable information about differences of two images, which should be transformed to a quality index. Eigen-value analysis over the transition matrix leads to a new distance measure called Eigen-gap. According to simulation results, the Eigen-gap is not only highly correlated to subjective scores but also, its performance is as good as the SSIM, a trustworthy index.",
"title": ""
},
{
"docid": "785ce19a91fbca6f8b3a3ccbe45669cd",
"text": "Automatic brain tumor segmentation plays an important role for diagnosis, surgical planning and treatment assessment of brain tumors. Deep convolutional neural networks (CNNs) have been widely used for this task. Due to the relatively small data set for training, data augmentation at training time has been commonly used for better performance of CNNs. Recent works also demonstrated the usefulness of data augmentation at test time, in addition to training time, for achieving more robust predictions. We investigate how test-time augmentation can improve CNNs’ performance for brain tumor segmentation. We used different underpinning network structures and augmented the image by 3D rotation, flipping, scaling and adding random noise at both training and test time. Experiments with BraTS 2018 training and validation set show that test-time augmentation can achieve higher segmentation accuracy and obtain uncertainty estimation of the segmentation results.",
"title": ""
},
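A minimal form of the test-time augmentation described above is averaging predictions over axis flips; the sketch below uses only flips, while the record also mentions rotation, scaling and noise. Here predict stands in for any trained segmentation network that returns a per-voxel probability map of the same spatial shape as its input.

```python
import numpy as np

def tta_predict(predict, volume, axes=(0, 1, 2)):
    """Average predictions over axis flips and report per-voxel disagreement.

    predict: callable mapping a 3D volume to a probability map of the same shape.
    """
    probs = [predict(volume)]
    for ax in axes:
        flipped = np.flip(volume, axis=ax)
        probs.append(np.flip(predict(flipped), axis=ax))   # undo the flip on the output
    mean_prob = np.mean(probs, axis=0)       # averaged segmentation probabilities
    uncertainty = np.var(probs, axis=0)      # simple uncertainty estimate from disagreement
    return mean_prob, uncertainty
```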
{
"docid": "489015cc236bd20f9b2b40142e4b5859",
"text": "We present an experimental study which demonstrates that model checking techniques can be effective in finding synchronization errors in safety critical software when they are combined with a design for verification approach. We apply the concurrency controller design pattern to the implementation of the synchronization operations in Java programs. This pattern enables a modular verification strategy by decoupling the behaviors of the concurrency controllers from the behaviors of the threads that use them using interfaces specified as finite state machines. The behavior of a concurrency controller can be verified with respect to arbitrary numbers of threads using infinite state model checking techniques, and the threads which use the controller classes can be checked for interface violations using finite state model checking techniques. We present techniques for thread isolation which enables us to analyze each thread in the program separately during interface verification. We conducted an experimental study investigating the effectiveness of the presented design for verification approach on safety critical air traffic control software. In this study, we first reengineered the Tactical Separation Assisted Flight Environment (TSAFE) software using the concurrency controller design pattern. Then, using fault seeding, we created 40 faulty versions of TSAFE and used both infinite and finite state verification techniques for finding the seeded faults. The experimental study demonstrated the effectiveness of the presented modular verification approach and resulted in a classification of faults that can be found using the presented approach.",
"title": ""
},
{
"docid": "ac09e4a989bb4a9b247aa0ba346f1d71",
"text": "Many applications in information extraction, natural language understanding, information retrieval require an understanding of the semantic relations between entities. We present a comprehensive review of various aspects of the entity relation extraction task. Some of the most important supervised and semi-supervised classification approaches to the relation extraction task are covered in sufficient detail along with critical analyses. We also discuss extensions to higher-order relations. Evaluation methodologies for both supervised and semi-supervised methods are described along with pointers to the commonly used performance evaluation datasets. Finally, we also give short descriptions of two important applications of relation extraction, namely question answering and biotext mining.",
"title": ""
},
{
"docid": "d197eacce97d161e4292ba541f8bed57",
"text": "A Luenberger-based observer is proposed to the state estimation of a class of nonlinear systems subject to parameter uncertainty and bounded disturbance signals. A nonlinear observer gain is designed in order to minimize the effects of the uncertainty, error estimation and exogenous signals in an 7-L, sense by means of a set of state- and parameterdependent linear matrix inequalities that are solved using standard software packages. A numerical example illustrates the approach.",
"title": ""
},
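For orientation, the baseline structure being extended in the record above is the plain Luenberger observer; a small simulation sketch with an illustrative constant gain is given below. The nonlinear gain designed through state- and parameter-dependent LMIs is not reproduced here.

```python
import numpy as np

def simulate_observer(A, B, C, L, u, x0, xhat0, dt=0.01, steps=1000, noise=0.0):
    """Euler-simulate x' = Ax + Bu and the observer xhat' = A xhat + B u + L (y - C xhat)."""
    rng = np.random.default_rng(0)
    x, xhat = np.array(x0, dtype=float), np.array(xhat0, dtype=float)
    errors = []
    for k in range(steps):
        y = C @ x + noise * rng.standard_normal(C.shape[0])                 # measured output
        x = x + dt * (A @ x + B @ u(k * dt))                                # plant state
        xhat = xhat + dt * (A @ xhat + B @ u(k * dt) + L @ (y - C @ xhat))  # observer state
        errors.append(np.linalg.norm(x - xhat))                             # estimation error norm
    return np.array(errors)
```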
{
"docid": "08aa54980d7664ea6fc57aad1dd0029e",
"text": "Visual surveillance of dynamic objects, particularly vehicles on the road, has been, over the past decade, an active research topic in computer vision and intelligent transportation systems communities. In the context of traffic monitoring, important advances have been achieved in environment modeling, vehicle detection, tracking, and behavior analysis. This paper is a survey that addresses particularly the issues related to vehicle monitoring with cameras at road intersections. In fact, the latter has variable architectures and represents a critical area in traffic. Accidents at intersections are extremely dangerous, and most of them are caused by drivers' errors. Several projects have been carried out to enhance the safety of drivers in the special context of intersections. In this paper, we provide an overview of vehicle perception systems at road intersections and representative related data sets. The reader is then given an introductory overview of general vision-based vehicle monitoring approaches. Subsequently and above all, we present a review of studies related to vehicle detection and tracking in intersection-like scenarios. Regarding intersection monitoring, we distinguish and compare roadside (pole-mounted, stationary) and in-vehicle (mobile platforms) systems. Then, we focus on camera-based roadside monitoring systems, with special attention to omnidirectional setups. Finally, we present possible research directions that are likely to improve the performance of vehicle detection and tracking at intersections.",
"title": ""
},
{
"docid": "c4d2748fbab63fb3ab320f4d2c0fd18b",
"text": "In human fingertips, the fingerprint patterns and interlocked epidermal-dermal microridges play a critical role in amplifying and transferring tactile signals to various mechanoreceptors, enabling spatiotemporal perception of various static and dynamic tactile signals. Inspired by the structure and functions of the human fingertip, we fabricated fingerprint-like patterns and interlocked microstructures in ferroelectric films, which can enhance the piezoelectric, pyroelectric, and piezoresistive sensing of static and dynamic mechanothermal signals. Our flexible and microstructured ferroelectric skins can detect and discriminate between multiple spatiotemporal tactile stimuli including static and dynamic pressure, vibration, and temperature with high sensitivities. As proof-of-concept demonstration, the sensors have been used for the simultaneous monitoring of pulse pressure and temperature of artery vessels, precise detection of acoustic sounds, and discrimination of various surface textures. Our microstructured ferroelectric skins may find applications in robotic skins, wearable sensors, and medical diagnostic devices.",
"title": ""
},
{
"docid": "5b618ffd8e3dc68f36757ad5551a136a",
"text": "Recent years have witnessed the boom of online sharing media contents, which raise significant challenges in effective management and retrieval. Though a large amount of efforts have been made, precise retrieval on video shots with certain topics has been largely ignored. At the same time, due to the popularity of novel time-sync comments, or so-called “bullet-screen comments”, video semantics could be now combined with timestamps to support further research on temporal video labeling. In this paper, we propose a novel video understanding framework to assign temporal labels on highlighted video shots. To be specific, due to the informal expression of bullet-screen comments, we first propose a temporal deep structured semantic model (T-DSSM) to represent comments into semantic vectors by taking advantage of their temporal correlation. Then, video highlights are recognized and labeled via semantic vectors in a supervised way. Extensive experiments on a real-world dataset prove that our framework could effectively label video highlights with a significant margin compared with baselines, which clearly validates the potential of our framework on video understanding, as well as bullet-screen comments interpretation.",
"title": ""
},
{
"docid": "adc06292106114e5e69aa45c5e65cacc",
"text": "The surveillance systems have been widely used in automatic teller machines (ATMs), banks, convenient stores, etc. For example, when a customer uses the ATM, the surveillance systems will record his/her face information. The information will help us understand and trace who withdrew money. However, when criminals use the ATM to withdraw illegal money, they usually block their faces with something (in Taiwan, criminals usually use safety helmets or masks to block their faces). That will degrade the purpose of the surveillance system. In previous work, we already proposed a technology for safety helmet detection. In this paper, we propose a mask detection technology based upon automatic face recognition methods. We use the Gabor filters to generate facial features and utilize geometric analysis algorithms for mask detection. The technology can give an early warning to save-guards when any \"customer\" or \"intruder\" blocks his/her face information with a mask. Besides, the technology can assist face detection in the automatic face recognition system. Experimental results show the performance and reliability of the proposed technology.",
"title": ""
},
{
"docid": "05062605a55c1cae500fb43af8334c46",
"text": "Over the last decade, there has been considerable interest in designing algorithms for processing massive graphs in the data stream model. The original motivation was two-fold: a) in many applications, the dynamic graphs that arise are too large to be stored in the main memory of a single machine and b) considering graph problems yields new insights into the complexity of stream computation. However, the techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation. We survey the state-of-the-art results; identify general techniques; and highlight some simple algorithms that illustrate basic ideas.",
"title": ""
},
{
"docid": "3acc4d7100331b56fa244bd618373a56",
"text": "Although deep neural networks (DNNs) have achieved great success in many tasks, recent studies have shown they are vulnerable to adversarial examples. Such examples, typically generated by adding small but purposeful distortions, can frequently fool DNN models. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or suffered from expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model’s prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two types of feature squeezing: reducing the color bit depth of each pixel and spatial smoothing. These strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.",
"title": ""
},
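The two squeezers named in the record above are easy to state concretely. The sketch below reduces colour bit depth and applies median smoothing, then flags an input when the model's prediction moves too far under squeezing; the L1 distance and the threshold value are illustrative placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter

def squeeze_bit_depth(x, bits=4):
    """Quantise pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def squeeze_spatial(x, size=2):
    """Median smoothing over each colour channel of an HxWxC image."""
    return median_filter(x, size=(size, size, 1))

def is_adversarial(model_probs, x, threshold=1.0):
    """Flag x if predictions on squeezed copies diverge from the original beyond a threshold."""
    p = model_probs(x)
    d1 = np.abs(p - model_probs(squeeze_bit_depth(x))).sum()
    d2 = np.abs(p - model_probs(squeeze_spatial(x))).sum()
    return max(d1, d2) > threshold
```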
{
"docid": "623c78e515abee9830eb0b79e773dcec",
"text": "The main focus in this research paper is to experiment deeply with, and find alternative solutions to the image segmentation and character recognition problems within the License Plate Recognition framework. Three main stages are identified in such applications. First, it is necessary to locate and extract the license plate region from a larger scene image. Second, having a license plate region to work with, the alphanumeric characters in the plate need to be extracted from the background. Third, deliver them to an character system (BOX APPROACH)for recognition. In order to identify a vehicle by reading its license plate successfully, it is obviously necessary to locate the plate in the scene image provided by some acquisition system (e.g. video or still camera). Locating the region of interest helps in dramatically reducing both the computational expense and algorithm complexity. For example, a currently common1024x768 resolution image contains a total of 786,432pixels, while the region of interest (in this case a license plate) may account for only 10% of the image area. Also, the input to the following segmentation and recognition stages is simplified, resulting in easier algorithm design and shorter computation times. The paper mainly work with the standard license plates but the techniques, algorithms and parameters that is be used can be adjusted easily for any similar number plates even with other alpha-numeric set.",
"title": ""
}
] |
scidocsrr
|
f3fb840eaf0b6691c23ec96854b9ed1f
|
The role of information technology in supply chain integration
|
[
{
"docid": "44a5edfc4445b11f7b456c164953bc30",
"text": "Purchasing has increasingly assumed a pivotal strategic role in supply-chain management. Yet, claims of the strategic role of purchasing have not been fully subjected to rigorous theoretical and empirical scrutiny. Extant research has remained largely anecdotal and theoretically under-developed. In this paper, we examine the links among strategic purchasing, supply management, and firm performance. We argue that strategic purchasing can engender sustainable competitive advantage by enabling firms to: (a) foster close working relationships with a limited number of suppliers; (b) promote open communication among supply-chain partners; and (c) develop long-term strategic relationship orientation to achieve mutual gains. Using structural equation modeling, we empirically test a number of hypothesized relationships based on a sample of 221 United States manufacturing firms. Our results provide robust support for the links between strategic purchasing, supply management, customer responsiveness, and financial performance of the buying firm. Implications for future research and managerial practice in supply-chain management are also offered. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "b93ab92ac82a34d3a83240e251cf714e",
"text": "Short text is becoming ubiquitous in many modern information systems. Due to the shortness and sparseness of short texts, there are less informative word co-occurrences among them, which naturally pose great difficulty for classification tasks on such data. To overcome this difficulty, this paper proposes a new way for effectively classifying the short texts. Our method is based on a key observation that there usually exists ordered subsets in short texts, which is termed ``information path'' in this work, and classification on each subset based on the classification results of some pervious subsets can yield higher overall accuracy than classifying the entire data set directly. We propose a method to detect the information path and employ it in short text classification. Different from the state-of-art methods, our method does not require any external knowledge or corpus that usually need careful fine-tuning, which makes our method easier and more robust on different data sets. Experiments on two real world data sets show the effectiveness of the proposed method and its superiority over the existing methods.",
"title": ""
},
{
"docid": "40a96dfd399c27ca8b2966693732b975",
"text": "Graph matching problems of varying types are important in a wide array of application areas. A graph matching problem is a problem involving some form of comparison between graphs. Some of the many application areas of such problems include information retrieval, sub-circuit identification, chemical structure classification, and networks. Problems of efficient graph matching arise in any field that may be modeled with graphs. For example, any problem that can be modeled with binary relations between entities in the domain is such a problem. The individual entities in the problem domain become nodes in the graph. And each binary relation becomes an edge between the appropriate nodes. Although it is possible to formulate such a large array of problems as graph matching problems, it is not necessarily a good idea to do so. Graph matching is a very difficult problem. The graph isomorphism problem is to determine if there exists a one-to-one mapping from the nodes of one graph to the nodes of a second graph that preserves adjacency. Similarly, the subgraph isomorphism problem is to determine if there exists a one-to-one mapping from the",
"title": ""
},
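Exact graph and subgraph isomorphism checks of the kind discussed above are available off the shelf; a short NetworkX example (illustrative graphs only) is shown below.

```python
import networkx as nx
from networkx.algorithms import isomorphism

pattern = nx.cycle_graph(3)    # the subgraph we are looking for (a triangle)
target = nx.complete_graph(5)  # the larger graph

gm = isomorphism.GraphMatcher(target, pattern)
print(gm.subgraph_is_isomorphic())                    # True: K5 contains a triangle
print(nx.is_isomorphic(pattern, nx.cycle_graph(3)))   # plain graph isomorphism: True
```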
{
"docid": "3a04b4e81d4d8c82c538980649ffa09e",
"text": "We present parallel algorithms to accelerate collision queries for sample-based motion planning. Our approach is designed for current many-core GPUs and exploits data-parallelism and multithreaded capabilities. In order to take advantage of high numbers of cores, we present a clustering scheme and collision-packet traversal to perform efficient collision queries on multiple configurations simultaneously. Furthermore, we present a hierarchical traversal scheme that performs workload balancing for high parallel efficiency. We have implemented our algorithms on commodity NVIDIA GPUs using CUDA and can perform 500, 000 collision queries per second on our benchmarks, which is 10X faster than prior GPU-based techniques. Moreover, we can compute collisionfree paths for rigid and articulated models in less than 100 milliseconds for many benchmarks, almost 50-100X faster than current CPU-based PRM planners.",
"title": ""
},
{
"docid": "a4e1f420dfc3b1b30a58ec3e60288761",
"text": "Despite recent advances in uncovering the quantitative features of stationary human activity patterns, many applications, from pandemic prediction to emergency response, require an understanding of how these patterns change when the population encounters unfamiliar conditions. To explore societal response to external perturbations we identified real-time changes in communication and mobility patterns in the vicinity of eight emergencies, such as bomb attacks and earthquakes, comparing these with eight non-emergencies, like concerts and sporting events. We find that communication spikes accompanying emergencies are both spatially and temporally localized, but information about emergencies spreads globally, resulting in communication avalanches that engage in a significant manner the social network of eyewitnesses. These results offer a quantitative view of behavioral changes in human activity under extreme conditions, with potential long-term impact on emergency detection and response.",
"title": ""
},
{
"docid": "5de19873c4bd67cdcc57d879d923dc10",
"text": "BACKGROUND AND PURPOSE\nNeuromyelitis optica (NMO) or Devic's disease is a rare inflammatory and demyelinating autoimmune disorder of the central nervous system (CNS) characterized by recurrent attacks of optic neuritis (ON) and longitudinally extensive transverse myelitis (LETM), which is distinct from multiple sclerosis (MS). The guidelines are designed to provide guidance for best clinical practice based on the current state of clinical and scientific knowledge.\n\n\nSEARCH STRATEGY\nEvidence for this guideline was collected by searches for original articles, case reports and meta-analyses in the MEDLINE and Cochrane databases. In addition, clinical practice guidelines of professional neurological and rheumatological organizations were studied.\n\n\nRESULTS\nDifferent diagnostic criteria for NMO diagnosis [Wingerchuk et al. Revised NMO criteria, 2006 and Miller et al. National Multiple Sclerosis Society (NMSS) task force criteria, 2008] and features potentially indicative of NMO facilitate the diagnosis. In addition, guidance for the work-up and diagnosis of spatially limited NMO spectrum disorders is provided by the task force. Due to lack of studies fulfilling requirement for the highest levels of evidence, the task force suggests concepts for treatment of acute exacerbations and attack prevention based on expert opinion.\n\n\nCONCLUSIONS\nStudies on diagnosis and management of NMO fulfilling requirements for the highest levels of evidence (class I-III rating) are limited, and diagnostic and therapeutic concepts based on expert opinion and consensus of the task force members were assembled for this guideline.",
"title": ""
},
{
"docid": "12915285ce8f1dd1f902562fd8c7500d",
"text": "Expanding view of minimal invasive surgery horizon reveals new practice areas for surgeons and patients. Laparoscopic inguinal hernia repair is an example in progress wondered by many patients and surgeons. Advantages in laparoscopic repair motivate surgeons to discover this popular field. In addition, patients search the most convenient surgical method for themselves today. Laparoscopic approaches to inguinal hernia surgery have become popular as a result of the development of experience about different laparoscopic interventions, and these techniques are increasingly used these days. As other laparoscopic surgical methods, experience is the most important point in order to obtain good results. This chapter aims to show technical details, pitfalls and the literature results about two methods that are commonly used in laparoscopic inguinal hernia repair.",
"title": ""
},
{
"docid": "72e1a2bf37495439a12a53f4b842c218",
"text": "A new transmission model of human malaria in a partially immune population with three discrete delays is formulated for variable host and vector populations. These are latent period in the host population, latent period in the vector population and duration of partial immunity. The results of our mathematical analysis indicate that a threshold parameterR0 exists. ForR0 > 1, the expected number of mosquitoes infected from humansRhm should be greater than a certain critical valueR∗hm or should be less thanR∗hm whenR ∗ hm > 1, for a stable endemic equilibrium to exist. We deduce from model analysis that an increase in the period within which partial immunity is lost increases the spread of the disease. Numerically we deduce that treatment of the partially immune humans assists in reducing the severity of the disease and that transmission blocking vaccines would be effective in a partially immune population. Numerical simulations support our analytical conclusions and illustrate possible behaviour scenarios of the model. c © 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2bd6dab3aa836728f606732652e4a46d",
"text": "A method called the eigensystem realization algorithm is developed for modal parameter identification and model reduction of dynamic systems from test data. A new approach is introduced in conjunction with the singular-value decomposition technique to derive the basic formulation of minimum order realization which is an extended version of the Ho-Kalman algorithm. The basic formulation is then transformed into modal space for modal parameter identification. Two accuracy indicators are developed to quantitatively identify the system and noise modes. For illustration of the algorithm, an example is shown using experimental data from the Galileo spacecraft.",
"title": ""
},
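The core of the eigensystem realization algorithm in the record above is a Hankel-matrix SVD. A compact sketch is given below; it returns a discrete-time realization from impulse-response (Markov) blocks and omits the modal transformation and the accuracy indicators discussed in the abstract.

```python
import numpy as np

def era(markov, order, rows=10, cols=10):
    """Eigensystem Realization Algorithm sketch.

    markov: list of impulse-response blocks Y1, Y2, ..., each of shape (p, m);
    at least rows + cols blocks are required. Returns a discrete-time (A, B, C)
    realization of the requested model order.
    """
    p, m = markov[0].shape
    H0 = np.block([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.block([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    Un, Vn = U[:, :order], Vt[:order, :].T
    S_half = np.diag(np.sqrt(s[:order]))
    S_half_inv = np.diag(1.0 / np.sqrt(s[:order]))
    A = S_half_inv @ Un.T @ H1 @ Vn @ S_half_inv
    B = (S_half @ Vn.T)[:, :m]
    C = (Un @ S_half)[:p, :]
    return A, B, C
```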
{
"docid": "a725138a18728b8499cdb006328a44d0",
"text": "This paper presents a wideband directional bridge with a range of operating frequencies from 300 kHz to 13.5 GHz. The original topology of the directional bridge was designed, using the multilayer printed circuit board (PCB) technology, with the top layer of the laminated microwave dielectric Rogers RO4350, as resistive elements surface mounted (SMD) components are used. The circuit is designed for a nominal value of 16 dB coupling and an insertion loss of 1.6 dB.",
"title": ""
},
{
"docid": "cf26ade7932ba0c5deb01e4b3d2463bb",
"text": "Researchers are often confused about what can be inferred from significance tests. One problem occurs when people apply Bayesian intuitions to significance testing-two approaches that must be firmly separated. This article presents some common situations in which the approaches come to different conclusions; you can see where your intuitions initially lie. The situations include multiple testing, deciding when to stop running participants, and when a theory was thought of relative to finding out results. The interpretation of nonsignificant results has also been persistently problematic in a way that Bayesian inference can clarify. The Bayesian and orthodox approaches are placed in the context of different notions of rationality, and I accuse myself and others as having been irrational in the way we have been using statistics on a key notion of rationality. The reader is shown how to apply Bayesian inference in practice, using free online software, to allow more coherent inferences from data.",
"title": ""
},
{
"docid": "2f0da2f7461043476d5ba82ae9cf77bf",
"text": "Recently two emerging areas of research, attosecond and nanoscale physics, have started to come together. Attosecond physics deals with phenomena occurring when ultrashort laser pulses, with duration on the femto- and sub-femtosecond time scales, interact with atoms, molecules or solids. The laser-induced electron dynamics occurs natively on a timescale down to a few hundred or even tens of attoseconds (1 attosecond = 1 as = 10-18 s), which is comparable with the optical field. For comparison, the revolution of an electron on a 1s orbital of a hydrogen atom is ∼152 as. On the other hand, the second branch involves the manipulation and engineering of mesoscopic systems, such as solids, metals and dielectrics, with nanometric precision. Although nano-engineering is a vast and well-established research field on its own, the merger with intense laser physics is relatively recent. In this report on progress we present a comprehensive experimental and theoretical overview of physics that takes place when short and intense laser pulses interact with nanosystems, such as metallic and dielectric nanostructures. In particular we elucidate how the spatially inhomogeneous laser induced fields at a nanometer scale modify the laser-driven electron dynamics. Consequently, this has important impact on pivotal processes such as above-threshold ionization and high-order harmonic generation. The deep understanding of the coupled dynamics between these spatially inhomogeneous fields and matter configures a promising way to new avenues of research and applications. Thanks to the maturity that attosecond physics has reached, together with the tremendous advance in material engineering and manipulation techniques, the age of atto-nanophysics has begun, but it is in the initial stage. We present thus some of the open questions, challenges and prospects for experimental confirmation of theoretical predictions, as well as experiments aimed at characterizing the induced fields and the unique electron dynamics initiated by them with high temporal and spatial resolution.",
"title": ""
},
{
"docid": "882e5a7255be52e39c921f03e282cc8a",
"text": "Introduction Building robust low-level image representations, beyond edge primitives, is a long-standing goal in vision. In its most basic form, an image is a matrix of intensities. How we should progress from this matrix to stable mid-level representations, useful for high-level vision tasks, remains unclear. Popular feature representations such as SIFT or HOG spatially pool edge information to form descriptors that are invariant to local transformations. However, in doing so important cues such as edge intersections, grouping, parallelism and symmetry are lost. (a)",
"title": ""
},
{
"docid": "74ef26e332b12329d8d83f80169de5c0",
"text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria. Sales transaction data from a fullyautomated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.",
"title": ""
},
{
"docid": "2a61df18f9d3340d47073cda41da5822",
"text": "Link prediction is one of the fundamental problems in network analysis. In many applications, notably in genetics, a partially observed network may not contain any negative examples of absent edges, which creates a difficulty for many existing supervised learning approaches. We develop a new method which treats the observed network as a sample of the true network with different sampling rates for positive and negative examples. We obtain a relative ranking of potential links by their probabilities, utilizing information on node covariates as well as on network topology. Empirically, the method performs well under many settings, including when the observed network is sparse. We apply the method to a protein-protein interaction network and a school friendship network.",
"title": ""
},
{
"docid": "46d8cb4cb4db93ca54d4df5427a198e2",
"text": "Recent advances in machine learning are paving the way for the artificial generation of high quality images and videos. In this paper, we investigate how generating synthetic samples through generative models can lead to information leakage, and, consequently, to privacy breaches affecting individuals’ privacy that contribute their personal or sensitive data to train these models. In order to quantitatively measure privacy leakage, we train a Generative Adversarial Network (GAN), which combines a discriminative model and a generative model, to detect overfitting by relying on the discriminator capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, and show how to improve it through auxiliary knowledge of samples in the dataset. We test our attacks on several state-of-the-art models such as Deep Convolutional GAN (DCGAN), Boundary Equilibrium GAN (BEGAN), and the combination of DCGAN with a Variational Autoencoder (DCGAN+VAE), using datasets consisting of complex representations of faces (LFW) and objects (CIFAR-10). Our white-box attacks are 100% successful at inferring which samples were used to train the target model, while the best black-box attacks can infer training set membership with over 60% accuracy.",
"title": ""
},
{
"docid": "e0ae7f96ef81726777e974e140e4bac7",
"text": "Conjoined twins are a rare complication of 9 monozygotic twins and are associated with high perinatal mortality. Pygopagus are one of the rare types of conjoined twins with only a handful of cases reported in the literature. We present the case of one-and-half month-old male pygopagus conjoined twins, who were joined together dorsally in lower lumbar and sacral region and had spina bifida and shared a single thecal sac with combined weight of 6.14 kg. Spinal cord was separated at the level of the conus followed by duraplasty. They had uneventful recovery with normal 15 months follow-up. Separation of conjoined twins is recommended in where this is feasible with the anticipated survival of both or one infant.",
"title": ""
},
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
},
{
"docid": "6cc99565a0e9081a94e82be93a67482e",
"text": "The existing shortage of therapists and caregivers assisting physically disabled individuals at home is expected to increase and become serious problem in the near future. The patient population needing physical rehabilitation of the upper extremity is also constantly increasing. Robotic devices have the potential to address this problem as noted by the results of recent research studies. However, the availability of these devices in clinical settings is limited, leaving plenty of room for improvement. The purpose of this paper is to document a review of robotic devices for upper limb rehabilitation including those in developing phase in order to provide a comprehensive reference about existing solutions and facilitate the development of new and improved devices. In particular the following issues are discussed: application field, target group, type of assistance, mechanical design, control strategy and clinical evaluation. This paper also includes a comprehensive, tabulated comparison of technical solutions implemented in various systems.",
"title": ""
},
{
"docid": "7b591a91c87770e842b113a3aced6a3f",
"text": "Deep neural networks have achieved increasingly accurate results on a wide variety of complex tasks. However, much of this improvement is due to the growing use and availability of computational resources (e.g use of GPUs, more layers, more parameters, etc). Most state-of-the-art deep networks, despite performing well, over-parameterize approximate functions and take a significant amount of time to train. With increased focus on deploying deep neural networks on resource constrained devices like smart phones, there has been a push to evaluate why these models are so resource hungry and how they can be made more efficient. This work evaluates and compares three distinct methods for deep model compression and acceleration: weight pruning, low rank factorization, and knowledge distillation. Comparisons on VGG nets trained on CIFAR10 show that each of the models on their own are effective, but that the true power lies in combining them. We show that by combining pruning and knowledge distillation methods we can create a compressed network 85 times smaller than the original, all while retaining 96% of the original model's accuracy.",
"title": ""
},
{
"docid": "e7020ef81b6662a3acbb41223abb34e9",
"text": "The ability to attribute mental states to others ('theory of mind') pervades normal social interaction and is impaired in autistic individuals. In a previous positron emission tomography scan study of normal volunteers, performing a 'theory of mind' task was associated with activity in left medial prefrontal cortex. We used the same paradigm in five patients with Asperger syndrome, a mild variant of autism with normal intellectual functioning. No task-related activity was found in this region, but normal activity was observed in immediately adjacent areas. This result suggests that a highly circumscribed region of left medial prefrontal cortex is a crucial component of the brain system that underlies the normal understanding of other minds.",
"title": ""
}
] |
scidocsrr
|
9f9d76929f6c28eaf9e3d2b5fd41c888
|
Viargo - A generic virtual reality interaction library
|
[
{
"docid": "8745e21073db143341e376bad1f0afd7",
"text": "The Virtual Reality (VR) user interface style allows natural hand and body motions to manipulate virtual objects in 3D environments using one or more 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, such as scienti c visualization, architectural visualization, and remote manipulation. Currently, the programming e ort required to produce a VR application is too large, and many pitfalls must be avoided in the creation of successful VR programs. In this paper we describe the Decoupled Simulation Model for creating successful VR applications, and a software system that embodies this model. The MR Toolkit simpli es the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. The MR Toolkit encourages programmers to structure their applications to take advantage of the distributed computing capabilities of workstation networks improving the application's performance. In this paper, the motivations and the architecture of the toolkit are outlined, the programmer's view is described, and a simple application is brie y described. CR",
"title": ""
},
{
"docid": "26fc90569b9933554a4f80afa5e876a6",
"text": "Traditionally 3D interaction techniques (3DITs) are implemented in VR applications in a proprietary way on specific target platforms. Mixing 3DIT specific code with application code neither allows for reusability in other applications nor for exchanging 3DITs in a comfortable and flexible way. We propose an additional system software layer called Virtual Environment Interaction Technique Abstraction Layer (VITAL) targeted on platform and application independent (portable) 3DIT implementation. We describe the underlying concepts and provide details on how to integrate VITAL in VR frameworks. Furthermore, development mechanisms targeted on portability and general-purpose interfacing techniques with other system components are outlined and demonstrated in examples.",
"title": ""
},
{
"docid": "3169b4d2f00826d6991af532d6798223",
"text": "We present Avocado, our object-oriented framework for the development of distributed, interactive VE applications. Data distribution is achieved by transparent replication of a shared scene graph among the participating processes of a distributed application. A sophisticated group communication system is used to guarantee state consistency even in the presence of late joining and leaving processes. We also describe how the familiar dataflow graph found in modern stand-alone 3D-application toolkits extends nicely to the distributed case.",
"title": ""
}
] |
[
{
"docid": "946e5205a93f71e0cfadf58df186ef7e",
"text": "Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "40ad6bf9f233b58e13cf6a709daba2ca",
"text": "While syntactic dependency annotations concentrate on the surface or functional structure of a sentence, semantic dependency annotations aim to capture betweenword relationships that are more closely related to the meaning of a sentence, using graph-structured representations. We extend the LSTM-based syntactic parser of Dozat and Manning (2017) to train on and generate these graph structures. The resulting system on its own achieves stateof-the-art performance, beating the previous, substantially more complex stateof-the-art system by 1.9% labeled F1. Adding linguistically richer input representations pushes the margin even higher, allowing us to beat it by 2.6% labeled F1.",
"title": ""
},
{
"docid": "f575b371d01ad0af38ca83d4adde1eb5",
"text": "Multiple-antenna systems, also known as multiple-input multiple-output radio, can improve the capacity and reliability of radio communication. However, the multiple RF chains associated with multiple antennas are costly in terms of size, power, and hardware. Antenna selection is a low-cost low-complexity alternative to capture many of the advantages of MIMO systems. This article reviews classic results on selection diversity, followed by a discussion of antenna selection algorithms at the transmit and receive sides. Extensions of classical results to antenna subset selection are presented. Finally, several open problems in this area are pointed out.",
"title": ""
},
{
"docid": "4f8822deb045eec9e8fca676353f1d1d",
"text": "Data mining plays an important role in the business world and it helps to the educational institution to predict and make decisions related to the students' academic status. With a higher education, now a days dropping out of students' has been increasing, it affects not only the students' career but also on the reputation of the institute. The existing system is a system which maintains the student information in the form of numerical values and it just stores and retrieve the information what it contains. So the system has no intelligence to analyze the data. The proposed system is a web based application which makes use of the Naive Bayesian mining technique for the extraction of useful information. The experiment is conducted on 700 students' with 19 attributes in Amrita Vishwa Vidyapeetham, Mysuru. Result proves that Naive Bayesian algorithm provides more accuracy over other methods like Regression, Decision Tree, Neural networks etc., for comparison and prediction. The system aims at increasing the success graph of students using Naive Bayesian and the system which maintains all student admission details, course details, subject details, student marks details, attendance details, etc. It takes student's academic history as input and gives students' upcoming performances on the basis of semester.",
"title": ""
},
{
"docid": "5e0d65ae26f6462c2f49af9188274c9d",
"text": "BACKGROUND\nThis study examined psychiatric comorbidity in adolescents with a gender identity disorder (GID). We focused on its relation to gender, type of GID diagnosis and eligibility for medical interventions (puberty suppression and cross-sex hormones).\n\n\nMETHODS\nTo ascertain DSM-IV diagnoses, the Diagnostic Interview Schedule for Children (DISC) was administered to parents of 105 gender dysphoric adolescents.\n\n\nRESULTS\n67.6% had no concurrent psychiatric disorder. Anxiety disorders occurred in 21%, mood disorders in 12.4% and disruptive disorders in 11.4% of the adolescents. Compared with natal females (n = 52), natal males (n = 53) suffered more often from two or more comorbid diagnoses (22.6% vs. 7.7%, p = .03), mood disorders (20.8% vs. 3.8%, p = .008) and social anxiety disorder (15.1% vs. 3.8%, p = .049). Adolescents with GID considered to be 'delayed eligible' for medical treatment were older [15.6 years (SD = 1.6) vs. 14.1 years (SD = 2.2), p = .001], their intelligence was lower [91.6 (SD = 12.4) vs. 99.1 (SD = 12.8), p = .011] and a lower percentage was living with both parents (23% vs. 64%, p < .001). Although the two groups did not differ in the prevalence of psychiatric comorbidity, the respective odds ratios ('delayed eligible' adolescents vs. 'immediately eligible' adolescents) were >1.0 for all psychiatric diagnoses except specific phobia.\n\n\nCONCLUSIONS\nDespite the suffering resulting from the incongruence between experienced and assigned gender at the start of puberty, the majority of gender dysphoric adolescents do not have co-occurring psychiatric problems. Delayed eligibility for medical interventions is associated with psychiatric comorbidity although other factors are of importance as well.",
"title": ""
},
{
"docid": "03d5e6a0afe9b5bab63e1113c892198e",
"text": "We present in this paper models and statistical methods for performing multiple trait analysis on mapping quantitative trait loci (QTL) based on the composite interval mapping method. By taking into account the correlated structure of multiple traits, this joint analysis has several advantages, compared with separate analyses, for mapping QTL, including the expected improvement on the statistical power of the test for QTL and on the precision of parameter estimation. Also this joint analysis provides formal procedures to test a number of biologically interesting hypotheses concerning the nature of genetic correlations between different traits. Among the testing procedures considered are those for joint mapping, pleiotropy, QTL by environment interaction, and pleiotropy vs. close linkage. The test of pleiotropy (one pleiotropic QTL at a genome position) vs. close linkage (multiple nearby nonpleiotropic QTL) can have important implications for our understanding of the nature of genetic correlations between different traits in certain regions of a genome and also for practical applications in animal and plant breeding because one of the major goals in breeding is to break unfavorable linkage. Results of extensive simulation studies are presented to illustrate various properties of the analyses.",
"title": ""
},
{
"docid": "380fdee23bebf16b05ce7caebd6edac4",
"text": "Automatic detection of emotions has been evaluated using standard Mel-frequency Cepstral Coefficients, MFCCs, and a variant, MFCC-low, calculated between 20 and 300 Hz, in order to model pitch. Also plain pitch features have been used. These acoustic features have all been modeled by Gaussian mixture models, GMMs, on the frame level. The method has been tested on two different corpora and languages; Swedish voice controlled telephone services and English meetings. The results indicate that using GMMs on the frame level is a feasible technique for emotion classification. The two MFCC methods have similar performance, and MFCC-low outperforms the pitch features. Combining the three classifiers significantly improves performance.",
"title": ""
},
{
"docid": "a9d948498c0ad0d99759636ea3ba4d1a",
"text": "Recently, Real Time Location Systems (RTLS) have been designed to provide location information of positioning target. The kernel of RTLS is localization algorithm, range-base localization algorithm is concerned as high precision. This paper introduces real-time range-based indoor localization algorithms, including Time of Arrival, Time Difference of Arrival, Received Signal Strength Indication, Time of Flight, and Symmetrical Double Sided Two Way Ranging. Evaluation criteria are proposed for assessing these algorithms, namely positioning accuracy, scale, cost, energy efficiency, and security. We also introduce the latest some solution, compare their Strengths and weaknesses. Finally, we give a recommendation about selecting algorithm from the viewpoint of the practical application need.",
"title": ""
},
{
"docid": "b7bfebcf77d9486473b9fcd1f4b91e63",
"text": "One of the most widespread applications of the Global Positioning System (GPS) is vehicular navigation. Improving the navigation accuracy continues to be a focus of research, commonly answered by the use of additional sensors. A sensor commonly fused with GPS is the inertial measurement unit (IMU). Due to the fact that the requirements of commercial systems are low cost, small size, and power conservative, micro-electro mechanical sensors (MEMS) IMUs are used. They provide navigation capability even in the absence of GPS signals or in the presence of high multipath or jamming. This paper addresses a centralized filter construction whereby navigation solutions from multiple IMUs are fused together to improve accuracy in GPS degraded areas. The proposed filter is a collection of several single IMU block filters. Each block filter is a 21 state IMU filter. Because each block filter estimates position, velocity and attitude, the system can utilize relative updates between the IMUs. These relative updates provide a method of reducing the position drift in the absence of GPS observations. The proposed filter’s performance is analyzed as a function of the number of IMUs used and relative update type, using a data set consisting of GPS outages, urban canyons and residential open sky conditions. While the use of additional IMUs (including a single IMU) provides negligible improvement in open sky conditions (where GPS alone is sufficient), the use of two, three, four and five IMUs provided a horizontal position improvement of 25 %, 29 %, 32 %, and 34 %, respectively, when GPS observations are removed for 30 seconds. Similarly, the velocity RMS improved by 25 %, 31%, 33%, and 34% for two, three, four and five IMUs, respectively. Attitude estimation also improves significantly ranging from 30 % – 76 %. Results also indicate that the use of more IMUs provides the system with better multipath rejection and performance in urban canyons.",
"title": ""
},
{
"docid": "1bb2ed5e6199b02b0eb320ba3eccf012",
"text": "The paper considers general machine learning models, where knowledge transfer is positioned as the main method to improve their convergence properties. Previous research was focused on mechanisms of knowledge transfer in the context of SVM framework; the paper shows that this mechanism is applicable to neural network framework as well. The paper describes several general approaches for knowledge transfer in both SVM and ANN frameworks and illustrates algorithmic implementations and performance of one of these approaches for several synthetic examples.",
"title": ""
},
{
"docid": "358da92f854c9aee818be6dd336f594d",
"text": "The presence of antipatterns can have a negative impact on the quality of a program. Consequently, their efficient detection has drawn the attention of both researchers and practitioners. However, most aspects of antipatterns are loosely specified because quality assessment is ultimately a human-centric process that requires contextual data. Consequently, there is always a degree of uncertainty on whether a class in a program is an antipattern or not. None of the existing automatic detection approaches handle the inherent uncertainty of the detection process. First, we present BDTEX (Bayesian Detection Expert), a Goal Question Metric (GQM) based approach to build Bayesian Belief Networks (BBNs) from the definitions ntipatterns etection of antipatterns. We discuss the advantages of BBNs over rule-based models and illustrate BDTEX on the Blob antipattern. Second, we validate BDTEX with three antipatterns: Blob, Functional Decomposition, and Spaghetti code, and two open-source programs: GanttProject v1.10.2 and Xerces v2.7.0. We also compare the results of BDTEX with those of another approach, DECOR, in terms of precision, recall, and utility. Finally, we also show the applicability of our approach in an industrial context using Eclipse JDT and JHotDraw and introduce a novel classification of antipatterns depending on the effort needed to map atic d their definitions to autom . Context and problem Software quality is important because of the complexity and ervasiveness of software systems. Moreover, the current trend n outsourcing development and maintenance requires means o measure quality with great details. Object-oriented quality is dversely impacted by antipatterns (Brown et al., 1998); their early etection and correction would ease development and mainteance. Antipatterns are “poor” solutions to recurring implementation nd design problems that impede the maintenance and evolution f programs. They are described using a template which describe heir general forms, their symptoms, their consequences, and some efactored solutions. The symptoms are often code smells (Fowler, 999). Even though a class in a program can present all symptoms f a given antipattern, it is not necessarily an antipattern. Moreover, hen discussing antipatterns, we do not exclude that, in a particPlease cite this article in press as: Khomh, F., et al., BDTEX: A GQM-b Software (2011), doi:10.1016/j.jss.2010.11.921 lar context, an antipattern could be the best way to implement or esign a (part of a) program. For example, automatically generated arsers present many symptoms of Spaghetti Code, i.e., very large lasses with very long and complex methods. Only a quality analyst ∗ Corresponding author at: Department of Electrical and Computer Engineering, ueen’s University, Canada. Tel.: +1 613 533 6000x75542. E-mail address: [email protected] (F. Khomh). 164-1212/$ – see front matter © 2010 Elsevier Inc. All rights reserved. oi:10.1016/j.jss.2010.11.921 etection approaches. © 2010 Elsevier Inc. All rights reserved. can evaluate the impact of antipatterns on their program in their context. All aspects of an antipattern are loosely specified because quality assessment is ultimately a human-centric process that requires contextual data. Consequently, there is always a degree of uncertainty on whether a class in a program is an antipattern or not. Therefore, detection results should be reported with the degree of uncertainty of the detection process. 
This uncertainty accounts for the loose definitions and the similarity of classes with the antipattern. There exist many approaches to specify and detect antipatterns. Some of these approaches are manual (Travassos et al., 1999), others are based on rules (Marinescu, 2004). Manual detection approaches avoid the problem of uncertainty, but do not scale up to the inspection of large systems. To the best of our knowledge, none of the existing automatic approaches provide a way to deal with the uncertainty of the detection. They provide quality analysts with an unsorted set of candidates classes with no indication of which one(s) should be inspected first for confirmation and correction. This paper builds on our previous work (Khomh et al., 2009) in which we illustrated the use of a Bayesian Belief Network (BBN) to specify the Blob antipattern and to detect its occurrences in proased Bayesian approach for the detection of antipatterns. J. Syst. grams. A Blob, also called God class (Riel, 1996), is a class that centralises functionality and has too many responsibilities. Brown et al. (1998) characterise its structure as a large controller class that depends on data stored in several surrounding data classes. Table 1 ARTICLE IN PRESS G Model JSS-8619; No. of Pages 14 2 F. Khomh et al. / The Journal of Systems and Software xxx (2011) xxx–xxx Table 1 List of detected antipatterns. The Blob (called also God class (Riel, 1996)) corresponds to a large controller class that depends on data stored in surrounded data classes. A large class declares many fields and methods with a low cohesion. A controller class monopolises most of the processing done by a system, takes most of the decisions, and closely directs the processing of other classes. We identify controller classes using suspicious names such as ‘Process’, ‘Control’, ‘Manage’, ‘System’, and so on. A data class contains only data and performs no processing on these data. It is composed of highly cohesive fields and accessors The Functional Decomposition antipattern may occur if experienced procedural developers with little knowledge of object-orientation implement an object-oriented system. Brown describes this antipattern as “a ‘main’ routine that calls numerous subroutines”. The Functional Decomposition design defect consists of a main class, i.e., a class with a procedural name, such as ‘Compute’ or ‘Display’, in which inheritance and polymorphism are scarcely used, that is associated with small classes, which declare many private fields and implement only few methods",
"title": ""
},
{
"docid": "c84ef3f7dfa5e3219a6c1c2f98109651",
"text": "We present JetStream, a system that allows real-time analysis of large, widely-distributed changing data sets. Traditional approaches to distributed analytics require users to specify in advance which data is to be backhauled to a central location for analysis. This is a poor match for domains where available bandwidth is scarce and it is infeasible to collect all potentially useful data. JetStream addresses bandwidth limits in two ways, both of which are explicit in the programming model. The system incorporates structured storage in the form of OLAP data cubes, so data can be stored for analysis near where it is generated. Using cubes, queries can aggregate data in ways and locations of their choosing. The system also includes adaptive filtering and other transformations that adjusts data quality to match available bandwidth. Many bandwidth-saving transformations are possible; we discuss which are appropriate for which data and how they can best be combined. We implemented a range of analytic queries on web request logs and image data. Queries could be expressed in a few lines of code. Using structured storage on source nodes conserved network bandwidth by allowing data to be collected only when needed to fulfill queries. Our adaptive control mechanisms are responsive enough to keep end-to-end latency within a few seconds, even when available bandwidth drops by a factor of two, and are flexible enough to express practical policies.",
"title": ""
},
{
"docid": "b226b612db064f720e32e5a7fd9d9dec",
"text": "Clustering is a fundamental technique widely used for exploring the inherent data structure in pattern recognition and machine learning. Most of the existing methods focus on modeling the similarity/dissimilarity relationship among instances, such as k-means and spectral clustering, and ignore to extract more effective representation for clustering. In this paper, we propose a deep embedding network for representation learning, which is more beneficial for clustering by considering two constraints on learned representations. We first utilize a deep auto encoder to learn the reduced representations from the raw data. To make the learned representations suitable for clustering, we first impose a locality-persevering constraint on the learned representations, which aims to embed original data into its underlying manifold space. Then, different from spectral clustering which extracts representations from the block diagonal similarity matrix, we apply a group sparsity constraint for the learned representations, and aim to learn block diagonal representations in which the nonzero groups correspond to its cluster. After obtaining the learned representations, we use k-means to cluster them. To evaluate the proposed deep embedding network, we compare its performance with k-means and spectral clustering on three commonly-used datasets. The experiments demonstrate that the proposed method achieves promising performance.",
"title": ""
},
{
"docid": "a41c9650da7ca29a51d310cb4a3c814d",
"text": "The analysis of resonant-type antennas based on the fundamental infinite wavelength supported by certain periodic structures is presented. Since the phase shift is zero for a unit-cell that supports an infinite wavelength, the physical size of the antenna can be arbitrary; the antenna's size is independent of the resonance phenomenon. The antenna's operational frequency depends only on its unit-cell and the antenna's physical size depends on the number of unit-cells. In particular, the unit-cell is based on the composite right/left-handed (CRLH) metamaterial transmission line (TL). It is shown that the CRLH TL is a general model for the required unit-cell, which includes a nonessential series capacitance for the generation of an infinite wavelength. The analysis and design of the required unit-cell is discussed based upon field distributions and dispersion diagrams. It is also shown that the supported infinite wavelength can be used to generate a monopolar radiation pattern. Infinite wavelength resonant antennas are realized with different number of unit-cells to demonstrate the infinite wavelength resonance",
"title": ""
},
{
"docid": "7544270b630f600656411ffd51605db9",
"text": ". INTRODUCTION Of interest in both acoustical re-search and electronic music is the synthesis of natural sound. For the researcher, it is the ultimate test of acoustical theory, while for the composer of electronic music it is an extraordinarily rich point of departure in the domain of timbre, or tone quality. The synthesis of natural sounds has been elusive; however, recent research in computer analysis and synthesis of some tones of musical instruments has yielded an insight which may prove to have general relevance in all natural sounds: the character of the temporal evolution of the spectral components is of critical importance in the determination of timbre. In natural sounds the amplitudes of the frequency components of the spectrum are time-variant, or dynamic. The energy of the components often evolves in complicated ways, in particular, during the attack and decay portions of the sound. The temporal evolution of the spectrum is in some cases easily followed as with bells, whereas in other cases not, because the evolution occurs in a very short time period, but it is nevertheless perceived and is an important cue in the recognition of timbre. Many natural sounds seem to have characteristic spectral evolutions that, in addition to providing their \"signature,\" are largely responsible for what we judge to be their lively quality. In contrast, it is largely the fixed proportion spectrum of most synthesized sounds that so readily imparts to the listener the electronic cue and lifeless quality. The special application of the equation for frequency modulation, described below, allows the production of complex spectra with very great simplicity. The fact that the temporal evolution of the frequency components of the spectrum can be easily controlled is perhaps the most striking attribute of the technique, for dynamic spectra are achieved only with considerable difficulty using current techniques of synthesis. At the end of this paper some simulations of brass, woodwind, and percussive sounds are given. The importance of these simulations is as much in their elegance and simplicity as it is in their accuracy. This frequency modulation technique, although not a physical model for natural sound, is shown to be a very powerful perceptual model for at least some. FREQUENCY MODULATION Frequency modulation (FM) is well understood as applied in radio transmission, but the relevant equations have not been applied in any significant way to the generation of audio spectra where both the carrier and the modulating frequencies are in the audio band and the side frequencies form the spectrum directly. In FM, the instantaneous frequency of a carrier wave is varied according to a modulating wave, such that the rate at which the carrier varies is the frequency of the modulating wave, or modulating frequency. The amount the carrier varies around its average, or peak frequency deviation, is proportional to the amplitude of the modulating wave. The parameters of a frequency-modulated signal are",
"title": ""
},
{
"docid": "18da4e2cd0745e400002d24117834fd8",
"text": "This paper examines the possible influence of podcasting on the traditional lecture in higher education. Firstly, it explores some of the benefits and limitations of the lecture as one of the dominant forms of teaching in higher education. The review then moves to explore the emergence of podcasting in education and the purpose of its use, before examining recent relevant literature about podcasting for supporting, enhancing, and indeed replacing the traditional lecture. The review identifies three broad types of use of podcasting: substitutional, supplementary and creative use. Podcasting appears to be most commonly used to provide recordings of past lectures to students for the purposes of review and revision (substitutional use). The second most common use was in providing additional material, often in the form of study guides and summary notes, to broaden and deepen students’ understanding (supplementary use). The third and least common use reported in the literature involved the creation of student generated podcasts (creative use). The review examines three key questions: What are the educational uses of podcasting in teaching and learning in higher education? Can podcasting facilitate more flexible and mobile learning? In what ways will podcasting influence the traditional lecture? These questions are discussed in the final section of the paper, with reference to future policies and practices.",
"title": ""
},
{
"docid": "89f8c52164291f548f4e36e77deacb99",
"text": "The physicochemical (pH, texture, Vitamin C, ash, fat, minerals) and sensory properties of banana were correlated with the genotype and growing conditions. Minerals in particular were shown to discriminate banana cultivars of different geographical origin quite accurately. Another issue relates to the beneficial properties of bananas both in terms of the high dietary fiber and antioxidant compounds, the latter being abundant in the peel. Therefore, banana can be further exploited for extracting several important components such as starch, and antioxidant compounds which can find industrial and pharmaceutical applications. Finally, the various storage methodologies were presented with an emphasis on Modified Atmosphere Packaging which appears to be one of the most promising of technologies.",
"title": ""
},
{
"docid": "1d45b9f29ceabacf15662bf2e59a197f",
"text": "Knee motion is believed to occur about a variable flexion-extension (FE) axis perpendicular to the sagittal plane and a longitudinal rotation (LR) axis. The authors used a mechanical device to locate the FE and the LR axes of six fresh anatomic specimen knees. The motion of points on the LR axis produced circular, planar paths about the fixed FE axis. Magnetic resonance (MR) images in planes perpendicular to the FE axis showed a circular profile for the femoral condyles. The FE axis is constant and directed from anterosuperior on the medial side to posteroinferior on the lateral side, passing through the origins of the medial and lateral collateral ligaments and superior to the crossing point of the cruciates. The LR axis is anterior and not perpendicular to the FE axis, the anatomic planes. This offset produces the valgus and external rotation observed with extension. The implications of two fixed offset axes for knee motion on prosthetic design, braces, gait analysis, and reconstructive surgery are profound.",
"title": ""
},
{
"docid": "306d5ba9eb3c9391eff7fac4e4c814ff",
"text": "Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition and real time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people's activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discussed a series of considerations and future trends with regard to the construction of \"smart clothing\" system.",
"title": ""
},
{
"docid": "53b32cdb6c3d511180d8cb194c286ef5",
"text": "Silymarin, a C25 containing flavonoid from the plant Silybum marianum, has been the gold standard drug to treat liver disorders associated with alcohol consumption, acute and chronic viral hepatitis, and toxin-induced hepatic failures since its discovery in 1960. Apart from the hepatoprotective nature, which is mainly due to its antioxidant and tissue regenerative properties, Silymarin has recently been reported to be a putative neuroprotective agent against many neurologic diseases including Alzheimer's and Parkinson's diseases, and cerebral ischemia. Although the underlying neuroprotective mechanism of Silymarin is believed to be due to its capacity to inhibit oxidative stress in the brain, it also confers additional advantages by influencing pathways such as β-amyloid aggregation, inflammatory mechanisms, cellular apoptotic machinery, and estrogenic receptor mediation. In this review, we have elucidated the possible neuroprotective effects of Silymarin and the underlying molecular events, and suggested future courses of action for its acceptance as a CNS drug for the treatment of neurodegenerative diseases.",
"title": ""
}
] |
scidocsrr
|
08c8a4e8f8528a13e4e3119c29833f43
|
Motion cues and saliency based unconstrained video segmentation
|
[
{
"docid": "e9a0a18a557bd586b0d23381a2436e0e",
"text": "Although tremendous success has been achieved for interactive object cutout in still images, accurately extracting dynamic objects in video remains a very challenging problem. Previous video cutout systems present two major limitations: (1) reliance on global statistics, thus lacking the ability to deal with complex and diverse scenes; and (2) treating segmentation as a global optimization, thus lacking a practical workflow that can guarantee the convergence of the systems to the desired results.\n We present Video SnapCut, a robust video object cutout system that significantly advances the state-of-the-art. In our system segmentation is achieved by the collaboration of a set of local classifiers, each adaptively integrating multiple local image features. We show how this segmentation paradigm naturally supports local user editing and propagates them across time. The object cutout system is completed with a novel coherent video matting technique. A comprehensive evaluation and comparison is presented, demonstrating the effectiveness of the proposed system at achieving high quality results, as well as the robustness of the system against various types of inputs.",
"title": ""
}
] |
[
{
"docid": "2b75aedec2f8acc52e22e0f22123fb1e",
"text": "Reinforcement Learning (RL) is a generic framework for modeling decision making processes and as such very suited to the task of automatic summarization. In this paper we present a RL method, which takes into account intermediate steps during the creation of a summary. Furthermore, we introduce a new feature set, which describes sentences with respect to already selected sentences. We carry out a range of experiments on various data sets – including several DUC data sets, but also scientific publications and encyclopedic articles. Our results show that our approach a) successfully adapts to data sets from various domains, b) outperforms previous RL-based methods for summarization and state-of-the-art summarization systems in general, and c) can be equally applied to singleand multidocument summarization on various domains and document lengths.",
"title": ""
},
{
"docid": "3a9d639e87d6163c18dd52ef5225b1a6",
"text": "A variety of approaches have been recently proposed to automatically infer users’ personality from their user generated content in social media. Approaches differ in terms of the machine learning algorithms and the feature sets used, type of utilized footprint, and the social media environment used to collect the data. In this paper, we perform a comparative analysis of state-of-the-art computational personality recognition methods on a varied set of social media ground truth data from Facebook, Twitter and YouTube. We answer three questions: (1) Should personality prediction be treated as a multi-label prediction task (i.e., all personality traits of a given user are predicted at once), or should each trait be identified separately? (2) Which predictive features work well across different on-line environments? and (3) What is the decay in accuracy when porting models trained in one social media environment to another?",
"title": ""
},
{
"docid": "3564e82cf5c67e76ec6c7232dd8ed6aa",
"text": "The past few years have witnessed an increase in the development of wearable sensors for health monitoring systems. This increase has been due to several factors such as development in sensor technology as well as directed efforts on political and stakeholder levels to promote projects which address the need for providing new methods for care given increasing challenges with an aging population. An important aspect of study in such system is how the data is treated and processed. This paper provides a recent review of the latest methods and algorithms used to analyze data from wearable sensors used for physiological monitoring of vital signs in healthcare services. In particular, the paper outlines the more common data mining tasks that have been applied such as anomaly detection, prediction and decision making when considering in particular continuous time series measurements. Moreover, the paper further details the suitability of particular data mining and machine learning methods used to process the physiological data and provides an overview of the properties of the data sets used in experimental validation. Finally, based on this literature review, a number of key challenges have been outlined for data mining methods in health monitoring systems.",
"title": ""
},
{
"docid": "dec369fc008d70575428f331d7a428a6",
"text": "Securing off-chip main memory is essential for protection from adversaries with physical access to systems. However, current secure-memory designs incur considerable performance overheads – a major cause being the multiple memory accesses required for traversing an integrity-tree, that provides protection against man-in-the-middle attacks or replay attacks. In this paper, we provide a scalable solution to this problem by proposing a compact integrity tree design that requires fewer memory accesses for its traversal. We enable this by proposing new storage-efficient representations for the counters used for encryption and integrity-tree in secure memories. Our Morphable Counters are more cacheable on-chip, as they provide more counters per cacheline than existing split counters. Additionally, they incur lower overheads due to counter-overflows, by dynamically switching between counter representations based on usage pattern. We show that using Morphable Counters enables a 128-ary integrity-tree, that can improve performance by 6.3% on average (up to 28.3%) and reduce system energy-delay product by 8.8% on average, compared to an aggressive baseline using split counters with a 64-ary integrity-tree. These benefits come without any additional storage or reduction in security and are derived from our compact counter representation, that reduces the integrity-tree size for a 16GB memory from 4MB in the baseline to 1MB. Compared to recently proposed VAULT, our design provides a speedup of 13.5% on average (up to 47.4%).",
"title": ""
},
{
"docid": "4731a95b14335a84f27993666b192bba",
"text": "Blockchain has been applied to study data privacy and network security recently. In this paper, we propose a punishment scheme based on the action record on the blockchain to suppress the attack motivation of the edge servers and the mobile devices in the edge network. The interactions between a mobile device and an edge server are formulated as a blockchain security game, in which the mobile device sends a request to the server to obtain real-time service or launches attacks against the server for illegal security gains, and the server chooses to perform the request from the device or attack it. The Nash equilibria (NEs) of the game are derived and the conditions that each NE exists are provided to disclose how the punishment scheme impacts the adversary behaviors of the mobile device and the edge server.",
"title": ""
},
{
"docid": "c85ee4139239b17d98b0d77836e00b72",
"text": "We present a suite of reinforcement learning environments illustrating various safety properties of intelligent agents. These problems include safe interruptibility, avoiding side effects, absent supervisor, reward gaming, safe exploration, as well as robustness to self-modification, distributional shift, and adversaries. To measure compliance with the intended safe behavior, we equip each environment with a performance function that is hidden from the agent. This allows us to categorize AI safety problems into robustness and specification problems, depending on whether the performance function corresponds to the observed reward function. We evaluate A2C and Rainbow, two recent deep reinforcement learning agents, on our environments and show that they are not able to solve them satisfactorily.",
"title": ""
},
{
"docid": "74feb28c6d3a7b87c84e2ca0c79ff1f5",
"text": "The Speaker Recognition community that participates in NIST evaluations has concentrated on designing genderand channel-conditioned systems. In the real word, this conditioning is not feasible. Our main purpose in this work is to propose a mixture of Probabilistic Linear Discriminant Analysis models (PLDA) as a solution for making systems independent of speaker gender. In order to show the effectiveness of the mixture model, we first experiment on 2010 NIST telephone speech (det5), where we prove that there is no loss of accuracy compared with a baseline gender-dependent model. We also test with success the mixture model on a more realistic situation where there are cross-gender trials. Furthermore, we report results on microphone speech for the det1, det2, det3 and det4 tasks to confirm the effectiveness of the mixture model.",
"title": ""
},
{
"docid": "f24bba45a1905cd4658d52bc7e9ee046",
"text": "In continuous action domains, standard deep reinforcement learning algorithms like DDPG suffer from inefficient exploration when facing sparse or deceptive reward problems. Conversely, evolutionary and developmental methods focusing on exploration like Novelty Search, QualityDiversity or Goal Exploration Processes explore more robustly but are less efficient at fine-tuning policies using gradient-descent. In this paper, we present the GEP-PG approach, taking the best of both worlds by sequentially combining a Goal Exploration Process and two variants of DDPG. We study the learning performance of these components and their combination on a low dimensional deceptive reward problem and on the larger Half-Cheetah benchmark. We show that DDPG fails on the former and that GEP-PG improves over the best DDPG variant in both environments. Supplementary videos and discussion can be found at frama.link/gep_pg, the code at github.com/flowersteam/geppg.",
"title": ""
},
{
"docid": "f25cfe1f277071a033b9665dd893005d",
"text": "This paper presents a review of the literature on gamification design frameworks. Gamification, understood as the use of game design elements in other contexts for the purpose of engagement, has become a hot topic in the recent years. However, there's also a cautionary tale to be extracted from Gartner's reports on the topic: many gamification-based solutions fail because, mostly, they have been created on a whim, or mixing bits and pieces from game components, without a clear and formal design process. The application of a definite design framework aims to be a path to success. Therefore, before starting the gamification of a process, it is very important to know which frameworks or methods exist and their main characteristics. The present review synthesizes the process of gamification design for a successful engagement experience. This review categorizes existing approaches and provides an assessment of their main features, which may prove invaluable to developers of gamified solutions at different levels and scopes.",
"title": ""
},
{
"docid": "7c09cb7f935e2fb20a4d2e56a5471e61",
"text": "This paper proposes and evaluates an approach to the parallelization, deployment and management of bioinformatics applications that integrates several emerging technologies for distributed computing. The proposed approach uses the MapReduce paradigm to parallelize tools and manage their execution, machine virtualization to encapsulate their execution environments and commonly used data sets into flexibly deployable virtual machines, and network virtualization to connect resources behind firewalls/NATs while preserving the necessary performance and the communication environment. An implementation of this approach is described and used to demonstrate and evaluate the proposed approach. The implementation integrates Hadoop, Virtual Workspaces, and ViNe as the MapReduce, virtual machine and virtual network technologies, respectively, to deploy the commonly used bioinformatics tool NCBI BLAST on a WAN-based test bed consisting of clusters at two distinct locations, the University of Florida and the University of Chicago. This WAN-based implementation, called CloudBLAST, was evaluated against both non-virtualized and LAN-based implementations in order to assess the overheads of machine and network virtualization, which were shown to be insignificant. To compare the proposed approach against an MPI-based solution, CloudBLAST performance was experimentally contrasted against the publicly available mpiBLAST on the same WAN-based test bed. Both versions demonstrated performance gains as the number of available processors increased, with CloudBLAST delivering speedups of 57 against 52.4 of MPI version, when 64 processors on 2 sites were used. The results encourage the use of the proposed approach for the execution of large-scale bioinformatics applications on emerging distributed environments that provide access to computing resources as a service.",
"title": ""
},
{
"docid": "7e682f98ee6323cd257fda07504cba20",
"text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods",
"title": ""
},
{
"docid": "d60deca88b46171ad940b9ee8964dc77",
"text": "Established in 1987, the EuroQol Group initially comprised a network of international, multilingual and multidisciplinary researchers from seven centres in Finland, the Netherlands, Norway, Sweden and the UK. Nowadays, the Group comprises researchers from Canada, Denmark, Germany, Greece, Japan, New Zealand, Slovenia, Spain, the USA and Zimbabwe. The process of shared development and local experimentation resulted in EQ-5D, a generic measure of health status that provides a simple descriptive profile and a single index value that can be used in the clinical and economic evaluation of health care and in population health surveys. Currently, EQ-5D is being widely used in different countries by clinical researchers in a variety of clinical areas. EQ-5D is also being used by eight out of the first 10 of the top 50 pharmaceutical companies listed in the annual report of Pharma Business (November/December 1999). Furthermore, EQ-5D is one of the handful of measures recommended for use in cost-effectiveness analyses by the Washington Panel on Cost Effectiveness in Health and Medicine. EQ-5D has now been translated into most major languages with the EuroQol Group closely monitoring the process.",
"title": ""
},
{
"docid": "ee0d89ccd67acc87358fa6dd35f6b798",
"text": "Lessons learned from developing four graph analytics applications reveal good research practices and grand challenges for future research. The application domains include electric-power-grid analytics, social-network and citation analytics, text and document analytics, and knowledge domain analytics.",
"title": ""
},
{
"docid": "f4eb27fa17bfaef9a9a32aa84f38420c",
"text": "Effective design of concurrent tree implementation plays a major role in improving the scalability of applications in a multicore environment. We introduce a concurrent binary search tree with fatnodes (FatCBST) and present algorithms to perform basic operations on it. Unlike a simple node with single value, a fatnode consists of a set of values. FatCBST concept allows a thread to perform update operations on an existing fatnode without changing the tree structure, making it less disruptive to other threads' operations. Fatnodes help to take advantage of the spatial locality in the cache hierarchy and also reduce the height of the concurrent binary search tree. Our FatCBST implementation allows multiple threads to perform update operations on the same existing fatnode at the same time. Experimental results show that the FatCBST implementations that have small fatnode sizes provide better throughput for high and medium contention workloads; and large fatnode sizes provide better throughput for low contention workloads, as compared to the current state-of-the-art implementations.",
"title": ""
},
{
"docid": "243d1dc8df4b8fbd37cc347a6782a2b5",
"text": "This paper introduces a framework for`curious neural controllers' which employ an adaptive world model for goal directed on-line learning. First an on-line reinforcement learning algorithm for autonomousànimats' is described. The algorithm is based on two fully recurrent`self-supervised' continually running networks which learn in parallel. One of the networks learns to represent a complete model of the environmental dynamics and is called thèmodel network'. It provides completècredit assignment paths' into the past for the second network which controls the animats physical actions in a possibly reactive environment. The an-imats goal is to maximize cumulative reinforcement and minimize cumulativèpain'. The algorithm has properties which allow to implement something like the desire to improve the model network's knowledge about the world. This is related to curiosity. It is described how the particular algorithm (as well as similar model-building algorithms) may be augmented by dynamic curiosity and boredom in a natural manner. This may be done by introducing (delayed) reinforcement for actions that increase the model network's knowledge about the world. This in turn requires the model network to model its own ignorance, thus showing a rudimentary form of self-introspective behavior.",
"title": ""
},
{
"docid": "d13bf709580b207841db407338393df6",
"text": "One version of a stochastic computer simulation of airspace includes the implementation of complex, high-fidelity models of aircraft. Since the models are pre-existing, third-party developed products, these aircraft models require validation prior to implementation. Several methodologies are available to demonstrate the accuracy of these models and a variety of testers potentially involved, so a notation is proposed to describe the level of testing performed in the validation of a given model using seven fields, each making use of succinct notation. Rather than limiting those qualified to do this type of work or restrict the aircraft models available for use, this classification is proposed in order to allow for anyone to complete validation tasks and to allow for a wide variety of tasks during the course of validation, while keeping the ultimate user of the model easily and fully informed as to the level of testing done and the experience and qualifications of the tester.",
"title": ""
},
{
"docid": "6f6733c35f78b00b771cf7099c953954",
"text": "This paper proposes an asymmetrical pulse width modulation (APWM) with frequency tracking control of full bridge series resonant inverter for induction heating application. In this method, APWM is used as power regulation, and phased locked loop (PLL) is used to attain zero-voltage-switching (ZVS) over a wide load range. The complete closed loop control model is obtained using small signal analysis. The validity of the proposed control is verified by simulation results.",
"title": ""
},
{
"docid": "0c6c5fe1e81451ee5a7b4c7c4a37d423",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.03.028 ⇑ Corresponding author. Tel./fax: +98 2182883637. E-mail addresses: [email protected] com (A. Hassanzadeh), [email protected] (F. K (S. Elahi). 1 Measuring e-learning systems success. In the era of internet, universities and higher education institutions are increasingly tend to provide e-learning. For suitable planning and more enjoying the benefits of this educational approach, a model for measuring success of e-learning systems is essential. So in this paper, we try to survey and present a model for measuring success of e-learning systems in universities. For this purpose, at first, according to literature review, a conceptual model was designed. Then, based on opinions of 33 experts, and assessing their suggestions, research indicators were finalized. After that, to examine the relationships between components and finalize the proposed model, a case study was done in 5 universities: Amir Kabir University, Tehran University, Shahid Beheshti University, Iran University of Science & Technology and Khaje Nasir Toosi University of Technology. Finally, by analyzing questionnaires completed by 369 instructors, students and alumni, which were e-learning systems user, the final model (MELSS Model). 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "72a86b52797d61bf631d75cd7109e9d9",
"text": "We introduce Olympus, a freely available framework for research in conversational interfaces. Olympus’ open, transparent, flexible, modular and scalable nature facilitates the development of large-scale, real-world systems, and enables research leading to technological and scientific advances in conversational spoken language interfaces. In this paper, we describe the overall architecture, several systems spanning different domains, and a number of current research efforts supported by Olympus.",
"title": ""
}
] |
scidocsrr
|
19d1eb41a173d2cf4d2fe448e0cc93ae
|
Using SVM based method for equipment fault detection in a thermal power plant
|
[
{
"docid": "785a6d08ef585302d692864d09b026fe",
"text": "Linear Discriminant Analysis (LDA) is a well-known method for dimensionality reduction and classification. LDA in the binaryclass case has been shown to be equivalent to linear regression with the class label as the output. This implies that LDA for binary-class classifications can be formulated as a least squares problem. Previous studies have shown certain relationship between multivariate linear regression and LDA for the multi-class case. Many of these studies show that multivariate linear regression with a specific class indicator matrix as the output can be applied as a preprocessing step for LDA. However, directly casting LDA as a least squares problem is challenging for the multi-class case. In this paper, a novel formulation for multivariate linear regression is proposed. The equivalence relationship between the proposed least squares formulation and LDA for multi-class classifications is rigorously established under a mild condition, which is shown empirically to hold in many applications involving high-dimensional data. Several LDA extensions based on the equivalence relationship are discussed.",
"title": ""
},
{
"docid": "48cdea9a78353111d236f6d0f822dc3a",
"text": "Support vector machines (SVMs) with the gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyper parameters: the penalty parameter C and the kernel width . This letter analyzes the behavior of the SVM classifier when these hyper parameters take very small or very large values. Our results help in understanding the hyperparameter space that leads to an efficient heuristic method of searching for hyperparameter values with small generalization errors. The analysis also indicates that if complete model selection using the gaussian kernel has been conducted, there is no need to consider linear SVM.",
"title": ""
}
] |
[
{
"docid": "c197fcf3042099003f3ed682f7b7f19c",
"text": "Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology and physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight on the behavior of entities, communities and the flow of information among them. In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.",
"title": ""
},
{
"docid": "3f7c6490ccb6d95bd22644faef7f452f",
"text": "A blockchain is a distributed, decentralised database of records of digital events (transactions) that took place and were shared among the participating parties. Each transaction in the public ledger is verified by consensus of a majority of the participants in the system. Bitcoin may not be that important in the future, but blockchain technology's role in Financial and Non-financial world can't be undermined. In this paper, we provide a holistic view of how Blockchain technology works, its strength and weaknesses, and its role to change the way the business happens today and tomorrow.",
"title": ""
},
{
"docid": "f297dee76369722f8143dba41f4355a8",
"text": "Inferior pedicle and free nipple grafting are commonly used as breast reduction techniques for patients with breast hypertrophy and gigantomastia. Limitations of these techniques are, respectively, possible vascular compromise and total/partial necrosis of the nipple–areola complex (NAC). The authors describe the innovative inferocentral pedicled reduction mammaplasty (ICPBR) enhanced by preservation of Würinger’s septum for severe hypertrophic breasts. Among 287 breast reductions performed between January 2001 and 2015, 83 (28.9%) macromastia and gigantomastia patients met the inclusion criteria (breast volume resection ≥400 g–sternal notch-to-nipple distance ≥33 cm) and were included in the study. Patients were stratified according to pedicle type: Group A (51 patients) underwent ICPBR with Würinger’s septum preservation; group B (32 patients) underwent IPBR. Groups were compared for NAC vascular complications, surgical revisions, wound-healing period and patient satisfaction at a minimum 6-month follow-up assessed by a five-category questionnaire (breast size, shape, symmetry, texture and scars appearance), with five Likert subscales (1 = poor to 5 = excellent). Descriptive statistics were reported, and comparisons of performance endpoints between groups were performed using Chi-squared, Fisher’s exact and Mann–Whitney U tests, with p value <0.05 considered significant. Group A and group B had, respectively, a mean age of 48.3 ± 12.4 and 50.1 ± 11.7 years, mean BMI of 23.8 and 24.6, mean weight resected of 560 ± 232 g and 590 ± 195 g, mean sternal notch-to-nipple distance of 35.1 and 34.3 cm, average nipple elevation of 9.7 and 9.5 cm. Among group A and group B, NAC complication rates were, respectively, 6.2 and 24.2% (p = 0.03), surgical revision rates were 33.3 and 60% (p = 1.00), healing time was 15.90 ± 3.2 and 19.03 ± 5.9 days (p = 0.002), and mean patient satisfaction scores were 19.9 ± 2.6 and 18.7 ± 3.4 (p = 0.07). The ICPBR technique enhanced by Würinger’s septum preservation was found to be a reproducible and effective procedure for hypertrophic breasted and gigantomastia patients, improving the reliability of the vascular supply to the inferior–central pedicle. The authors do believe this procedure should be regarded as an innovative and safe option giving optimal aesthetic outcomes in this demanding group of patients. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "86314426c9afd5dbd13d096605af7b05",
"text": "Large scale knowledge graphs (KGs) such as Freebase are generally incomplete. Reasoning over multi-hop (mh) KG paths is thus an important capability that is needed for question answering or other NLP tasks that require knowledge about the world. mh-KG reasoning includes diverse scenarios, e.g., given a head entity and a relation path, predict the tail entity; or given two entities connected by some relation paths, predict the unknown relation between them. We present ROPs, recurrent one-hop predictors, that predict entities at each step of mh-KB paths by using recurrent neural networks and vector representations of entities and relations, with two benefits: (i) modeling mh-paths of arbitrary lengths while updating the entity and relation representations by the training signal at each step; (ii) handling different types of mh-KG reasoning in a unified framework. Our models show state-of-the-art for two important multi-hop KG reasoning tasks: Knowledge Base Completion and Path Query Answering.1",
"title": ""
},
{
"docid": "ddeb9251ed726a7b5df687a32b72fa5f",
"text": "Medical visualization is the use of computers to create 3D images from medical imaging data sets, almost all surgery and cancer treatment in the developed world relies on it.Volume visualization techniques includes iso-surface visualization, mesh visualization and point cloud visualization techniques, these techniques have revolutionized medicine. Much of modern medicine relies on the 3D imaging that is possible with magnetic resonance imaging (MRI) scanners, functional magnetic resonance imaging (fMRI)scanners, positron emission tomography (PET) scanners, ultrasound imaging (US) scanners, X-Ray scanners, bio-marker microscopy imaging scanners and computed tomography (CT) scanners, which make 3D images out of 2D slices. The primary goal of this report is the application-oriented optimization of existing volume rendering methods providing interactive frame-rates. Techniques are presented for traditional alpha-blending rendering, surface-shaded display, maximum intensity projection (MIP), and fast previewing with fully interactive parameter control. Different preprocessing strategies are proposed for interactive iso-surface rendering and fast previewing, such as the well-known marching cube algorithm.",
"title": ""
},
{
"docid": "6ea18481d5741006c898dd23e8078c0e",
"text": "In this paper, air-filled Substrate Integrated Waveguide (SIW) is proposed and demonstrated for the first time at U-band. This low-loss transmission line is developed on a low-cost multilayer Printed Circuit Board (PCB) process. The top and bottom layers may make use of an extremely low-cost standard substrate such as FR-4 on which base-band or digital circuits can be designed so to obtain a very compact, high performance, low-cost and self-packaged integrated system. For measurement purposes, an optimized-length dielectric- to air-filled SIW transition operating at U-band with 0.21 ±0.055 dB insertion loss is developed. The measured insertion loss of an air-filled SIW of interest at U-band is 0.122 ±0.122 dB/cm compared to 0.4 ±0.13 dB/cm for its dielectric-filled counterpart. Furthermore, an air-filled SIW phase shifter is reported for the first time. It achieves a measured 0.15 ±0.14 dB transmission loss at U-band. The proposed air-filled SIW transmission line and phase shifter are of particular interest for high performance and low-cost millimeter-wave circuits and systems.",
"title": ""
},
{
"docid": "b241b428f2012437b32b755f8ed53b7b",
"text": "Mobile cloud computing presents an effective solution to overcome smartphone constraints, such as limited computational power, storage, and energy. As the traditional mobile application development models do not support computation offloading, mobile cloud computing requires novel application development models that can facilitate the development of cloud enabled mobile applications. This paper presents a mobile cloud application development model, named MobiByte, to enhance mobile device applications’ performance, energy efficiency, and execution support. MobiByte is a context-aware application model that uses multiple data offloading techniques to support a wide range of applications. The proposed model is validated using prototype applications and detailed results are presented. Moreover, MobiByte is compared with the most recent application models with a conclusion that it outperforms the existing application models in many aspects like energy efficiency, performance, generality, context awareness, and privacy.",
"title": ""
},
{
"docid": "4c406b80ad6c6ca617177a55d149f325",
"text": "REST Chart is a Petri-Net based XML modeling framework for REST API. This paper presents two important enhancements and extensions to REST Chart modeling - Hyperlink Decoration and Hierarchical REST Chart. In particular, the proposed Hyperlink Decoration decomposes resource connections from resource representation, such that hyperlinks can be defined independently of schemas. This allows a Navigation-First Design by which the important global connections of a REST API can be designed first and reused before the local resource representations are implemented and specified. Hierarchical REST Chart is a powerful mechanism to rapidly decompose and extend a REST API in several dimensions based on Hyperlink Decoration. These new mechanisms can be used to manage the complexities in large scale REST APIs that undergo frequent changes as in some large scale open source development projects. This paper shows that these new capabilities can fit nicely in the REST Chart XML with very minor syntax changes. These enhancements to REST Chart are applied successfully in designing and verifying REST APIs for software-defined-networking (SDN) and Cloud computing.",
"title": ""
},
{
"docid": "23ffb68125a7f1deb27062acc262701e",
"text": "All metropolitan cities face traffic congestion problems especially in the downtown areas. Normal cities can be transformed into “smart cities” by exploiting the information and communication technologies (ICT). The paradigm of Internet of Thing (IoT) can play an important role in realization of smart cities. This paper proposes an IoT based traffic management solutions for smart cities where traffic flow can be dynamically controlled by onsite traffic officers through their smart phones or can be centrally monitored or controlled through Internet. We have used the example of the holy city of Makkah Saudi Arabia, where the traffic behavior changes dynamically due to the continuous visitation of the pilgrims throughout the year. Therefore, Makkah city requires special traffic controlling algorithms other than the prevailing traffic control systems. However the scheme proposed is general and can be used in any Metropolitan city without the loss of generality.",
"title": ""
},
{
"docid": "29df7f7e7739bd78f0d72986d43e3adf",
"text": "2009;53;992-1002; originally published online Feb 19, 2009; J. Am. Coll. Cardiol. and Leonard S. Gettes E. William Hancock, Barbara J. Deal, David M. Mirvis, Peter Okin, Paul Kligfield, International Society for Computerized Electrocardiology Endorsed by the Cardiology Foundation; and the Heart Rhythm Society Committee, Council on Clinical Cardiology; the American College of the American Heart Association Electrocardiography and Arrhythmias Associated With Cardiac Chamber Hypertrophy A Scientific Statement From Interpretation of the Electrocardiogram: Part V: Electrocardiogram Changes AHA/ACCF/HRS Recommendations for the Standardization and This information is current as of August 2, 2011 http://content.onlinejacc.org/cgi/content/full/53/11/992 located on the World Wide Web at: The online version of this article, along with updated information and services, is",
"title": ""
},
{
"docid": "4ac139237dd1a3c85a6a7140650b833d",
"text": "Background\nDepression, anxiety, and stress levels are considered important indicators for mental health. Khat chewing habit is prevalent among all segments of Jazan population in Saudi Arabia. Few studies have been conducted to evaluate depression, anxiety, and stress among Jazan University students, and information about the correlation between khat use and these disorders is scarce. Thus, this study aims to evaluate the prevalence of depression, anxiety, and stress and their correlation with khat chewing and other risk factors among Jazan University students.\n\n\nMethods\nA cross-sectional study was conducted on 642 students from Jazan University. Multistage sampling was used, with probability proportional to size-sampling technique. The Depression, Anxiety, and Stress Scale 21 questionnaire was used to collect the data, which were analyzed using SPSS Version 20.0 software.\n\n\nResults\nModerate depression was prevalent among 53.6% of the sample, anxiety was found among 65.7%, while 34.3% of the students suffered from stress. Female gender was strongly associated with higher mean scores for symptoms of depression, anxiety, and stress, with P-values <0.05 for all. Moreover, anxiety symptoms scores were statistically associated with grade point average and caffeine consumption. Khat use was statistically associated with higher mean scores of anxiety among males and a higher mean score of depression and anxiety among females.\n\n\nConclusion\nThe results indicate a high rate of symptoms of depression, anxiety, and stress among Jazan University students. Khat use was associated with anxiety, and a higher rate of symptoms of depression, anxiety, and stress was indicated among female students. Therefore, strategy for the prevention and management of depression, anxiety, and stress is highly recommended to minimize the impact of these serious disorders.",
"title": ""
},
{
"docid": "f3a4f5bd47e978d3c74aa5dbfe93f9f9",
"text": "We study the problem of analyzing tweets with Universal Dependencies (UD; Nivre et al., 2016). We extend the UD guidelines to cover special constructions in tweets that affect tokenization, part-ofspeech tagging, and labeled dependencies. Using the extended guidelines, we create a new tweet treebank for English (TWEEBANK V2) that is four times larger than the (unlabeled) TWEEBANK V1 introduced by Kong et al. (2014). We characterize the disagreements between our annotators and show that it is challenging to deliver consistent annotation due to ambiguity in understanding and explaining tweets. Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD. To overcome annotation noise without sacrificing computational efficiency, we propose a new method to distill an ensemble of 20 transition-based parsers into a single one. Our parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and outperforms parsers that are state-ofthe-art on other treebanks in both accuracy and speed.",
"title": ""
},
{
"docid": "6e4798c01a0a241d1f3746cd98ba9421",
"text": "BACKGROUND\nLarge blood-based prospective studies can provide reliable assessment of the complex interplay of lifestyle, environmental and genetic factors as determinants of chronic disease.\n\n\nMETHODS\nThe baseline survey of the China Kadoorie Biobank took place during 2004-08 in 10 geographically defined regions, with collection of questionnaire data, physical measurements and blood samples. Subsequently, a re-survey of 25,000 randomly selected participants was done (80% responded) using the same methods as in the baseline. All participants are being followed for cause-specific mortality and morbidity, and for any hospital admission through linkages with registries and health insurance (HI) databases.\n\n\nRESULTS\nOverall, 512,891 adults aged 30-79 years were recruited, including 41% men, 56% from rural areas and mean age was 52 years. The prevalence of ever-regular smoking was 74% in men and 3% in women. The mean blood pressure was 132/79 mmHg in men and 130/77 mmHg in women. The mean body mass index (BMI) was 23.4 kg/m(2) in men and 23.8 kg/m(2) in women, with only 4% being obese (>30 kg/m(2)), and 3.2% being diabetic. Blood collection was successful in 99.98% and the mean delay from sample collection to processing was 10.6 h. For each of the main baseline variables, there is good reproducibility but large heterogeneity by age, sex and study area. By 1 January 2011, over 10,000 deaths had been recorded, with 91% of surviving participants already linked to HI databases.\n\n\nCONCLUSION\nThis established large biobank will be a rich and powerful resource for investigating genetic and non-genetic causes of many common chronic diseases in the Chinese population.",
"title": ""
},
{
"docid": "0181fe2a56dbbcedf306f997ac8d80a8",
"text": "World forest resources are continually depleting. Assessing and quantifying the current forest resources status is a prerequisite for forest resources improvement planning and implementation. The objectives of this study are to assess, quantify, and map forest resources in the Amhara National Regional State, Ethiopia. GIS, GPS, and Remote Sensing technologies were applied for the study. As a result, forest distribution map is prepared. Most of the forest covers were found along the lowland belt of Mirab Gojam, Awi, and Semen Gonder zones bordering the neighboring country, Sudan and the Tigray and Benishangul-Gumz regions. The total forest cover of the region is 12,884 km, that is, about 8.2 % of the total land area. Including bushlands, it is about 21,783 km, which is about 13.85 %. Woodlands, natural dense forest, riverine forest, bushlands, and plantations are 740,808, 463,950, 20,653, 889,912, and 62,973 ha in area with percentage coverage of 4.71, 2.95, 0.13, 5.66, and 0.40 respectively. GIS, GPS, and Remote Sensing were found to be important tools for forest resource assessment and mapping. M. Mekonnen (&) M. Gebeyehu B. Azene Amhara National Regional State, Bureau of Agriculture, Natural Resource Conservation and Management Department, PO Box 1188, Bahir Dar, Ethiopia e-mail: [email protected]; [email protected] M. Gebeyehu e-mail: [email protected] B. Azene e-mail: [email protected] T. Sewunet Amhara National Regional State, Bureau of Finance and Economic Development, Bahir Dar, Ethiopia e-mail: [email protected] A.M. Melesse Department of Earth & Environment, Florida International University, 11200 SW 8th Street, Miami, USA e-mail: [email protected] © Springer International Publishing Switzerland 2016 A.M. Melesse and W. Abtew (eds.), Landscape Dynamics, Soils and Hydrological Processes in Varied Climates, Springer Geography, DOI 10.1007/978-3-319-18787-7_2 9",
"title": ""
},
{
"docid": "a830eaa981d6d7594fdb4d3a6a4474a1",
"text": "MOTIVATION\nHigh-content screening (HCS) technologies have enabled large scale imaging experiments for studying cell biology and for drug screening. These systems produce hundreds of thousands of microscopy images per day and their utility depends on automated image analysis. Recently, deep learning approaches that learn feature representations directly from pixel intensity values have dominated object recognition challenges. These tasks typically have a single centered object per image and existing models are not directly applicable to microscopy datasets. Here we develop an approach that combines deep convolutional neural networks (CNNs) with multiple instance learning (MIL) in order to classify and segment microscopy images using only whole image level annotations.\n\n\nRESULTS\nWe introduce a new neural network architecture that uses MIL to simultaneously classify and segment microscopy images with populations of cells. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. To facilitate aggregating across large numbers of instances in CNN feature maps we present the Noisy-AND pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using whole microscopy images with image level labels. We show that training end-to-end MIL CNNs outperforms several previous methods on both mammalian and yeast datasets without requiring any segmentation steps.\n\n\nAVAILABILITY AND IMPLEMENTATION\nTorch7 implementation available upon request.\n\n\nCONTACT\[email protected].",
"title": ""
},
{
"docid": "17a475b655134aafde0f49db06bec127",
"text": "Estimating the number of persons in a public place provides useful information for video-based surveillance and monitoring applications. In the case of oblique camera setup, counting is either achieved by detecting individuals or by statistically establishing relations between values of simple image features (e.g. amount of moving pixels, edge density, etc.) to the number of people. While the methods of the first category exhibit poor accuracy in cases of occlusions, the second category of methods are sensitive to perspective distortions, and require people to move in order to be counted. In this paper we investigate the possibilities of developing a robust statistical method for people counting. To maximize its applicability scope, we choose-in contrast to the majority of existing methods from this category-not to require prior learning of categories corresponding to different number of people. Second, we search for a suitable way of correcting the perspective distortion. Finally, we link the estimation to a confidence value that takes into account the known factors being of influence on the result. The confidence is then used to refine final results.",
"title": ""
},
{
"docid": "d8f21e77a60852ea83f4ebf74da3bcd0",
"text": "In recent years different lines of evidence have led to the idea that motor actions and movements in both vertebrates and invertebrates are composed of elementary building blocks. The entire motor repertoire can be spanned by applying a well-defined set of operations and transformations to these primitives and by combining them in many different ways according to well-defined syntactic rules. Motor and movement primitives and modules might exist at the neural, dynamic and kinematic levels with complicated mapping among the elementary building blocks subserving these different levels of representation. Hence, while considerable progress has been made in recent years in unravelling the nature of these primitives, new experimental, computational and conceptual approaches are needed to further advance our understanding of motor compositionality.",
"title": ""
},
{
"docid": "c3e2ceebd3868dd9fff2a87fdd339dce",
"text": "Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing ar on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through ar interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an ar setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how ar can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to authoring interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.",
"title": ""
},
{
"docid": "97531e5a9bbe4d6e5e495fbbc380b3cd",
"text": "Nowadays, more and more users keep up with news through information streams coming from real-time micro-blogging activity offered by services such as Twitter. In these sites, information is shared via a followers/followees social network structure in which a follower will receive all the micro-blogs from the users he follows, named fol-lowees. Recent research efforts on understanding micro-blogging as a novel form of communication and news spreading medium have identified different categories of users in Twitter: information sources, information seekers and friends. Users acting as information sources are characterized for having a larger number of followers than follo-wees, information seekers subscribe to this kind of users but rarely post tweets and, finally, friends are users exhibiting reciprocal relationships. With information seekers being an important portion of registered users in the system, finding relevant and reliable sources becomes essential. To address this problem, we propose a followee recommender system based on an algorithm that explores the topol-ogy of followers/followees network of Twitter considering different factors that allow us to identify users as good information sources. Experimental evaluation conducted with a group of users is reported , demonstrating the potential of the approach .",
"title": ""
},
{
"docid": "954526bc72495a62e6205ca1b5d231f8",
"text": "We propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation. We reformulate decoding, a discrete optimization problem, into a continuous problem, such that optimization can make use of efficient gradient-based techniques. Our powerful decoding framework allows for more accurate decoding for standard neural machine translation models, as well as enabling decoding in intractable models such as intersection of several different NMT models. Our empirical results show that our decoding framework is effective, and can leads to substantial improvements in translations, especially in situations where greedy search and beam search are not feasible. Finally, we show how the technique is highly competitive with, and complementary to, reranking.",
"title": ""
}
] |
scidocsrr
|
3a2ec5d92497c36b65a105446e733709
|
Predicting poaching for wildlife Protection
|
[
{
"docid": "32a964bd36770b8c50a0e74289f4503b",
"text": "Several competing human behavior models have been proposed to model and protect against boundedly rational adversaries in repeated Stackelberg security games (SSGs). However, these existing models fail to address three main issues which are extremely detrimental to defender performance. First, while they attempt to learn adversary behavior models from adversaries’ past actions (“attacks on targets”), they fail to take into account adversaries’ future adaptation based on successes or failures of these past actions. Second, they assume that sufficient data in the initial rounds will lead to a reliable model of the adversary. However, our analysis reveals that the issue is not the amount of data, but that there just is not enough of the attack surface exposed to the adversary to learn a reliable model. Third, current leading approaches have failed to include probability weighting functions, even though it is well known that human beings’ weighting of probability is typically nonlinear. The first contribution of this paper is a new human behavior model, SHARP, which mitigates these three limitations as follows: (i) SHARP reasons based on success or failure of the adversary’s past actions on exposed portions of the attack surface to model adversary adaptiveness; (ii) SHARP reasons about similarity between exposed and unexposed areas of the attack surface, and also incorporates a discounting parameter to mitigate adversary’s lack of exposure to enough of the attack surface; and (iii) SHARP integrates a non-linear probability weighting function to capture the adversary’s true weighting of probability. Our second contribution is a first “longitudinal study” – at least in the context of SSGs – of competing models in settings involving repeated interaction between the attacker and the defender. This study, where each experiment lasted a period of multiple weeks with individual sets of human subjects, illustrates the strengths and weaknesses of different models and shows the advantages of SHARP.",
"title": ""
}
] |
[
{
"docid": "be99f6ba66d573547a09d3429536049e",
"text": "With the development of sensor, wireless mobile communication, embedded system and cloud computing, the technologies of Internet of Things have been widely used in logistics, Smart Meter, public security, intelligent building and so on. Because of its huge market prospects, Internet of Things has been paid close attention by several governments all over the world, which is regarded as the third wave of information technology after Internet and mobile communication network. Bridging between wireless sensor networks with traditional communication networks or Internet, IOT Gateway plays an important role in IOT applications, which facilitates the seamless integration of wireless sensor networks and mobile communication networks or Internet, and the management and control with wireless sensor networks. In this paper, we proposed an IOT Gateway system based on Zigbee and GPRS protocols according to the typical IOT application scenarios and requirements from telecom operators, presented the data transmission between wireless sensor networks and mobile communication networks, protocol conversion of different sensor network protocols, and control functionalities for sensor networks, and finally gave an implementation of prototyping system and system validation.",
"title": ""
},
{
"docid": "5249a94aa9d9dbb211bb73fa95651dfd",
"text": "Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper, we present two priority-based CPU scheduling algorithms, Algorithm Cache Miss Priority CPU Scheduler (CM-PCS) and Algorithm Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often ignored dynamic performance data, in order to reduce power consumption by over 20 percent with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using the “KillAWatt” meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43 percent) and 38 watts (20.88 percent), but also a significant improvement in the performance, performance per watt, and execution time · watt (energy) for a task consisting of 24 concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.",
"title": ""
},
{
"docid": "21ffd3ae843e694a052ed14edb5ec149",
"text": "This article discusses the need for more satisfactory implicit measures in consumer psychology and assesses the theoretical foundations, validity, and value of the Implicit Association Test (IAT) as a measure of implicit consumer social cognition. Study 1 demonstrates the IAT’s sen sitivity to explicit individual differences in brand attitudes, ownership, and usage frequency, and shows their correlations with IAT-based measures of implicit brand attitudes and brand re lationship strength. In Study 2, the contrast between explicit and implicit measures of attitude toward the ad for sportswear advertisements portraying African American (Black) and Euro pean American (White) athlete–spokespersons revealed different patterns of responses to ex plicit and implicit measures in Black and White respondents. These were explained in terms of self-presentation biases and system justification theory. Overall, the results demonstrate that the IAT enhances our understanding of consumer responses, particularly when consumers are either unable or unwilling to identify the sources of influence on their behaviors or opinions.",
"title": ""
},
{
"docid": "489aa160c450539b50c63c6c3c6993ab",
"text": "Adequacy of citations is very important for a scientific paper. However, it is not an easy job to find appropriate citations for a given context, especially for citations in different languages. In this paper, we define a novel task of cross-language context-aware citation recommendation, which aims at recommending English citations for a given context of the place where a citation is made in a Chinese paper. This task is very challenging because the contexts and citations are written in different languages and there exists a language gap when matching them. To tackle this problem, we propose the bilingual context-citation embedding algorithm (i.e. BLSRec-I), which can learn a low-dimensional joint embedding space for both contexts and citations. Moreover, two advanced algorithms named BLSRec-II and BLSRec-III are proposed by enhancing BLSRec-I with translation results and abstract information, respectively. We evaluate the proposed methods based on a real dataset that contains Chinese contexts and English citations. The results demonstrate that our proposed algorithms can outperform a few baselines and the BLSRec-II and BLSRec-III methods can outperform the BLSRec-I method.",
"title": ""
},
{
"docid": "1898b8223039609f0389144b6fe9e56d",
"text": "A fundamental goal of protein biochemistry is to determine the sequence-function relationship, but the vastness of sequence space makes comprehensive evaluation of this landscape difficult. However, advances in DNA synthesis and sequencing now allow researchers to assess the functional impact of every single mutation in many proteins, but challenges remain in library construction and the development of general assays applicable to a diverse range of protein functions. This Perspective briefly outlines the technical innovations in DNA manipulation that allow massively parallel protein biochemistry and then summarizes the methods currently available for library construction and the functional assays of protein variants. Areas in need of future innovation are highlighted with a particular focus on assay development and the use of computational analysis with machine learning to effectively traverse the sequence-function landscape. Finally, applications in the fundamentals of protein biochemistry, disease prediction, and protein engineering are presented.",
"title": ""
},
{
"docid": "2e3cee13657129d26ec236f9d2641e6c",
"text": "Due to the prevalence of social media websites, one challenge facing computer vision researchers is to devise methods to process and search for persons of interest among the billions of shared photos on these websites. Facebook revealed in a 2013 white paper that its users have uploaded more than 250 billion photos, and are uploading 350 million new photos each day. Due to this humongous amount of data, large-scale face search for mining web images is both important and challenging. Despite significant progress in face recognition, searching a large collection of unconstrained face images has not been adequately addressed. To address this challenge, we propose a face search system which combines a fast search procedure, coupled with a state-of-the-art commercial off the shelf (COTS) matcher, in a cascaded framework. Given a probe face, we first filter the large gallery of photos to find the top-k most similar faces using deep features generated from a convolutional neural network. The k retrieved candidates are re-ranked by combining similarities from deep features and the COTS matcher. We evaluate the proposed face search system on a gallery containing 80 million web-downloaded face images. Experimental results demonstrate that the deep features are competitive with state-of-the-art methods on unconstrained face recognition benchmarks (LFW and IJB-A). More specifically, on the LFW database, we achieve 98.23% accuracy under the standard protocol and a verification rate of 87.65% at FAR of 0.1% under the BLUFR protocol. For the IJB-A benchmark, our accuracies are as follows: TAR of 51.4% at FAR of 0.1% (verification); Rank 1 retrieval of 82.0% (closed-set search); FNIR of 61.7% at FPIR of 1% (open-set search). Further, the proposed face search system offers an excellent trade-off between accuracy and scalability on datasets consisting of millions of images. Additionally, in an experiment involving searching for face images of the Tsarnaev brothers, convicted of the Boston Marathon bombing, the proposed cascade face search system could find the younger brother’s (Dzhokhar Tsarnaev) photo at rank 1 in 1 second on a 5M gallery and at rank 8 in 7 seconds",
"title": ""
},
{
"docid": "4010e2e5fae6ce45920186141089706f",
"text": "Short-term traffic flow forecasting plays an important role in current intelligent transportation system. For most models, the selection of time lag is a crucial factor affecting the forecasting performance. Instead of choosing a single time lag when constructing model, in this paper, we propose a novel approach attempting to construct multiple base forecasting models, each with different time lag and performance. Least squares support vector regression (LSSVR) with the Gaussian kernel function is adopted as the base model because of its nonlinear modeling capability, as well as empirical performance. Then, the outputs of these base models are integrated to produce final prediction through another LSSVR with the linear kernel function. This ensemble forecasting framework consists of many parameters that need to be adjusted. To address this issue, an improved harmony search algorithm tailored for our forecasting system is further developed for seeking the optimal parameters. The real-world traffic flow data are collected from several observation sites located around the intersection of Interstate 205 and Interstate 84 freeways in Portland, OR, USA. Experimental results verify that the proposed approach is able to provide better forecasting performance in comparison with other competing methods.",
"title": ""
},
{
"docid": "6a0ac77c7471484e3829b7a561c78723",
"text": "While the growth of business-to-consumer electronic commerce seems phenomenal in recent years, several studies suggest that a large number of individuals using the Internet have serious privacy concerns, and that winning public trust is the primary hurdle to continued growth in e-commerce. This research investigated the relative importance, when purchasing goods and services over the Web, of four common trust indices (i.e. (1) third party privacy seals, (2) privacy statements, (3) third party security seals, and (4) security features). The results indicate consumers valued security features significantly more than the three other trust indices. We also investigated the relationship between these trust indices and the consumer’s perceptions of a marketer’s trustworthiness. The findings indicate that consumers’ ratings of trustworthiness of Web merchants did not parallel experts’ evaluation of sites’ use of the trust indices. This study also examined the extent to which consumers are willing to provide private information to electronic and land merchants. The results revealed that when making the decision to provide private information, consumers rely on their perceptions of trustworthiness irrespective of whether the merchant is electronic only or land and electronic. Finally, we investigated the relative importance of three types of Web attributes: security, privacy and pleasure features (convenience, ease of use, cosmetics). Privacy and security features were of lesser importance than pleasure features when considering consumers’ intention to purchase. A discussion of the implications of these results and an agenda for future research are provided. q 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "1b6af47ddb23b3927c451b8b659fb13e",
"text": "— This project presents an approach to develop a real-time hand gesture recognition enabling human-computer interaction. It is \" Vision Based \" that uses only a webcam and Computer Vision (CV) technology, such as image processing that can recognize several hand gestures. The applications of real time hand gesture recognition are numerous, due to the fact that it can be used almost anywhere where we interact with computers ranging from basic usage which involves small applications to domain-specific specialized applications. Currently, at this level our project is useful for the society but it can further be expanded to be readily used at the industrial level as well. Gesture recognition is an area of active current research in computer vision. Existing systems use hand detection primarily with some type of marker. Our system, however, uses a real-time hand image recognition system. Our system, however, uses a real-time hand image recognition without any marker, simply using bare hands. I. INTRODUCTION In today \" s computer age, every individual is dependent to perform most of their day-today tasks using computers. The major input devices one uses while operating a computer are keyboard and mouse. But there are a wide range of health problems that affects many people nowadays, caused by the constant and continuous work with the computer. Direct use of hands as an input device is an attractive method for providing natural Human Computer Interaction which has evolved from text-based interfaces through 2D graphical-based interfaces, multimedia-supported interfaces, to fully fledged multi participant Virtual Environment (VE) systems. Since hand gestures are completely natural form for communication it does not adversely affect the health of the operator as in case of excessive usage of keyboard and mouse. Imagine the human-computer interaction of the future: A 3Dapplication where you can move and rotate objects simply by moving and rotating your hand-all without touching any input device. In this paper a review of vision based hand gesture recognition is presented.",
"title": ""
},
{
"docid": "13d5011f3d6c1997e3c44b3f03cf2017",
"text": "Reinforcement learning with appropriately designed reward signal could be used to solve many sequential learning problems. However, in practice, the reinforcement learning algorithms could be broken in unexpected, counterintuitive ways. One of the failure modes is reward hacking which usually happens when a reward function makes the agent obtain high return in an unexpected way. This unexpected way may subvert the designer’s intentions and lead to accidents during training. In this paper, a new multi-step state-action value algorithm is proposed to solve the problem of reward hacking. Unlike traditional algorithms, the proposed method uses a new return function, which alters the discount of future rewards and no longer stresses the immediate reward as the main influence when selecting the current state action. The performance of the proposed method is evaluated on two games, Mappy and Mountain Car. The empirical results demonstrate that the proposed method can alleviate the negative impact of reward hacking and greatly improve the performance of reinforcement learning algorithm. Moreover, the results illustrate that the proposed method could also be applied to the continuous state space problem successfully.",
"title": ""
},
{
"docid": "2e1516411941e5ea8fbbf817d613ff0a",
"text": "Self-heating effects in scaled bulk FinFETs from 14nm to 7nm node are discussed based on 3D FEM simulations and experimental measurements. Following a typical 0.7x scaling, heat confinement is expected to increase by 20% in Si-channel FinFETs and by another 57% for strained Ge-channel. Reducing the drive current needed to reach target performance by reducing capacitances, and fin depopulation help mitigate self-heating effects. These thermal behaviors propagates to AC circuit benchmark, resulting in ~5% performance variation for high performance devices due to device scaling and increased number of fins.",
"title": ""
},
{
"docid": "decd813dfea894afdceb55b3ca087487",
"text": "BACKGROUND\nAddiction to smartphone usage is a common worldwide problem among adults, which might negatively affect their wellbeing. This study investigated the prevalence and factors associated with smartphone addiction and depression among a Middle Eastern population.\n\n\nMETHODS\nThis cross-sectional study was conducted in 2017 using a web-based questionnaire distributed via social media. Responses to the Smartphone Addiction Scale - Short version (10-items) were rated on a 6-point Likert scale, and their percentage mean score (PMS) was commuted. Responses to Beck's Depression Inventory (20-items) were summated (range 0-60); their mean score (MS) was commuted and categorized. Higher scores indicated higher levels of addiction and depression. Factors associated with these outcomes were identified using descriptive and regression analyses. Statistical significance was set at P < 0.05.\n\n\nRESULTS\nComplete questionnaires were 935/1120 (83.5%), of which 619 (66.2%) were females and 316 (33.8%) were males. The mean ± standard deviation of their age was 31.7 ± 11 years. Majority of participants obtained university education 766 (81.9%), while 169 (18.1%) had school education. The PMS of addiction was 50.2 ± 20.3, and MS of depression was 13.6 ± 10.0. A significant positive linear relationship was present between smart phone addiction and depression (y = 39.2 + 0.8×; P < 0.001). Significantly higher smartphone addiction scores were associated with younger age users, (β = - 0.203, adj. P = 0.004). Factors associated with higher depression scores were school educated users (β = - 2.03, adj. P = 0.01) compared to the university educated group and users with higher smart phone addiction scores (β =0.194, adj. P < 0.001).\n\n\nCONCLUSIONS\nThe positive correlation between smartphone addiction and depression is alarming. Reasonable usage of smart phones is advised, especially among younger adults and less educated users who could be at higher risk of depression.",
"title": ""
},
{
"docid": "9b451aa93627d7b44acc7150a1b7c2d0",
"text": "BACKGROUND\nAerobic endurance exercise has been shown to improve higher cognitive functions such as executive control in healthy subjects. We tested the hypothesis that a 30-minute individually customized endurance exercise program has the potential to enhance executive functions in patients with major depressive disorder.\n\n\nMETHOD\nIn a randomized within-subject study design, 24 patients with DSM-IV major depressive disorder and 10 healthy control subjects performed 30 minutes of aerobic endurance exercise at 2 different workload levels of 40% and 60% of their predetermined individual 4-mmol/L lactic acid exercise capacity. They were then tested with 4 standardized computerized neuropsychological paradigms measuring executive control functions: the task switch paradigm, flanker task, Stroop task, and GoNogo task. Performance was measured by reaction time. Data were gathered between fall 2000 and spring 2002.\n\n\nRESULTS\nWhile there were no significant exercise-dependent alterations in reaction time in the control group, for depressive patients we observed a significant decrease in mean reaction time for the congruent Stroop task condition at the 60% energy level (p = .016), for the incongruent Stroop task condition at the 40% energy level (p = .02), and for the GoNogo task at both energy levels (40%, p = .025; 60%, p = .048). The exercise procedures had no significant effect on reaction time in the task switch paradigm or the flanker task.\n\n\nCONCLUSION\nA single 30-minute aerobic endurance exercise program performed by depressed patients has positive effects on executive control processes that appear to be specifically subserved by the anterior cingulate.",
"title": ""
},
{
"docid": "bd5cadfdf5400cb206f341b000cd7d3c",
"text": "Streaming music services represent the music industry’s greatest prospective source of revenue and are well established among consumers. This paper presents a theory of a streaming music business model consisting of two types of services provided by a monopolist. The first service, which offers access free of charge, is of low quality and financed by advertising. The second service charges its users and is of high quality. The analysis demonstrates that if users are highly tolerant of commercials, the monopolist benefits from advertising funding and hence charges a high price to users of the fee-based service to boost demand for the advertising supported service. The analysis addresses the welfare consequences of such a business model and shows it is an effective policy for combating digital piracy. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f6a339de620c058332fa469b37f1ecdd",
"text": "Typical mobile robot structures (e.g. wheelchair or carlike, ...) do not have the required mobility for common applications such as displacement in a corridor, hospital, office, ... New structures based on the \"universal wheel\" (i.e. a wheel equipped with free rotating rollers) have been developed to increase mobility. However these structures have important drawbacks such as vertical vibration and limited load capacity. This paper presents a comparison of several types of universal wheel performances based on the three following criteria: load capacity, surmountable bumps and vertical vibration. It is hoped that this comparison will help the designer in the selection of the best suitable solution for his application.",
"title": ""
},
{
"docid": "2000c393acd11a31331d234fb56b8abd",
"text": "This letter reports the fabrication of a GaN heterostructure field-effect transistor with oxide spacer placed on the mesa sidewalls. The presence of an oxide spacer effectively eliminates the gate leakage current that occurs at the channel edge, where the gate metal is in contact with the 2-D electron gas edge on the mesa sidewall. From the two-terminal gate leakage current measurements, the leakage current was found to be several nA at VG=-12 V and at VG=-450 V. The benefits of the proposed spacer scheme include the patterning of the metal electrodes by plasma etching and a lower manufacturing cost.",
"title": ""
},
{
"docid": "9402365e2fdbdbdea13c18da5e4a05de",
"text": "Battery models capture the characteristics of real-life batteries, and can be used to predict their behavior under various operating conditions. In this paper, a dynamic model of lithium-ion battery has been developed with MATLAB/Simulink® in order to investigate the output characteristics of lithium-ion batteries. Dynamic simulations are carried out, including the observation of the changes in battery terminal output voltage under different charging/discharging, temperature and cycling conditions, and the simulation results are compared with the results obtained from several recent studies. The simulation studies are presented for manifesting that the model is effective and operational.",
"title": ""
},
{
"docid": "9172d4ba2e86a7d4918ef64d7b837084",
"text": "Electromagnetic generators (EMGs) and triboelectric nanogenerators (TENGs) are the two most powerful approaches for harvesting ambient mechanical energy, but the effectiveness of each depends on the triggering frequency. Here, after systematically comparing the performances of EMGs and TENGs under low-frequency motion (<5 Hz), we demonstrated that the output performance of EMGs is proportional to the square of the frequency, while that of TENGs is approximately in proportion to the frequency. Therefore, the TENG has a much better performance than that of the EMG at low frequency (typically 0.1-3 Hz). Importantly, the extremely small output voltage of the EMG at low frequency makes it almost inapplicable to drive any electronic unit that requires a certain threshold voltage (∼0.2-4 V), so that most of the harvested energy is wasted. In contrast, a TENG has an output voltage that is usually high enough (>10-100 V) and independent of frequency so that most of the generated power can be effectively used to power the devices. Furthermore, a TENG also has advantages of light weight, low cost, and easy scale up through advanced structure designs. All these merits verify the possible killer application of a TENG for harvesting energy at low frequency from motions such as human motions for powering small electronics and possibly ocean waves for large-scale blue energy.",
"title": ""
}
] |
scidocsrr
|
a00b8ecee5acf71a0aa139a2b40ecfcd
|
Weisfeiler-Lehman Graph Kernels
|
[
{
"docid": "2bf9e347e163d97c023007f4cc88ab02",
"text": "State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.",
"title": ""
}
] |
[
{
"docid": "c75095680818ccc7094e4d53815ef475",
"text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function . The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.",
"title": ""
},
{
"docid": "8da50eee8aaebe575eeaceae49c9fb37",
"text": "In this paper, we propose a set of language resources for building Turkish language processing applications. Specifically, we present a finite-state implementation of a morphological parser, an averaged perceptron-based morphological disambiguator, and compilation of a web corpus. Turkish is an agglutinative language with a highly productive inflectional and derivational morphology. We present an implementation of a morphological parser based on two-level morphology. This parser is one of the most complete parsers for Turkish and it runs independent of any other external system such as PCKIMMO in contrast to existing parsers. Due to complex phonology and morphology of Turkish, parsing introduces some ambiguous parses. We developed a morphological disambiguator with accuracy of about 98% using averaged perceptron algorithm. We also present our efforts to build a Turkish web corpus of about 423 million words.",
"title": ""
},
{
"docid": "c07bb7085ca42bc50a39750a1b49c621",
"text": "Although hippocampal CA1 pyramidal neurons (PNs) were thought to comprise a uniform population, recent evidence supports two distinct sublayers along the radial axis, with deep neurons more likely to form place cells than superficial neurons. CA1 PNs also differ along the transverse axis with regard to direct inputs from entorhinal cortex (EC), with medial EC (MEC) providing spatial information to PNs toward CA2 (proximal CA1) and lateral EC (LEC) providing non-spatial information to PNs toward subiculum (distal CA1). We demonstrate that the two inputs differentially activate the radial sublayers and that this difference reverses along the transverse axis, with MEC preferentially targeting deep PNs in proximal CA1 and LEC preferentially exciting superficial PNs in distal CA1. This differential excitation reflects differences in dendritic spine numbers. Our results reveal a heterogeneity in EC-CA1 connectivity that may help explain differential roles of CA1 PNs in spatial and non-spatial learning and memory.",
"title": ""
},
{
"docid": "6e79df8b9db8bd81774d72b8ef672760",
"text": "Concepts of sexuality and gender identity are undergoing re-examination in society. Recent media attention has intensified interest in the area, although reliable information is sometimes lacking. Gender dysphoria, and its extreme form, transsexualism, frequently brings sufferers into contact with psychiatric, social, and mental health professionals, and surgical caregivers. Treatment of these patients often represents a challenge on many levels. Some guidelines for this care are outlined.",
"title": ""
},
{
"docid": "1997b8a0cac1b3beecfd79b3e206d7e4",
"text": "Scatterplots are well established means of visualizing discrete data values with two data variables as a collection of discrete points. We aim at generalizing the concept of scatterplots to the visualization of spatially continuous input data by a continuous and dense plot. An example of a continuous input field is data defined on an n-D spatial grid with respective interpolation or reconstruction of in-between values. We propose a rigorous, accurate, and generic mathematical model of continuous scatterplots that considers an arbitrary density defined on an input field on an n-D domain and that maps this density to m-D scatterplots. Special cases are derived from this generic model and discussed in detail: scatterplots where the n-D spatial domain and the m-D data attribute domain have identical dimension, 1-D scatterplots as a way to define continuous histograms, and 2-D scatterplots of data on 3-D spatial grids. We show how continuous histograms are related to traditional discrete histograms and to the histograms of isosurface statistics. Based on the mathematical model of continuous scatterplots, respective visualization algorithms are derived, in particular for 2-D scatterplots of data from 3-D tetrahedral grids. For several visualization tasks, we show the applicability of continuous scatterplots. Since continuous scatterplots do not only sample data at grid points but interpolate data values within cells, a dense and complete visualization of the data set is achieved that scales well with increasing data set size. Especially for irregular grids with varying cell size, improved results are obtained when compared to conventional scatterplots. Therefore, continuous scatterplots are a suitable extension of a statistics visualization technique to be applied to typical data from scientific computation.",
"title": ""
},
{
"docid": "8ebd9dcbbe29083ce1ecaf10b475630a",
"text": "Modeling data is the way we-scientists-believe that information should be explained and handled. Indeed, models play a central role in practically every task in signal and image processing and machine learning. Sparse representation theory (we shall refer to it as Sparseland) puts forward an emerging, highly effective, and universal model. Its core idea is the description of data as a linear combination of few atoms taken from a dictionary of such fundamental elements.",
"title": ""
},
{
"docid": "d4437541cc3c5bea2d16ccc5e9948aec",
"text": "Gamification is a term that refers to the use of game elements in non-game contexts with the goal of engaging people in a variety of tasks. There is a growing interest in gamification as well as its applications and implications in the field of Education since it provides an alternative to engage and motivate students during the process of learning. Despite this increasing interest, to the best of our knowledge, there are no studies that cover and classify the types of research being published and the most investigated topics in the area. As a first step towards bridging this gap, we carried out a systematic mapping to synthesize an overview of the area. We went through 357 papers on gamification. Among them, 48 were related to education and only 26 met the criteria for inclusion and exclusion of articles defined in this study. These 26 papers were selected and categorized according to their contribution. As a result, we provide an overview of the area. Such an overview suggests that most studies focus on investigating how gamification can be used to motivate students, improve their skills, and maximize learning.",
"title": ""
},
{
"docid": "b3fc899c49ceb699f62b43bb0808a1b2",
"text": "Social network users publicly share a wide variety of information with their followers and the general public ranging from their opinions, sentiments and personal life activities. There has already been significant advance in analyzing the shared information from both micro (individual user) and macro (community level) perspectives, giving access to actionable insight about user and community behaviors. The identification of personal life events from user’s profiles is a challenging yet important task, which if done appropriately, would facilitate more accurate identification of users’ preferences, interests and attitudes. For instance, a user who has just broken his phone, is likely to be upset and also be looking to purchase a new phone. While there is work that identifies tweets that include mentions of personal life events, our work in this paper goes beyond the state of the art by predicting a future personal life event that a user will be posting about on Twitter solely based on the past tweets. We propose two architectures based on recurrent neural networks, namely the classification and generation architectures, that determine the future personal life event of a user. We evaluate our work based on a gold standard Twitter life event dataset and compare our work with the state of the art baseline technique for life event detection. While presenting performance measures, we also discuss the limitations of our work in this paper.",
"title": ""
},
{
"docid": "fcdd881b983cfd011e15de473f389572",
"text": "In this paper we describe the development, experiments and evaluation of the iFloor, an interactive floor prototype installed at the local central municipality library. The primary purpose of the iFloor prototype is to support and stimulate community interaction between collocated people. The context of the library demands that any user can walk up and use the prototype without any devices or prior introduction. To achieve this, the iFloor proposes innovative interaction (modes/paradigms/patterns) for floor surfaces through the means of video tracking. Browsing and selecting content is done in a collaborative process and mobile phones are used for posting messages onto the floor. The iFloor highlights topics on social issues of ubiquitous computing environments in public spaces, and provides an example of how to exploit human spatial movements, positions and arrangements in interaction with computers.",
"title": ""
},
{
"docid": "01b9bf49c88ae37de79b91edeae20437",
"text": "While online, some people self-disclose or act out more frequently or intensely than they would in person. This article explores six factors that interact with each other in creating this online disinhibition effect: dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority. Personality variables also will influence the extent of this disinhibition. Rather than thinking of disinhibition as the revealing of an underlying \"true self,\" we can conceptualize it as a shift to a constellation within self-structure, involving clusters of affect and cognition that differ from the in-person constellation.",
"title": ""
},
{
"docid": "6a33013c19dc59d8871e217461d479e9",
"text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.",
"title": ""
},
{
"docid": "1a0e65754fa4d88325e1360a292d4e5f",
"text": "Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning features/components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained convolutional neural network (CNN). Then, we utilize a branched fully CNN for learning structural and textural representations, respectively. In addition, we design a sorted matching mean square error metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across data set without additional training.",
"title": ""
},
{
"docid": "94f364c7b1f4254db525c3c6108a9e4c",
"text": "A planar radar sensor for automotive application is presented. The design comprises a fully integrated transceiver multi-chip module (MCM) and an electronically steerable microstrip patch array. The antenna feed network is based on a modified Rotman-lens. An extended angular coverage together with an adapted resolution allows for the integration of automatic cruise control (ACC), precrash sensing and cut-in detection within a single 77 GHz frontend. For ease of manufacturing the interconnects between antenna and MCM rely on a mixed wire bond and flip-chip approach. The concept is validated by laboratory radar measurements.",
"title": ""
},
{
"docid": "74383319fc9dd814f77d8766fcf79a85",
"text": "Although interactive learning puts the user into the loop, the learner remains mostly a black box for the user. Understanding the reasons behind queries and predictions is important when assessing how the learner works and, in turn, trust. Consequently, we propose the novel framework of explanatory interactive learning: in each step, the learner explains its interactive query to the user, and she queries of any active classifier for visualizing explanations of the corresponding predictions. We demonstrate that this can boost the predictive and explanatory powers of and the trust into the learned model, using text (e.g. SVMs) and image classification (e.g. neural networks) experiments as well as a user study.",
"title": ""
},
{
"docid": "5fa6dc64a5a43dd8ea076fa34fd6abd2",
"text": "Cellular networks have been undergoing an extraordinarily fast evolution in the past years. With commercial deployments of Release 8 (Rel-8) Long Term Evolution (LTE) already being carried out worldwide, a significant effort is being put forth by the research and standardization communities on the development and specification of LTE-Advanced. The work started in Rel-10 by the Third Generation Partnership Project (3GPP) had the initial objective of meeting the International Mobile Telecommunications-Advanced (IMTAdvanced) requirements set by the International Telecommunications Union (ITU) which defined fourth generation (4G) systems. However, predictions based on the wireless traffic explosion in recent years indicate a need for more advanced technologies and higher performance. Hence, 3GPP’s efforts have continued through Rel-11 and now Rel-12. This paper provides a state-of-the-art comprehensive view on the key enabling technologies for LTE-Advanced systems. Already consolidated technologies developed for Rel-10 and Rel11 are reviewed while novel approaches and enhancements currently under consideration for Rel-12 are also discussed. Technical challenges for each of the main areas of study are pointed out as an encouragement for the research community to participate in this",
"title": ""
},
{
"docid": "01bd8fcce2f4b94e206a1ea91898fcff",
"text": "With deep learning becoming the dominant approach in computer vision, the use of representations extracted from Convolutional Neural Nets (CNNs) is quickly gaining ground on Fisher Vectors (FVs) as favoured state-of-the-art global image descriptors for image instance retrieval. While the good performance of CNNs for image classification are unambiguously recognised, which of the two has the upper hand in the image retrieval context is not entirely clear yet. In this work, we propose a comprehensive study that systematically evaluates FVs and CNNs for image retrieval. The first part compares the performances of FVs and CNNs on multiple publicly available data sets. We investigate a number of details specific to each method. For FVs, we compare sparse descriptors based on interest point detectors with dense single-scale and multi-scale variants. For CNNs, we focus on understanding the impact of depth, architecture and training data on retrieval results. Our study shows that no descriptor is systematically better than the other and that performance gains can usually be obtained by using both types together. The second part of the study focuses on the impact of geometrical transformations such as rotations and scale changes. FVs based on interest point detectors are intrinsically resilient to such transformations while CNNs do not have a built-in mechanism to ensure such invariance. We show that performance of CNNs can quickly degrade in presence of rotations while they are far less affected by changes in scale. We then propose a number of ways to incorporate the required invariances in the CNN pipeline. Overall, our work is intended as a reference guide offering practically useful and simply implementable guidelines to anyone looking for state-of-the-art global descriptors best suited to their specific image instance retrieval problem.",
"title": ""
},
{
"docid": "eafa6403e38d2ceb63ef7c00f84efe77",
"text": "We propose a novel approach to learning distributed representations of variable-length text sequences in multiple languages simultaneously. Unlike previous work which often derive representations of multi-word sequences as weighted sums of individual word vectors, our model learns distributed representations for phrases and sentences as a whole. Our work is similar in spirit to the recent paragraph vector approach but extends to the bilingual context so as to efficiently encode meaning-equivalent text sequences of multiple languages in the same semantic space. Our learned embeddings achieve state-of-theart performance in the often used crosslingual document classification task (CLDC) with an accuracy of 92.7 for English to German and 91.5 for German to English. By learning text sequence representations as a whole, our model performs equally well in both classification directions in the CLDC task in which past work did not achieve.",
"title": ""
},
{
"docid": "9fdecc8854f539ddf7061c304616130b",
"text": "This paper describes the pricing strategy model deployed at Airbnb, an online marketplace for sharing home and experience. The goal of price optimization is to help hosts who share their homes on Airbnb set the optimal price for their listings. In contrast to conventional pricing problems, where pricing strategies are applied to a large quantity of identical products, there are no \"identical\" products on Airbnb, because each listing on our platform offers unique values and experiences to our guests. The unique nature of Airbnb listings makes it very difficult to estimate an accurate demand curve that's required to apply conventional revenue maximization pricing strategies.\n Our pricing system consists of three components. First, a binary classification model predicts the booking probability of each listing-night. Second, a regression model predicts the optimal price for each listing-night, in which a customized loss function is used to guide the learning. Finally, we apply additional personalization logic on top of the output from the second model to generate the final price suggestions. In this paper, we focus on describing the regression model in the second stage of our pricing system. We also describe a novel set of metrics for offline evaluation. The proposed pricing strategy has been deployed in production to power the Price Tips and Smart Pricing tool on Airbnb. Online A/B testing results demonstrate the effectiveness of the proposed strategy model.",
"title": ""
},
{
"docid": "0999a01e947019409c75150f85058728",
"text": "We present a robot localization system using biologically inspired vision. Our system models two extensively studied human visual capabilities: (1) extracting the ldquogistrdquo of a scene to produce a coarse localization hypothesis and (2) refining it by locating salient landmark points in the scene. Gist is computed here as a holistic statistical signature of the image, thereby yielding abstract scene classification and layout. Saliency is computed as a measure of interest at every image location, which efficiently directs the time-consuming landmark-identification process toward the most likely candidate locations in the image. The gist features and salient regions are then further processed using a Monte Carlo localization algorithm to allow the robot to generate its position. We test the system in three different outdoor environments-building complex (38.4 m times 54.86 m area, 13 966 testing images), vegetation-filled park (82.3 m times 109.73 m area, 26 397 testing images), and open-field park (137.16 m times 178.31 m area, 34 711 testing images)-each with its own challenges. The system is able to localize, on average, within 0.98, 2.63, and 3.46 m, respectively, even with multiple kidnapped-robot instances.",
"title": ""
},
{
"docid": "e21e08745f7f9c5f0c1d8efb5f70f918",
"text": "We present Swapnet, a framework to transfer garments across images of people with arbitrary body pose, shape, and clothing. Garment transfer is a challenging task that requires (i) disentangling the features of the clothing from the body pose and shape and (ii) realistic synthesis of the garment texture on the new body. We present a neural network architecture that tackles these sub-problems with two task-specific sub-networks. Since acquiring pairs of images showing the same clothing on different bodies is difficult, we propose a novel weaklysupervised approach that generates training pairs from a single image via data augmentation. We present the first fully automatic method for garment transfer in unconstrained images without solving the difficult 3D reconstruction problem. We demonstrate a variety of transfer results and highlight our advantages over traditional image-to-image and anal-",
"title": ""
}
] |
scidocsrr
|
23e7ecb72b720141cf26881d9308ee25
|
TrimBot2020: an outdoor robot for automatic gardening
|
[
{
"docid": "33de1981b2d9a0aa1955602006d09db9",
"text": "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50%. It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"title": ""
}
] |
[
{
"docid": "c2c8c8a40caea744e40eb7bf570a6812",
"text": "OBJECTIVE\nTo investigate the association between single nucleotide polymorphisms (SNPs) of BARD1 gene and susceptibility of early-onset breast cancer in Uygur women in Xinjiang.\n\n\nMETHODS\nA case-control study was designed to explore the genotypes of Pro24Ser (C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene, detected by PCR-restriction fragment length polymorphism (PCR-RFLP) assay, in 144 early-onset breast cancer cases of Uygur women (≤ 40 years) and 136 cancer-free controls matched by age and ethnicity. The association between SNPs of BARD1 gene and risk of early-onset breast cancer in Uygur women was analyzed by unconditional logistic regression model.\n\n\nRESULTS\nEarly age at menarche, late age at first pregnancy, and positive family history of cancer may be important risk factors of early-onset breast cancer in Uygur women in Xinjiang. The frequencies of genotypes of Pro24Ser (C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene showed significant differences between the cancer cases and cancer-free controls (P < 0.05). Compared with wild-type genotype Pro24Ser CC, it showed a lower incidence of early-onset breast cancer in Uygur women with variant genotypes of Pro24Ser TT (OR = 0.117, 95%CI = 0.058 - 0.236), and dominance-genotype CT+TT (OR = 0.279, 95%CI = 0.157 - 0.494), or Arg378Ser CC (OR = 0.348, 95%CI = 0.145 - 0.834) and Val507Met AA(OR = 0.359, 95%CI = 0.167 - 0.774). Furthermore, SNPS in three polymorphisms would have synergistic effects on the risk of breast cancer. In addition, the SNP-SNP interactions of dominance-genotypes (CT+TT, GC+CC and GA+AA) showed a 52.1% lower incidence of early-onset breast cancer in Uygur women (OR = 0.479, 95%CI = 0.230 - 0.995). Stratified analysis indicated that the protective effect of carrying T variant genotype (CT/TT) in Pro24Ser and carrying C variant genotype (GC/CC) in Arg378Ser were more evident in subjects with early age at menarche and negative family history of cancer. With an older menarche age, the protective effect was weaker.\n\n\nCONCLUSIONS\nSNPs of Pro24Ser(C/T), Arg378Ser (G/C) and Val507Met (G/A) of BARD1 gene are associated with significantly decreased risk of early-onset breast cancer in Uygur women in Xinjiang. Early age at menarche and negative family history of cancer can enhance the protective effect of mutant allele.",
"title": ""
},
{
"docid": "0f17511a99f77a00930f4e8be525f1f9",
"text": "The fourth member of the leucine-rich repeat-containing GPCR family (LGR4, frequently referred to as GPR48) and its cognate ligands, R-spondins (RSPOs) play crucial roles in the development of multiple organs as well as the survival of adult stem cells by activation of canonical Wnt signaling. Wnt/β-catenin signaling acts to regulate breast cancer; however, the molecular mechanisms determining its spatiotemporal regulation are largely unknown. In this study, we identified LGR4 as a master controller of Wnt/β-catenin signaling-mediated breast cancer tumorigenesis, metastasis, and cancer stem cell (CSC) maintenance. LGR4 expression in breast tumors correlated with poor prognosis. Either Lgr4 haploinsufficiency or mammary-specific deletion inhibited mouse mammary tumor virus (MMTV)- PyMT- and MMTV- Wnt1-driven mammary tumorigenesis and metastasis. Moreover, LGR4 down-regulation decreased in vitro migration and in vivo xenograft tumor growth and lung metastasis. Furthermore, Lgr4 deletion in MMTV- Wnt1 tumor cells or knockdown in human breast cancer cells decreased the number of functional CSCs by ∼90%. Canonical Wnt signaling was impaired in LGR4-deficient breast cancer cells, and LGR4 knockdown resulted in increased E-cadherin and decreased expression of N-cadherin and snail transcription factor -2 ( SNAI2) (also called SLUG), implicating LGR4 in regulation of epithelial-mesenchymal transition. Our findings support a crucial role of the Wnt signaling component LGR4 in breast cancer initiation, metastasis, and breast CSCs.-Yue, Z., Yuan, Z., Zeng, L., Wang, Y., Lai, L., Li, J., Sun, P., Xue, X., Qi, J., Yang, Z., Zheng, Y., Fang, Y., Li, D., Siwko, S., Li, Y., Luo, J., Liu, M. LGR4 modulates breast cancer initiation, metastasis, and cancer stem cells.",
"title": ""
},
{
"docid": "7fab2075a73a5795075b29e20f5354ac",
"text": "The selection of hospital once an ambulance has picked up its patient is today decided by the ambulance staff. This report describes a supervised machine learning approach for predicting hospital selection. This is a multi-class classification problem. The performance of random forest, logistic regression and neural network were compared to each other and to a baseline, namely the one rule-algorithm. The algorithms were applied to real world data from SOS-alarm, the company that operate Sweden’s emergency call services. Performance was measured with accuracy and f1-score. Random Forest got the best result followed by neural network. Logistic regression exhibited slightly inferior results but still performed far better than the baseline. The results point toward machine learning being a suitable method for learning the problem of hospital selection. Analys av ambulanstransport medelst maskininlärning",
"title": ""
},
{
"docid": "6b59e286bdb09f64c8e3a7aa9ff15381",
"text": "Noting that skills and knowledge taught in schools have become abstracted from their uses in the world, this paper clarifies some of the implications for the nature of the knowledge that students acquire through a i;roposal for the retooling of apprenticeship methods for the teaching and learning of cognitive skills. The paper specifically proposes the development of a new cognitive apprenticeship to teach students the thinking and problem-solving skills involved in school subjects such as reading, writing, and mathematics. The first section of the paper, after discussing key shortcomings in current curricular and pedagogical practices, presents some of the structural features of traditional apprenticeship, detailing what would be required to adapt these characteristics to the teaching and learning of cognitive skills. The central section of the paper considers three recently developed pedagogical models that exemplify aspects of apprenticeship methods in teaching thinking and reasoning skills. The section notes that these methods--A. S. Palincsar and A. L. Brown's reciprocal reading teaching, M. Scardamalia and C. Bereiter's procedural facilitation of writing, and A. H. Schoenfeld's method for teaching mathematical problem solving--appear to develop successfully not only the cognitive, but also the metacognitive, skills required for true expertise. The final section organizes ideas on the purposes and characteristics of successful teaching into a general framework for the design of learning \"environments,\" including the content being taught, pedagogical methods employed, sequencing of learning activities, and the sociology of learning--emphasizing how cognitive apprenticeship goes beyond the techniques of traditional apprenticeship. Tables of data are included, and references are appended. (Author/NKA) CENTER FOR THE STUDY OF READING Technical Report No. 403 COGNITIVE APPRENTICESHIP: TEACHING THE CRAFT OF READING, WRITING, AND MATHEMATICS Allan Collins BBN Laboratories John Seely Brown Susan E. Newman Xerox Palo Alto Research Center",
"title": ""
},
{
"docid": "b9147ef0cf66bdb7ecc007a4e3092790",
"text": "This paper is related to the use of social media for disaster management by humanitarian organizations. The past decade has seen a significant increase in the use of social media to manage humanitarian disasters. It seems, however, that it has still not been used to its full potential. In this paper, we examine the use of social media in disaster management through the lens of Attribution Theory. Attribution Theory posits that people look for the causes of events, especially unexpected and negative events. The two major characteristics of disasters are that they are unexpected and have negative outcomes/impacts. Thus, Attribution Theory may be a good fit for explaining social media adoption patterns by emergency managers. We propose a model, based on Attribution Theory, which is designed to understand the use of social media during the mitigation and preparedness phases of disaster management. We also discuss the theoretical contributions and some practical implications. This study is still in its nascent stage and is research in progress.",
"title": ""
},
{
"docid": "cfad427a3200b46d195d1d715d32658a",
"text": "STUDY DESIGN\nA retrospective study.\n\n\nPURPOSE\nTo examine the efficacy and safety for a posterior-approach circumferential decompression and shortening reconstruction with a titanium mesh cage for lumbar burst fractures.\n\n\nOVERVIEW OF LITERATURE\nSurgical decompression and reconstruction for severely unstable lumbar burst fractures requires an anterior or combined anteroposterior approach. Furthermore, anterior instrumentation for the lower lumbar is restricted through the presence of major vessels.\n\n\nMETHODS\nThree patients with an L1 burst fracture, one with an L3 and three with an L4 (5 men, 2 women; mean age, 65.0 years) who underwent circumferential decompression and shortening reconstruction with a titanium mesh cage through a posterior approach alone and a 4-year follow-up were evaluated regarding the clinical and radiological course.\n\n\nRESULTS\nMean operative time was 277 minutes. Mean blood loss was 471 ml. In 6 patients, the Frankel score improved more than one grade after surgery, and the remaining patient was at Frankel E both before and after surgery. Mean preoperative visual analogue scale was 7.0, improving to 0.7 postoperatively. Local kyphosis improved from 15.7° before surgery to -11.0° after surgery. In 3 cases regarding the mid to lower lumbar patients, local kyphosis increased more than 10° by 3 months following surgery, due to subsidence of the cages. One patient developed severe tilting and subsidence of the cage, requiring additional surgery.\n\n\nCONCLUSIONS\nThe results concerning this small series suggest the feasibility, efficacy, and safety of this treatment for unstable lumbar burst fractures. This technique from a posterior approach alone offers several advantages over traditional anterior or combined anteroposterior approaches.",
"title": ""
},
{
"docid": "26a2a78909393566ef618a7d56b342d3",
"text": "The purpose of this study is to develop a wearable power assist device for hand grasping in order to support activity of daily living (ADL) safely and easily. In this paper, the mechanism of the developed power assist device is described, and then the effectiveness of this device is discussed experimentally.",
"title": ""
},
{
"docid": "70421a1d5c22452728eec63cbca95101",
"text": "The National Institute on Aging and the Alzheimer's Association charged a workgroup with the task of revising the 1984 criteria for Alzheimer's disease (AD) dementia. The workgroup sought to ensure that the revised criteria would be flexible enough to be used by both general healthcare providers without access to neuropsychological testing, advanced imaging, and cerebrospinal fluid measures, and specialized investigators involved in research or in clinical trial studies who would have these tools available. We present criteria for all-cause dementia and for AD dementia. We retained the general framework of probable AD dementia from the 1984 criteria. On the basis of the past 27 years of experience, we made several changes in the clinical criteria for the diagnosis. We also retained the term possible AD dementia, but redefined it in a manner more focused than before. Biomarker evidence was also integrated into the diagnostic formulations for probable and possible AD dementia for use in research settings. The core clinical criteria for AD dementia will continue to be the cornerstone of the diagnosis in clinical practice, but biomarker evidence is expected to enhance the pathophysiological specificity of the diagnosis of AD dementia. Much work lies ahead for validating the biomarker diagnosis of AD dementia.",
"title": ""
},
{
"docid": "83f1cb63b10552a5c14748e3cf2dfc92",
"text": "Recent automotive vision work has focused almost exclusively on processing forward-facing cameras. However, future autonomous vehicles will not be viable without a more comprehensive surround sensing, akin to a human driver, as can be provided by 360◦ panoramic cameras. We present an approach to adapt contemporary deep network architectures developed on conventional rectilinear imagery to work on equirectangular 360◦ panoramic imagery. To address the lack of annotated panoramic automotive datasets availability, we adapt a contemporary automotive dataset, via style and projection transformations, to facilitate the cross-domain retraining of contemporary algorithms for panoramic imagery. Following this approach we retrain and adapt existing architectures to recover scene depth and 3D pose of vehicles from monocular panoramic imagery without any panoramic training labels or calibration parameters. Our approach is evaluated qualitatively on crowd-sourced panoramic images and quantitatively using an automotive environment simulator to provide the first benchmark for such techniques within panoramic imagery.",
"title": ""
},
{
"docid": "783cb476dd8d663f63dc079b7b943e0d",
"text": "Temporal segmentation of human motion into actions is a crucial step for understanding and building computational models of human motion. Several issues contribute to the challenge of this task. These include the large variability in the temporal scale and periodicity of human actions, as well as the exponential nature of all possible movement combinations. We formulate the temporal segmentation problem as an extension of standard clustering algorithms. In particular, this paper proposes aligned cluster analysis (ACA), a robust method to temporally segment streams of motion capture data into actions. ACA extends standard kernel k-means clustering in two ways: (1) the cluster means contain a variable number of features, and (2) a dynamic time warping (DTW) kernel is used to achieve temporal invariance. Experimental results, reported on synthetic data and the Carnegie Mellon Motion Capture database, demonstrate its effectiveness.",
"title": ""
},
{
"docid": "ade2fd7f83a78a5a7d78c7e8286aeb18",
"text": "We present a method for solving the independent set formulation of the graph coloring problem (where there is one variable for each independent set in the graph). We use a column generation method for implicit optimization of the linear program at each node of the branch-and-bound tree. This approach, while requiring the solution of a diicult subproblem as well as needing sophisticated branching rules, solves small to moderate size problems quickly. We have also implemented an exact graph coloring algorithm based on DSATUR for comparison. Implementation details and computational experience are presented.",
"title": ""
},
{
"docid": "9db779a5a77ac483bb1991060dca7c28",
"text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.",
"title": ""
},
{
"docid": "b721cdddce57146f540fe12d957f47cc",
"text": "The effects of social influence and homophily suggest that both network structure and node attribute information should inform the tasks of link prediction and node attribute inference. Recently, Yin et al. [28, 29] proposed Social-Attribute Network (SAN), an attribute-augmented social network, to integrate network structure and node attributes to perform both link prediction and attribute inference. They focused on generalizing the random walk with restart algorithm to the SAN framework and showed improved performance. In this paper, we extend the SAN framework with several leading supervised and unsupervised link prediction algorithms and demonstrate performance improvement for each algorithm on both link prediction and attribute inference. Moreover, we make the novel observation that attribute inference can help inform link prediction, i.e., link prediction accuracy is further improved by first inferring missing attributes. We comprehensively evaluate these algorithms and compare them with other existing algorithms using a novel, largescale Google+ dataset, which we make publicly available.",
"title": ""
},
{
"docid": "6793d42185be4a264e66c22202bf670a",
"text": "Two key elements in the area of cognitive ergonomics are user-system performance and the user similarity. However, with the introduction of skinnable user interfaces, a technology that gives the user interface a chameleon-like ability, elements such as aesthetic, fun, and especially user individuality and identity become more important. This paper presents two explorative studies on user personality in relation to skin preferences. In the studies participants were asked to rate their preference of a set of Windows Media Player skins and to complete the BIS/BAS and the IPIP-NEO personality inventories. The results of the first study suggest colour and similarity-attraction as two possible underlying factors for the correlations found between personality traits and skin preferences. The results of the second study partly confirm these findings, however not for similar personality traits and skin types correlations.",
"title": ""
},
{
"docid": "4a227bddcaed44777eb7a29dcf940c6c",
"text": "Deep neural networks have achieved great success on a variety of machine learning tasks. There are many fundamental and open questions yet to be answered, however. We introduce the Extended Data Jacobian Matrix (EDJM) as an architecture-independent tool to analyze neural networks at the manifold of interest. The spectrum of the EDJM is found to be highly correlated with the complexity of the learned functions. After studying the effect of dropout, ensembles, and model distillation using EDJM, we propose a novel spectral regularization method, which improves network performance.",
"title": ""
},
{
"docid": "6c6206e330f0d9b7f9ed68f8af78b117",
"text": "This paper deals with the design, manufacture and test of a high efficiency power amplifier for L-band space borne applications. The circuit operates with a single 36 mm gate periphery GaN HEMT power bar die allowing both improved integration and performance as compared with standard HPA design in a similar RF power range. A huge effort dedicated to the device's characterization and modeling has eased the circuit optimization leaning on the multi-harmonics impedances synthesis. Test results demonstrate performance up to 140 W RF output power with an associated 60% PAE for a limited 3.9 dB gain compression under 50 V supply voltage using a single GaN power bar.",
"title": ""
},
{
"docid": "bf48f9ac763b522b8d43cfbb281fbffa",
"text": "We present a declarative framework for collective deduplication of entity references in the presence of constraints. Constraints occur naturally in many data cleaning domains and can improve the quality of deduplication. An example of a constraint is \"each paper has a unique publication venue''; if two paper references are duplicates, then their associated conference references must be duplicates as well. Our framework supports collective deduplication, meaning that we can dedupe both paper references and conference references collectively in the example above. Our framework is based on a simple declarative Datalog-style language with precise semantics. Most previous work on deduplication either ignoreconstraints or use them in an ad-hoc domain-specific manner. We also present efficient algorithms to support the framework. Our algorithms have precise theoretical guarantees for a large subclass of our framework. We show, using a prototype implementation, that our algorithms scale to very large datasets. We provide thoroughexperimental results over real-world data demonstrating the utility of our framework for high-quality and scalable deduplication.",
"title": ""
},
{
"docid": "9db0e9b90db4d7fd9c0f268b5ee9b843",
"text": "Traditionally, the evaluation of surgical procedures in virtual reality (VR) simulators has been restricted to their individual technical aspects disregarding the procedures carried out by teams. However, some decision models have been proposed to support the collaborative training evaluation process of surgical teams in collaborative virtual environments. The main objective of this article is to present a collaborative simulator based on VR, named SimCEC, as a potential solution for education, training, and evaluation in basic surgical routines for teams of undergraduate students. The simulator considers both tasks performed individually and those carried in a collaborative manner. The main contribution of this work is to improve the discussion about VR simulators requirements (design and implementation) to provide team training in relevant topics, such as users’ feedback in real time, collaborative training in networks, interdisciplinary integration of curricula, and continuous evaluation.",
"title": ""
},
{
"docid": "93a3895a03edcb50af74db901cb16b90",
"text": "OBJECT\nBecause lumbar magnetic resonance (MR) imaging fails to identify a treatable cause of chronic sciatica in nearly 1 million patients annually, the authors conducted MR neurography and interventional MR imaging in 239 consecutive patients with sciatica in whom standard diagnosis and treatment failed to effect improvement.\n\n\nMETHODS\nAfter performing MR neurography and interventional MR imaging, the final rediagnoses included the following: piriformis syndrome (67.8%), distal foraminal nerve root entrapment (6%), ischial tunnel syndrome (4.7%), discogenic pain with referred leg pain (3.4%), pudendal nerve entrapment with referred pain (3%), distal sciatic entrapment (2.1%), sciatic tumor (1.7%), lumbosacral plexus entrapment (1.3%), unappreciated lateral disc herniation (1.3%), nerve root injury due to spinal surgery (1.3%), inadequate spinal nerve root decompression (0.8%), lumbar stenosis (0.8%), sacroiliac joint inflammation (0.8%), lumbosacral plexus tumor (0.4%), sacral fracture (0.4%), and no diagnosis (4.2%). Open MR-guided Marcaine injection into the piriformis muscle produced the following results: no response (15.7%), relief of greater than 8 months (14.9%), relief lasting 2 to 4 months with continuing relief after second injection (7.5%), relief for 2 to 4 months with subsequent recurrence (36.6%), and relief for 1 to 14 days with full recurrence (25.4%). Piriformis surgery (62 operations; 3-cm incision, transgluteal approach, 55% outpatient; 40% with local or epidural anesthesia) resulted in excellent outcome in 58.5%, good outcome in 22.6%, limited benefit in 13.2%, no benefit in 3.8%, and worsened symptoms in 1.9%.\n\n\nCONCLUSIONS\nThis Class A quality evaluation of MR neurography's diagnostic efficacy revealed that piriformis muscle asymmetry and sciatic nerve hyperintensity at the sciatic notch exhibited a 93% specificity and 64% sensitivity in distinguishing patients with piriformis syndrome from those without who had similar symptoms (p < 0.01). Evaluation of the nerve beyond the proximal foramen provided eight additional diagnostic categories affecting 96% of these patients. More than 80% of the population good or excellent functional outcome was achieved.",
"title": ""
},
{
"docid": "cc15583675d6b19fbd9a10f06876a61e",
"text": "Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.",
"title": ""
}
] |
scidocsrr
|
fa552a66a6d954187f27ec5b6811bcb0
|
Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases
|
[
{
"docid": "41a16f3eb3ff59d34e04ffa77bf1ae86",
"text": "Windows Azure Storage (WAS) is a cloud storage system that provides customers the ability to store seemingly limitless amounts of data for any duration of time. WAS customers have access to their data from anywhere at any time and only pay for what they use and store. In WAS, data is stored durably using both local and geographic replication to facilitate disaster recovery. Currently, WAS storage comes in the form of Blobs (files), Tables (structured storage), and Queues (message delivery). In this paper, we describe the WAS architecture, global namespace, and data model, as well as its resource provisioning, load balancing, and replication systems.",
"title": ""
}
] |
[
{
"docid": "81fd8d4c38a65c5d0df0c849e8c080fc",
"text": "The paper presents two types of one cycle current control method for Triple Active Bridge(TAB) phase-shifted DC-DC converter integrating Renewable Energy Source(RES), Energy Storage System(ESS) and a output dc bus. The main objective of the current control methods is to control the transformer current in each cycle so that dc transients are eliminated during phase angle change from one cycle to the next cycle. In the proposed current control methods, the transformer currents are sampled within a switching cycle and the phase shift angles for the next switching cycle are generated based on sampled current values and current references. The discussed one cycle control methods also provide an inherent power decoupling feature for the three port phase shifted triple active bridge converter. Two different methods, (a) sampling and updating twice in a switching cycle and (b) sampling and updating once in a switching cycle, are explained in this paper. The current control methods are experimentally verified using digital implementation technique on a laboratory made hardware prototype.",
"title": ""
},
{
"docid": "01d8f6e022099977bdcf92ee5735e11d",
"text": "We present a novel deep learning based image inpainting system to complete images with free-form masks and inputs. e system is based on gated convolutions learned from millions of images without additional labelling efforts. e proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. Moreover, as free-form masks may appear anywhere in images with any shapes, global and local GANs designed for a single rectangular mask are not suitable. To this end, we also present a novel GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminators on dense image patches. It is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more exible results than previous methods. We show that our system helps users quickly remove distracting objects, modify image layouts, clear watermarks, edit faces and interactively create novel objects in images. Furthermore, visualization of learned feature representations reveals the eectiveness of gated convolution and provides an interpretation of how the proposed neural network lls in missing regions. More high-resolution results and video materials are available at hp://jiahuiyu.com/deepll2.",
"title": ""
},
{
"docid": "def650b2d565f88a6404997e9e93d34f",
"text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedbacks on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.",
"title": ""
},
{
"docid": "4b3e7c1682b9e039e26702105fd0cc63",
"text": "Recent research has shown that voltage scaling is a very effective technique for low-power design. This paper describes a voltage scaling technique to minimize the power consumption of a combinational circuit. First, the converter-free multiple-voltage (CFMV) structures are proposed, including the p-type, the n-type, and the two-way CFMV structures. The CFMV structures make use of multiple supply voltages and do not require level converters. In contrast, previous works employing multiple supply voltages need level converters to prevent static currents, which may result in large power consumption. In addition, the CFMV structures group the gates with the same supply voltage in a cluster to reduce the complexity of placement and routing for the subsequent physical layout stage. Next, we formulated the problem and proposed an efficient heuristic algorithm to solve it. The heuristic algorithm has been implemented in C and experiments were performed on the ISCAS85 circuits to demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "5e23bcd2f5bc996525056093c8e47e14",
"text": "No matter how mild, dehydration is not a desirable condition because there is an imbalance in the homeostatic function of the internal environment. This can adversely affect cognitive performance, not only in groups more vulnerable to dehydration, such as children and the elderly, but also in young adults. However, few studies have examined the impact of mild or moderate dehydration on cognitive performance. This paper reviews the principal findings from studies published to date examining cognitive skills. Being dehydrated by just 2% impairs performance in tasks that require attention, psychomotor, and immediate memory skills, as well as assessment of the subjective state. In contrast, the performance of long-term and working memory tasks and executive functions is more preserved, especially if the cause of dehydration is moderate physical exercise. The lack of consistency in the evidence published to date is largely due to the different methodology applied, and an attempt should be made to standardize methods for future studies. These differences relate to the assessment of cognitive performance, the method used to cause dehydration, and the characteristics of the participants.",
"title": ""
},
{
"docid": "fd4d5a8dcd0cad43aecb11705475103a",
"text": "The use of electronic health record (EHR) systems by medical professionals enables the electronic exchange of patient data, yielding cost and quality of care benefits. The United States American Recovery and Reinvestment Act (ARRA) of 2009 provides up to $34 billion for meaningful use of certified EHR systems. But, will these certified EHR systems provide the infrastructure for secure patient data exchange? As a window into the ability of current and emerging certification criteria to expose security vulnerabilities, we performed exploratory security analysis on a proprietary and an open source EHR. We were able to exploit a range of common code-level and design-level vulnerabilities. These common vulnerabilities would have remained undetected by the 2011 security certification test scripts from the Certification Commission for Health Information Technology, the most widely used certification process for EHR systems. The consequences of these exploits included, but were not limited to: exposing all users' login information, the ability of any user to view or edit health records for any patient, and creating a denial of service for all users. Based upon our results, we suggest that an enhanced set of security test scripts be used as entry criteria to the EHR certification process. Before certification bodies spend the time to certify that an EHR application is functionally complete, they should have confidence that the software system meets a basic level of security competence.",
"title": ""
},
{
"docid": "f622860032b9a4dd054082be0741f18d",
"text": "Full Metal Jacket is a general-purpose visual dataflow language currently being developed on top of Emblem, a Lisp dialect strongly influenced by Common Lisp but smaller and more type-aware, and with support for CLOS-style object orientation, graphics, event handling and multi-threading. Methods in Full Metal Jacket Jacket are directed acyclic graphs. Data arriving at ingates from the calling method flows along edges through vertices, at which it gets transformed by applying Emblem functions or methods, or methods defined in Full Metal Jacket, before it finally arrives at outgates where it is propagated back upwards to the calling method. The principal difference between Full Metal Jacket and existing visual dataflow languages such as Prograph is that Full Metal Jacket is a pure dataflow language, with no special syntax being required for control constructs such as loops or conditionals, which resemble ordinary methods except in the number of times they generate outputs. This uniform syntax means that, like Lisp and Prolog, methods in Full Metal Jacket are themselves data structures and can be manipulated as such.",
"title": ""
},
{
"docid": "69198cc56f9c4f7f1f235ae7d7c34479",
"text": "This paper presents fine-tuned CNN features for person re-identification. Recently, features extracted from top layers of pre-trained Convolutional Neural Network (CNN) on a large annotated dataset, e.g., ImageNet, have been proven to be strong off-the-shelf descriptors for various recognition tasks. However, large disparity among the pre-trained task, i.e., ImageNet classification, and the target task, i.e., person image matching, limits performances of the CNN features for person re-identification. In this paper, we improve the CNN features by conducting a fine-tuning on a pedestrian attribute dataset. In addition to the classification loss for multiple pedestrian attribute labels, we propose new labels by combining different attribute labels and use them for an additional classification loss function. The combination attribute loss forces CNN to distinguish more person specific information, yielding more discriminative features. After extracting features from the learned CNN, we apply conventional metric learning on a target re-identification dataset for further increasing discriminative power. Experimental results on four challenging person re-identification datasets (VIPeR, CUHK, PRID450S and GRID) demonstrate the effectiveness of the proposed features.",
"title": ""
},
{
"docid": "dc7361721e3a40de15b3d2211998cc2a",
"text": "Despite advances in surgical technique and postoperative care, fibrosis remains the major impediment to a marked reduction of intraocular pressure without the need of additional medication (complete success) following filtering glaucoma surgery. Several aspects specific to filtering surgery may contribute to enhanced fibrosis. Changes in conjunctival tissue structure and composition due to preceding treatments as well as alterations in interstitial fluid flow and content due to aqueous humor efflux may act as important drivers of fibrosis. In light of these pathophysiological considerations, current and possible future strategies to control fibrosis following filtering glaucoma surgery are discussed.",
"title": ""
},
{
"docid": "43a0ba335b2e024830c53893269ea144",
"text": "Controlling for selection and confounding biases are two of the most challenging problems in the empirical sciences as well as in artificial intelligence tasks. Covariate adjustment (or, Backdoor Adjustment) is the most pervasive technique used for controlling confounding bias, but the same is oblivious to issues of sampling selection. In this paper, we introduce a generalized version of covariate adjustment that simultaneously controls for both confounding and selection biases. We first derive a sufficient and necessary condition for recovering causal effects using covariate adjustment from an observational distribution collected under preferential selection. We then relax this setting to consider cases when additional, unbiased measurements over a set of covariates are available for use (e.g., the age and gender distribution obtained from census data). Finally, we present a complete algorithm with polynomial delay to find all sets of admissible covariates for adjustment when confounding and selection biases are simultaneously present and unbiased data is available.",
"title": ""
},
{
"docid": "45a45087a6829486d46eda0adcff978f",
"text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators to reap the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.",
"title": ""
},
{
"docid": "98689a2f03193a2fb5cc5195ef735483",
"text": "Darknet markets are online services behind Tor where cybercriminals trade illegal goods and stolen datasets. In recent years, security analysts and law enforcement start to investigate the darknet markets to study the cybercriminal networks and predict future incidents. However, vendors in these markets often create multiple accounts (\\em i.e., Sybils), making it challenging to infer the relationships between cybercriminals and identify coordinated crimes. In this paper, we present a novel approach to link the multiple accounts of the same darknet vendors through photo analytics. The core idea is that darknet vendors often have to take their own product photos to prove the possession of the illegal goods, which can reveal their distinct photography styles. To fingerprint vendors, we construct a series deep neural networks to model the photography styles. We apply transfer learning to the model training, which allows us to accurately fingerprint vendors with a limited number of photos. We evaluate the system using real-world datasets from 3 large darknet markets (7,641 vendors and 197,682 product photos). A ground-truth evaluation shows that the system achieves an accuracy of 97.5%, outperforming existing stylometry-based methods in both accuracy and coverage. In addition, our system identifies previously unknown Sybil accounts within the same markets (23) and across different markets (715 pairs). Further case studies reveal new insights into the coordinated Sybil activities such as price manipulation, buyer scam, and product stocking and reselling.",
"title": ""
},
{
"docid": "64406c6b0e45eb49743f0789dcb89029",
"text": "Hand gesture is one of the typical methods used in sign language for non-verbal communication. Sign gestures are a non-verbal visual language, different from the spoken language, but serving the same function. It is often very difficult for the hearing impaired community to communicate their ideas and creativity to the normal humans. This paper presents a system that will not only automatically recognize the hand gestures but also convert it into corresponding speech output so that speaking impaired person can easily communicate with normal people. The gesture to speech system, G2S, has been developed using the skin colour segmentation. The system consists of camera attached to computer that will take images of hand gestures. Image segmentation & feature extraction algorithm is used to recognize the hand gestures of the signer. According to recognized hand gestures, corresponding pre-recorded sound track will be played.",
"title": ""
},
{
"docid": "ac6b7e1b0aea33018f5f4a9077549aba",
"text": "Management of sudden unrelenting breast growth in a young woman included use of antiestrogen hormone therapy and subcutaneous mastectomy. Later, massive breast growth again occurred during pregnancy, requiring a repeat postpartum subcutaneous mastectomy. The dramatic response to a specific antiestrogen agent and the subsequent massive regrowth of breast tissue after subcutaneous mastectomy suggests that breast tissue is extremely sensitive to circulating hormones in certain patients with macromastia. The unusual nature of this patient's recurrent macromastia warrants this review of reports of similarly affected patients and discussion of general concepts in the medical and surgical management of the disorder.",
"title": ""
},
{
"docid": "d5907911dfa7340b786f85618702ac12",
"text": "In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers' challenges and perspectives. We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.",
"title": ""
},
{
"docid": "17b24352c5b255bbbc85a8eaf98b7b84",
"text": "Situational awareness involves the timely acquisition of knowledge about real-world events, distillation of those events into higher-level conceptual constructs, and their synthesis into a coherent context-sensitive view. We explore how convergent trends in video sensing, crowd sourcing and edge computing can be harnessed to create a shared real-time information system for situational awareness in vehicular systems that span driverless and drivered vehicles.",
"title": ""
},
{
"docid": "ca0ede1b7a0f81e3f17f2bb8804b2eeb",
"text": "WiFi in indoor environments exhibits spatio-temporal variations in terms of coverage and interference in typical WLAN deployments with multiple APs, motivating the need for automated monitoring to aid network administrators to adapt the WLAN deployment in order to match the user expectations. We develop Pazl, a mobile crowdsensing based indoor WiFi monitoring system that is enabled by a novel hybrid localization mechanism to locate individual measurements taken from participant phones. The localization mechanism in Pazl integrates the best aspects of two well known localization techniques, pedestrian dead reckoning and WiFi fingerprinting; it also relies on crowdsourcing for constructing the WiFi fingerprint database. Compared to existing WiFi monitoring systems based on static sniffers, Pazl is low cost and provides a user-side perspective. Pazl is significantly more automated than wireless site survey tools such as Ekahau Mobile Survey tool by drastically reducing the manual point-and-click based measurement location determination. We implement Pazl through a combination of Android mobile app and cloud backend application on the Google App Engine. Experimental evaluation of Pazl with a trial set of users shows that it yields similar results to manual site surveys but without the tedium.",
"title": ""
},
{
"docid": "d78acb79ccd229af7529dae1408dea6a",
"text": "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.",
"title": ""
},
{
"docid": "4bcdc83f93bec38616eea1acec30d512",
"text": "Sentiment analysis deals with identifying and classifying opinions or sentiments expressed in source text. Social media is generating a vast amount of sentiment rich data in the form of tweets, status updates, blog posts etc. Sentiment analysis of this user generated data is very useful in knowing the opinion of the crowd. Twitter sentiment analysis is difficult compared to general sentiment analysis due to the presence of slang words and misspellings. The maximum limit of characters that are allowed in Twitter is 140. Knowledge base approach and Machine learning approach are the two strategies used for analyzing sentiments from the text. In this paper, we try to analyze the twitter posts about electronic products like mobiles, laptops etc using Machine Learning approach. By doing sentiment analysis in a specific domain, it is possible to identify the effect of domain information in sentiment classification. We present a new feature vector for classifying the tweets as positive, negative and extract peoples' opinion about products.",
"title": ""
},
{
"docid": "19fbd4a685e7fc8c299447644f496d5f",
"text": "The creation of the e-learning web services are increasingly growing. Therefore, their discovery is a very important challenge. The choice of the e-learning web services depend, generally, on the pedagogic, the financial and the technological constraints. The Learning Quality ontology extends existing ontology such as OWL-S to provide a semantically rich description of these constraints. However, due to the diversity of web services customers, other parameters must be considered during the discovery process, such as their preferences. For this purpose, the user profile takes into account to increase the degree of relevance of discovery results. We also present a modeling scenario to illustrate how our ontology can be used.",
"title": ""
}
] |
scidocsrr
|
95591fb80a563d7aad8ddb2365a50951
|
Relating kindergarten attention to subsequent developmental pathways of classroom engagement in elementary school.
|
[
{
"docid": "9c0baef3b1d0c0f13b87a2dbeb4769f9",
"text": "In a longitudinal study of 140 eighth-grade students, self-discipline measured by self-report, parent report, teacher report, and monetary choice questionnaires in the fall predicted final grades, school attendance, standardized achievement-test scores, and selection into a competitive high school program the following spring. In a replication with 164 eighth graders, a behavioral delay-of-gratification task, a questionnaire on study habits, and a group-administered IQ test were added. Self-discipline measured in the fall accounted for more than twice as much variance as IQ in final grades, high school selection, school attendance, hours spent doing homework, hours spent watching television (inversely), and the time of day students began their homework. The effect of self-discipline on final grades held even when controlling for first-marking-period grades, achievement-test scores, and measured IQ. These findings suggest a major reason for students falling short of their intellectual potential: their failure to exercise self-discipline.",
"title": ""
}
] |
[
{
"docid": "46291c5a7fafd089c7729f7bc77ae8b7",
"text": "This paper proposes a new system for offline writer identification and writer verification. The proposed method uses GMM supervectors to encode the feature distribution of individual writers. Each supervector originates from an individual GMM which has been adapted from a background model via a maximum-a-posteriori step followed by mixing the new statistics with the background model. We show that this approach improves the TOP-1 accuracy of the current best ranked methods evaluated at the ICDAR-2013 competition dataset from 95.1% [13] to 97.1%, and from 97.9% [11] to 99.2% at the CVL dataset, respectively. Additionally, we compare the GMM supervector encoding with other encoding schemes, namely Fisher vectors and Vectors of Locally Aggregated Descriptors.",
"title": ""
},
{
"docid": "f5da20b4dcdabe473efbd3fd0dea1049",
"text": "A surface light field is a function that assigns a color to each ray originating on a surface. Surface light fields are well suited to constructing virtual images of shiny objects under complex lighting conditions. This paper presents a framework for construction, compression, interactive rendering, and rudimentary editing of surface light fields of real objects. Generalization of vector quantization and principal component analysis are used to construct a compressed representation of an object's surface light field from photographs and range scans. A new rendering algorithm achieves interactive rendering of images from the compressed representation, incorporating view-dependent geometric level-of-detail control. The surface light field representation can also be directly edited to yield plausible surface light fields for small changes in surface geometry and reflectance properties.",
"title": ""
},
{
"docid": "14d8bf0bdf519cf0197098d56e6a0c49",
"text": "Overlapped subarray networks produce flat-topped sector patterns with low sidelobes that suppress grating lobes outside of the main beam of the subarray pattern. They are typically used in limited scan applications, where it is desired to minimize the number of controls required to steer the beam. However, the architecture of an overlapped subarray antenna includes many signal crossovers and a wide variation in splitting/combining ratios, which make it difficult to maintain required error tolerances. This paper presents the design considerations and results for an overlapped subarray radar antenna, including a custom subarray weighting function and the corresponding circuit design and fabrication. Measured pattern results will be shown for a prototype design compared with desired patterns.",
"title": ""
},
{
"docid": "f043acf163d787c4a53924515b509aba",
"text": "A two-wheeled self-balancing robot is a special type of wheeled mobile robot, its balance problem is a hot research topic due to its unstable state for controlling. In this paper, human transporter model has been established. Kinematic and dynamic models are constructed and two control methods: Proportional-integral-derivative (PID) and Linear-quadratic regulator (LQR) are implemented to test the system model in which controls of two subsystems: self-balance (preventing system from falling down when it moves forward or backward) and yaw rotation (steering angle regulation when it turns left or right) are considered. PID is used to control both two subsystems, LQR is used to control self-balancing subsystem only. By using simulation in Matlab, two methods are compared and discussed. The theoretical investigations for controlling the dynamic behavior are meaningful for design and fabrication. Finally, the result shows that LQR has a better performance than PID for self-balancing subsystem control.",
"title": ""
},
{
"docid": "561b3604a579022e93d88c6c81bfb13b",
"text": "In this thesis we will use Random Forests to define a trading strategy. Using this powerful machine learning technique, we will try to predict the daily price changes of financial products that move similarly over the long term, so-called cointegrated pairs. We propose a way to adjust our portfolio based on these prediction, while limiting our risk. Firstly, we test our strategy on data generated from a model that mimics these kinds of financial products. After promising results, we test our strategy on the Dutch AEX index and the German DAX index. From our backtests we see that our strategy outperforms both indices in terms of Sharpe ratio. Using a backtesting period of 10 year up to mid 2017 we find an annualized Sharpe ratio of about 0.7, before transaction costs and ignoring the riskfree return rate.",
"title": ""
},
{
"docid": "50df49f3c9de66798f89fdeab9d2ae85",
"text": "Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing or augmenting human judgment with computer models in high stakes settings– such as sentencing, hiring, policing, college admissions, and parole decisions– is the perceived “neutrality” of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice. There is growing recognition that employing algorithms does not remove the potential for bias, and can even amplify it if the training data were generated by a process that is itself biased. In this paper, we provide a probabilistic notion of algorithmic bias. We propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data to which the models will ultimately be trained. Unlike previous work in this area, our procedure accommodates data on any measurement scale. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce “race-neutral” predictions of re-arrest. In the process, we demonstrate that a common approach to creating “race-neutral” models– omitting race as a covariate– still results in racially disparate predictions. We then demonstrate that the application of our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.",
"title": ""
},
{
"docid": "df3c5a848c66dbd5e804242a93cdb998",
"text": "Handwritten character recognition has been one of the most fascinating research among the various researches in field of image processing. In Handwritten character recognition method the input is scanned from images, documents and real time devices like tablets, tabloids, digitizers etc. which are then interpreted into digital text. There are basically two approaches - Online Handwritten recognition which takes the input at run time and Offline Handwritten Recognition which works on scanned images. In this paper we have discussed the architecture, the steps involved, and the various proposed methodologies of offline and online character recognition along with their comparison and few applications.",
"title": ""
},
{
"docid": "6eb2c0e22ecc0816cb5f83292902d799",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
},
{
"docid": "b3104044c7e173a754288f44fa98668a",
"text": "The presence of biofilm remains a challenging factor that contributes to the delayed healing of many chronic wounds. The major threat of chronic wound biofilms is their substantial protection from host immunities and extreme tolerance to antimicrobial agents. To help guide the development of wound treatment strategies, a panel of experts experienced in clinical and laboratory aspects of biofilm convened to discuss what is understood and not yet understood about biofilms and what is needed to better identify and treat chronic wounds in which biofilm is suspected. This article reviews evidence of the problem of biofilms in chronic wounds, summarizes literature-based and experience-based recommendations from the panel meeting, and identifies future and emerging technologies needed to address the current gaps in knowledge. While currently there is insufficient evidence to provide an accurate comparison of the effectiveness of current therapies/products in reducing or removing biofilm, research has shown that in addition to debridement, appropriate topical antimicrobial application can suppress biofilm reformation. Because the majority of the resistance of bacteria in a biofilm population is expressed by its own secreted matrix of extracellular polymeric substance (EPS), panel members stressed the need for a paradigm shift toward biofilm treatment strategies that disrupt this shield. High-osmolarity surfactant solution technology is emerging as a potential multimodal treatment that has shown promise in EPS disruption and prevention of biofilm formation when used immediately post debridement. Panel members advocated incorporating an EPS-disrupting technology into an antibiofilm treatment approach for all chronic wounds. The activity of this panel is a step toward identifying technology and research needed to improve biofilm management of chronic wounds.",
"title": ""
},
{
"docid": "c04fc6682403d89e1fbca19787f7a118",
"text": "This paper presents a Compact Dual-Circularly Polarized Corrugated Horn with Integrated Septum Polarizer in the X-band. Usually such a complicated structure would be fabricated in parts and assembled together. However, exploiting the versatility afforded by Metal 3D-printing, a complete prototype is fabricated as a single part, enabling a compact design. Any variation due to mating tolerance of separate parts is eliminated. The prototype is designed to work from 9.5GHz to 10GHz. It has an impedance match of better than |S11|<−15dB and a gain about 13.4dBic at 9.75GHz. The efficiency is better than 95% in the operating band.",
"title": ""
},
{
"docid": "b1a384176d320576ec8bc398474f5e68",
"text": "Concept mapping (a mixed qualitative–quantitative methodology) was used to describe and understand the psychosocial experiences of adults with confirmed and self-identified dyslexia. Using innovative processes of art and photography, Phase 1 of the study included 15 adults who participated in focus groups and in-depth interviews and were asked to elucidate their experiences with dyslexia. On index cards, 75 statements and experiences with dyslexia were recorded. The second phase of the study included 39 participants who sorted these statements into self-defined categories and rated each statement to reflect their personal experiences to produce a visual representation, or concept map, of their experience. The final concept map generated nine distinct cluster themes: Organization Skills for Success; Finding Success; A Good Support System Makes the Difference; On Being Overwhelmed; Emotional Downside; Why Can’t They See It?; Pain, Hurt, and Embarrassment From Past to Present; Fear of Disclosure; and Moving Forward. Implications of these findings are discussed.",
"title": ""
},
{
"docid": "0bb53802df49097659ec2e9962ef4ede",
"text": "In her 2006 book \"My Stroke of Insight\" Dr. Jill Bolte Taylor relates her experience of suffering from a left hemispheric stroke caused by a congenital arteriovenous malformation which led to a loss of inner speech. Her phenomenological account strongly suggests that this impairment produced a global self-awareness deficit as well as more specific dysfunctions related to corporeal awareness, sense of individuality, retrieval of autobiographical memories, and self-conscious emotions. These are examined in details and corroborated by numerous excerpts from Taylor's book.",
"title": ""
},
{
"docid": "ab9f99378328dbfbec1a81ca9557b2f1",
"text": "This paper presents the results of a concurrent, nested, mixed methods exploratory study on the safety and e ectiveness of the use of a 30 lb weighted blanket with a convenience sample of 32 adults. Safety is investigated measuring blood pressure, pulse rate, and pulse oximetry, and e ectiveness by electrodermal activity (EDA), the State Trait Anxiety Inventory-10 and an exit survey. The results reveal that Brian Mullen (E-mail: [email protected]), BS, is Graduate Research Assistant, Sundar Krishnamurty (E-mail: [email protected]), PhD, is In terim Department Head and Associate Professor, and Robert X. Gao (E-mail: [email protected]), PhD, is Professor; all are at University of MassachusettsAmherst, Department of Mechanical & Industrial Engineering–ELAB Building, 160 Governors Drive, Amherst, MA 01003 . Tina Champagne (E-mail: [email protected]), MEd, OTR/L, is Occupational Therapy and Group Program Supervisor, and Debra Dickson (E-mail: [email protected]), APRN, BC, is Behavioral Health Clinical Nurse Specialist; both are at Cooley Dickinson Hospital, Acute Inpatient Behav ioral Health Department, 30 Locust Street, Northampton, MA 01060. Address correspondence to Tina Champagne at the above address. The authors wish to acknowledge and thank the UMASS-Amherst School of Nursing for providing use of the nursing lab and vital signs monitoring equipment for the pur poses of this study and to Dr. Keli Mu for his assistance with the revisions of this paper. Occupational Therapy in Mental Health, Vol. 24(1) 2008 Available online at http://otmh.haworthpress.com © 2008 by The Haworth Press. All rights reserved. doi:10.1300/J004v24n01_05 65 the use of the 30 lb weighted blanket, in the lying down position, is safe as evidenced by the vital sign metrics. Data obtained on e ectiveness reveal 33% demonstrated lowering in EDA when using the weighted blanket, 63% reported lower anxiety after use, and 78% preferred the weighted blanket as a calming modality. The results of this study will be used to form the basis for subsequent research on the therapeutic in u ence of the weighted blanket with adults during an acute inpatient mental health admission.doi:10.1300/J004v24n01_05 [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <[email protected]> Website: <http://www.HaworthPress. com> © 2008 by The Haworth Press. All rights reserved.]",
"title": ""
},
{
"docid": "b5b8553b1f50a48af88f9902eab74254",
"text": "In this paper we introduce the Fourier tag, a synthetic fiducial marker used to visually encode information and provide controllable positioning. The Fourier tag is a synthetic target akin to a bar-code that specifies multi-bit information which can be efficiently and robustly detected in an image. Moreover, the Fourier tag has the beneficial property that the bit string it encodes has variable length as a function of the distance between the camera and the target. This follows from the fact that the effective resolution decreases as an effect of perspective. This paper introduces the Fourier tag, describes its design, and illustrates its properties experimentally.",
"title": ""
},
{
"docid": "42062770fefe4787688718323a528313",
"text": "We present SPred, a novel method for the creation of large repositories of semantic predicates. We start from existing collocations to form lexical predicates (e.g., break ∗) and learn the semantic classes that best fit the ∗ argument. To do this, we extract all the occurrences in Wikipedia which match the predicate and abstract its arguments to general semantic classes (e.g., break BODY PART, break AGREEMENT, etc.). Our experiments show that we are able to create a large collection of semantic predicates from the Oxford Advanced Learner’s Dictionary with high precision and recall, and perform well against the most similar approach.",
"title": ""
},
{
"docid": "5f4e761af11ace5a4d6819431893a605",
"text": "The high power density converter is required due to the strict demands of volume and weight in more electric aircraft, which makes SiC extremely attractive for this application. In this work, a prototype of 50 kW SiC high power density converter with the topology of two-level three-phase voltage source inverter is demonstrated. This converter is driven at high switching speed based on the optimization in switching characterization. It operates at a switching frequency up to 100 kHz and a low dead time of 250 ns. And the converter efficiency is measured to be 99% at 40 kHz and 97.8% at 100 kHz.",
"title": ""
},
{
"docid": "8be610106348aba1f67d4c359a7ecc21",
"text": "This paper presents simulation model of the DVB-S2 (Digital Video Broadcasting - Satellite - Second Generation) system implemented in Simulink, Matlab. The model provides simulation of the DVB-S2 system parameters in AWGN (Additive White Gaussian Noise) channel. The aim of this model is to propose optimal DVB-S2 parameters in different propagation conditions. The simulation offers two modulation scheme options QPSK (Quadrature Phase Shift Keying) and 8PSK (8 Phase Shift Keying) with different code ratio values. During the simulation, BER (bit error rate) and PER (packet error rate) are calculated and the constellation diagram is observed. Simulation results, obtained by using two test images with different texture, showed that QPSK modulation is more robust compared to 8PSK modulation in the same propagation conditions. Simulink model results were compared with measurements of several Astra 1 satellite (19,2° E) transponders parameters. Lab-measured values achieved higher SNR values than simulation cases because of real wireless channel conditions. Optimal operation parameters for a DVB-S2 system according to channel conditions and the required bit rate are proposed.",
"title": ""
},
{
"docid": "f8339417b0894191670d1528df7ac297",
"text": "OBJECTIVE\nThe purpose of this study was to reanalyze the results of a previously published trial that compared 3 methods of anterior colporrhaphy according to the clinically relevant definitions of success.\n\n\nSTUDY DESIGN\nA secondary analysis of a trial of 114 subjects who underwent surgery for anterior pelvic organ prolapse who were assigned randomly to standard anterior colporrhaphy, ultralateral colporrhaphy, or anterior colporrhaphy plus polyglactin 910 mesh from 1996-1999. For the current analysis, success was defined as (1) no prolapse beyond the hymen, (2) the absence of prolapse symptoms (visual analog scale ≤ 2), and (3) the absence of retreatment.\n\n\nRESULTS\nEighty-eight percent of the women met our definition of success at 1 year. One subject (1%) underwent surgery for recurrence 29 months after surgery. No differences among the 3 groups were noted for any outcomes.\n\n\nCONCLUSION\nReanalysis of a trial of 3 methods of anterior colporrhaphy revealed considerably better success with the use of clinically relevant outcome criteria compared with strict anatomic criteria.",
"title": ""
},
{
"docid": "70f1f5de73c3a605b296299505fd4e61",
"text": "Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process allows to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule called “standard dropout” is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined “dropout distillation”, that allows us to train a predictor in a way to better approximate the intractable, but preferable, averaging process, while keeping under control its computational efficiency. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.",
"title": ""
},
{
"docid": "cd18d1e77af0e2146b67b028f1860ff0",
"text": "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"title": ""
}
] |
scidocsrr
|
b91bdfe8ac4594401fab6aeaad266c6e
|
High performance true random number generator based on FPGA block RAMs
|
[
{
"docid": "1939a5101fbdb8734161ab74333a2d52",
"text": "Two FPGA based implementations of random number generators intended for embedded cryptographic applications are presented. The first is a true random number generator (TRNG) which employs oscillator phase noise, and the second is a bit serial implementation of a Blum Blum Shub (BBS) pseudorandom number generator (PRNG). Both designs are extremely compact and can be implemented on any FPGA or PLD device. They were designed specifically for use as FPGA based cryptographic hardware cores. The TRNG and PRNG were tested using the NIST and Diehard random number test suites.",
"title": ""
}
] |
[
{
"docid": "1a2f2e75691e538c867b6ce58591a6a5",
"text": "Despite the profusion of NIALM researches and products using complex algorithms, addressing the market for low cost, compact, real-time and effective NIALM smart meters is still a challenge. This paper talks about the design of a NIALM smart meter for home appliances, with the ability to self-detect and disaggregate most home appliances. In order to satisfy the compact, real-time, low price requirements and to solve the challenge in slow transient and multi-state appliances, two algorithms are used: the CUSUM to improve the event detection and the Genetic Algorithm (GA) for appliance disaggregation. Evaluation of these algorithms has been done according to public NIALM REDD data set [6]. They are now in first stage of architecture design using Labview FPGA methodology. KeywordsNIALM, CUSUM, Genetic Algorithm, K-mean, classification, smart meter, FPGA.",
"title": ""
},
{
"docid": "a63c4ce93a67ab1668213802c5aab855",
"text": "One of the key challenges of utilizing concentrated winding in interior permanent magnet machines (IPMs) is the high rotor eddy current losses in both magnets and rotor iron due to the presence of a large number of lower and higher order space harmonics in the stator magnetomotive force (MMF). These MMF harmonics also result in other undesirable effects, such as localized core saturation, acoustic noise, and vibrations. This paper proposes a nine-phase 18-slot 14-pole IPM machine using the multiple three-phase winding sets to reduce MMF harmonics. All the subharmonics and some of the higher order harmonics are cancelled out, while the advantages of the concentrate windings are retained. The proposed machine exhibits a high efficiency over wide torque and speed ranges. A 10-kW machine prototype is built and tested in generator mode for the experimental validation. The experimental results indicate the effectiveness of the MMF harmonics cancellation in the proposed machine.",
"title": ""
},
{
"docid": "999aef425b90782b85c9b5e8b32129d7",
"text": "Data analysis has become a fundamental task in analytical chemistry due to the great quantity of analytical information provided by modern analytical instruments. Supervised pattern recognition aims to establish a classification model based on experimental data in order to assign unknown samples to a previously defined sample class based on its pattern of measured features. The basis of the supervised pattern recognition techniques mostly used in food analysis are reviewed, making special emphasis on the practical requirements of the measured data and discussing common misconceptions and errors that might arise. Applications of supervised pattern recognition in the field of food chemistry appearing in bibliography in the last two years are also reviewed.",
"title": ""
},
{
"docid": "53a67740e444b5951bc6ab257236996e",
"text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.",
"title": ""
},
{
"docid": "a0d34b1c003b7e88c2871deaaba761ed",
"text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1",
"title": ""
},
{
"docid": "9901be4dddeb825f6443d75a6566f2d0",
"text": "In this paper a new approach to gas leakage detection in high pressure natural gas transportation networks is proposed. The pipeline is modelled as a Linear Parameter Varying (LPV) System driven by the source node massflow with the gas inventory variation in the pipe (linepack variation, proportional to the pressure variation) as the scheduling parameter. The massflow at the offtake node is taken as the system output. The system is identified by the Successive Approximations LPV System Subspace Identification Algorithm which is also described in this paper. The leakage is detected using a Kalman filter where the fault is treated as an augmented state. Given that the gas linepack can be estimated from the massflow balance equation, a differential method is proposed to improve the leakage detector effectiveness. A small section of a gas pipeline crossing Portugal in the direction South to North is used as a case study. LPV models are identified from normal operational data and their accuracy is analyzed. The proposed LPV Kalman filter based methods are compared with a standard mass balance method in a simulated 10% leakage detection scenario. The Differential Kalman Filter method proved to be highly efficient.",
"title": ""
},
{
"docid": "0cc61499ca4eaba9d23214fc7985f71c",
"text": "We review the recent progress of the latest 100G to 1T class coherent PON technology using a simplified DSP suitable for forthcoming 5G era optical access systems. The highlight is the presentation of the first demonstration of 100 Gb/s/λ × 8 (800 Gb/s) based PON.",
"title": ""
},
{
"docid": "cbcf8582ac745855557697568d390a51",
"text": "We present a method for restoring antialiased edges that are damaged by certain types of nonlinear image filters. This problem arises with many common operations such as intensity thresholding, tone mapping, gamma correction, histogram equalization, bilateral filters, unsharp masking, and certain nonphotorealistic filters. We present a simple algorithm that selectively adjusts the local gradients in affected regions of the filtered image so that they are consistent with those in the original image. Our algorithm is highly parallel and is therefore easily implemented on a GPU. Our prototype system can process up to 500 megapixels per second and we present results for a number of different image filters.",
"title": ""
},
{
"docid": "4c135811091cf5b0547189272d3c1ffd",
"text": "DBSCAN is a well-known density-based data clustering algorithm that is widely used due to its ability to find arbitrarily shaped clusters in noisy data. However, DBSCAN is hard to scale which limits its utility when working with large data sets. Resilient Distributed Datasets (RDDs), on the other hand, are a fast data-processing abstraction created explicitly for in-memory computation of large data sets. This paper presents a new algorithm based on DBSCAN using the Resilient Distributed Datasets approach: RDD-DBSCAN. RDD-DBSCAN overcomes the scalability limitations of the traditional DBSCAN algorithm by operating in a fully distributed fashion. The paper also evaluates an implementation of RDD-DBSCAN using Apache Spark, the official RDD implementation.",
"title": ""
},
{
"docid": "b7a6adb1eee3fe1f0a9abd4508d57828",
"text": "As part of a complete software stack for autonomous driving, NVIDIA has created a neural-network-based system, known as PilotNet, which outputs steering angles given images of the road ahead. PilotNet is trained using road images paired with the steering angles generated by a human driving a data-collection car. It derives the necessary domain knowledge by observing human drivers. This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving. Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, regardless of whether lane markings are present or not. The goal of the work described here is to explain what PilotNet learns and how it makes its decisions. To this end we developed a method for determining which elements in the road image most influence PilotNet’s steering decision. Results show that PilotNet indeed learns to recognize relevant objects on the road. In addition to learning the obvious features such as lane markings, edges of roads, and other cars, PilotNet learns more subtle features that would be hard to anticipate and program by engineers, for example, bushes lining the edge of the road and atypical vehicle classes.",
"title": ""
},
{
"docid": "d9cdbc7dd4d8ae34a3d5c1765eb48072",
"text": "Beanstalk is an educational game for children ages 6-10 teaching balance-fulcrum principles while folding in scientific inquiry and socio-emotional learning. This paper explores the incorporation of these additional dimensions using intrinsic motivation and a framing narrative. Four versions of the game are detailed, along with preliminary player data in a 2×2 pilot test with 64 children shaping the modifications of Beanstalk for much broader testing.",
"title": ""
},
{
"docid": "fce8ec4c0cc90c085ce5d269c4f8d683",
"text": "Hardware simulation of channel codes offers the potential of improving code evaluation speed by orders of magnitude over workstationor PC-based simulation. We describe a hardware-based Gaussian noise generator used as a key component in a hardware simulation system, for exploring channel code behavior at very low bit error rates (BERs) in the range of 10−9 to 10−10. The main novelty is the design and use of non-uniform piecewise linear approximations in computing trigonometric and logarithmic functions. The parameters of the approximation are chosen carefully to enable rapid computation of coefficients from the inputs, while still retaining extremely high fidelity to the modelled functions. The output of the noise generator accurately models a true Gaussian PDF even at very high σ values. Its properties are explored using: (a) several different statistical tests, including the chi-square test and the Kolmogorov-Smirnov test, and (b) an application for decoding of low density parity check (LDPC) codes. An implementation at 133MHz on a Xilinx Virtex-II XC2V4000-6 FPGA produces 133 million samples per second, which is 40 times faster than a 2.13GHz PC; another implementation on a Xilinx Spartan-IIE XC2S300E-7 FPGA at 62MHz is capable of a 20 times speedup. The performance can be improved by exploiting parallelism: an XC2V4000-6 FPGA with three parallel instances of the noise generator at 126MHz can run 100 times faster than a 2.13GHz PC. We illustrate the deterioration of clock speed with the increase in the number of instances.",
"title": ""
},
{
"docid": "ce41e19933571f6904e317a33b97716b",
"text": "Ivan Voitalov, 2 Pim van der Hoorn, 2 Remco van der Hofstad, and Dmitri Krioukov 2, 4, 5 Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA Network Science Institute, Northeastern University, Boston, Massachusetts 02115, USA Department of Mathematics and Computer Science, Eindhoven University of Technology, Postbus 513, 5600 MB Eindhoven, Netherlands Department of Mathematics, Northeastern University, Boston, Massachusetts 02115, USA Department of Electrical & Computer Engineering, Northeastern University, Boston, Massachusetts 02115, USA",
"title": ""
},
{
"docid": "76c05736b10834a396e370197d84d2d3",
"text": "In recent years, crowd counting in still images has attracted many research interests due to its applications in public safety. However, it remains a challenging task for reasons of perspective and scale variations. In this paper, we propose an effective Skip-connection Convolutional Neural Network (SCNN) for crowd counting to overcome the issue of scale variations. The proposed SCNN architecture consists of several multi-scale units to extract multi-scale features. Each multi-scale unit including three convolutional layers builds connections between the input and each convolutional layer. In addition, we propose a scale-related training method to improve the accuracy and robustness of crowd counting. We evaluate our method on three crowd counting benchmarks. Experimental results verify the efficiency of the proposed method, and it achieves superior performance compared with other methods.",
"title": ""
},
{
"docid": "9e04dfd81bc7379d4aaa465e4a265ec8",
"text": "Globally, national pharmacovigilance systems rely on spontaneous reporting in which suspected adverse drug reactions (ADRs) are reported to a national coordinating centre by health professionals, manufacturers or patients. Spontaneous reporting systems are the easiest to establish and the cheapest to run but suffer from poor-quality reports and underreporting. It is difficult to estimate rates and frequencies of ADRs through spontaneous reporting. Public health programmes need to quantify and characterize risks to individuals and communities from their medicines, to minimize harm and improve use, to sustain public confidence in the programmes, and to track problems due to medication errors and poor quality medicines. Additional methods are therefore needed to monitor the quantitative aspects of medicine safety, to better identify specific risk factors and high-risk groups, and to characterize ADRs associated with specific medicines and in specific populations. The present paper introduces two methods, cohort event monitoring and targeted spontaneous reporting, that are being implemented by the WHO, in its public health programmes, to complement spontaneous reporting. The advantages and disadvantages of these methods and how each can be applied in clinical practice are discussed.",
"title": ""
},
{
"docid": "77a84e637f3db5dd9d35ee7bb9a33176",
"text": "Mobile Opportunistic Network (MON) is characterized by intermittent connectivity where communication largely depends on the mobility pattern of the participating nodes. In MON, a node can take the custody of a packet for a long time and carry it until a new forwarding path has been established, unlike mobile adhoc network (MANET), where a node must drop the packet otherwise. Therefore, routing in MON depends on the repeated make-and-break of communication links, which again depends on the mobility of the nodes as they encounter and drift away from each other. MONs can simply be formed by humans carrying hand-held devices (like Personal Digital Assistant [PDAs] or cell phones) or on-board devices installed in vehicles. Therefore, with mobility playing a major role in the performance of MON, researchers have repeatedly tried to understand the nature of mobility with respect to humans, vehicles, and wild animals. To study the nature of mobility, researchers have collected mobility traces, proposed mobility models, and analyzed the performance of MON with respect to various mobility parameters. This article provides a detailed survey of different mobility models which have been proposed to date and how mobility largely determines the performance of opportunistic routing. We divide the article into four major sections. First, we provide a detailed survey of all the synthetic mobility models which have been developed to date. Second, we study the various mobility traces which have been collected and analyzed. Third, we study how mobility parameters affect the performance of MON. Finally, we highlight on some of the research areas and open challenges which yet remain unsolved.",
"title": ""
},
{
"docid": "911ea52fa57524e002154e2fe276ac44",
"text": "Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. There currently exist several publicly-available, pre-trained sets of word embeddings, but they contain few or no emoji representations even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their description in the Unicode emoji standard.1 The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperforms a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji need to appear frequently in order to estimate a representation.",
"title": ""
},
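The emoji2vec passage above learns an emoji vector from the words of its Unicode description. Below is a small, self-contained numpy sketch of the general idea: an emoji vector trained with a logistic loss against the summed word vectors of a positive and a negative description. The random "word vectors", the single positive/negative pair and the learning rate are toy stand-ins, not the paper's data or exact training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # embedding dimensionality (assumed for the toy example)

# toy stand-ins for pre-trained word vectors (word2vec in the paper)
word_vec = {w: rng.normal(size=dim) for w in
            ["face", "with", "tears", "of", "joy", "fire", "flame"]}

def describe(words):
    # sum the word vectors of a description phrase
    return np.sum([word_vec[w] for w in words], axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

pos = describe(["face", "with", "tears", "of", "joy"])   # true description
neg = describe(["fire", "flame"])                        # sampled negative

emoji_vec = rng.normal(scale=0.1, size=dim)
lr = 0.01
for _ in range(500):
    # logistic loss: push sigmoid(e.pos) towards 1 and sigmoid(e.neg) towards 0
    grad = (sigmoid(emoji_vec @ pos) - 1.0) * pos + sigmoid(emoji_vec @ neg) * neg
    emoji_vec -= lr * grad

print(sigmoid(emoji_vec @ pos), sigmoid(emoji_vec @ neg))  # approx. 1.0 vs 0.0
```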
{
"docid": "65b765277d74981a46e231f69a213ac9",
"text": "We propose a hierarchical generative model that captures the self-similar structure of image regions as well as how this structure is shared across image collections. Our model is based on a novel, variational interpretation of the popular expected patch log-likelihood (EPLL) method as a model for randomly positioned grids of image patches. While previous EPLL methods modeled the density of image patches with finite Gaussian mixtures, we use nonparametric Dirichlet process (DP) mixtures to create models whose complexity grows as additional images are observed. An extension based on the hierarchical DP then captures the repetitive and self-similar structure of image regions via image-specific variations in cluster frequencies. We derive a structured variational inference algorithm that uses birth and delete moves to create new patch clusters and thus more accurately model novel image textures. Our denoising performance on standard benchmarks is superior to EPLL and comparable to the state-of-the-art, while providing a novel statistical interpretation for many common image processing heuristics.",
"title": ""
},
{
"docid": "faa077308647a951cc31b4f3efdbca2b",
"text": "This letter presents the design, manufacturing, and operational performance of a graphene-flakes-based screen-printed wideband elliptical dipole antenna operating from 2 up to 5 GHz for low-cost wireless communications applications. To investigate radio frequency (RF) conductivity of the printed graphene, a coplanar waveguide (CPW) test structure was designed, fabricated, and tested in the frequency range from 1 to 20 GHz. Antenna and CPW were screen-printed on Kapton substrates using a graphene paste formulated with a graphene-to-binder ratio of 1:2. A combination of thermal treatment and subsequent compression rolling is utilized to further decrease the sheet resistance for printed graphene structures, ultimately reaching 4 Ω/□ at 10-μ m thicknesses. For the graphene-flakes printed antenna, an antenna efficiency of 60% is obtained. The measured maximum antenna gain is 2.3 dBi at 4.8 GHz. Thus, the graphene-flakes printed antenna adds a total loss of only 3.1 dB to an RF link when compared to the same structure screen-printed for reference with a commercial silver ink. This shows that the electrical performance of screen-printed graphene flakes, which also does not degrade after repeated bending, is suitable for realizing low-cost wearable RF wireless communication devices.",
"title": ""
}
] |
scidocsrr
|
be86c17767afd5fa3083d19a8b63f652
|
Motion planning in urban environments: Part II
|
[
{
"docid": "484f6a3bd0679db1bf00fd9d53b53b74",
"text": "The paper presents the Intelligent Control System Laboratory's (ICSL) Cooperative Autonomous Mobile Robot technologies and their application to intelligent vehicles for cities. The deployed decision and control algorithms made the road-scaled vehicles capable of undertaking cooperative autonomous maneuvers. Because the focus of ICSL's research is in decision and control algorithms, it is therefore reasonable to consider replacing or upgrading the sensors used with more recent road sensory concepts as produced by other research groups. While substantial progress has been made, there are still some issues that need to be addressed such as: decision and control algorithms for navigating roundabouts, real-time integration of all data, and decision-making algorithms to enable intelligent vehicles to choose the driving maneuver as they go. With continued research, it is feasible that cooperative autonomous vehicles will coexist alongside human drivers in the not-too-distant future.",
"title": ""
},
{
"docid": "a4473c2cc7da3fb5ee52b60cee24b9b9",
"text": "The ALVINN (Autonomous h d Vehide In a N d Network) projea addresses the problem of training ani&ial naxal naarork in real time to perform difficult perapaon tasks. A L W is a back-propagation network dmpd to dnve the CMU Navlab. a modided Chevy van. 'Ibis ptpa describes the training techniques which allow ALVIN\" to luun in under 5 minutes to autonomously conm>l the Navlab by wardung ahuamr, dziver's rmaions. Usingthese technrques A L W has b&n trained to drive in a variety of Cirarmstanccs including single-lane paved and unprved roads. and multi-lane lined and rmlinecd roads, at speeds of up IO 20 miles per hour",
"title": ""
}
] |
[
{
"docid": "bf2c7b1d93b6dee024336506fb5a2b32",
"text": "In this paper we present the first public, online demonstration of MaxTract; a tool that converts PDF files containing mathematics into multiple formats including LTEX, HTML with embedded MathML, and plain text. Using a bespoke PDF parser and image analyser, we directly extract character and font information to use as input for a linear grammar which, in conjunction with specialised drivers, can accurately recognise and reproduce both the two dimensional relationships between symbols in mathematical formulae and the one dimensional relationships present in standard text. The main goals of MaxTract are to provide translation services into standard mathematical markup languages and to add accessibility to mathematical documents on multiple levels. This includes both accessibility in the narrow sense of providing access to content for print impaired users, such as those with visual impairments, dyslexia or dyspraxia, as well as more generally to enable any user access to the mathematical content at more re-usable levels than merely visual. MaxTract produces output compatible with web browsers, screen readers, and tools such as copy and paste, which is achieved by enriching the regular text with mathematical markup. The output can also be used directly, within the limits of the presentation MathML produced, as machine readable mathematical input to software systems such as Mathematica or Maple.",
"title": ""
},
{
"docid": "2f94bd95e2b17b4ff517133544087fc9",
"text": "MPEG DASH is a widely used standard for adaptive video streaming over HTTP. The conceptual architecture for DASH includes a web server and clients, which download media segments from the server. Clients select the resolution of video segments by using an Adaptive Bit-Rate (ABR) strategy; in particular, a throughput-based ABR is used in the case of live video applications. However, recent papers show that these strategies may suffer from the presence of proxies/caches in the network, which are instrumental in streaming video on a large scale. To face this issue, we propose to extend the MPEG DASH architecture with a Tracker functionality, enabling client-to-client sharing of control information. This extension paves the way to a novel family of Tracker-assisted strategies that allow a greater design flexibility, while solving the specific issue caused by proxies/caches; in addition, its utility goes beyond the problem at hand, as it can be used by other applications as well, e.g. for peer-to-peer streaming.",
"title": ""
},
{
"docid": "b8573915765b33e1d57f34f7756cc235",
"text": "Data mining is the process of finding correlations in the relational databases. There are different techniques for identifying malicious database transactions. Many existing approaches which profile is SQL query structures and database user activities to detect intrusion, the log mining approach is the automatic discovery for identifying anomalous database transactions. Mining of the Data is very helpful to end users for extracting useful business information from large database. Multi-level and multi-dimensional data mining are employed to discover data item dependency rules, data sequence rules, domain dependency rules, and domain sequence rules from the database log containing legitimate transactions. Database transactions that do not comply with the rules are identified as malicious transactions. The log mining approach can achieve desired true and false positive rates when the confidence and support are set up appropriately. The implemented system incrementally maintain the data dependency rule sets and optimize the performance of the intrusion detection process.",
"title": ""
},
{
"docid": "bda0ae59319660987e9d2686d98e4b9a",
"text": "Due to the shift from software-as-a-product (SaaP) to software-as-a-service (SaaS), software components that were developed to run in a single address space must increasingly be accessed remotely across the network. Distribution middleware is frequently used to facilitate this transition. Yet a range of middleware platforms exist, and there are few existing guidelines to help the programmer choose an appropriate middleware platform to achieve desired goals for performance, expressiveness, and reliability. To address this limitation, in this paper we describe a case study of transitioning an Open Service Gateway Initiative (OSGi) service from local to remote access. Our case study compares five remote versions of this service, constructed using different distribution middleware platforms. These platforms are implemented by widely-used commercial technologies or have been proposed as improvements on the state of the art. In particular, we implemented a service-oriented version of our own Remote Batch Invocation abstraction. We compare and contrast these implementations in terms of their respective performance, expressiveness, and reliability. Our results can help remote service programmers make informed decisions when choosing middleware platforms for their applications.",
"title": ""
},
{
"docid": "0687e28b42ca1acff99dc4917b920127",
"text": "Advanced Synchronization Facility (ASF) is an AMD64 hardware extension for lock-free data structures and transactional memory. It provides a speculative region that atomically executes speculative accesses in the region. Five new instructions are added to demarcate the region, use speculative accesses selectively, and control the speculative hardware context. Programmers can use speculative regions to build flexible multi-word atomic primitives with no additional software support by relying on the minimum guarantee of available ASF hardware resources for lock-free programming. Transactional programs with high-level TM language constructs can either be compiled directly to the ASF code or be linked to software TM systems that use ASF to accelerate transactional execution. In this paper we develop an out-of-order hardware design to implement ASF on a future AMD processor and evaluate it with an in-house simulator. The experimental results show that the combined use of the L1 cache and the LS unit is very helpful for the performance robustness of ASF-based lock free data structures, and that the selective use of speculative accesses enables transactional programs to scale with limited ASF hardware resources.",
"title": ""
},
{
"docid": "a6a55ff4f72abce0c56986e8a44df2da",
"text": "Antibodies are important therapeutic agents for cancer. Recently, it has become clear that antibodies possess several clinically relevant mechanisms of action. Many clinically useful antibodies can manipulate tumour-related signalling. In addition, antibodies exhibit various immunomodulatory properties and, by directly activating or inhibiting molecules of the immune system, antibodies can promote the induction of antitumour immune responses. These immunomodulatory properties can form the basis for new cancer treatment strategies.",
"title": ""
},
{
"docid": "283849657698b38e537171e2e74de52d",
"text": "Web logs can provide a wealth of information on user access patterns of a corresponding website, when they are properly analyzed. However, finding interesting patterns hidden in the low-level log data is non-trivial due to large log volumes, and the distribution of the log files in cluster environments. This paper presents a novel technique, the application of Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Expectation Maximization (EM) algorithms in an iterative manner for clustering web user sessions. Each cluster corresponds to one or more web user activities. The unique user access pattern of each cluster is identified by frequent pattern mining and sequential pattern mining techniques. When compared with the clustering output of EM, DBSCAN, and k-means algorithms, this technique shows better accuracy in web session mining, and it is more effective in identifying cluster changes with time. We demonstrate that the implemented system is capable of not only identifying common user behaviors, but also of identifying cyber-attacks.",
"title": ""
},
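The passage above combines DBSCAN and EM to cluster web user sessions. The sketch below shows one pass of that combination on made-up session features using scikit-learn; the paper applies the two algorithms iteratively on real log data, and the feature choice, eps value and component count here are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# hypothetical session features: [page requests, mean think time (s), MB transferred]
browsing  = rng.normal([15, 2.0, 2.0],  [2, 0.3, 0.5],  size=(80, 3))
downloads = rng.normal([30, 0.5, 40.0], [3, 0.1, 4.0],  size=(60, 3))
outliers  = rng.normal([300, 0.05, 500.0], [20, 0.01, 30.0], size=(5, 3))
sessions = np.vstack([browsing, downloads, outliers])

# step 1: DBSCAN separates dense behaviour from sparse outliers (label -1 = noise)
db = DBSCAN(eps=8.0, min_samples=5).fit(sessions)
dense = sessions[db.labels_ != -1]

# step 2: EM (Gaussian mixture) refines the dense sessions into behaviour clusters
gm = GaussianMixture(n_components=2, random_state=0).fit(dense)
print("suspected outlier sessions:", int(np.sum(db.labels_ == -1)))
print("sessions per behaviour cluster:", np.bincount(gm.predict(dense)))
```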
{
"docid": "3763da6b72ee0a010f3803a901c9eeb2",
"text": "As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, reliability and endurance of flash memory reduce. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.",
"title": ""
},
{
"docid": "0ffe59ea5705ae6d180cee8976bbffb4",
"text": "We propose an analytical framework for studying parallel repetition, a basic product operation for one-round twoplayer games. In this framework, we consider a relaxation of the value of projection games. We show that this relaxation is multiplicative with respect to parallel repetition and that it provides a good approximation to the game value. Based on this relaxation, we prove the following improved parallel repetition bound: For every projection game G with value at most ρ, the k-fold parallel repetition G⊗k has value at most\n [EQUATION]\n This statement implies a parallel repetition bound for projection games with low value ρ. Previously, it was not known whether parallel repetition decreases the value of such games. This result allows us to show that approximating set cover to within factor (1 --- ε) ln n is NP-hard for every ε > 0, strengthening Feige's quasi-NP-hardness and also building on previous work by Moshkovitz and Raz.\n In this framework, we also show improved bounds for few parallel repetitions of projection games, showing that Raz's counterexample to strong parallel repetition is tight even for a small number of repetitions.\n Finally, we also give a short proof for the NP-hardness of label cover(1, Δ) for all Δ > 0, starting from the basic PCP theorem.",
"title": ""
},
{
"docid": "5054443e7133111f2511631e4cf6e0db",
"text": "Stitching multiple images together to create beautiful highresolution panoramas is one of the most popular consumer applications of image registration and blending. In this chapter, I review the motion models (geometric transformations) that underlie panoramic image stitching, discuss direct intensity-based and feature-based registration algorithms, and present global and local alignment techniques needed to establish highaccuracy correspondences between overlapping images. I then discuss various compositing options, including multi-band and gradient-domain blending, as well as techniques for removing blur and ghosted images. The resulting techniques can be used to create high-quality panoramas for static or interactive viewing.",
"title": ""
},
{
"docid": "cd92f750461aff9877853f483cf09ecf",
"text": "Designing and maintaining Web applications is one of the major challenges for the software industry of the year 2000. In this paper we present Web Modeling Language (WebML), a notation for specifying complex Web sites at the conceptual level. WebML enables the high-level description of a Web site under distinct orthogonal dimensions: its data content (structural model), the pages that compose it (composition model), the topology of links between pages (navigation model), the layout and graphic requirements for page rendering (presentation model), and the customization features for one-to-one content delivery (personalization model). All the concepts of WebML are associated with a graphic notation and a textual XML syntax. WebML specifications are independent of both the client-side language used for delivering the application to users, and of the server-side platform used to bind data to pages, but they can be effectively used to produce a site implementation in a specific technological setting. WebML guarantees a model-driven approach to Web site development, which is a key factor for defining a novel generation of CASE tools for the construction of complex sites, supporting advanced features like multi-device access, personalization, and evolution management. The WebML language and its accompanying design method are fully implemented in a pre-competitive Web design tool suite, called ToriiSoft.",
"title": ""
},
{
"docid": "cce107dc268b2388e301f64718de1463",
"text": "The training of convolutional neural networks for image recognition usually requires large image datasets to produce favorable results. Those large datasets can be acquired by web crawlers that accumulate images based on keywords. Due to the nature of data in the web, these image sets display a broad variation of qualities across the contained items. In this work, a filtering approach for noisy datasets is proposed, utilizing a smaller trusted dataset. Hereby a convolutional neural network is trained on the trusted dataset and then used to construct a filtered subset from the noisy datasets. The methods described in this paper were applied to plant image classification and the created models have been submitted to the PlantCLEF 2017 competition.",
"title": ""
},
{
"docid": "0b941153b9ade732ca52058698643a44",
"text": "In this paper, we prove the complexity bounds for methods of Convex Optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true both for nonsmooth and smooth problems. For the later class, we present also an accelerated scheme with the expected rate of convergence O(n/k), where k is the iteration counter. For Stochastic Optimization, we propose a zero-order scheme and justify its expected rate of convergence O(n/k). We give also some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, both for smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.",
"title": ""
},
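The passage above describes schemes whose search directions are random Gaussian vectors and that query only function values. A minimal numpy sketch of one such zero-order iteration is given below; the test function, smoothing parameter and step size are arbitrary choices for illustration, not the paper's recommended settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # simple smooth convex test function; only its values are queried below
    return 0.5 * float(np.dot(x, x))

n, mu, step = 10, 1e-4, 0.05
x = rng.normal(size=n)
for _ in range(2000):
    u = rng.normal(size=n)                     # random Gaussian direction
    g = (f(x + mu * u) - f(x)) / mu * u        # zero-order gradient estimate
    x = x - step * g

print(f(x))  # close to the minimum value 0
```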
{
"docid": "46a47931c51a3b5580580d27a9a6d132",
"text": "In airline service industry, it is difficult to collect data about customers' feedback by questionnaires, but Twitter provides a sound data source for them to do customer sentiment analysis. However, little research has been done in the domain of Twitter sentiment classification about airline services. In this paper, an ensemble sentiment classification strategy was applied based on Majority Vote principle of multiple classification methods, including Naive Bayes, SVM, Bayesian Network, C4.5 Decision Tree and Random Forest algorithms. In our experiments, six individual classification approaches, and the proposed ensemble approach were all trained and tested using the same dataset of 12864 tweets, in which 10 fold evaluation is used to validate the classifiers. The results show that the proposed ensemble approach outperforms these individual classifiers in this airline service Twitter dataset. Based on our observations, the ensemble approach could improve the overall accuracy in twitter sentiment classification for other services as well.",
"title": ""
},
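The passage above combines several classifiers by majority vote. A compact scikit-learn sketch of that strategy is shown below on a handful of made-up airline tweets; the paper's Bayesian Network and C4.5 learners are approximated here by off-the-shelf substitutes, and the tiny corpus is purely illustrative, not the 12864-tweet dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

# tiny illustrative tweets (hypothetical data)
tweets = ["great flight and friendly crew", "lost my bag again, awful service",
          "on time and smooth boarding", "delayed three hours, never again",
          "comfortable seats, would fly again", "rude staff and cancelled flight"]
labels = [1, 0, 1, 0, 1, 0]               # 1 = positive, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)

ensemble = VotingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("svm", LinearSVC()),
                ("tree", DecisionTreeClassifier()),
                ("rf", RandomForestClassifier(n_estimators=50))],
    voting="hard")                         # majority vote over the predictions
ensemble.fit(X, labels)

print(ensemble.predict(vec.transform(["crew was friendly and flight was on time"])))
```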
{
"docid": "24fc1997724932c6ddc3311a529d7505",
"text": "In these days securing a network is an important issue. Many techniques are provided to secure network. Cryptographic is a technique of transforming a message into such form which is unreadable, and then retransforming that message back to its original form. Cryptography works in two techniques: symmetric key also known as secret-key cryptography algorithms and asymmetric key also known as public-key cryptography algorithms. In this paper we are reviewing different symmetric and asymmetric algorithms.",
"title": ""
},
{
"docid": "114f23172377fadf945b7a7632908ae0",
"text": "Scene understanding is an important prerequisite for vehicles and robots that operate autonomously in dynamic urban street scenes. For navigation and high-level behavior planning, the robots not only require a persistent 3D model of the static surroundings-equally important, they need to perceive and keep track of dynamic objects. In this paper, we propose a method that incrementally fuses stereo frame observations into temporally consistent semantic 3D maps. In contrast to previous work, our approach uses scene flow to propagate dynamic objects within the map. Our method provides a persistent 3D occupancy as well as semantic belief on static as well as moving objects. This allows for advanced reasoning on objects despite noisy single-frame observations and occlusions. We develop a novel approach to discover object instances based on the temporally consistent shape, appearance, motion, and semantic cues in our maps. We evaluate our approaches to dynamic semantic mapping and object discovery on the popular KITTI benchmark and demonstrate improved results compared to single-frame methods.",
"title": ""
},
{
"docid": "ab390e0bee6b8fb33cda52821c7787ff",
"text": "Zero-day polymorphic worms pose a serious threat to the Internet security. With their ability to rapidly propagate, these worms increasingly threaten the Internet hosts and services. Not only can they exploit unknown vulnerabilities but can also change their own representations on each new infection or can encrypt their payloads using a different key per infection. They have many variations in the signatures of the same worm thus, making their fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey to outline the research efforts in relation to detection of modern zero-day malware in form of zero-day polymorphic worms.",
"title": ""
},
{
"docid": "510cbd4c2a27140f6a8da04fdbc3cb1e",
"text": "Although relevance judgments are fundamental to the design and evaluation of all information retrieval systems, information scientists have not reached a consensus in defining the central concept of relevance. In this paper we ask two questions: What is the meaning of relevance? and What role does relevance play in information behavior? We attempt to address these questions by reviewing literature over the last 30 years that presents various views of relevance as topical, user-oriented, multidimensional, cognitive, and dynamic. We then discuss traditional assumptions on which most research in the field has been based and begin building a case for an approach to the problem of definition based on alternative assumptions. The dynamic, situational approach we suggest views the user-regardless of system-as the central and active determinant of the dimensions of relevance. We believe that relevance is a multidimensional concept; that it is dependent on both internal (cognitive) and external (situational) factors; that it is based on a dynamic human judgment process; and that it is a complex but systematic and mea-",
"title": ""
},
{
"docid": "051b819eeb22e71eff526f1aa7248db6",
"text": "Technical studies on automated driving of passenger cars were started in the 1950s, but those on heavy trucks were started in the mid-1990s, and only a few projects have dealt with truck automation, which include “Chauffeur” within the EU project T-TAP from the mid-1990s, truck automation by California PATH from around 2000, “KONVOI” in Germany from 2005, and “Energy ITS” by Japan from 2008. The objectives of truck automation are energy saving and enhanced transportation capacity by platooning, and eventually possible reduction of personnel cost by unmanned operation of following vehicles. The sensing technologies for automated vehicle control are computer vision, radar, lidar, laser scanners, localization by GNSS, and vehicle to vehicle communications. Experiments of platooning of three or four heavy trucks have shown the effectiveness of platooning in achieving energy saving due to short gaps between vehicles.",
"title": ""
},
{
"docid": "421261547adfa6c47c6ced492e7e3463",
"text": "Purpose – Conventional street lighting systems in areas with a low frequency of passersby are online most of the night without purpose. The consequence is that a large amount of power is wasted meaninglessly. With the broad availability of flexible-lighting technology like light-emitting diode lamps and everywhere available wireless internet connection, fast reacting, reliably operating, and power-conserving street lighting systems become reality. The purpose of this work is to describe the Smart Street Lighting (SSL) system, a first approach to accomplish the demand for flexible public lighting systems. Design/methodology/approach – This work presents the SSL system, a framework developed for a dynamic switching of street lamps based on pedestrians’ locations and desired safety (or “fear”) zones. In the developed system prototype, each pedestrian is localized via his/her smartphone, periodically sending location and configuration information to the SSL server. For street lamp control, each and every lamppost is equipped with a ZigBee-based radio device, receiving control information from the SSL server via multi-hop routing. Findings – This research paper confirms that the application of the proposed SSL system has great potential to revolutionize street lighting, particularly in suburban areas with low-pedestrian frequency. More important, the broad utilization of SSL can easily help to overcome the regulatory requirement for CO2 emission reduction by switching off lampposts whenever they are not required. Research limitations/implications – The paper discusses in detail the implementation of SSL, and presents results of its application on a small scale. Experiments have shown that objects like trees can interrupt wireless communication between lampposts and that inaccuracy of global positioning system position detection can lead to unexpected lighting effects. Originality/value – This paper introduces the novel SSL framework, a system for fast, reliable, and energy efficient street lamp switching based on a pedestrian’s location and personal desires of safety. Both safety zone definition and position estimation in this novel approach is accomplished using standard smartphone capabilities. Suggestions for overcoming these issues are discussed in the last part of the paper.",
"title": ""
}
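The SSL passage above switches lamps based on pedestrians' reported locations and desired safety zones. The following small sketch illustrates only the core switching decision; the lamp layout, coordinates and safety radius are invented for illustration, and the real system's ZigBee multi-hop control, GPS inaccuracy handling and occlusion issues are omitted.

```python
from math import hypot

# hypothetical lamppost positions along a street (metres)
lampposts = {"L1": (0.0, 0.0), "L2": (30.0, 0.0), "L3": (60.0, 0.0)}

def lamps_to_switch_on(pedestrians, safety_radius=25.0):
    """Return the lamps that fall inside any pedestrian's desired safety zone."""
    on = set()
    for px, py in pedestrians:
        for lamp, (lx, ly) in lampposts.items():
            if hypot(lx - px, ly - py) <= safety_radius:
                on.add(lamp)
    return on

# one pedestrian walking near the start of the street
print(sorted(lamps_to_switch_on([(10.0, 2.0)])))  # -> ['L1', 'L2']
```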
] |
scidocsrr
|
a1aca808ed4c27d5c31715eb14403405
|
Antecedents of User Stickiness and Loyalty and Their Effects on Users' Group-Buying Repurchase Intention
|
[
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
},
{
"docid": "1a44e040bbb5c81a53a1255fc7f5d4d7",
"text": "Information technology and the Internet have had a dramatic effect on business operations. Companies are making large investments in e-commerce applications but are hard pressed to evaluate the success of their e-commerce systems. The DeLone & McLean Information Systems Success Model can be adapted to the measurement challenges of the new e-commerce world. The six dimensions of the updated model are a parsimonious framework for organizing the e-commerce success metrics identified in the literature. Two case examples demonstrate how the model can be used to guide the identification and specification of e-commerce success metrics.",
"title": ""
}
] |
[
{
"docid": "77233d4f7a7bb0150b5376c7bb93c108",
"text": "In-filled frame structures are commonly used in buildings, even in those located in seismically active regions. Precent codes unfortunately, do not have adequate guidance for treating the modelling, analysis and design of in-filled frame structures. This paper addresses this need and first develops an appropriate technique for modelling the infill-frame interface and then uses it to study the seismic response of in-filled frame structures. Finite element time history analyses under different seismic records have been carried out and the influence of infill strength, openings and soft-storey phenomenon are investigated. Results in terms of tip deflection, fundamental period, inter-storey drift ratio and stresses are presented and they will be useful in the seismic design of in-filled frame structures.",
"title": ""
},
{
"docid": "447c008d30a6f86830d49bd74bd7a551",
"text": "OBJECTIVES\nTo investigate the effects of 24 weeks of whole-body-vibration (WBV) training on knee-extension strength and speed of movement and on counter-movement jump performance in older women.\n\n\nDESIGN\nA randomized, controlled trial.\n\n\nSETTING\nExercise Physiology and Biomechanics Laboratory, Leuven, Belgium.\n\n\nPARTICIPANTS\nEighty-nine postmenopausal women, off hormone replacement therapy, aged 58 to 74, were randomly assigned to a WBV group (n=30), a resistance-training group (RES, n=30), or a control group (n=29).\n\n\nINTERVENTION\nThe WBV group and the RES group trained three times a week for 24 weeks. The WBV group performed unloaded static and dynamic knee-extensor exercises on a vibration platform, which provokes reflexive muscle activity. The RES group trained knee-extensors by performing dynamic leg-press and leg-extension exercises increasing from low (20 repetitions maximum (RM)) to high (8RM) resistance. The control group did not participate in any training.\n\n\nMEASUREMENTS\nPre-, mid- (12 weeks), and post- (24 weeks) isometric strength and dynamic strength of knee extensors were measured using a motor-driven dynamometer. Speed of movement of knee extension was assessed using an external resistance equivalent to 1%, 20%, 40%, and 60% of isometric maximum. Counter-movement jump performance was determined using a contact mat.\n\n\nRESULTS\nIsometric and dynamic knee extensor strength increased significantly (P<.001) in the WBV group (mean+/-standard error 15.0+/-2.1% and 16.1+/-3.1%, respectively) and the RES group (18.4+/-2.8% and 13.9+/-2.7%, respectively) after 24 weeks of training, with the training effects not significantly different between the groups (P=.558). Speed of movement of knee extension significantly increased at low resistance (1% or 20% of isometric maximum) in the WBV group only (7.4+/-1.8% and 6.3+/-2.0%, respectively) after 24 weeks of training, with no significant differences in training effect between the WBV and the RES groups (P=.391; P=.142). Counter-movement jump height enhanced significantly (P<.001) in the WBV group (19.4+/-2.8%) and the RES group (12.9+/-2.9%) after 24 weeks of training. Most of the gain in knee-extension strength and speed of movement and in counter-movement jump performance had been realized after 12 weeks of training.\n\n\nCONCLUSION\nWBV is a suitable training method and is as efficient as conventional RES training to improve knee-extension strength and speed of movement and counter-movement jump performance in older women. As previously shown in young women, it is suggested that the strength gain in older women is mainly due to the vibration stimulus and not only to the unloaded exercises performed on the WBV platform.",
"title": ""
},
{
"docid": "c5e7e7daf6c910db006d45150c97c4d1",
"text": "This paper presents the implementation of real-time automatic speech recognition (ASR) for portable devices. The speech recognition is performed offline using PocketSphinx which is the implementation of Carnegie Mellon University's Sphinx speech recognition engine for portable devices. In this work, machine Learning approach is used which converts graphemes into phonemes using the TensorFlow's Sequence-to-Sequence model to produce the pronunciations of words. This paper also explains the implementation of statistical language model for ASR. The novelty of ASR is its offline speech recognition and thus requires no Internet connection compared to other related works. A speech recognition service currently provides the cloud based processing of speech and therefore has access to the speech data of users. However, the speech is processed on the handheld device in offline ASR and therefore enhances the privacy of users.",
"title": ""
},
{
"docid": "6f176e780d94a8fa8c5b1d6d364c4363",
"text": "Current uses of smartwatches are focused solely around the wearer's content, viewed by the wearer alone. When worn on a wrist, however, watches are often visible to many other people, making it easy to quickly glance at their displays. We explore the possibility of extending smartwatch interactions to turn personal wearables into more public displays. We begin opening up this area by investigating fundamental aspects of this interaction form, such as the social acceptability and noticeability of looking at someone else's watch, as well as the likelihood of a watch face being visible to others. We then sketch out interaction dimensions as a design space, evaluating each aspect via a web-based study and a deployment of three potential designs. We conclude with a discussion of the findings, implications of the approach and ways in which designers in this space can approach public wrist-worn wearables.",
"title": ""
},
{
"docid": "3266a3d561ee91e8f08d81e1aac6ac1b",
"text": "The seminal work of Dwork et al. [ITCS 2012] introduced a metric-based notion of individual fairness. Given a task-specific similarity metric, their notion required that every pair of similar individuals should be treated similarly. In the context of machine learning, however, individual fairness does not generalize from a training set to the underlying population. We show that this can lead to computational intractability even for simple fair-learning tasks. With this motivation in mind, we introduce and study a relaxed notion of approximate metric-fairness: for a random pair of individuals sampled from the population, with all but a small probability of error, if they are similar then they should be treated similarly. We formalize the goal of achieving approximate metric-fairness simultaneously with best-possible accuracy as Probably Approximately Correct and Fair (PACF) Learning. We show that approximate metricfairness does generalize, and leverage these generalization guarantees to construct polynomialtime PACF learning algorithms for the classes of linear and logistic predictors. [email protected]. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17). [email protected]. Research supported by the ISRAEL SCIENCE FOUNDATION (grant No. 5219/17).",
"title": ""
},
{
"docid": "107aff0162fb0b6c1f90df1bdf7174b7",
"text": "Recommender Systems based on Collaborative Filtering suggest to users items they might like. However due to data sparsity of the input ratings matrix, the step of finding similar users often fails. We propose to replace this step with the use of a trust metric, an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in term of accuracy while preserving a good coverage. This is especially evident on users who provided few ratings.",
"title": ""
},
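The passage above replaces the user-similarity weight in collaborative filtering with a propagated trust weight. Below is a minimal sketch of that substitution in a Resnick-style prediction formula; the tiny rating dictionary and the trust values are invented, and the trust-propagation step itself (the trust metric) is assumed to have been computed already.

```python
import numpy as np

# hypothetical data: ratings[user][item] and propagated trust weights trust[user][other]
ratings = {
    "alice": {"item1": 5.0, "item2": 3.0},
    "bob":   {"item1": 4.0, "item3": 2.0},
    "carol": {"item2": 4.0, "item3": 5.0},
}
trust = {"dave": {"alice": 0.9, "bob": 0.4, "carol": 0.1}}  # output of a trust metric

def predict(user, item, user_mean=3.5):
    """Resnick-style prediction with trust weights in place of similarity weights."""
    num, den = 0.0, 0.0
    for other, w in trust[user].items():
        if item in ratings[other]:
            other_mean = float(np.mean(list(ratings[other].values())))
            num += w * (ratings[other][item] - other_mean)
            den += abs(w)
    return user_mean if den == 0 else user_mean + num / den

print(round(predict("dave", "item1"), 2))  # -> 4.5 with the toy numbers above
```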
{
"docid": "ebd65c03599cc514e560f378f676cc01",
"text": "The purpose of this paper is to examine an integrated model of TAM and D&M to explore the effects of quality features, perceived ease of use, perceived usefulness on users’ intentions and satisfaction, alongside the mediating effect of usability towards use of e-learning in Iran. Based on the e-learning user data collected through a survey, structural equations modeling (SEM) and path analysis were employed to test the research model. The results revealed that ‘‘intention’’ and ‘‘user satisfaction’’ both had positive effects on actual use of e-learning. ‘‘System quality’’ and ‘‘information quality’’ were found to be the primary factors driving users’ intentions and satisfaction towards use of e-learning. At last, ‘‘perceived usefulness’’ mediated the relationship between ease of use and users’ intentions. The sample consisted of e-learning users of four public universities in Iran. Past studies have seldom examined an integrated model in the context of e-learning in developing countries. Moreover, this paper tries to provide a literature review of recent published studies in the field of e-learning. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "559be3dd29ae8f6f9a9c99951c82a8d3",
"text": "This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. A special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.",
"title": ""
},
{
"docid": "bf272aa2413f1bc186149e814604fb03",
"text": "Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories.",
"title": ""
},
{
"docid": "adae6ec50aeaf77d3dbcd5f7f3ef8de0",
"text": "To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty on the world; (c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction. Motivated by the absence of a model satisfying all these requirements, we propose TD-VAE, a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of temporal difference learning used in reinforcement learning.",
"title": ""
},
{
"docid": "9db193e25cb38a7c01954c89415551b6",
"text": "This paper presents a taxonomy of anomaly detection techniques that is then used to survey and classify a number of research prototypes and commercial products. Commercial products and solutions based anomaly detection techniques are beginning to establish themselves in mainstream security solutions alongside firewalls, intrusion prevention systems and network monitoring solutions. These solutions are focused mainly on network-based anomaly detection, thus creating a new industry buzzword that describes it: Network Behavior Analysis. This classification is used predictably, pointing towards a number of areas of future research in the field of anomaly detection.",
"title": ""
},
{
"docid": "92e150f30ae9ef371ffdd7160c84719d",
"text": "Categorization is a vitally important skill that people use every day. Early theories of category learning assumed a single learning system, but recent evidence suggests that human category learning may depend on many of the major memory systems that have been hypothesized by memory researchers. As different memory systems flourish under different conditions, an understanding of how categorization uses available memory systems will improve our understanding of a basic human skill, lead to better insights into the cognitive changes that result from a variety of neurological disorders, and suggest improvements in training procedures for complex categorization tasks.",
"title": ""
},
{
"docid": "7334904bb8b95fbf9668c388d30d4d72",
"text": "Write-optimized data structures like Log-Structured Merge-tree (LSM-tree) and its variants are widely used in key-value storage systems like Big Table and Cassandra. Due to deferral and batching, the LSM-tree based storage systems need background compactions to merge key-value entries and keep them sorted for future queries and scans. Background compactions play a key role on the performance of the LSM-tree based storage systems. Existing studies about the background compaction focus on decreasing the compaction frequency, reducing I/Os or confining compactions on hot data key-ranges. They do not pay much attention to the computation time in background compactions. However, the computation time is no longer negligible, and even the computation takes more than 60% of the total compaction time in storage systems using flash based SSDs. Therefore, an alternative method to speedup the compaction is to make good use of the parallelism of underlying hardware including CPUs and I/O devices. In this paper, we analyze the compaction procedure, recognize the performance bottleneck, and propose the Pipelined Compaction Procedure (PCP) to better utilize the parallelism of CPUs and I/O devices. Theoretical analysis proves that PCP can improve the compaction bandwidth. Furthermore, we implement PCP in real system and conduct extensive experiments. The experimental results show that the pipelined compaction procedure can increase the compaction bandwidth and storage system throughput by 77% and 62% respectively.",
"title": ""
},
{
"docid": "69e90a5882bdea0055bb61463687b0c1",
"text": "www.frontiersinecology.org © The Ecological Society of America E generate a range of goods and services important for human well-being, collectively called ecosystem services. Over the past decade, progress has been made in understanding how ecosystems provide services and how service provision translates into economic value (Daily 1997; MA 2005; NRC 2005). Yet, it has proven difficult to move from general pronouncements about the tremendous benefits nature provides to people to credible, quantitative estimates of ecosystem service values. Spatially explicit values of services across landscapes that might inform land-use and management decisions are still lacking (Balmford et al. 2002; MA 2005). Without quantitative assessments, and some incentives for landowners to provide them, these services tend to be ignored by those making land-use and land-management decisions. Currently, there are two paradigms for generating ecosystem service assessments that are meant to influence policy decisions. Under the first paradigm, researchers use broad-scale assessments of multiple services to extrapolate a few estimates of values, based on habitat types, to entire regions or the entire planet (eg Costanza et al. 1997; Troy and Wilson 2006; Turner et al. 2007). Although simple, this “benefits transfer” approach incorrectly assumes that every hectare of a given habitat type is of equal value – regardless of its quality, rarity, spatial configuration, size, proximity to population centers, or the prevailing social practices and values. Furthermore, this approach does not allow for analyses of service provision and changes in value under new conditions. For example, if a wetland is converted to agricultural land, how will this affect the provision of clean drinking water, downstream flooding, climate regulation, and soil fertility? Without information on the impacts of land-use management practices on ecosystem services production, it is impossible to design policies or payment programs that will provide the desired ecosystem services. In contrast, under the second paradigm for generating policy-relevant ecosystem service assessments, researchers carefully model the production of a single service in a small area with an “ecological production function” – how provision of that service depends on local ecological variables (eg Kaiser and Roumasset 2002; Ricketts et al. 2004). Some of these production function approaches also use market prices and non-market valuation methods to estimate the economic value of the service and how that value changes under different ecological conditions. Although these methods are superior to the habitat assessment benefits transfer approach, these studies lack both the scope (number of services) and scale (geographic and temporal) to be relevant for most policy questions. What is needed are approaches that combine the rigor of the small-scale studies with the breadth of broad-scale assessments (see Boody et al. 2005; Jackson et al. 2005; ECOSYSTEM SERVICES ECOSYSTEM SERVICES ECOSYSTEM SERVICES",
"title": ""
},
{
"docid": "6b2211308ad03c0eaa3dccec5bb81b75",
"text": "Mobile developers face unique challenges when detecting and reporting crashes in apps due to their prevailing GUI event-driven nature and additional sources of inputs (e.g., sensor readings). To support developers in these tasks, we introduce a novel, automated approach called CRASHSCOPE. This tool explores a given Android app using systematic input generation, according to several strategies informed by static and dynamic analyses, with the intrinsic goal of triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device(s). We evaluated CRASHSCOPE's effectiveness in discovering crashes as compared to five state-of-the-art Android input generation tools on 61 applications. The results demonstrate that CRASHSCOPE performs about as well as current tools for detecting crashes and provides more detailed fault information. Additionally, in a study analyzing eight real-world Android app crashes, we found that CRASHSCOPE's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human written reports.",
"title": ""
},
{
"docid": "90082b65c51cd6c4bc815d06704063cc",
"text": "Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. Moreover, news recommendation also faces the challenges of high time-sensitivity of news and dynamic diversity of users’ interests. To solve the above problems, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities asmultiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users’ diverse interests, we also design an attention module in DKN to dynamically aggregate a user’s history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.",
"title": ""
},
{
"docid": "3f629998235c1cfadf67cf711b07f8b9",
"text": "The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we draw some of the research challenges still unsolved and identify some possible directions for future work.",
"title": ""
},
{
"docid": "e287c89edaf97b11bac2d08cb4c6b385",
"text": "In this paper, we propose a new way of augmenting our environment with information without making the user carry any devices. We propose the use of video projection to display the augmentation on the objects directly. We use a projector that can be rotated and in other ways controlled remotely by a computer, to follow objects carrying a marker. The main contribution of this paper is a system that keeps the augmentation displayed in the correct place while the object or the projector moves. We describe the hardware and software design of our system, the way certain functions such as following the marker or keeping it in focus are implemented and how to calibrate the multitude of parameters of all the subsystems.",
"title": ""
},
{
"docid": "28971d75a464178afe93e0ef0f4479c5",
"text": "OBJECTIVE\nTo compare two levels of stress (solitary confinement (SC) and non-SC) among remand prisoners as to incidence of psychiatric disorders in relation to prevalent disorders.\n\n\nMETHOD\nLongitudinal repeated assessments were carried out from the start and during the remand phase of imprisonment. Both interview-based and self-reported measures were applied to 133 remand prisoners in SC and 95 remand prisoners in non-SC randomly selected in a parallel study design.\n\n\nRESULTS\nIncidence of psychiatric disorders developed in the prison was significantly higher in SC prisoners (28%) than in non-SC prisoners (15%). Most disorders were adjustment disorders, with depressive disorders coming next. Incident psychotic disorders were rare. The difference regarding incidence was primarily explained by level of stress (i.e. prison form) rather than confounding factors. Quantitative measures of psychopathology (Hamilton Scales and General Health Questionnaire) were significantly higher in subjects with prevalent and incident disorders compared to non-disordered subjects.\n\n\nCONCLUSION\nDifferent levels of stress give rise to different incidence of psychiatric morbidity among remand prisoners. The surplus of incident disorders among SC prisoners is related to SC, which may act as a mental health hazard.",
"title": ""
},
{
"docid": "df158503822641430e6f17a43655cf2e",
"text": "Open information extraction (OIE) is the process to extract relations and their arguments automatically from textual documents without the need to restrict the search to predefined relations. In recent years, several OIE systems for the English language have been created but there is not any system for the Vietnamese language. In this paper, we propose a method of OIE for Vietnamese using a clause-based approach. Accordingly, we exploit Vietnamese dependency parsing using grammar clauses that strives to consider all possible relations in a sentence. The corresponding clause types are identified by their propositions as extractable relations based on their grammatical functions of constituents. As a result, our system is the first OIE system named vnOIE for the Vietnamese language that can generate open relations and their arguments from Vietnamese text with highly scalable extraction while being domain independent. Experimental results show that our OIE system achieves promising results with a precision of 83.71%.",
"title": ""
}
] |
scidocsrr
|
8253f4c6909b39e0f477f9f9d8f85adf
|
Effect of action video games on the spatial distribution of visuospatial attention.
|
[
{
"docid": "bb65decbaecb11cf14044b2a2cbb6e74",
"text": "The ability to remain focused on goal-relevant stimuli in the presence of potentially interfering distractors is crucial for any coherent cognitive function. However, simply instructing people to ignore goal-irrelevant stimuli is not sufficient for preventing their processing. Recent research reveals that distractor processing depends critically on the level and type of load involved in the processing of goal-relevant information. Whereas high perceptual load can eliminate distractor processing, high load on \"frontal\" cognitive control processes increases distractor processing. These findings provide a resolution to the long-standing early and late selection debate within a load theory of attention that accommodates behavioural and neuroimaging data within a framework that integrates attention research with executive function.",
"title": ""
}
] |
[
{
"docid": "13aef8ba225dd15dd013e155c319310e",
"text": "ness and Approximations Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as followsness and Approximations • This rather absurd attack goes as follows Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Thursday, June 9, 2011 Abstractness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully.ness and Approximations • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: • Since Turing computers can’t be realized fully, Turing computation is now another “myth.” Thursday, June 9, 2011 • This rather absurd attack goes as follows 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Abstractness and Approximations • Going by the same argument: • Since Turing computers can’t be realized fully, Turing computation is now another “myth.” • The problem is that Davis fails to recognize that a lot of th hypercomputational models are abstract models that no one hopes to build in the near future. Thursday, June 9, 2011 Necessity of Noncomputable Reals Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. 
Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines Thursday, June 9, 2011 Necessity of Noncomputable Reals • Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. • This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories • Infinite time Turing Machines • Zeus Machines • Kieu-type Quantum Computation Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends Thursday, June 9, 2011 Science-based Arguments: A Meta Analysis of Davis and friends The Main Case Science of Sciences Part 1: Chain Store Paradox Part 2: Turing-level Actors Part 3:MDL Computational Learning Theory CLT-based Model of Science",
"title": ""
},
{
"docid": "60db64d440feb7ff3290124c8409d33a",
"text": "The paper is part of a series of background papers which seeks to identify and analyze key constraints in higher education, skills development, and technology absorption in accelerating labor absorption and shared growth in South Africa. The background papers form part of the ‘Closing the Skills and Technology Gaps in South Africa’ project which was financed by the Australian Agency for International Development.",
"title": ""
},
{
"docid": "953447219edc0a03551a42af184b7b02",
"text": "While words in documents are generally treated as discrete entities, they can be embedded in a Euclidean space which reflects an a priori notion of similarity between them. In such a case, a text document can be viewed as a bag-ofembedded-words (BoEW): a set of realvalued vectors. We propose a novel document representation based on such continuous word embeddings. It consists in non-linearly mapping the wordembeddings in a higher-dimensional space and in aggregating them into a documentlevel representation. We report retrieval and clustering experiments in the case where the word-embeddings are computed from standard topic models showing significant improvements with respect to the original topic models.",
"title": ""
},
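The passage above describes non-linearly mapping word embeddings into a higher-dimensional space and aggregating them into a document vector. Below is a minimal sketch of that pipeline, assuming pre-computed word embeddings; the fixed random projection with a tanh non-linearity and the mean aggregation are illustrative choices, not the authors' exact construction.

```python
import numpy as np

def boew_document_vector(word_embeddings, proj_dim=512, seed=0):
    """Aggregate a document's word embeddings (bag-of-embedded-words) into one vector.

    word_embeddings: (n_words, d) array of continuous word embeddings.
    The embeddings are non-linearly mapped into a higher-dimensional space
    (here: a fixed random projection followed by tanh) and then averaged
    to obtain a fixed-length document-level representation.
    """
    rng = np.random.default_rng(seed)
    d = word_embeddings.shape[1]
    W = rng.standard_normal((d, proj_dim)) / np.sqrt(d)  # illustrative random map, not learned
    mapped = np.tanh(word_embeddings @ W)                # non-linear mapping
    return mapped.mean(axis=0)                           # aggregation into a document vector

# Usage: doc_vec = boew_document_vector(np.random.rand(30, 100))  # 30 words, 100-dim embeddings
```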
{
"docid": "a5a7e3fe9d6eaf8fc25e7fd91b74219e",
"text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.",
"title": ""
},
{
"docid": "ec189ac55b64402d843721de4fc1f15c",
"text": "DroidMiner is a new malicious Android app detection system that uses static analysis to automatically mine malicious program logic from known Android malware. DroidMiner uses a behavioral graph to abstract malware program logic into a sequence of threat modalities, and then applies machine-learning techniques to identify and label elements of the graph that match harvested threat modalities. Once trained on a mobile malware corpus, DroidMiner can automatically scan a new Android app to (i) determine whether it contains malicious modalities, (ii) diagnose the malware family to which it is most closely associated, and (iii) precisely characterize behaviors found within the analyzed app. While DroidMiner is not the first to attempt automated classification of Android applications based on Framework API calls, it is distinguished by its development of modalities that are resistant to noise insertions and its use of associative rule mining that enables automated association of malicious behaviors with modalities. We evaluate DroidMiner using 2,466 malicious apps, identified from a corpus of over 67,000 third-party market Android apps, plus an additional set of over 10,000 official market Android apps. Using this set of real-world apps, DroidMiner achieves a 95.3% detection rate, with a 0.4% false positive rate. We further evaluate DroidMiner’s ability to classify malicious apps under their proper family labels, and measure its label accuracy at 92%.",
"title": ""
},
{
"docid": "52908b59435aa899d9e452e71a87e461",
"text": "Scalability is a desirable attribute of a network, system, or process. Poor scalability can result in poor system performance, necessitating the reengineering or duplication of systems. While scalability is valued, its characteristics and the characteristics that undermine it are usually only apparent from the context. Here, we attempt to define different aspects of scalability, such as structural scalability and load scalability. Structural scalability is the ability of a system to expand in a chosen dimension without major modifications to its architecture. Load scalability is the ability of a system to perform gracefully as the offered traffic increases. It is argued that systems with poor load scalability may exhibit it because they repeatedly engage in wasteful activity, because they are encumbered with poor scheduling algorithms, because they cannot fully take advantage of parallelism, or because they are algorithmically inefficient. We qualitatively illustrate these concepts with classical examples from the literature of operating systems and local area networks, as well as an example of our own. Some of these are accompanied by rudimentary delay analysis.",
"title": ""
},
{
"docid": "a114a6cb169646a261d1c5d070d3d9a6",
"text": "The motivation to develop microgrids, as a particular form of active networks is explained and presented as an effective solution for the control of grids with high levels of distibuted energy resources. The operation, more in particular the voltage and frequency control, is discussed. Control concepts useful with microgrids are detailed and implemented. Besides technical control aspects, also economical ones are developed. Primary, secondary and tertiary control algorithms are designed operating in a completely distributed way. The theoretical concepts are tested in a extensive laboratory experiment implementing a realistic scenario by using a setup of four inverters able to communicate through an Internet connection.",
"title": ""
},
{
"docid": "19c3c2ac5e35e8e523d796cef3717d90",
"text": "The printing press long ago and the computer today have made widespread access to information possible. Learning theorists have suggested, however, that mere information is a poor way to learn. Instead, more effective learning comes through doing. While the most popularized element of today's MOOCs are the video lectures, many MOOCs also include interactive activities that can afford learning by doing. This paper explores the learning benefits of the use of informational assets (e.g., videos and text) in MOOCs, versus the learning by doing opportunities that interactive activities provide. We find that students doing more activities learn more than students watching more videos or reading more pages. We estimate the learning benefit from extra doing (1 SD increase) to be more than six times that of extra watching or reading. Our data, from a psychology MOOC, is correlational in character, however we employ causal inference mechanisms to lend support for the claim that the associations we find are causal.",
"title": ""
},
{
"docid": "2639f5d735abed38ed4f7ebf11072087",
"text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.",
"title": ""
},
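The quantization scheme above relies on mapping real-valued tensors to integers through a scale and a zero point. The sketch below shows only that affine quantize/dequantize step, under the assumption of unsigned 8-bit storage; the function names and the min/max calibration are illustrative, not the cited procedure.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine quantization: represent real values as integers via a scale and zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)           # assumes x is not constant
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original real values."""
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(x)
print(np.abs(x - dequantize(q, s, z)).max())  # error bounded by roughly scale / 2
```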
{
"docid": "c35bed3db1b6e145d09db0c8272506ec",
"text": "Gamification has the potential to improve the quality of learning by better engaging students with learning activities. Our objective in this study is to evaluate a gamified learning activity along the dimensions of learning, engagement, and enjoyment. The activity made use of a gamified multiple choice quiz implemented as a software tool and was trialled in three undergraduate IT-related courses. A questionnaire survey was used to collect data to gauge levels of learning, engagement, and enjoyment. Results show that there was some degree of engagement and enjoyment. The majority of participants (77.63 per cent) reported that they were engaged enough to want to complete the quiz and 46.05 per cent stated they were happy while playing the quiz. In terms of learning, the overall results were positive since 60.53 per cent of students stated that it enhanced their learning effectiveness. A limitation of the work is that the results are self-reported and the activity was used over a short period of time. Thus, future work should include longer trial periods and evaluating improvements to learning using alternative approaches to self-reported data.",
"title": ""
},
{
"docid": "9ed31c8a584fdc5548b3aa2df10ba30b",
"text": "This paper investigates if the Activity-Theoretical methods of work development used by Engeström and others can be transformed into a day-to-day methodology for information systems practitioners. We first present and justify our theoretical framework of Activity Analysis and Development fairly extensively. In the second part we compare work development with information systems development and argue that in its less technological areas, the latter can potentially use the same methodologies as the former. In the third part, small experiments on using Activity Analysis during the earliest phases of information systems development in Nigeria and Finland are reported. In conclusion, we argue that the experiments were encouraging, but the methodology needs to be supported by further illustrative examples and training material. We argue that compared to currently used methods in the earliest and latest “phases” of systems development, Activity Analysis and Development is comprehensive, theoretically well founded, detailed and practicable. ©Scandinavian Journal of Information Systems, 2000, 12: 191191",
"title": ""
},
{
"docid": "5221c87f7ee877a0a7ac0a972df4636d",
"text": "These are exciting times for medical image processing. Innovations in deep learning and the increasing availability of large annotated medical image datasets are leading to dramatic advances in automated understanding of medical images. From this perspective, I give a personal view of how computer-aided diagnosis of medical images has evolved and how the latest advances are leading to dramatic improvements today. I discuss the impact of deep learning on automated disease detection and organ and lesion segmentation, with particular attention to applications in diagnostic radiology. I provide some examples of how time-intensive and expensive manual annotation of huge medical image datasets by experts can be sidestepped by using weakly supervised learning from routine clinically generated medical reports. Finally, I identify the remaining knowledge gaps that must be overcome to achieve clinician-level performance of automated medical image processing systems. Computer-aided diagnosis (CAD) in medical imaging has flourished over the past several decades. New advances in computer software and hardware and improved quality of images from scanners have enabled this progress. The main motivations for CAD have been to reduce error and to enable more efficient measurement and interpretation of images. From this perspective, I will describe how deep learning has led to radical changes in howCAD research is conducted and in howwell it performs. For brevity, I will include automated disease detection and image processing under the rubric of CAD. Financial Disclosure The author receives patent royalties from iCAD Medical. Disclaimer No NIH endorsement of any product or company mentioned in this manuscript should be inferred. The opinions expressed herein are the author’s and do not necessarily represent those of NIH. R.M. Summers (B) Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bldg. 10, Room 1C224D MSC 1182, Bethesda, MD 20892-1182, USA e-mail: [email protected] URL: http://www.cc.nih.gov/about/SeniorStaff/ronald_summers.html © Springer International Publishing Switzerland 2017 L. Lu et al. (eds.), Deep Learning and Convolutional Neural Networks for Medical Image Computing, Advances in Computer Vision and Pattern Recognition, DOI 10.1007/978-3-319-42999-1_1 3",
"title": ""
},
{
"docid": "bc2cc54a7b01fa7a7c3bf7a0f88bc899",
"text": "Usually bilingual word vectors are trained “online”. Mikolov et al. (2013a) showed they can also be found “offline”; whereby two pre-trained embeddings are aligned with a linear transformation, using dictionaries compiled from expert knowledge. In this work, we prove that the linear transformation between two spaces should be orthogonal. This transformation can be obtained using the singular value decomposition. We introduce a novel “inverted softmax” for identifying translation pairs, with which we improve the precision @1 of Mikolov’s original mapping from 34% to 43%, when translating a test set composed of both common and rare English words into Italian. Orthogonal transformations are more robust to noise, enabling us to learn the transformation without expert bilingual signal by constructing a “pseudo-dictionary” from the identical character strings which appear in both languages, achieving 40% precision on the same test set. Finally, we extend our method to retrieve the true translations of English sentences from a corpus of 200k Italian sentences with a precision @1 of 68%.",
"title": ""
},
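The orthogonal transformation mentioned above has a closed-form solution via the SVD (the orthogonal Procrustes problem), and the "inverted softmax" normalizes translation scores over source words rather than over targets. The sketch below assumes row-aligned matrices of unit-normalized source (X) and target (Y) word vectors for a seed dictionary; the variable names and the beta value are illustrative.

```python
import numpy as np

def learn_orthogonal_map(X, Y):
    """Orthogonal Procrustes: find W (with W @ W.T = I) minimizing ||X @ W - Y||_F,
    given paired rows of source (X) and target (Y) word vectors.
    The solution is W = U @ Vt, where U, _, Vt = svd(X.T @ Y)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def inverted_softmax_scores(mapped_X, Y, beta=10.0):
    """Score candidate translations with a softmax over *source* words for each target,
    which penalizes hub targets that lie close to many source words."""
    sims = mapped_X @ Y.T                         # (n_source, n_target) similarity matrix
    exp = np.exp(beta * sims)
    return exp / exp.sum(axis=0, keepdims=True)   # normalize over the source axis ("inverted")

# Usage sketch: W = learn_orthogonal_map(X, Y); scores = inverted_softmax_scores(X @ W, Y)
```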
{
"docid": "0a4392285df7ddb92458ffa390f36867",
"text": "A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground/background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"title": ""
},
{
"docid": "d2d5aba1c04a4ac0cec46ed39a24b8a8",
"text": "The objective of this paper is to critically review the literature regarding the mechanics, geometry, load application and other testing parameters of \"micro\" shear and tensile adhesion tests, and to outline their advantages and limitations. The testing of multiple specimens from a single tooth conserves teeth and allows research designs not possible using conventional 'macro' methods. Specimen fabrication, gripping and load application methods, in addition to material properties of the various components comprising the resin-tooth adhesive bond, will influence the stress distribution and consequently, the nominal bond strength and failure mode. These issues must be understood; as should the limitations inherent to strength-based testing of a complicated adhesive bond joining dissimilar substrates, for proper test selection, conduct and interpretation. Finite element analysis and comprehensive reporting of test conduct and results will further our efforts towards a standardization of test procedures. For the foreseeable future, both \"micro\" and \"macro\" bond strength tests will, as well as various morphological and spectroscopic investigative techniques, continue to be important tools for improving resin-tooth adhesion to increase the service life of dental resin-based composite restorations.",
"title": ""
},
{
"docid": "40f6307c5b8eff076dfa8bac2b4d475b",
"text": "Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for lowdimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
"title": ""
},
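To make the spectral construction above concrete: a signal on a graph is filtered by multiplying its graph-Fourier coefficients, obtained from the eigenvectors of the graph Laplacian, by a vector of spectral multipliers. The sketch shows only this core filtering operation, not a full convolutional layer; the toy graph and the identity filter are illustrative.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A for a symmetric adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def spectral_filter(x, A, g):
    """Filter graph signal x with spectral multipliers g:  y = U diag(g) U^T x,
    where L = U diag(lambda) U^T is the eigendecomposition of the Laplacian."""
    lam, U = np.linalg.eigh(graph_laplacian(A))
    return U @ (g * (U.T @ x))

# Toy usage on a 4-node path graph; an all-ones filter acts as the identity.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])
print(spectral_filter(x, A, np.ones(4)))   # ~[1, 0, 0, 0]
```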
{
"docid": "4041235ab6ad93290ed90cdf5e07d6e5",
"text": "This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage the research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.",
"title": ""
},
{
"docid": "9bd94070d7542a466ca5cafd3429251e",
"text": "With the rise of increasingly advanced reverse engineering technique, especially more scalable symbolic execution tools, software obfuscation faces great challenges. Branch conditions contain important control flow logic of a program. Adversaries can use powerful program analysis tools to collect sensitive program properties and recover a program’s internal logic, stealing intellectual properties from the original owner. In this paper, we propose a novel control obfuscation technique that uses lambda calculus to hide the original computation semantics and makes the original program more obscure to understand and reverse engineer. Our obfuscator replaces the conditional instructions with lambda calculus function calls that simulate the same behavior with a more complicated execution model. Our experiment result shows that our obfuscation method can protect sensitive branch conditions from stateof-the-art symbolic execution techniques, with only modest overhead.",
"title": ""
},
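One concrete way to see how a conditional instruction can be replaced by lambda calculus function calls, as described in the passage above, is Church encoding of booleans: the branch becomes a function application. This Python sketch illustrates the general idea only; it is not the paper's obfuscator, and the helper names are invented for the example.

```python
# Church-encoded booleans: a "condition" is a function that selects one of two branches.
TRUE = lambda a: lambda b: a
FALSE = lambda a: lambda b: b

def church_leq(x, y):
    """Encode the test x <= y as a Church boolean; a real obfuscator would also
    encode this comparison itself rather than using a native branch."""
    return TRUE if x <= y else FALSE

def clamp(x, lo, hi):
    # Original control flow:  lo if x < lo else (hi if x > hi else x)
    # Obfuscated form: the branching logic is hidden inside function applications.
    below = church_leq(x, lo - 1)     # encodes x < lo for integers
    above = church_leq(hi + 1, x)     # encodes x > hi for integers
    return below(lo)(above(hi)(x))

print(clamp(5, 0, 10), clamp(-3, 0, 10), clamp(42, 0, 10))  # 5 0 10
```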
{
"docid": "322d23354a9bf45146e4cb7c733bf2ec",
"text": "In this chapter we consider the problem of automatic facial expression analysis. Our take on this is that the field has reached a point where it needs to move away from considering experiments and applications under in-the-lab conditions, and move towards so-called in-the-wild scenarios. We assume throughout this chapter that the aim is to develop technology that can be deployed in practical applications under unconstrained conditions. While some first efforts in this direction have been reported very recently, it is still unclear what the right path to achieving accurate, informative, robust, and real-time facial expression analysis will be. To illuminate the journey ahead, we first provide in Sec. 1 an overview of the existing theories and specific problem formulations considered within the computer vision community. Then we describe in Sec. 2 the standard algorithmic pipeline which is common to most facial expression analysis algorithms. We include suggestions as to which of the current algorithms and approaches are most suited to the scenario considered. In section 3 we describe our view of the remaining challenges, and the current opportunities within the field. This chapter is thus not intended as a review of different approaches, but rather a selection of what we believe are the most suitable state-of-the-art algorithms, and a selection of exemplars chosen to characterise a specific approach. We review in section 4 some of the exciting opportunities for the application of automatic facial expression analysis to everyday practical problems and current commercial applications being exploited. Section 5 ends the chapter by summarising the major conclusions drawn. Brais Martinez School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: [email protected] Michel F. Valstar School of Computer Science, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB e-mail: [email protected]",
"title": ""
},
{
"docid": "8092ba3c116d33900e72ff79994ac45c",
"text": "We describe an expression-invariant method for face recognition by fitting an identity/expression separated 3D Morphable Model to shape data. The expression model greatly improves recognition and retrieval rates in the uncooperative setting, while achieving recognition rates on par with the best recognition algorithms in the face recognition great vendor test. The fitting is performed with a robust nonrigid ICP algorithm. It is able to perform face recognition in a fully automated scenario and on noisy data. The system was evaluated on two datasets, one with a high noise level and strong expressions, and the standard UND range scan database, showing that while expression invariance increases recognition and retrieval performance for the expression dataset, it does not decrease performance on the neutral dataset. The high recognition rates are achieved even with a purely shape based method, without taking image data into account.",
"title": ""
}
] |
scidocsrr
|
34c0809dc58c228be15a4c99c2361161
|
Engineering Privacy
|
[
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "aa3da820fe9e98cb4f817f6a196c18e7",
"text": "Location awareness is an important capability for mobile computing. Yet inexpensive, pervasive positioning—a requirement for wide-scale adoption of location-aware computing—has been elusive. We demonstrate a radio beacon-based approach to location, called Place Lab, that can overcome the lack of ubiquity and high-cost found in existing location sensing approaches. Using Place Lab, commodity laptops, PDAs and cell phones estimate their position by listening for the cell IDs of fixed radio beacons, such as wireless access points, and referencing the beacons’ positions in a cached database. We present experimental results showing that 802.11 and GSM beacons are sufficiently pervasive in the greater Seattle area to achieve 20-40 meter median accuracy with nearly 100% coverage measured by availability in people’s daily",
"title": ""
}
] |
[
{
"docid": "bdffaedb490b6f3e0054b29159b1b3b5",
"text": "We explore our efforts to create a conceptual framework to describe and analyse the challenges around preparing teachers to create, sustain, and educate in a \"community of learners.\" In particular, we offer a new frame for conceptualizing teacher learning and development within communities and contexts. This conception allows us to understand the variety of ways in which teachers respond in the process of learning lo teach in the manner described by the \"Fostering a Community of Learners\" (FCL) programme. The model illustrates the ongoing interaction among individual student and teacher learning, institutional or programme learning, and the characteristics of the policy environment critical to the success of theory-intensive reform efforts such as FCL.",
"title": ""
},
{
"docid": "ed90d76e9208882e62b449e4a82842d6",
"text": "In this paper a 10 GHz quasi-Hybrid/MMIC super-regenerative transceiver/antenna chip is presented. The circuit is the highest frequency super-regenerative transceiver presented in the literature and is amongst the lowest power - certainly the lowest power at X-band frequency. The chip is fabricated on GaAs substrate and uses a MMIC process for the passive components and an RFMD PHEMT chip device bonded into the circuit for the active components. The transceiver chip measures 10 × 10 mm and consumes 0.75 mW Tx and 0.9 mW Rx. When mounted into a pcb carrier substrate containing antenna, bias circuitry and low pass filtering the board measures 26 × 42 mm and operates over a range of 1 m.",
"title": ""
},
{
"docid": "cac8aa7cfd50da05a6f973b019e8c4f5",
"text": "Deep learning has led to remarkable advances when applied to problems where the data distribution does not change over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, and solve a diversity of tasks simultaneously. Furthermore, synapses in biological neurons are not simply real-valued scalars, but possess complex molecular machinery enabling non-trivial learning dynamics. In this study, we take a first step toward bringing this biological complexity into artificial neural networks. We introduce a model of intelligent synapses that accumulate task relevant information over time, and exploit this information to efficiently consolidate memories of old tasks to protect them from being overwritten as new tasks are learned. We apply our framework to learning sequences of related classification problems, and show that it dramatically reduces catastrophic forgetting while maintaining computational efficiency.",
"title": ""
},
{
"docid": "1b7eabe7f6e62c09fa2f840fa642088b",
"text": "Hadoop is seriously limited by its MapReduce scheduler which does not scale well in heterogeneous environment. Heterogenous environment is characterized by various devices which vary greatly with respect to the capacities of computation and communication, architectures, memorizes and power. As an important extension of Hadoop, LATE MapReduce scheduling algorithm takes heterogeneous environment into consideration. However, it falls short of solving the crucial problem – poor performance due to the static manner in which it computes progress of tasks. Consequently, neither Hadoop nor LATE schedulers are desirable in heterogeneous environment. To this end, we propose SAMR: a Self-Adaptive MapReduce scheduling algorithm, which calculates progress of tasks dynamically and adapts to the continuously varying environment automatically. When a job is committed, SAMR splits the job into lots of fine-grained map and reduce tasks, then assigns them to a series of nodes. Meanwhile, it reads historical information which stored on every node and updated after every execution. Then, SAMR adjusts time weight of each stage of map and reduce tasks according to the historical information respectively. Thus, it gets the progress of each task accurately and finds which tasks need backup tasks. What’s more, it identifies slow nodes and classifies them into the sets of slow nodes dynamically. According to the information of these slow nodes, SAMR will not launch backup tasks on them, ensuring the backup tasks will not be slow tasks any more. It gets the final results of the fine-grained tasks when either slow tasks or backup tasks finish first. The proposed algorithm is evaluated by extensive experiments over various heterogeneous environment. Experimental results show that SAMR significantly decreases the time of execution up to 25% compared with Hadoop’s scheduler and up to 14% compared with LATE scheduler.",
"title": ""
},
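A hedged sketch of the progress-estimation idea behind the SAMR-style scheduling described above: a task's progress is a weighted sum of its per-stage completion, with stage weights adapted from historical information, and tasks whose progress rate lags far behind their peers become candidates for backup execution. The weight values, the threshold, and the function names are assumptions for illustration, not SAMR's actual constants.

```python
def task_progress(stage_fractions, stage_weights):
    """Weighted progress of a map/reduce task.

    stage_fractions: per-stage completion in [0, 1]
    stage_weights:   per-stage time weights (summing to 1) that a SAMR-style
                     scheduler adapts from historical information on each node.
    """
    return sum(f * w for f, w in zip(stage_fractions, stage_weights))

def needs_backup(progress, elapsed_s, peer_rates, slow_factor=0.5):
    """Flag a task whose progress rate is well below the average of its peers."""
    rate = progress / max(elapsed_s, 1e-9)
    avg_rate = sum(peer_rates) / len(peer_rates)
    return rate < slow_factor * avg_rate

# Illustrative use: a reduce task that finished the copy stage but stalls in sorting.
p = task_progress([1.0, 0.2, 0.0], [0.3, 0.4, 0.3])   # stage weights taken from history (assumed)
print(p, needs_backup(p, elapsed_s=120, peer_rates=[0.010, 0.012, 0.009]))  # 0.38 True
```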
{
"docid": "b81d07c26f8c28527c08fe8aeaffbdd8",
"text": "Remote keyless-entry systems are systems that are widely used to control access to vehicles or buildings. The system is increasingly secured against hacking attacks by use of encryption and code algorithms. However, there are effective hacker attacks that rely on jamming the wireless link from the key fob to the receiver, while the attacker is able to receive the signal from the key fob. In this paper, we show that typical envelope receivers that are often used in remote keyless-entry systems are highly vulnerable to pulsed interference as compared to continuous interference. The effects of pulsed interference on envelope detectors are analyzed through both simulations and measurements. An improved receiver design would use synchronous receivers instead, which are not very sensitive against pulsed interference.",
"title": ""
},
{
"docid": "dcfe8e834a7726aa49ea37368ffc6ff6",
"text": "Object recognition and categorization are computationally difficult tasks that are performed effortlessly by humans. Attempts have been made to emulate the computations in different parts of the primate cortex to gain a better understanding of the cortex and to design brain–machine interfaces that speak the same language as the brain. The HMAX model proposed by Riesenhuber and Poggio and extended by Serre <etal/> attempts to truly model the visual cortex. In this paper, we provide a spike-based implementation of the HMAX model, demonstrating its ability to perform biologically-plausible MAX computations as well as classify basic shapes. The spike-based model consists of 2514 neurons and 17<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\thinspace$</tex> </formula>305 synapses (S1 Layer: 576 neurons and 7488 synapses, C1 Layer: 720 neurons and 2880 synapses, S2 Layer: 576 neurons and 1152 synapses, C2 Layer: 640 neurons and 5760 synapses, and Classifier: 2 neurons and 25 synapses). Without the limits of the retina model, it will take the system 2 min to recognize rectangles and triangles in 24<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>24 pixel images. This can be reduced to 4.8 s by rearranging the lookup table so that neurons which have similar responses to the same input(s) can be placed on the same row and affected in parallel.",
"title": ""
},
{
"docid": "5487dd1976a164447c821303b53ebdf8",
"text": "Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.",
"title": ""
},
{
"docid": "88081780e89cf2d49184a04a7b516ad7",
"text": "DEX files are executable files of Android applications. Since DEX files are in the format of Java bytecodes, their Java source codes can be easily obtained using static reverse engineering tools. This results in numerous Android application thefts. There are some tools (e.g. bangcle, ijiami, liapp) that protect Android applications against static reverse engineering utilizing dynamic code loading. These tools usually encrypt classes.dex in an APK file. When the application is launched, the encrypted classes.dex file is decrypted and dynamically loaded. However, these tools fail to protect multidex APKs, which include more than one DEX files (classes2.dex, classes3.dex, ...) to accommodate large-sized execution codes. In this paper, we propose a technique that protects multidex Android applications against static reverse engineering. The technique can encrypt/decrypt multiple DEX files in APK files and dynamically load them. The experimental results show that the proposed technique can effiectively protect multidex APKs.",
"title": ""
},
{
"docid": "59b7afc5c2af7de75248c90fdf5c9cd3",
"text": "Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem as blurs arise not only from multiple object motions but also from camera shake, scene depth variation. To remove these complicated motion blurs, conventional energy optimization based methods rely on simple assumptions such that blur kernel is partially uniform or locally linear. Moreover, recent machine learning based methods also depend on synthetic blur datasets generated under these assumptions. This makes conventional deblurring methods fail to remove blurs where blur kernel is difficult to approximate or parameterize (e.g. object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves the state-of-the-art performance in dynamic scene deblurring not only qualitatively, but also quantitatively.",
"title": ""
},
{
"docid": "c67ffe3dfa6f0fe0449f13f1feb20300",
"text": "The associations between giving a history of physical, emotional, and sexual abuse in children and a range of mental health, interpersonal, and sexual problems in adult life were examined in a community sample of women. Abuse was defined to establish groups giving histories of unequivocal victimization. A history of any form of abuse was associated with increased rates of psychopathology, sexual difficulties, decreased self-esteem, and interpersonal problems. The similarities between the three forms of abuse in terms of their association with negative adult outcomes was more apparent than any differences, though there was a trend for sexual abuse to be particularly associated to sexual problems, emotional abuse to low self-esteem, and physical abuse to marital breakdown. Abuse of all types was more frequent in those from disturbed and disrupted family backgrounds. The background factors associated with reports of abuse were themselves often associated to the same range of negative adult outcomes as for abuse. Logistic regressions indicated that some, though not all, of the apparent associations between abuse and adult problems was accounted for by this matrix of childhood disadvantage from which abuse so often emerged.",
"title": ""
},
{
"docid": "2f7e5807415398cb95f8f1ab36a0438f",
"text": "We present a Convolutional Neural Network (CNN) regression based framework for 2-D/3-D medical image registration, which directly estimates the transformation parameters from image features extracted from the DRR and the X-ray images using learned hierarchical regressors. Our framework consists of learning and application stages. In the learning stage, CNN regressors are trained using supervised machine learning to reveal the correlation between the transformation parameters and the image features. In the application stage, CNN regressors are applied on extracted image features in a hierarchical manner to estimate the transformation parameters. Our experiment results demonstrate that the proposed method can achieve real-time 2-D/3-D registration with very high (i.e., sub-milliliter) accuracy.",
"title": ""
},
{
"docid": "8411019e166f3b193905099721c29945",
"text": "In this article we recast the Dahl, LuGre, and Maxwell-slip models as extended, generalized, or semilinear Duhem models. We classified each model as either rate independent or rate dependent. Smoothness properties of the three friction models were also considered. We then studied the hysteresis induced by friction in a single-degree-of-freedom system. The resulting system was modeled as a linear system with Duhem feedback. For each friction model, we computed the corresponding hysteresis map. Next, we developed a DC servo motor testbed and performed motion experiments. We then modeled the testbed dynamics and simulated the system using all three friction models. By comparing the simulated and experimental results, it was found that the LuGre model provides the best model of the gearbox friction characteristics. A manual tuning approach was used to determine parameters that model the friction in the DC motor.",
"title": ""
},
{
"docid": "5eb03beba0ac2c94e6856d16e90799fc",
"text": "The explosive growth of malware variants poses a major threat to information security. Traditional anti-virus systems based on signatures fail to classify unknown malware into their corresponding families and to detect new kinds of malware programs. Therefore, we propose a machine learning based malware analysis system, which is composed of three modules: data processing, decision making, and new malware detection. The data processing module deals with gray-scale images, Opcode n-gram, and import functions, which are employed to extract the features of the malware. The decision-making module uses the features to classify the malware and to identify suspicious malware. Finally, the detection module uses the shared nearest neighbor (SNN) clustering algorithm to discover new malware families. Our approach is evaluated on more than 20 000 malware instances, which were collected by Kingsoft, ESET NOD32, and Anubis. The results show that our system can effectively classify the unknown malware with a best accuracy of 98.9%, and successfully detects 86.7% of the new malware.",
"title": ""
},
{
"docid": "f55142357894f2a1fe4315a070a2d3ec",
"text": "A parasitic layer-based multifunctional reconfigurable antenna array (MRAA) formed by the linear combination of four (4 <formula formulatype=\"inline\"> <tex Notation=\"TeX\">$\\times$</tex></formula> 1) identical multifunctional reconfigurable antenna (MRA) elements is presented. Each MRA produces eight modes of operation corresponding to three steerable beam directions (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\theta_{xz}=-30^{\\circ}$</tex></formula>, 0<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}$</tex></formula>, 30<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}$</tex></formula>) with linear and circular polarizations in <formula formulatype=\"inline\"><tex Notation=\"TeX\">$x-z$</tex></formula> plane and another two steerable beam directions (<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\theta_{yz}=-30^{\\circ}$</tex> </formula>, 30<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{\\circ}$</tex> </formula>) in <formula formulatype=\"inline\"><tex Notation=\"TeX\">$y-z$</tex> </formula> plane with linear polarization. An individual MRA consists of an aperture-coupled driven patch antenna with a parasitic layer placed above it. The surface of the parasitic layer has a grid of 4 <formula formulatype=\"inline\"> <tex Notation=\"TeX\">$\\times$</tex></formula> 4 electrically-small rectangular-shaped metallic pixels. The adjacent pixels can be connected/disconnected by means of switching resulting in reconfigurability in beam-direction and polarization. A 4 <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\times$</tex></formula> 1 linear MRAA operating in the <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\sim$</tex> </formula>5.4–5.6 GHz is formed by the optimized MRA elements. MRA and MRAA prototypes have been fabricated and measured. The measured and simulated results agree well indicating <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\sim$</tex> </formula>13.5 dB realized array gain and <formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\sim$</tex></formula>3% common bandwidth. The MRAA presents some advantages as compared to a standard antenna array: MRAA alleviates the scan loss inherit to standard antenna arrays, provides higher gain, does not need phase shifters for beam steering in certain plane, and is capable of polarization reconfigurability.",
"title": ""
},
{
"docid": "42961b66e41a155edb74cc4ab5493c9c",
"text": "OBJECTIVE\nTo determine the preventive effect of manual lymph drainage on the development of lymphoedema related to breast cancer.\n\n\nDESIGN\nRandomised single blinded controlled trial.\n\n\nSETTING\nUniversity Hospitals Leuven, Leuven, Belgium.\n\n\nPARTICIPANTS\n160 consecutive patients with breast cancer and unilateral axillary lymph node dissection. The randomisation was stratified for body mass index (BMI) and axillary irradiation and treatment allocation was concealed. Randomisation was done independently from recruitment and treatment. Baseline characteristics were comparable between the groups.\n\n\nINTERVENTION\nFor six months the intervention group (n = 79) performed a treatment programme consisting of guidelines about the prevention of lymphoedema, exercise therapy, and manual lymph drainage. The control group (n = 81) performed the same programme without manual lymph drainage.\n\n\nMAIN OUTCOME MEASURES\nCumulative incidence of arm lymphoedema and time to develop arm lymphoedema, defined as an increase in arm volume of 200 mL or more in the value before surgery.\n\n\nRESULTS\nFour patients in the intervention group and two in the control group were lost to follow-up. At 12 months after surgery, the cumulative incidence rate for arm lymphoedema was comparable between the intervention group (24%) and control group (19%) (odds ratio 1.3, 95% confidence interval 0.6 to 2.9; P = 0.45). The time to develop arm lymphoedema was comparable between the two group during the first year after surgery (hazard ratio 1.3, 0.6 to 2.5; P = 0.49). The sample size calculation was based on a presumed odds ratio of 0.3, which is not included in the 95% confidence interval. This odds ratio was calculated as (presumed cumulative incidence of lymphoedema in intervention group/presumed cumulative incidence of no lymphoedema in intervention group)×(presumed cumulative incidence of no lymphoedema in control group/presumed cumulative incidence of lymphoedema in control group) or (10/90)×(70/30).\n\n\nCONCLUSION\nManual lymph drainage in addition to guidelines and exercise therapy after axillary lymph node dissection for breast cancer is unlikely to have a medium to large effect in reducing the incidence of arm lymphoedema in the short term. Trial registration Netherlands Trial Register No NTR 1055.",
"title": ""
},
{
"docid": "73b76fa13443a4c285dc9a97cfaa22dd",
"text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies.",
"title": ""
},
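A minimal sketch of the temporal-leash check suggested by the packet-leash mechanism described above: the packet carries a tightly synchronized, authenticated send timestamp, and the receiver rejects it if the implied travel distance exceeds the leash. The function name, the margin parameter, and the example numbers are assumptions for illustration.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def temporal_leash_ok(t_send_s, t_recv_s, leash_m, clock_sync_error_s=1e-6):
    """Accept a packet only if it could have travelled at most `leash_m` metres.

    t_send_s : sender timestamp carried (authenticated) in the packet, in seconds
    t_recv_s : receiver's local reception time, in seconds
    leash_m  : maximum allowed transmission distance (the leash), in metres
    The clock synchronization error widens the bound, since sender and receiver
    clocks are only synchronized to within `clock_sync_error_s`.
    """
    max_travel_time_s = (t_recv_s - t_send_s) + clock_sync_error_s
    return max_travel_time_s * SPEED_OF_LIGHT_M_PER_S <= leash_m

# A packet whose timestamps imply ~3.3 km of travel cannot satisfy a 500 m leash,
# which is what a wormhole tunnel between distant regions would look like.
print(temporal_leash_ok(t_send_s=0.0, t_recv_s=10e-6, leash_m=500.0))    # False
print(temporal_leash_ok(t_send_s=0.0, t_recv_s=1e-6,  leash_m=1000.0))   # True
```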
{
"docid": "5df96510354ee3b37034a99faeff4956",
"text": "In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms stateof-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-theart methods is around 12.2% in F1score.",
"title": ""
},
{
"docid": "48393a47c0f977c77ef346ef2432e8f5",
"text": "Information Systems researchers and technologists have built and investigated Decision Support Systems (DSS) for almost 40 years. This article is a narrative overview of the history of Decision Support Systems (DSS) and a means of gathering more first-hand accounts about the history of DSS. Readers are asked to comment upon the stimulus narrative titled “A Brief History of Decision Support Systems” that has been read by thousands of visitors to DSSResources.COM. Also, the stimulus narrative has been reviewed by a number of key actors who created the history of DSS. The narrative is divided into four sections: The Early Years – 1964-1975; Developing DSS Theory – 1976-1982; Expanding the Scope of Decision Support – 1979-1989; and A Technology Shift – 1990-1995.",
"title": ""
},
{
"docid": "8eb1e94c9b40a9989d0e07b68dde755c",
"text": "Virtual Private Network is a widely used technology for secure data transmission. The purpose of a VPN is to provide a secure of way of transferring sensitive data between two or more parties over an insecure channel. Flaws in the implementations of security protocols are some of the most serious security problems. This paper describes a popular VPN solution, OpenVPN, as well as methodology used to infer state machines from a security protocol, using largely automated fuzzing techniques. If a vulnerability is found, an attacker may remotely exploit vulnerable systems over the Internet. State machines can be used to specify possible sequences of sent and received messages in different states of protocol. Learning techniques allow the automatic inference of the behavior of a protocol implementation as a state machine. Additionally, fuzzing is a well known and effective testing method which allows discovering different flaws within the implementations. Combining automatic state machine inference and protocol fuzzing, it is possible to produce a universal state machine which is a good representation of the implemented protocol structure. Manually inspecting these state machines allows for a straightforward way to possibly find bugs, inaccuracies or vulnerabilities in the implementation.",
"title": ""
},
{
"docid": "ff0837ae319f4a40fdd58b91947447d7",
"text": "Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, allowing to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text's category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. Resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability which makes it more comprehensible for humans and potentially more useful for other applications.",
"title": ""
}
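A hedged sketch of one layer-wise relevance propagation step for a fully-connected layer, using the epsilon-stabilized rule; it illustrates how a prediction score is redistributed onto a layer's inputs (and, repeated layer by layer, onto words), but it is not necessarily the exact LRP variant used in the work above.

```python
import numpy as np

def lrp_dense_epsilon(x, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs of a dense layer z = x @ W + b.

    x     : (d_in,)        layer input
    W     : (d_in, d_out)  weights
    R_out : (d_out,)       relevance assigned to the layer's outputs
    Returns R_in of shape (d_in,); total relevance is approximately conserved.
    """
    z = x @ W + b                      # pre-activations
    z_stab = z + eps * np.sign(z)      # epsilon stabilization avoids division by ~0
    s = R_out / z_stab                 # relevance per unit of activation, per output
    return x * (W @ s)                 # each input receives its share of every output

# Toy check of (approximate) relevance conservation
rng = np.random.default_rng(0)
x, W, b = rng.random(5), rng.random((5, 3)), np.zeros(3)
R_out = rng.random(3)
print(lrp_dense_epsilon(x, W, b, R_out).sum(), R_out.sum())  # nearly equal with b = 0
```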
] |
scidocsrr
|
4f3376841c3e33afc3712819d0651280
|
Android anomaly detection system using machine learning classification
|
[
{
"docid": "2f5107659cba0db161fbdf390ef05d26",
"text": "Currently, in the smartphone market, Android is the platform with the highest share. Due to this popularity and also to its open source nature, Android-based smartphones are now an ideal target for attackers. Since the number of malware designed for Android devices is increasing fast, Android users are looking for security solutions aimed at preventing malicious actions from damaging their smartphones. In this paper, we describe MADAM, a Multi-level Anomaly Detector for Android Malware. MADAM concurrently monitors Android at the kernel-level and user-level to detect real malware infections using machine learning techniques to distinguish between standard behaviors and malicious ones. The first prototype of MADAM is able to detect several real malware found in the wild. The device usability is not affected by MADAM due to the low number of false positives generated after the learning phase.",
"title": ""
},
{
"docid": "7533347e8c5daf17eb09e64db0fa4394",
"text": "Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in significant increase in the number of malwares when compared with previous years. There exist lots of antimalware programs which are designed to effectively protect the users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we have analyzed the Android malwares and their penetration techniques used for attacking the systems and antivirus programs that act against malwares to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods. We aim to provide an easy and concise view of the malware detection and protection mechanisms and deduce their benefits and limitations. Secondly, we have forecast Android market trends for the year up to 2018 and provide a unique hybrid security solution and take into account both the static and dynamic analysis an android application. Keywords—Android; Permissions; Signature",
"title": ""
}
] |
[
{
"docid": "c04e3a28b6f3f527edae534101232701",
"text": "An intelligent interface for an information retrieval system has the aims of controlling an underlying information retrieval system di rectly interacting with the user and allowing him to retrieve relevant information without the support of a human intermediary Developing intelligent interfaces for information retrieval is a di cult activity and no well established models of the functions that such systems should possess are available Despite of this di culty many intelligent in terfaces for information retrieval have been implemented in the past years This paper surveys these systems with two aims to stand as a useful entry point for the existing literature and to sketch an ana lysis of the functionalities that an intelligent interface for information retrieval has to possess",
"title": ""
},
{
"docid": "64e99944158284edb4474a2d0481f67b",
"text": "Synthesizing face sketches from real photos and its inverse have many applications. However, photo/sketch synthesis remains a challenging problem due to the fact that photo and sketch have different characteristics. In this work, we consider this task as an image-to-image translation problem and explore the recently popular generative models (GANs) to generate high-quality realistic photos from sketches and sketches from photos. Recent GAN-based methods have shown promising results on image-to-image translation problems and photo-to-sketch synthesis in particular, however, they are known to have limited abilities in generating high-resolution realistic images. To this end, we propose a novel synthesis framework called Photo-Sketch Synthesis using Multi-Adversarial Networks, (PS2-MAN) that iteratively generates low resolution to high resolution images in an adversarial way. The hidden layers of the generator are supervised to first generate lower resolution images followed by implicit refinement in the network to generate higher resolution images. Furthermore, since photo-sketch synthesis is a coupled/paired translation problem, we leverage the pair information using CycleGAN framework. Both Image Quality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to demonstrate the superior performance of our framework in comparison to existing state-of-the-art solutions. Code available at: https://github.com/lidan1/PhotoSketchMAN.",
"title": ""
},
{
"docid": "bd0ad585dcc655cca1ae753a15056027",
"text": "Intrusion detection corresponds to a suite of techniques that are used to identify attacks against computers and network infrastructures. Anomaly detection is a key element of intrusion detection in which perturbations of normal behavior suggest the presence of intentionally or unintentionally induced attacks, faults, defects, etc. This paper focuses on a detailed comparative study of several anomaly detection schemes for identifying different network intrusions. Several existing supervised and unsupervised anomaly detection schemes and their variations are evaluated on the DARPA 1998 data set of network connections [9] as well as on real network data using existing standard evaluation techniques as well as using several specific metrics that are appropriate when detecting attacks that involve a large number of connections. Our experimental results indicate that some anomaly detection schemes appear very promising when detecting novel intrusions in both DARPA’98 data and real network data.",
"title": ""
},
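The abstract above evaluates supervised and unsupervised anomaly detection schemes on network connection records. As a minimal illustration of the distance-based, unsupervised family of detectors it discusses, the sketch below scores each record by the distance to its k-th nearest neighbour and flags the largest scores as anomalies. The synthetic feature matrix, the choice of k, and the 1% threshold are illustrative assumptions, not settings from the study.

```python
# Minimal distance-based anomaly scoring sketch (not the paper's exact schemes).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Toy "network connection" features: mostly normal traffic plus a few outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
attacks = rng.normal(loc=6.0, scale=1.0, size=(5, 4))
X = np.vstack([normal, attacks])

k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own neighbour
dists, _ = nn.kneighbors(X)
scores = dists[:, -1]                             # distance to the k-th nearest neighbour

threshold = np.quantile(scores, 0.99)             # flag the top 1% as anomalies
flagged = np.where(scores > threshold)[0]
print("flagged indices:", flagged)
```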
{
"docid": "73e27f751c8027bac694f2e876d4d910",
"text": "The numerous and diverse applications of the Internet of Things (IoT) have the potential to change all areas of daily life of individuals, businesses, and society as a whole. The vision of a pervasive IoT spans a wide range of application domains and addresses the enabling technologies needed to meet the performance requirements of various IoT applications. In order to accomplish this vision, this paper aims to provide an analysis of literature in order to propose a new classification of IoT applications, specify and prioritize performance requirements of such IoT application classes, and give an insight into state-of-the-art technologies used to meet these requirements, all from telco’s perspective. A deep and comprehensive understanding of the scope and classification of IoT applications is an essential precondition for determining their performance requirements with the overall goal of defining the enabling technologies towards fifth generation (5G) networks, while avoiding over-specification and high costs. Given the fact that this paper presents an overview of current research for the given topic, it also targets the research community and other stakeholders interested in this contemporary and attractive field for the purpose of recognizing research gaps and recommending new research directions.",
"title": ""
},
{
"docid": "ed98eb7aa069c00e2be8a27ef889b623",
"text": "The class imbalance problem has been known to hinder the learning performance of classification algorithms. Various real-world classification tasks such as text categorization suffer from this phenomenon. We demonstrate that active learning is capable of solving the problem.",
"title": ""
},
{
"docid": "5b34624e72b1ed936ddca775cca329ca",
"text": "The advent of Cloud computing as a newmodel of service provisioning in distributed systems encourages researchers to investigate its benefits and drawbacks on executing scientific applications such as workflows. One of the most challenging problems in Clouds is workflow scheduling, i.e., the problem of satisfying the QoS requirements of the user as well as minimizing the cost of workflow execution. We have previously designed and analyzed a two-phase scheduling algorithm for utility Grids, called Partial Critical Paths (PCP), which aims to minimize the cost of workflow execution while meeting a userdefined deadline. However, we believe Clouds are different from utility Grids in three ways: on-demand resource provisioning, homogeneous networks, and the pay-as-you-go pricing model. In this paper, we adapt the PCP algorithm for the Cloud environment and propose two workflow scheduling algorithms: a one-phase algorithmwhich is called IaaS Cloud Partial Critical Paths (IC-PCP), and a two-phase algorithm which is called IaaS Cloud Partial Critical Paths with Deadline Distribution (IC-PCPD2). Both algorithms have a polynomial time complexity which make them suitable options for scheduling large workflows. The simulation results show that both algorithms have a promising performance, with IC-PCP performing better than IC-PCPD2 in most cases. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "88fae4c5e17a8a869ebde7d7cf99ecf0",
"text": "The method of cDNA-AFLP allows detection of differentially expressed transcripts using PCR. This report provides a detailed and updated protocol for the cDNA-AFLP procedure and an analysis of interactions between its various parameters. We studied the effects of PCR cycle number and template dilution level on the number of transcript derived fragments (TDFs). We also examined the use of magnetic beads to synthesise cDNA and the effect of MgCl2 concentration during amplification. Finally, we determined the detection level of the cDNA-AFLP method using TDFs of various sizes and composition. We could detect TDFs corresponding to a single copy per cell of a specific transcript in a cDNA-AFLP pattern, indicating high sensitivity of the method. Also, there was no correlation between concentration of detectable TDF and the fragment size, stressing the high stringency of the amplification reactions. Theoretical considerations and specific applications of the method are discussed.",
"title": ""
},
{
"docid": "f2fdd2f5a945d48c323ae6eb3311d1d0",
"text": "Distributed computing systems such as clouds continue to evolve to support various types of scientific applications, especially scientific workflows, with dependable, consistent, pervasive, and inexpensive access to geographically-distributed computational capabilities. Scheduling multiple workflows on distributed computing systems like Infrastructure-as-a-Service (IaaS) clouds is well recognized as a fundamental NP-complete problem that is critical to meeting various types of Quality-of-Service (QoS) requirements. In this paper, we propose a multiobjective optimization workflow scheduling approach based on dynamic game-theoretic model aiming at reducing workflow make-spans, reducing total cost, and maximizing system fairness in terms of workload distribution among heterogeneous cloud virtual machines (VMs). We conduct extensive case studies as well based on various well-known scientific workflow templates and real-world third-party commercial IaaS clouds. Experimental results clearly suggest that our proposed approach outperform traditional ones by achieving lower workflow make-spans, lower cost, and better system fairness.",
"title": ""
},
{
"docid": "409a45b65fdd9e85ae54265c44863db5",
"text": "Use of leaf meters to provide an instantaneous assessment of leaf chlorophyll has become common, but calibration of meter output into direct units of leaf chlorophyll concentration has been difficult and an understanding of the relationship between these two parameters has remained elusive. We examined the correlation of soybean (Glycine max) and maize (Zea mays L.) leaf chlorophyll concentration, as measured by organic extraction and spectrophotometric analysis, with output (M) of the Minolta SPAD-502 leaf chlorophyll meter. The relationship is non-linear and can be described by the equation chlorophyll (μmol m−2)=10(M0.265), r 2=0.94. Use of such an exponential equation is theoretically justified and forces a more appropriate fit to a limited data set than polynomial equations. The exact relationship will vary from meter to meter, but will be similar and can be readily determined by empirical methods. The ability to rapidly determine leaf chlorophyll concentrations by use of the calibration method reported herein should be useful in studies on photosynthesis and crop physiology.",
"title": ""
},
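The calibration equation quoted in the abstract above, chlorophyll (μmol m^-2) = 10^(M^0.265), is straightforward to apply in code. The sketch below converts a SPAD-502 reading M into an area-based chlorophyll concentration; the example reading is purely illustrative, and the paper itself notes that the exact coefficients vary from meter to meter.

```python
# Convert a Minolta SPAD-502 reading M to chlorophyll (umol m^-2)
# using the exponential calibration chlorophyll = 10 ** (M ** 0.265).
def spad_to_chlorophyll(m: float, exponent: float = 0.265) -> float:
    if m <= 0:
        raise ValueError("SPAD reading must be positive")
    return 10.0 ** (m ** exponent)

# Illustrative reading, not a value from the paper's data set.
m = 45.0
print(f"SPAD {m} -> {spad_to_chlorophyll(m):.1f} umol m^-2")
```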
{
"docid": "f805061b807322d39729320b77c74e67",
"text": "In the past, self-infliction of sharp force was a classic form of suicide, while in modern times it is quite rare, constituting only 2% to 3% of all self-inflicted deaths. In Japan, the jigai (Japanese characters: see text) ritual is a traditional method of female suicide, carried out by cutting the jugular vein using a knife called a tantō. The jigai ritual is the feminine counterpart of seppuku (well-known as harakiri), the ritual suicide of samurai warriors, which was carried out by a deep slash into the abdomen. In contrast to seppuku, jigai can be performed without assistance, which was fundamental for seppuku.The case we describe here involves an unusual case of suicide in which the victim was a male devotee of Japanese culture and weapons. He was found dead in his bathtub with a deep slash in the right lateral-cervical area, having cut only the internal jugular vein with a tantō knife, exactly as specified by the jigai ritual.",
"title": ""
},
{
"docid": "2b9b7b218e112447fa4cdd72085d3916",
"text": "A 48-year-old female patient presented with gigantomastia. The sternal notch-nipple distance was 55 cm for the right breast and 50 cm for the left. Vertical mammaplasty based on the superior pedicle was performed. The resected tissue weighed 3400 g for the right breast and 2800 g for the left breast. The outcome was excellent with respect to symmetry, shape, size, residual scars, and sensitivity of the nipple-areola complex. Longer pedicles or larger resections were not found in the literature on vertical mammaplasty applications. In our opinion, by using the vertical mammaplasty technique in gigantomastia it is possible to achieve a well-projecting shape and preserve NAC sensitivity.",
"title": ""
},
{
"docid": "8f183ac262aac98c563bf9dcc69b1bf5",
"text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.",
"title": ""
},
{
"docid": "6bae81e837f4a498ae4c814608aac313",
"text": "person’s ability to focus on his or her primary task. Distractions occur especially in mobile environments, because walking, driving, or other real-world interactions often preoccupy the user. A pervasivecomputing environment that minimizes distraction must be context aware, and a pervasive-computing system must know the user’s state to accommodate his or her needs. Context-aware applications provide at least two fundamental services: spatial awareness and temporal awareness. Spatially aware applications consider a user’s relative and absolute position and orientation. Temporally aware applications consider the time schedules of public and private events. With an interdisciplinary class of Carnegie Mellon University (CMU) students, we developed and implemented a context-aware, pervasive-computing environment that minimizes distraction and facilitates collaborative design.",
"title": ""
},
{
"docid": "39b7ab83a6a0d75b1ec28c5ff485b98d",
"text": "Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texturebased techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion based moving object segmentation algorithm which exploits color as well as depth information using GAN to achieve more accuracy. Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and the depth. The comparison of our proposed algorithm with five state-of-the-art methods on publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios.",
"title": ""
},
{
"docid": "b3c83fc9495387f286ea83d00673b5b3",
"text": "A new walk compensation method for a pulsed time-of-flight rangefinder is suggested. The receiver channel operates without gain control using leading edge timing discrimination principle. The generated walk error is compensated for by measuring the pulse length and knowing the relation between the walk error and pulse length. The walk compensation is possible also at the range where the signal is clipped and where the compensation method by amplitude measurement is impossible. Based on the simulations walk error can be compensated within the dynamic range of 1:30 000.",
"title": ""
},
{
"docid": "da2f91adcb64786177733357a2cd0da7",
"text": "Object-oriented programming is as much a different way of designing programs as it is a different way of designing programming languages. This paper describes what it is like to design systems in Smalltalk. In particular, since a major motivation for object-oriented programming is software reuse, this paper describes how classes are developed so that they will be reusable.",
"title": ""
},
{
"docid": "f9570306e0d115d08cc6e69161955fcf",
"text": "Abstract—In this paper, two basic switching cells, P-cell and Ncell, are presented to investigate the topological nature of power electronics circuits. Both cells consist of a switching device and a diode and are the basic building blocks for almost all power electronics circuits. The mirror relationship of the P-cell and Ncell will be revealed. This paper describes the two basic switching cells and shows how all dc-dc converters, voltage source inverters, current source inverters, and multilevel converters are constituted from the two basic cells. Through these two basic cells, great insights about the topology of all power electronics circuits can be obtained for the construction and decomposition of existing power electronic circuits. New power conversion circuits can now be easily derived and invented.",
"title": ""
},
{
"docid": "ec9c15e543444e88cc5d636bf1f6e3b9",
"text": "Which ZSL method is more robust to GZSL? An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild Wei-Lun Chao*1, Soravit Changpinyo*1, Boqing Gong2, and Fei Sha1,3 1U. of Southern California, 2U. of Central Florida, 3U. of California, Los Angeles NSF IIS-1566511, 1065243, 1451412, 1513966, 1208500, CCF-1139148, USC Graduate Fellowship, a Google Research Award, an Alfred P. Sloan Research Fellowship and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.",
"title": ""
},
{
"docid": "b37064e74a2c88507eacb9062996a911",
"text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.",
"title": ""
}
] |
scidocsrr
|
ee66d297c6a3dadb53188dd878ddd815
|
Association of excessive smartphone use with psychological well-being among university students in Chiang Mai, Thailand
|
[
{
"docid": "decd813dfea894afdceb55b3ca087487",
"text": "BACKGROUND\nAddiction to smartphone usage is a common worldwide problem among adults, which might negatively affect their wellbeing. This study investigated the prevalence and factors associated with smartphone addiction and depression among a Middle Eastern population.\n\n\nMETHODS\nThis cross-sectional study was conducted in 2017 using a web-based questionnaire distributed via social media. Responses to the Smartphone Addiction Scale - Short version (10-items) were rated on a 6-point Likert scale, and their percentage mean score (PMS) was commuted. Responses to Beck's Depression Inventory (20-items) were summated (range 0-60); their mean score (MS) was commuted and categorized. Higher scores indicated higher levels of addiction and depression. Factors associated with these outcomes were identified using descriptive and regression analyses. Statistical significance was set at P < 0.05.\n\n\nRESULTS\nComplete questionnaires were 935/1120 (83.5%), of which 619 (66.2%) were females and 316 (33.8%) were males. The mean ± standard deviation of their age was 31.7 ± 11 years. Majority of participants obtained university education 766 (81.9%), while 169 (18.1%) had school education. The PMS of addiction was 50.2 ± 20.3, and MS of depression was 13.6 ± 10.0. A significant positive linear relationship was present between smart phone addiction and depression (y = 39.2 + 0.8×; P < 0.001). Significantly higher smartphone addiction scores were associated with younger age users, (β = - 0.203, adj. P = 0.004). Factors associated with higher depression scores were school educated users (β = - 2.03, adj. P = 0.01) compared to the university educated group and users with higher smart phone addiction scores (β =0.194, adj. P < 0.001).\n\n\nCONCLUSIONS\nThe positive correlation between smartphone addiction and depression is alarming. Reasonable usage of smart phones is advised, especially among younger adults and less educated users who could be at higher risk of depression.",
"title": ""
}
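The abstract above reports a linear relationship between smartphone addiction and depression, y = 39.2 + 0.8x. Plugging the reported mean depression score (13.6) into that line reproduces the reported mean addiction score (about 50.1 versus 50.2), which suggests y is the addiction PMS and x the depression score; the sketch below makes that check explicit. This reading of the variables is an inference from the reported means, not something the abstract states.

```python
# Sanity-check which way round the reported regression y = 39.2 + 0.8x runs,
# using the sample means quoted in the abstract.
def predicted_addiction_pms(depression_score: float) -> float:
    return 39.2 + 0.8 * depression_score

mean_depression = 13.6   # Beck Depression Inventory mean score from the abstract
mean_addiction = 50.2    # smartphone addiction percentage mean score from the abstract

pred = predicted_addiction_pms(mean_depression)
print(f"predicted addiction PMS at mean depression: {pred:.1f} (reported mean: {mean_addiction})")
```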
] |
[
{
"docid": "56a23ce1028b432c9c6361b2fc4be64c",
"text": "Training machine learning models at scale is a popular workload for distributed data flow systems. However, as these systems were originally built to fulfill quite different requirements it remains an open question how effectively they actually perform for ML workloads. In this paper we argue that benchmarking of large scale ML systems should consider state of the art, single machine libraries as baselines and sketch such a benchmark for distributed data flow systems. We present an experimental evaluation of a representative problem for XGBoost, LightGBM and Vowpal Wabbit and compare them to Apache Spark MLlib with respect to both: runtime and prediction quality. Our results indicate that while being able to robustly scale with increasing data set size, current generation data flow systems are surprisingly inefficient at training machine learning models at need substantial resources to come within reach of the performance of single machine libraries.",
"title": ""
},
{
"docid": "e249d8d00610ef1e5e48fdc39b63c803",
"text": "With the increasing availability of metropolitan transportation data, such as those from vehicle GPSs (Global Positioning Systems) and road-side sensors, it becomes viable for authorities, operators, as well as individuals to analyze the data for a better understanding of the transportation system and possibly improved utilization and planning of the system. We report our experience in building the VAST (Visual Analytics for Smart Transportation) system. Our key observation is that metropolitan transportation data are inherently visual as they are spatio-temporal around road networks. Therefore, we visualize traffic data together with digital maps and support analytical queries through this interactive visual interface. As a case study, we demonstrate VAST on real-world taxi GPS and meter data sets from 15, 000 taxis running two months in a Chinese city of over 10 million population. We discuss the technical challenges in data cleaning, storage, visualization, and query processing, and offer our first-hand lessons learned from developing the system.",
"title": ""
},
{
"docid": "725e826f13a17fe73369e85733431e32",
"text": "This study aims to explore the determinants influencing usage intention in mobile social media from the user motivation and the Theory of Planned Behavior (TPB) perspectives. Based on TPB, this study added three motivations, namely entertainment, sociality, and information, into the TPB model, and further examined the moderating effect of posters and lurkers in the relationships of the proposed model. A structural equation modeling was used and 468 LINE users in Taiwan were investigated. The results revealed that entertainment, sociality, and information are positively associated with behavioral attitude. Moreover, behavioral attitude, subjective norms, and perceived behavioral control are positively associated with usage intention. Furthermore, posters likely post messages on the LINE because of entertainment, sociality, and information, but they are not significantly subject to subjective norms. In contrast, lurkers tend to read, not write messages on the LINE because of entertainment and information rather than sociality and perceived behavioral control.",
"title": ""
},
{
"docid": "acab22843b765af574dacaa3dd594853",
"text": "PURPOSE OF REVIEW\nTo look critically at recent research articles that pertain to children and adolescents who present with genital injuries.\n\n\nRECENT FINDINGS\nEmerging evidence supports links to long-term psychological sequelae of child sexual abuse. Parents should be educated to instruct their children regarding types of child abuse and prevention. 'Medicalization' of female genital mutilation (FGM) by health providers, including 'cutting or pricking', is condemned by international organizations.\n\n\nSUMMARY\nGenital injuries whether accidental or intentional need to be reported with standardized terminology to allow for comparisons between reported outcomes. Motor vehicle accidents associated with pelvic fractures may result in bladder or urethral trauma. Adverse long-term psychosocial behaviors may be sequelae of child sexual abuse. FGM is willful damage to healthy organs for nontherapeutic reasons, and a form of violence against girls and women. Healthcare providers should counsel women suffering from the consequences of FGM, advise them to seek care, counsel them to resist reinfibulation, and prevent this procedure from being performed on their daughters.",
"title": ""
},
{
"docid": "38a174189aa2fceadfac4badb8a8b96e",
"text": "An architectural heritage object carries heterogeneous and multi-layered information beyond physical characteristics. It requires an integrated representation of various types of information for understanding and management prior to the decision-making process of conservation. This requirement is a twofold manner consisting of representation and management processes. There exists a variety of approaches for representation of heritage objects in digital three-dimensional (3D) environment, but the selection of the appropriate one according to the needs is crucial. On one hand, there have been recently great attempts to adopt Building Information Modeling (BIM) for historical buildings. Nevertheless, the related works in the topic focus mainly on pre-processing of data, such as the integration of born-digital material into a BIM environment and the creation of parametric objects according to historical building characteristics. As the information management of a historical building requires enhanced attribute management and integration of different datasets, further investigation on the BIM capabilities in management terms is crucial. On the other hand, Geographical Information Systems (GIS) have great potentials in exploring spatial relationships, but their potential in 3D representation is still somehow limited. The paper reviews and evaluates the roles of BIM and GIS, highlighting their advantages and disadvantages for integration, retrieval and management of heterogeneous data in the context of historical buildings.",
"title": ""
},
{
"docid": "8436d04875e6a0e350aabeeb1c5a691b",
"text": "Interleaving techniques are widely used to reduce input/output ripples, to increase the efficiency and to increase the power capacity of the boost converters. This paper presents an analysis, design and implementation of a high-power multileg interleaved DC/DC boost converter with a digital signal processor (DSP) based controller. This research focuses on non-isolated DC/DC converter that interfaces the fuel cell to the powertrain of the hybrid electric vehicles. In this paper, two-phase interleaved boost converter (IBC) with digital phase-shift control scheme is proposed in order to reduce the input current ripples, to reduce the output voltage ripples and to reduce the size of passive components with high efficiency for high power applications. The digital control based on DSP is proposed to solve the associated synchronization problem with interleaving converters. In addition, the real time workshop (RTW) is used for automatic real-time code generation. The proposed converter is compared with other topologies, such as conventional boost converter (BC) and multi-device boost converter (MDBC) in order to examine its performance. Moreover, a generalized small-signal model with complete parameters of these DC/DC converters is derived. The PWM DC/DC converter topologies and their control are simulated and investigated by using MATLAB/SIMULINK. Experimentally, a dual-loop average current control implemented in TMS320F2808 DSP is employed to achieve the fast transient response. Furthermore, the simulation and experimental results are provided.",
"title": ""
},
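The abstract above argues that two-phase interleaving with a half-period phase shift reduces input current ripple. A quick way to see this numerically is to superpose two triangular inductor-current waveforms shifted by half a switching period and compare the peak-to-peak ripple of the sum with that of a single phase, as the sketch below does. The duty cycle and per-phase ripple amplitude are illustrative assumptions, not values from the converter in the paper.

```python
import numpy as np

def inductor_ripple(t, T, duty, ripple_pp):
    """AC component of a triangular inductor current (zero mean, peak-to-peak ripple_pp)."""
    tau = np.mod(t, T)
    rising = tau < duty * T
    up = -ripple_pp / 2 + ripple_pp * tau / (duty * T)
    down = ripple_pp / 2 - ripple_pp * (tau - duty * T) / ((1 - duty) * T)
    return np.where(rising, up, down)

T = 1.0          # switching period (normalised)
duty = 0.4       # illustrative duty cycle
ripple_pp = 1.0  # per-phase peak-to-peak ripple (normalised)
t = np.linspace(0, 2 * T, 4001)

phase_a = inductor_ripple(t, T, duty, ripple_pp)
phase_b = inductor_ripple(t - T / 2, T, duty, ripple_pp)   # half-period (180 degree) shift
total = phase_a + phase_b

print("single-phase ripple (pk-pk):", round(phase_a.max() - phase_a.min(), 3))
print("two-phase input ripple (pk-pk):", round(total.max() - total.min(), 3))
```

At a duty cycle of 0.5 the two ripples cancel completely; away from 0.5 the cancellation is partial, which is the behaviour the interleaved topology relies on.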
{
"docid": "e54a6ff961fe04d8d5c7700077ae1979",
"text": "Extracting meaningful relationships with semantic significance from biomedical literature is often a challenging task. BioCreative V track4 challenge for the first time has organized a comprehensive shared task to test the robustness of the text-mining algorithms in extracting semantically meaningful assertions from the evidence statement in biomedical text. In this work, we tested the ability of a rule-based semantic parser to extract Biological Expression Language (BEL) statements from evidence sentences culled out of biomedical literature as part of BioCreative V Track4 challenge. The system achieved an overall best F-measure of 21.29% in extracting the complete BEL statement. For relation extraction, the system achieved an F-measure of 65.13% on test data set. Our system achieved the best performance in five of the six criteria that was adopted for evaluation by the task organizers. Lack of ability to derive semantic inferences, limitation in the rule sets to map the textual extractions to BEL function were some of the reasons for low performance in extracting the complete BEL statement. Post shared task we also evaluated the impact of differential NER components on the ability to extract BEL statements on the test data sets besides making a single change in the rule sets that translate relation extractions into a BEL statement. There is a marked improvement by over 20% in the overall performance of the BELMiner's capability to extract BEL statement on the test set. The system is available as a REST-API at http://54.146.11.205:8484/BELXtractor/finder/.\n\n\nDatabase URL\nhttp://54.146.11.205:8484/BELXtractor/finder/.",
"title": ""
},
{
"docid": "f9c6f688bc93df9966ed425720045aea",
"text": "The main contribution of this work is a new paradigm for image representation and image compression. We describe a new multilayered representation technique for images. An image is parsed into a superposition of coherent layers: piecewise smooth regions layer, textures layer, etc. The multilayered decomposition algorithm consists in a cascade of compressions applied successively to the image itself and to the residuals that resulted from the previous compressions. During each iteration of the algorithm, we code the residual part in a lossy way: we only retain the most significant structures of the residual part, which results in a sparse representation. Each layer is encoded independently with a different transform, or basis, at a different bitrate, and the combination of the compressed layers can always be reconstructed in a meaningful way. The strength of the multilayer approach comes from the fact that different sets of basis functions complement each others: some of the basis functions will give reasonable account of the large trend of the data, while others will catch the local transients, or the oscillatory patterns. This multilayered representation has a lot of beautiful applications in image understanding, and image and video coding. We have implemented the algorithm and we have studied its capabilities.",
"title": ""
},
{
"docid": "b3d0d3c21c4596a4be7f212285a273d1",
"text": "The medial temporal lobe (MTL) plays a crucial role in supporting memory for events, but the functional organization of regions in the MTL remains controversial, especially regarding the extent to which different subregions support recognition based on familiarity or recollection. Here we review results from functional neuroimaging studies showing that, whereas activity in the hippocampus and posterior parahippocampal gyrus is disproportionately associated with recollection, activity in the anterior parahippocampal gyrus is disproportionately associated with familiarity. The results are consistent with the idea that the parahippocampal cortex (located in the posterior parahippocampal gyrus) supports recollection by encoding and retrieving contextual information, whereas the hippocampus supports recollection by associating item and context information. By contrast, perirhinal cortex (located in the anterior parahippocampal gyrus) supports familiarity by encoding and retrieving specific item information. We discuss the implications of a 'binding of item and context' (BIC) model for studies of recognition memory. This model argues that there is no simple mapping between MTL regions and recollection and familiarity, but rather that the involvement of MTL regions in these processes depends on the specific demands of the task and the type of information involved. We highlight several predictions for future imaging studies that follow from the BIC model.",
"title": ""
},
{
"docid": "04476184ca103b9d8012827615fc84a5",
"text": "In order to investigate the local filtering behavior of the Retinex model, we propose a new implementation in which paths are replaced by 2-D pixel sprays, hence the name \"random spray Retinex.\" A peculiar feature of this implementation is the way its parameters can be controlled to perform spatial investigation. The parameters' tuning is accomplished by an unsupervised method based on quantitative measures. This procedure has been validated via user panel tests. Furthermore, the spray approach has faster performances than the path-wise one. Tests and results are presented and discussed",
"title": ""
},
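The abstract above replaces Retinex paths with 2-D pixel sprays. The sketch below shows the core per-pixel operation in a deliberately simplified, single-channel form: for each target pixel a random spray of surrounding pixels is sampled, and the pixel is rescaled by the maximum intensity found in the spray. The spray size, the radius distribution, and the use of a single spray per pixel are illustrative simplifications; the actual algorithm averages over several sprays and includes the parameter tuning discussed in the paper.

```python
import numpy as np

def random_spray_retinex(img, n_points=100, rng=None):
    """Single-spray, single-channel sketch of spray-based Retinex (illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    out = np.empty_like(img, dtype=np.float64)
    max_radius = max(h, w)
    for y in range(h):
        for x in range(w):
            # Radially distributed random offsets around the target pixel.
            r = rng.uniform(0, max_radius, n_points)
            theta = rng.uniform(0, 2 * np.pi, n_points)
            ys = np.clip((y + r * np.sin(theta)).astype(int), 0, h - 1)
            xs = np.clip((x + r * np.cos(theta)).astype(int), 0, w - 1)
            local_max = max(img[ys, xs].max(), img[y, x], 1e-6)
            out[y, x] = img[y, x] / local_max   # lightness as ratio to local spray maximum
    return out

# Tiny synthetic example: a bright patch on a dark background.
img = np.full((32, 32), 0.2)
img[8:16, 8:16] = 0.9
print(random_spray_retinex(img)[12, 12])
```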
{
"docid": "e45fff410b042234cc6fda764a982532",
"text": "The fisheye camera has been widely studied in the field of robot vision since it can capture a wide view of the scene at one time. However, serious image distortion handers it from being widely used. To remedy this, this paper proposes an improved fisheye lens calibration and distortion correction method. First, an improved automatic detection of checkerboards is presented to avoid the original constraint and user intervention that usually existed in the conventional methods. A state-of-the-art corner detection method is evaluated and its strengths and shortcomings are analyzed. An adaptively automatic corner detection algorithm is implemented to overcome the shortcomings. Then, a precise mathematical model based on the law of fisheye lens imaging is modeled, which assumes that the imaging function can be described by a Taylor series expansion, followed by a nonlinear refinement based on the maximum likelihood criterion. With the proposed corner detection and mathematical model of fisheye imaging, both intrinsic and external parameters of the fisheye camera can be correctly calibrated. Finally, the radial distortion of the fisheye image can be corrected by incorporating the calibrated parameters. Experimental results validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "5527521d567290192ea26faeb6e7908c",
"text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. The goal of this paper is to provide a systematic review of MKL methods, which have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different cases of HSI classification cases. Finally, we discuss the future direction and trends of research in this area.",
"title": ""
},
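The abstract above describes MKL approaches that build an adaptive kernel as an optimally weighted sum of fixed base kernels. The sketch below shows the basic construction: several RBF kernels with different bandwidths (standing in for spectral and spatial kernels) are combined with fixed weights and passed to a precomputed-kernel SVM. The weights here are hand-set for illustration, whereas an MKL solver would learn them, and the toy features are not hyperspectral data.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(50, 10))

gammas = [0.01, 0.1, 1.0]     # base kernels, e.g. different spectral/spatial bandwidths
weights = [0.5, 0.3, 0.2]     # hand-set here; learned by an MKL solver in practice

def combined_kernel(A, B):
    """Weighted sum of fixed RBF base kernels between row sets A and B."""
    return sum(w * rbf_kernel(A, B, gamma=g) for w, g in zip(weights, gammas))

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(combined_kernel(X_train, X_train), y_train)
pred = clf.predict(combined_kernel(X_test, X_train))
print(pred[:10])
```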
{
"docid": "5441fd0abacc609d625f2570491c2e26",
"text": "number of parts or modules within the system; rather, it scales with the number of possible interactions between parts and modules. A methodology for evolving the control systems of autonomous robots has not yet been well established. In this paper we will show different examples of applications of evolutionary robotics to real robots by describing three different approaches to develop neural controllers for mobile robots. In all the experiments described real robots are involved and are indeed the ultimate means of evaluating the success and the results of the procedures employed. Each approach will be compared with the others and the relative advantages and drawbacks will be discussed. Last, but not least, we will try to tackle a few important issues related to the design of the hardware and of the evolutionary conditions in which the control system of the autonomous agent should evolve. (b) autonomous robots interact with an external environment and, therefore, the way in which they behave in the environment determines the stimuli they will receive in input (Parisi, Cecconi, and Nolfi, 1990). Each motor action has two different effects: (1) it determines how well the system performs with respect to the given task; (2) it determines the next input stimuli which will be perceived by the system (this last point strongly affects the success or the failure of a sequence of actions). Determining the correct motor action that the system should perform in order to experience good input stimuli, is thus extremely difficult because any motor action may have long term consequences. Also, the choice of a given motor action is often the result of the previous sequence of actions. A final source of uncertainty in the design of the system is the fact that often the interaction between the system and the environment is not perfectly known in advance.",
"title": ""
},
{
"docid": "71260dbe5738afd4285f8e1e0e0571ad",
"text": "Software tools have been used in software development for a long time now. They are used for, among other things, performance analysis, testing and verification, debugging and building applications. Software tools can be very simple and lightweight, e.g. linkers, or very large and complex, e.g. computer-assisted software engineering (CASE) tools and integrated development environments (IDEs). Some tools support particular phases of the project cycle while others can be used with a speicfic software development model or technology. Some aspects of software development, like risk management, are done throughout the whole project from inception to commissioning. The aim of this paper is to demonstrate the need for an intelligent risk assessment and management tool for both agile or traditional (or their combination) methods in software development. The authors propose a model, whose development is subject of further research, which can be investigated for use in developing intelligent risk management tools",
"title": ""
},
{
"docid": "8bc7698e1c8e4ef835f76a7a22128d68",
"text": "The parallel data accesses inherent to large-scale data-intensive scientific computing require that data servers handle very high I/O concurrency. Concurrent requests from different processes or programs to hard disk can cause disk head thrashing between different disk regions, resulting in unacceptably low I/O performance. Current storage systems either rely on the disk scheduler at each data server, or use SSD as storage, to minimize this negative performance effect. However, the ability of the scheduler to alleviate this problem by scheduling requests in memory is limited by concerns such as long disk access times, and potential loss of dirty data with system failure. Meanwhile, SSD is too expensive to be widely used as the major storage device in the HPC environment. We propose iTransformer, a scheme that employs a small SSD to schedule requests for the data on disk. Being less space constrained than with more expensive DRAM, iTransformer can buffer larger amounts of dirty data before writing it back to the disk, or prefetch a larger volume of data in a batch into the SSD. In both cases high disk efficiency can be maintained even for concurrent requests. Furthermore, the scheme allows the scheduling of requests in the background to hide the cost of random disk access behind serving process requests. Finally, as a non-volatile memory, concerns about the quantity of dirty data are obviated. We have implemented iTransformer in the Linux kernel and tested it on a large cluster running PVFS2. Our experiments show that iTransformer can improve the I/O throughput of the cluster by 35% on average for MPI/IO benchmarks of various data access patterns.",
"title": ""
},
{
"docid": "7cd992aec08167cb16ea1192a511f9aa",
"text": "In this thesis, we will present an Echo State Network (ESN) to investigate hierarchical cognitive control, one of the functions of Prefrontal Cortex (PFC). This ESN is designed with the intention to implement it as a robot controller, making it useful for biologically inspired robot control and for embodied and embedded PFC research. We will apply the ESN to a n-back task and a Wisconsin Card Sorting task to confirm the hypothesis that topological mapping of temporal and policy abstraction over the PFC can be explained by the effects of two requirements: a better preservation of information when information is processed in different areas, versus a better integration of information when information is processed in a single area.",
"title": ""
},
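The abstract above builds on an Echo State Network. For readers unfamiliar with the model, the sketch below shows the two ingredients an ESN needs: a fixed, randomly connected reservoir whose state is updated through a tanh nonlinearity, and a linear readout trained by ridge regression on the collected states. The reservoir size, spectral-radius scaling, and toy delay task are illustrative choices, not the configuration used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random input and reservoir weights; rescale W to a spectral radius below 1.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)   # leak-free reservoir update
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by one step.
u = rng.uniform(-1, 1, 500)
target = np.roll(u, 1)
S = run_reservoir(u)

# Ridge-regression readout, skipping a short washout period.
washout, ridge = 50, 1e-6
A, y = S[washout:], target[washout:]
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ y)
pred = S @ W_out
print("train MSE:", np.mean((pred[washout:] - y) ** 2))
```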
{
"docid": "262302228a88025660c0add90d500518",
"text": "Social network analysis provides meaningful information about behavior of network members that can be used for diverse applications such as classification, link prediction. However, network analysis is computationally expensive because of feature learning for different applications. In recent years, many researches have focused on feature learning methods in social networks. Network embedding represents the network in a lower dimensional representation space with the same properties which presents a compressed representation of the network. In this paper, we introduce a novel algorithm named “CARE” for network embedding that can be used for different types of networks including weighted, directed and complex. Current methods try to preserve local neighborhood information of nodes, whereas the proposed method utilizes local neighborhood and community information of network nodes to cover both local and global structure of social networks. CARE builds customized paths, which are consisted of local and global structure of network nodes, as a basis for network embedding and uses the Skip-gram model to learn representation vector of nodes. Subsequently, stochastic gradient descent is applied to optimize our objective function and learn the final representation of nodes. Our method can be scalable when new nodes are appended to network without information loss. Parallelize generation of customized random walks is also used for speeding up CARE. We evaluate the performance of CARE on multi label classification and link prediction tasks. Experimental results on various networks indicate that the proposed method outperforms others in both Micro and Macro-f1 measures for different size of training data.",
"title": ""
},
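The abstract above embeds nodes by feeding customized random walks into the Skip-gram model. The sketch below shows the generic walk-plus-Skip-gram pipeline (in the style of DeepWalk) on a toy graph, using gensim's Word2Vec as the Skip-gram learner; it does not implement CARE's community-aware path construction, and the walk length, walk count, and embedding size are illustrative. The gensim 4.x API (vector_size, sg=1) is assumed.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

G = nx.karate_club_graph()          # toy graph standing in for a social network
random.seed(0)

def random_walk(g, start, length=20):
    walk = [start]
    for _ in range(length - 1):
        neighbours = list(g.neighbors(walk[-1]))
        if not neighbours:
            break
        walk.append(random.choice(neighbours))
    return [str(n) for n in walk]

# CARE builds *customized* walks from local and community structure;
# plain uniform walks are used here purely for illustration.
walks = [random_walk(G, node) for node in G.nodes() for _ in range(10)]

model = Word2Vec(sentences=walks, vector_size=64, window=5, sg=1,
                 min_count=0, workers=1, epochs=5, seed=0)
print(model.wv["0"][:5])            # embedding vector of node 0
```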
{
"docid": "0c4de7ce6574bb22d3cb0b9a7f3d5498",
"text": "Purpose – The purpose of this paper is to attempts to provide further insight into IS adoption by investigating how 12 factors within the technology-organization-environment framework explain smalland medium-sized enterprises’ (SMEs) adoption of enterprise resource planning (ERP) software. Design/methodology/approach – The approach for data collection was questionnaire survey involving executives of SMEs drawn from six fast service enterprises with strong operations in Port Harcourt. The mode of sampling was purposive and snow ball and analysis involves logistic regression test; the likelihood ratios, Hosmer and Lemeshow’s goodness of fit, and Nagelkerke’s R provided the necessary lenses. Findings – The 12 hypothesized relationships were supported with each factor differing in its statistical coefficient and some bearing negative values. ICT infrastructures, technical know-how, perceived compatibility, perceived values, security, and firm’s size were found statistically significant adoption determinants. Although, scope of business operations, trading partners’ readiness, demographic composition, subjective norms, external supports, and competitive pressures were equally critical but their negative coefficients suggest they pose less of an obstacle to adopters than to non-adopters. Thus, adoption of ERP by SMEs is more driven by technological factors than by organizational and environmental factors. Research limitations/implications – The study is limited by its scope of data collection and phases, therefore extended data are needed to apply the findings to other sectors/industries and to factor in the implementation and post-adoption phases in order to forge a more integrated and holistic adoption framework. Practical implications – The model may be used by IS vendors to make investment decisions, to meet customers’ needs, and to craft informed marketing programs that would appeal to actual and potential adopters and cause them to progress in the customer loyalty ladder. Originality/value – The paper contributes to the growing research on IS innovations’ adoption by using factors within the T-O-E framework to explains SMEs’ adoption of ERP.",
"title": ""
},
{
"docid": "c1b5b1dcbb3e7ff17ea6ad125bbc4b4b",
"text": "This article focuses on a new type of wireless devices in the domain between RFIDs and sensor networks—Energy-Harvesting Active Networked Tags (EnHANTs). Future EnHANTs will be small, flexible, and self-powered devices that can be attached to objects that are traditionally not networked (e.g., books, furniture, toys, produce, and clothing). Therefore, they will provide the infrastructure for various tracking applications and can serve as one of the enablers for the Internet of Things. We present the design considerations for the EnHANT prototypes, developed over the past 4 years. The prototypes harvest indoor light energy using custom organic solar cells, communicate and form multihop networks using ultra-low-power Ultra-Wideband Impulse Radio (UWB-IR) transceivers, and dynamically adapt their communications and networking patterns to the energy harvesting and battery states. We describe a small-scale testbed that uniquely allows evaluating different algorithms with trace-based light energy inputs. Then, we experimentally evaluate the performance of different energy-harvesting adaptive policies with organic solar cells and UWB-IR transceivers. Finally, we discuss the lessons learned during the prototype and testbed design process.",
"title": ""
},
{
"docid": "c83db87d7ac59e1faf75b408953e1324",
"text": "PURPOSE\nThis project was conducted to obtain information about reading problems of adults with traumatic brain injury (TBI) with mild-to-moderate cognitive impairments and to investigate how these readers respond to reading comprehension strategy prompts integrated into digital versions of text.\n\n\nMETHOD\nParticipants from 2 groups, adults with TBI (n = 15) and matched controls (n = 15), read 4 different 500-word expository science passages linked to either a strategy prompt condition or a no-strategy prompt condition. The participants' reading comprehension was evaluated using sentence verification and free recall tasks.\n\n\nRESULTS\nThe TBI and control groups exhibited significant differences on 2 of the 5 reading comprehension measures: paraphrase statements on a sentence verification task and communication units on a free recall task. Unexpected group differences were noted on the participants' prerequisite reading skills. For the within-group comparison, participants showed significantly higher reading comprehension scores on 2 free recall measures: words per communication unit and type-token ratio. There were no significant interactions.\n\n\nCONCLUSION\nThe results help to elucidate the nature of reading comprehension in adults with TBI with mild-to-moderate cognitive impairments and endorse further evaluation of reading comprehension strategies as a potential intervention option for these individuals. Future research is needed to better understand how individual differences influence a person's reading and response to intervention.",
"title": ""
}
] |
scidocsrr
|
7963dd8fd823eed13f91f78f3777db40
|
An Efficient, Electrically Small Antenna Designed for VHF and UHF Applications
|
[
{
"docid": "1ebc62dc8dfeaf9c547e7fe3d4d21ae7",
"text": "Electrically small antennas are generally presumed to exhibit high impedance mismatch (high VSWR), low efficiency, high quality factor (Q); and, therefore, narrow operating bandwidth. For an electric or magnetic dipole antenna, there is a fundamental lower bound for the quality factor that is determined as a function of the antenna's occupied physical volume. In this paper, the quality factor of a resonant, electrically small electric dipole is minimized by allowing the antenna geometry to utilize the occupied spherical volume to the greatest extent possible. A self-resonant, electrically small electric dipole antenna is presented that exhibits an impedance near 50 Ohms, an efficiency in excess of 95% and a quality factor that is within 1.5 times the fundamental lower bound at a value of ka less than 0.27. Through an arrangement of the antenna's wire geometry, the electrically small dipole's polarization is converted from linear to elliptical (with an axial ratio of 3 dB), resulting in a further reduction in the quality factor. The elliptically polarized, electrically small antenna exhibits an impedance near 50 Ohms, an efficiency in excess of 95% and it has an omnidirectional, figure-eight radiation pattern.",
"title": ""
}
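The abstract above reports a quality factor within 1.5 times the fundamental lower bound at ka below 0.27. A commonly used closed form for that bound, McLean's result for a linearly polarized, single-mode electrically small antenna, is Q_lb = 1/(ka)^3 + 1/(ka); the sketch below evaluates it at ka = 0.27 and scales it by the reported factor of 1.5. The choice of this particular formula is an assumption drawn from the wider literature, not something the abstract states, so treat the numbers as a rough consistency check rather than the paper's own figures.

```python
import math

def mclean_q_bound(ka: float) -> float:
    """McLean lower bound on Q for a linearly polarized electrically small antenna."""
    return 1.0 / ka**3 + 1.0 / ka

ka = 0.27
q_lb = mclean_q_bound(ka)
# Bandwidth scales inversely with Q, so a lower bound on Q caps the achievable bandwidth.
print(f"ka = {ka}: Q lower bound ~ {q_lb:.1f}")
print(f"1.5 x bound (reported performance level) ~ {1.5 * q_lb:.1f}")
```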
] |
[
{
"docid": "f857000c14d894b7d487556436b19cb0",
"text": "Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)–(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small cost, to the standard file formats used by analysis tools, such as NetCDF and HDF-5. Concerning (3), associated with BP are efficient methods for data characterization, which compute attributes that can be used to identify data sets without having to inspect or analyze the entire data contents of large files.",
"title": ""
},
{
"docid": "d0c4997c611d8759805d33cf1ad9eef1",
"text": "The automatic evaluation of text-based assessment items, such as short answers or essays, is an open and important research challenge. In this paper, we compare several features for the classification of short open-ended responses to questions related to a large first-year health sciences course. These features include a) traditional n-gram models; b) entity URIs (Uniform Resource Identifier) and c) entity mentions extracted using a semantic annotation API; d) entity mention embeddings based on GloVe, and e) entity URI embeddings extracted from Wikipedia. These features are used in combination with classification algorithms to discriminate correct answers from incorrect ones. Our results show that, on average, n-gram features performed the best in terms of precision and entity mentions in terms of f1-score. Similarly, in terms of accuracy, entity mentions and n-gram features performed the best. Finally, features based on dense vector representations such as entity embeddings and mention embeddings obtained the best f1-score for predicting correct answers.",
"title": ""
},
{
"docid": "7d285ca842be3d85d218dd70f851194a",
"text": "CONTEXT\nThe Atkins diet books have sold more than 45 million copies over 40 years, and in the obesity epidemic this diet and accompanying Atkins food products are popular. The diet claims to be effective at producing weight loss despite ad-libitum consumption of fatty meat, butter, and other high-fat dairy products, restricting only the intake of carbohydrates to under 30 g a day. Low-carbohydrate diets have been regarded as fad diets, but recent research questions this view.\n\n\nSTARTING POINT\nA systematic review of low-carbohydrate diets found that the weight loss achieved is associated with the duration of the diet and restriction of energy intake, but not with restriction of carbohydrates. Two groups have reported longer-term randomised studies that compared instruction in the low-carbohydrate diet with a low-fat calorie-reduced diet in obese patients (N Engl J Med 2003; 348: 2082-90; Ann Intern Med 2004; 140: 778-85). Both trials showed better weight loss on the low-carbohydrate diet after 6 months, but no difference after 12 months. WHERE NEXT?: The apparent paradox that ad-libitum intake of high-fat foods produces weight loss might be due to severe restriction of carbohydrate depleting glycogen stores, leading to excretion of bound water, the ketogenic nature of the diet being appetite suppressing, the high protein-content being highly satiating and reducing spontaneous food intake, or limited food choices leading to decreased energy intake. Long-term studies are needed to measure changes in nutritional status and body composition during the low-carbohydrate diet, and to assess fasting and postprandial cardiovascular risk factors and adverse effects. Without that information, low-carbohydrate diets cannot be recommended.",
"title": ""
},
{
"docid": "cd230b3fa34267564380bdd0abe55c74",
"text": "Healthcare data are a valuable source of healthcare intelligence. Sharing of healthcare data is one essential step to make healthcare system smarter and improve the quality of healthcare service. Healthcare data, one personal asset of patient, should be owned and controlled by patient, instead of being scattered in different healthcare systems, which prevents data sharing and puts patient privacy at risks. Blockchain is demonstrated in the financial field that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we proposed an App (called Healthcare Data Gateway (HGD)) architecture based on blockchain to enable patient to own, control and share their own data easily and securely without violating privacy, which provides a new potential way to improve the intelligence of healthcare systems while keeping patient data private. Our proposed purpose-centric access model ensures patient own and control their healthcare data; simple unified Indicator-Centric Schema (ICS) makes it possible to organize all kinds of personal healthcare data practically and easily. We also point out that MPC (Secure Multi-Party Computing) is one promising solution to enable untrusted third-party to conduct computation over patient data without violating privacy.",
"title": ""
},
{
"docid": "b9a750307d3bed3b1c62047325d8857b",
"text": "OBJECTIVE\nTo assess predictors of CVD mortality among men with and without diabetes and to assess the independent effect of diabetes on the risk of CVD death.\n\n\nRESEARCH DESIGN AND METHODS\nParticipants in this cohort study were screened from 1973 to 1975; vital status has been ascertained over an average of 12 yr of follow-up (range 11-13 yr). Participants were 347,978 men aged 35-57 yr, screened in 20 centers for MRFIT. The outcome measure was CVD mortality.\n\n\nRESULTS\nAmong 5163 men who reported taking medication for diabetes, 1092 deaths (603 CVD deaths) occurred in an average of 12 yr of follow-up. Among 342,815 men not taking medication for diabetes, 20,867 deaths were identified, 8965 ascribed to CVD. Absolute risk of CVD death was much higher for diabetic than nondiabetic men of every age stratum, ethnic background, and risk factor level--overall three times higher, with adjustment for age, race, income, serum cholesterol level, sBP, and reported number of cigarettes/day (P < 0.0001). For men both with and without diabetes, serum cholesterol level, sBP, and cigarette smoking were significant predictors of CVD mortality. For diabetic men with higher values for each risk factor and their combinations, absolute risk of CVD death increased more steeply than for nondiabetic men, so that absolute excess risk for diabetic men was progressively greater than for nondiabetic men with higher risk factor levels.\n\n\nCONCLUSIONS\nThese findings emphasize the importance of rigorous sustained intervention in people with diabetes to control blood pressure, lower serum cholesterol, and abolish cigarette smoking, and the importance of considering nutritional-hygienic approaches on a mass scale to prevent diabetes.",
"title": ""
},
{
"docid": "ac0119255806976213d61029247b14f1",
"text": "Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. We conducted a controlled experiment to test the effects of display and scenario properties on training effectiveness for a visual scanning task in a simulated urban environment. The experiment varied the levels of field of view and visual complexity during a training phase and then evaluated scanning performance with the simulator's highest levels of fidelity and scene complexity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual complexity significantly affected target detection during training; higher field of view led to better performance and higher visual complexity worsened performance. Additionally, adherence to the prescribed visual scanning strategy during assessment was best when the level of visual complexity during training matched that of the assessment conditions, providing evidence that similar visual complexity was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training-evaluation in a more realistic setting may be necessary.",
"title": ""
},
{
"docid": "b7f53aa4b1e68f05bee2205dd55b975a",
"text": "We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from the data generated by another policy(ies). In particular, we focus on the doubly robust (DR) estimators that consist of an importance sampling (IS) component and a performance model, and utilize the low (or zero) bias of IS and low variance of the model at the same time. Although the accuracy of the model has a huge impact on the overall performance of DR, most of the work on using the DR estimators in OPE has been focused on improving the IS part, and not much on how to learn the model. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameter by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients w.r.t. the model parameters can be estimated from the samples, and propose methods to efficiently minimize the variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR in bandits and RL benchmark problems, and compare its performance with the existing methods.",
"title": ""
},
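The passage above builds on the standard doubly robust (DR) estimator, which combines a learned reward model with an importance-sampling correction. As a rough, illustrative sketch of that estimator for contextual bandits (not the paper's MRDR training objective itself), assuming logged contexts, actions, rewards, logging propensities, per-context target-policy action probabilities, and a hypothetical fitted `reward_model(x, a)`:

```python
import numpy as np

def doubly_robust_value(contexts, actions, rewards, logging_probs,
                        target_probs, reward_model):
    """Illustrative doubly robust (DR) off-policy value estimate for
    contextual bandits.  For each logged (x, a, r):
      DR = E_{b ~ pi}[ model(x, b) ] + (pi(a|x) / mu(a|x)) * (r - model(x, a))
    `reward_model(x, a)` is a hypothetical fitted estimate of E[r | x, a];
    `target_probs[i]` is the target policy's action distribution at contexts[i];
    `logging_probs[i]` is the logging policy's propensity of the logged action."""
    estimates = []
    for x, a, r, mu_a, pi in zip(contexts, actions, rewards,
                                 logging_probs, target_probs):
        # Direct-method term: model-based expected reward under the target policy.
        direct = sum(pi[b] * reward_model(x, b) for b in range(len(pi)))
        # Importance-weighted correction of the model's residual on the logged action.
        correction = (pi[a] / mu_a) * (r - reward_model(x, a))
        estimates.append(direct + correction)
    return float(np.mean(estimates))
```

Per the passage, MRDR differs in how `reward_model` is fit: its parameters are chosen to minimize the variance of this DR estimate rather than an ordinary prediction-error loss.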
{
"docid": "559cf3f81cbe308df222e37af69de758",
"text": "The aim of our study was to evaluate effectiveness of ultrasound treatment applied with exercise therapy in patients with ankylosing spondylitis. Fifty-two patients, who were diagnosed according to modified New York criteria, were aged 25–60, and have spine pain, were randomly assigned to two groups. Ultrasound (US) and exercise therapy were applied to treatment group (27); placebo US treatment and exercise therapy were applied to control group (25). Patients were evaluated before treatment, at the end of treatment, and 4 weeks after the treatment. Daily and night pain, morning stiffness, patient global assessment (PGA), doctor global assessment (DGA), Bath Ankylosing Spondylitis Disease Activity Index (BASDAI), Bath Ankylosing Spondylitis Functional Index (BASFI), Bath Ankylosing Spondylitis Metrology Index (BASMI), Ankylosing Spondylitis Quality of Life (ASQoL) questionnaire, Ankylosing Spondylitis Disease Activity Score (ASDAS) erythrocyte sedimentation rate (ESR), and ASDAS C-reactive protein (CRP) were used as clinical parameters. In US group, all parameters showed significant improvements at 2 and 6 weeks, in comparison with the baseline. In placebo US group, significant improvement was obtained for all parameters (except tragus-to-wall distance and modified Schober test at 2 weeks and lumbar side flexion and modified Schober test at 6 weeks). Comparison of the groups showed significantly superior results of US group for parameters of BASMI (p < 0.05), tragus–wall distance (p < 0.05), PGA (p < 0.01), and DGA (p < 0.05) at 2 weeks as well as for the parameters of daily pain (p < 0.01), PGA (p < 0.05), DGA (p < 0.01), BASDAI (p < 0.05), ASDAS-CRP (p < 0.05), ASDAS-ESR (p < 0.01), lumbar side flexion (p < 0.01), the modified Schober test (p < 0.01), and ASQoL (p < 0.05) at 6 weeks. Our study showed that ultrasound treatment increases the effect of exercise in patients with ankylosing spondylitis.",
"title": ""
},
{
"docid": "b6c69ee2b9bce4c60c3ef9eaff07f93f",
"text": "Videos taken in the wild sometimes contain unexpected rain streaks, which brings difficulty in subsequent video processing tasks. Rain streak removal in a video (RSRV) is thus an important issue and has been attracting much attention in computer vision. Different from previous RSRV methods formulating rain streaks as a deterministic message, this work first encodes the rains in a stochastic manner, i.e., a patch-based mixture of Gaussians. Such modification makes the proposed model capable of finely adapting a wider range of rain variations instead of certain types of rain configurations as traditional. By integrating with the spatiotemporal smoothness configuration of moving objects and low-rank structure of background scene, we propose a concise model for RSRV, containing one likelihood term imposed on the rain streak layer and two prior terms on the moving object and background scene layers of the video. Experiments implemented on videos with synthetic and real rains verify the superiority of the proposed method, as compared with the state-of-the-art methods, both visually and quantitatively in various performance metrics.",
"title": ""
},
{
"docid": "6e9ba961906276190f56831f702d433c",
"text": "Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast wholeslide-images of extreme digital resolution (100, 000 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.",
"title": ""
},
{
"docid": "90ea7dc1052ffc5e06a09cf4199be320",
"text": "This paper presents the ringing suppressing method in class D resonant inverter operating at 13.56MHz for wireless power transfer systems. The ringing loop in half-bridge topology inverter at 13.56MHz strongly effect to the stable, performance and efficiency of the inverter. Typically, this ringing can be damped by using a snubber circuit which is placed parallel to the MOSFETs or using a damping circuit which is place inside the ringing loop. But in the resonant inverter with high power and high frequency, the snubber circuit or general damping circuit may reduce performance and efficiency of inverter. A new damping circuit combining with drive circuit design solution is proposed in this paper. The simulation and experiment results showed that the proposed design significantly suppresses the ringing current and ringing voltage in the circuit. The power loss on the MOSFETs is reduced while the efficiency of inverter increases 2% to obtain 93.1% at 1.2kW output power. The inverter becomes more stable and compact.",
"title": ""
},
{
"docid": "a40e91ecac0f70e04cc1241797786e77",
"text": "In much of his writings on poverty, famines, and malnutrition, Amartya Sen argues that Democracy is the best way to avoid famines partly because of its ability to use a free press, and that the Indian experience since independence confirms this. His argument is partly empirical, but also relies on some a priori assumptions about human motivation. In his “Democracy as a Universal Value” he claims: Famines are easy to prevent if there is a serious effort to do so, and a democratic government, facing elections and criticisms from opposition parties and independent newspapers, cannot help but make such an effort. Not surprisingly, while India continued to have famines under British rule right up to independence ...they disappeared suddenly with the establishment of a multiparty democracy and a free press.",
"title": ""
},
{
"docid": "ca4e3f243b2868445ecb916c081e108e",
"text": "The task in the multi-agent path finding problem (MAPF) is to find paths for multiple agents, each with a different start and goal position, such that agents do not collide. It is possible to solve this problem optimally with algorithms that are based on the A* algorithm. Recently, we proposed an alternative algorithm called Conflict-Based Search (CBS) (Sharon et al. 2012), which was shown to outperform the A*-based algorithms in some cases. CBS is a two-level algorithm. At the high level, a search is performed on a tree based on conflicts between agents. At the low level, a search is performed only for a single agent at a time. While in some cases CBS is very efficient, in other cases it is worse than A*-based algorithms. This paper focuses on the latter case by generalizing CBS to Meta-Agent CBS (MA-CBS). The main idea is to couple groups of agents into meta-agents if the number of internal conflicts between them exceeds a given bound. MACBS acts as a framework that can run on top of any complete MAPF solver. We analyze our new approach and provide experimental results demonstrating that it outperforms basic CBS and other A*-based optimal solvers in many cases. Introduction and Background In the multi-agent path finding (MAPF) problem, we are given a graph, G(V,E), and a set of k agents labeled a1 . . . ak. Each agent ai has a start position si ∈ V and goal position gi ∈ V . At each time step an agent can either move to a neighboring location or can wait in its current location. The task is to return the least-cost set of actions for all agents that will move each of the agents to its goal without conflicting with other agents (i.e., without being in the same location at the same time or crossing the same edge simultaneously in opposite directions). MAPF has practical applications in robotics, video games, vehicle routing, and other domains (Silver 2005; Dresner & Stone 2008). In its general form, MAPF is NPcomplete, because it is a generalization of the sliding tile puzzle, which is NP-complete (Ratner & Warrnuth 1986). There are many variants to the MAPF problem. In this paper we consider the following common setting. The cumulative cost function to minimize is the sum over all agents of the number of time steps required to reach the goal location (Standley 2010; Sharon et al. 2011a). Both move Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and wait actions cost one. A centralized computing setting with a single CPU that controls all the agents is assumed. Note that a centralized computing setting is logically equivalent to a decentralized setting where each agent has its own computing power but agents are fully cooperative with full knowledge sharing and free communication. There are two main approaches for solving the MAPF in the centralized computing setting: the coupled and the decoupled approaches. In the decoupled approach, paths are planned for each agent separately. Algorithms from the decoupled approach run relatively fast, but optimality and even completeness are not always guaranteed (Silver 2005; Wang & Botea 2008; Jansen & Sturtevant 2008). New complete (but not optimal) decoupled algorithms were recently introduced for trees (Khorshid, Holte, & Sturtevant 2011) and for general graphs (Luna & Bekris 2011). Our aim is to solve the MAPF problem optimally and therefore the focus of this paper is on the coupled approach. In this approach MAPF is formalized as a global, singleagent search problem. 
One can activate an A*-based algorithm that searches a state space that includes all the different ways to permute the k agents into |V | locations. Consequently, the state space that is searched by the A*-based algorithms grow exponentially with the number of agents. Hence, finding the optimal solutions with A*-based algorithms requires significant computational expense. Previous optimal solvers dealt with this large search space in several ways. Ryan (2008; 2010) abstracted the problem into pre-defined structures such as cliques, halls and rings. He then modeled and solved the problem as a CSP problem. Note that the algorithm Ryan proposed does not necessarily returns the optimal solutions. Standley (2010; 2011) partitioned the given problem into smaller independent problems, if possible. Sharon et. al. (2011a; 2011b) suggested the increasing cost search tree (ICTS) a two-level framework where the high-level phase searches a tree with exact path costs for each of the agents and the low-level phase aims to verify whether there is a solution of this cost. In this paper we focus on the new Conflict Based Search algorithm (CBS) (Sharon et al. 2012) which optimally solves MAPF. CBS is a two-level algorithm where the highlevel search is performed on a constraint tree (CT) whose nodes include constraints on time and locations of a single agent. At each node in the constraint tree a low-level search is performed to find individual paths for all agents under the constraints given by the high-level node. Sharon et al. (2011a; 2011b; 2012) showed that the behavior of optimal MAPF algorithms can be very sensitive to characteristics of the given problem instance such as the topology and size of the graph, the number of agents, the branching factor etc. There is no universally dominant algorithm; different algorithms work well in different circumstances. In particular, experimental results have shown that CBS can significantly outperform all existing optimal MAPF algorithms on some domains (Sharon et al. 2012). However, Sharon et al. (2012) also identified cases where the CBS algorithm performs poorly. In such cases, CBS may even perform exponentially worse than A*. In this paper we aim at mitigating the worst-case performance of CBS by generalizing CBS into a new algorithm called Meta-agent CBS (MA-CBS). In MA-CBS the number of conflicts allowed at the high-level phase between any pair of agents is bounded by a predefined parameter B. When the number of conflicts exceed B, the conflicting agents are merged into a meta-agent and then treated as a joint composite agent by the low-level solver. By bounding the number of conflicts between any pair of agents, we prevent the exponential worst-case of basic CBS. This results in an new MAPF solver that significantly outperforms existing algorithms in a variety of domains. We present both theoretical and empirical support for this claim. In the low-level search, MA-CBS can use any complete MAPF solver. Thus, MA-CBS can be viewed as a solving framework and future MAPF algorithms could also be used by MA-CBS to improve its performance. Furthermore, we show that the original CBS algorithm corresponds to the extreme cases where B = ∞ (never merge agents), and the Independence Dependence (ID) framework (Standley 2010) is the other extreme case where B = 0 (always merge agents when conflicts occur). Thus, MA-CBS allows a continuum between CBS and ID, by setting different values of B between these two extremes. 
The Conflict Based Search Algorithm (CBS) The MA-CBS algorithm presented in this paper is based on the CBS algorithm (Sharon et al. 2012). We thus first describe the CBS algorithm in detail. Definitions for CBS We use the term path only in the context of a single agent and use the term solution to denote a set of k paths for the given set of k agents. A constraint for a given agent ai is a tuple (ai, v, t) where agent ai is prohibited from occupying vertex v at time step t.1 During the course of the algorithm, agents are associated with constraints. A consistent path for agent ai is a path that satisfies all its constraints. Likewise, a consistent solution is a solution that is made up from paths, such that the path for agent ai is consistent with the constraints of ai. A conflict is a tuple (ai, aj , v, t) where agent ai and agent aj occupy vertex v at time point t. A solution (of k paths) is valid if all its A conflict (as well as a constraint) may apply also to an edge when two agents traverse the same edge in opposite directions. paths have no conflicts. A consistent solution can be invalid if, despite the fact that the paths are consistent with their individual agent constraints, these paths still have conflicts. The key idea of CBS is to grow a set of constraints for each of the agents and find paths that are consistent with these constraints. If these paths have conflicts, and are thus invalid, the conflicts are resolved by adding new constraints. CBS works in two levels. At the high-level phase conflicts are found and constraints are added. At the low-level phase, the paths of the agents are updated to be consistent with the new constraints. We now describe each part of this process. High-level: Search the Constraint Tree (CT) At the high-level, CBS searches a constraint tree (CT). A CT is a binary tree. Each node N in the CT contains the following fields of data: 1. A set of constraints (N.constraints). The root of the CT contains an empty set of constraints. The child of a node in the CT inherits the constraints of the parent and adds one new constraint for one agent. 2. A solution (N.solution). A set of k paths, one path for each agent. The path for agent ai must be consistent with the constraints of ai. Such paths are found by the lowlevel search algorithm. 3. The total cost (N.cost). The cost of the current solution (summation over all the single-agent path costs). We denote this cost the f -value of the node. Node N in the CT is a goal node when N.solution is valid, i.e., the set of paths for all agents have no conflicts. The high-level phase performs a best-first search on the CT where nodes are ordered by their costs. Processing a node in the CT Given the list of constraints for a node N of the CT, the low-level search is invoked. This search returns one shortest path for each agent, ai, that is consistent with all the constraints associated with ai in node N . Once a consistent path has be",
"title": ""
},
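The CBS description above maps fairly directly onto code. The sketch below is a minimal, illustrative version of the high-level constraint-tree search with vertex conflicts only; `low_level_search(agent_id, constraints)` is a hypothetical single-agent planner (e.g., space-time A*) that returns a shortest path consistent with the given (vertex, time) constraints, or None if no such path exists.

```python
import heapq
from itertools import count

def detect_conflict(paths):
    """Return the first vertex conflict (ai, aj, v, t), or None if the solution is valid."""
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        seen = {}
        for agent, path in enumerate(paths):
            v = path[t] if t < len(path) else path[-1]  # agents wait at their goal
            if v in seen:
                return seen[v], agent, v, t
            seen[v] = agent
    return None

def cbs(num_agents, low_level_search):
    """Illustrative high-level CBS loop: best-first search over the constraint tree (CT)."""
    tick = count()  # tie-breaker so the heap never compares dicts
    constraints = {a: set() for a in range(num_agents)}
    paths = [low_level_search(a, constraints[a]) for a in range(num_agents)]
    cost = sum(len(p) for p in paths)  # sum-of-costs objective
    open_list = [(cost, next(tick), constraints, paths)]
    while open_list:
        _, _, constraints, paths = heapq.heappop(open_list)
        conflict = detect_conflict(paths)
        if conflict is None:
            return paths  # goal node: the solution is valid
        ai, aj, v, t = conflict
        for agent in (ai, aj):  # branch: add one new constraint per child node
            child = {a: set(c) for a, c in constraints.items()}
            child[agent].add((v, t))
            new_path = low_level_search(agent, child[agent])
            if new_path is None:
                continue
            new_paths = list(paths)
            new_paths[agent] = new_path
            new_cost = sum(len(p) for p in new_paths)
            heapq.heappush(open_list, (new_cost, next(tick), child, new_paths))
    return None
```

MA-CBS, as described in the passage, would additionally count conflicts per agent pair and, once a pair exceeds the bound B, merge the two agents into a meta-agent that the low-level solver plans for jointly.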
{
"docid": "f244f0de1cde8f083fed3a3495aa261e",
"text": "In this paper, we propose a multimodal search engine that combines visual and textual cues to retrieve items from a multimedia database aesthetically similar to the query. The goal of our engine is to enable intuitive retrieval of fashion merchandise such as clothes or furniture. Existing search engines treat textual input only as an additional source of information about the query image and do not correspond to the reallife scenario where the user looks for ”the same shirt but of denim”. Our novel method, dubbed DeepStyle, mitigates those shortcomings by using a joint neural network architecture to model contextual dependencies between features of different modalities. We prove the robustness of this approach on two different challenging datasets of fashion items and furniture where our DeepStyle engine outperforms baseline methods by 18-21% on the tested datasets. Our search engine is commercially deployed and available through a Web-based application.",
"title": ""
},
{
"docid": "117de8844d5a6c506d69de65ae6b62ae",
"text": "Computer-based conversational agents are becoming ubiquitous. However, for these systems to be engaging and valuable to the user, they must be able to express emotion, in addition to providing informative responses. Humans rely on much more than language during conversations; visual information is key to providing context. We present the first example of an image-grounded conversational agent using visual sentiment, facial expression and scene features. We show that key qualities of the generated dialogue can be manipulated by the features used for training the agent. We evaluate our model on a large and very challenging real-world dataset of conversations from social media (Twitter). The image-grounding leads to significantly more informative, emotional and specific responses, and the exact qualities can be tuned depending on the image features used. Furthermore, our model improves the objective quality of dialogue responses when evaluated on standard natural language metrics.",
"title": ""
},
{
"docid": "298577e8e659dc68f5ac5071a4bde225",
"text": "Policy search such as reinforcement learning and evolutionary computation is a framework for finding an optimal policy of control problems, but it usually requires a huge number of samples. Importance sampling is a common tool to use samples drawn from a proposal distribution different from the targeted one, and it is widely used by the policy search methods to update the policy from a set of datasets that are collected by previous sampling distributions. However, the proposal distribution is created by a mixture of previous distributions with fixed mixing weights in most of previous studies, and it is often numerically unstable. To overcome this problem, we propose the method of adaptive multiple importance sampling that optimizes the mixing coefficients to minimize the variance of the importance sampling estimator while utilizing as many samples as possible. We apply the proposed method to the five policy search methods such as PGPE, PoWER, CMA-ES, REPS, and NES, and their algorithms are evaluated by some benchmark control tasks. Experimental results show that all the five methods improve sample efficiency. In addition, we show that optimizing the mixing weights achieves stable learning.",
"title": ""
},
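For context, the estimator the passage adapts is mixture (multiple) importance sampling over a set of proposal distributions, here the previous sampling distributions. A minimal sketch, assuming samples drawn from the mixture, log-density functions for the target and each proposal, and mixture weights `alphas` summing to one; this is only the generic mixture estimator, not the paper's adaptive weight optimization.

```python
import numpy as np

def mixture_is_estimate(samples, target_logpdf, proposal_logpdfs, alphas, f):
    """Illustrative mixture importance sampling estimate of E_target[f(x)].
    Each sample is weighted by target(x) / sum_k alpha_k * q_k(x), where the
    q_k are the proposal densities and alphas are the mixture weights."""
    estimates = []
    for x in samples:
        # Log of the mixture density, computed stably in log space.
        log_mix = np.logaddexp.reduce(
            [np.log(a) + q(x) for a, q in zip(alphas, proposal_logpdfs) if a > 0.0]
        )
        w = np.exp(target_logpdf(x) - log_mix)  # importance weight under the mixture
        estimates.append(w * f(x))
    return float(np.mean(estimates))
```

The passage's contribution is to treat `alphas` as free parameters and choose them to minimize an empirical estimate of the variance of the weighted terms, instead of fixing the mixing coefficients in advance.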
{
"docid": "2a827ddb30be8cdc3ecaf09da2e898de",
"text": "There is an increasing interest on accelerating neural networks for real-time applications. We study the studentteacher strategy, in which a small and fast student network is trained with the auxiliary information learned from a large and accurate teacher network. We propose to use conditional adversarial networks to learn the loss function to transfer knowledge from teacher to student. The proposed method is particularly effective for relatively small student networks. Moreover, experimental results show the effect of network size when the modern networks are used as student. We empirically study the trade-off between inference time and classification accuracy, and provide suggestions on choosing a proper student network.",
"title": ""
},
{
"docid": "456a246b468feb443e0ed576173d6d46",
"text": "Automatic person re-identification (re-id) across camera boundaries is a challenging problem. Approaches have to be robust against many factors which influence the visual appearance of a person but are not relevant to the person's identity. Examples for such factors are pose, camera angles, and lighting conditions. Person attributes are a semantic high level information which is invariant across many such influences and contain information which is often highly relevant to a person's identity. In this work we develop a re-id approach which leverages the information contained in automatically detected attributes. We train an attribute classifier on separate data and include its responses into the training process of our person re-id model which is based on convolutional neural networks (CNNs). This allows us to learn a person representation which contains information complementary to that contained within the attributes. Our approach is able to identify attributes which perform most reliably for re-id and focus on them accordingly. We demonstrate the performance improvement gained through use of the attribute information on multiple large-scale datasets and report insights into which attributes are most relevant for person re-id.",
"title": ""
},
{
"docid": "c75c4f2acf49dd4d52116eae7559f6a5",
"text": "In 2005, Kreidstein first proposed the term \"Cutis pleonasmus,\" a Greek term meaning \"redundancy,\" which refers to the excessive skin that remains after massive weight loss. Cutis pleonasmus is clearly distinguishable from other diseases showing increased laxity of the skin, such as pseudoxanthoma elasticum, congenital and acquired generalized cutis laxa. Although individuals who are severely overweight are few and bariatric surgeries are less common in Korea than in the West, the number of these patients is increasing due to changes to Western life styles. We report a case for a 24-year-old man who presented with generalized lax and loose skin after massive weight loss. He was diagnosed with cutis pleonasmus based on the history of great weight loss, characteristic clinical features and normal histological findings. To the best of our knowledge, this is the first report of cutis pleonasmus in Korea.",
"title": ""
},
{
"docid": "addad4069782620549e7a357e2c73436",
"text": "Drivable region detection is challenging since various types of road, occlusion or poor illumination condition have to be considered in a outdoor environment, particularly at night. In the past decade, Many efforts have been made to solve these problems, however, most of the already existing methods are designed for visible light cameras, which are inherently inefficient under low light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty of the proposed method lies in the utilization of on-line road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road information extraction is tailored to enforce temporal consistency among a series of images. In this paper, we also propose a large number of experiments in various scenarios (on-road, off-road and cluttered road). A total of about 6000 manually annotated images are made available in our website for the research community. Using this dataset, we compared our method against multiple state-of-the-art approaches including convolutional neural network (CNN) based methods to emphasize the robustness of our approach under challenging situations.",
"title": ""
}
] |
scidocsrr
|
db89827cd60ac5152401c3487b2f1a29
|
Dark Clouds, Io&#!+, and [Crystal Ball Emoji]: Projecting Network Anxieties with Alternative Design Metaphors
|
[
{
"docid": "3b49747ef98ebcfa515fb10a22f08017",
"text": "This paper reports a qualitative study of thriving older people and illustrates the findings with design fiction. Design research has been criticized as \"solutionist\" i.e. solving problems that don't exist or providing \"quick fixes\" for complex social, political and environmental problems. We respond to this critique by presenting a \"solutionist\" board game used to generate design concepts. Players are given data cards and technology dice, they move around the board by pitching concepts that would support positive aging. We argue that framing concept design as a solutionist game explicitly foregrounds play, irony and the limitations of technological intervention. Three of the game concepts are presented as design fictions in the form of advertisements for products and services that do not exist. The paper argues that design fiction can help create a space for design beyond solutionism.",
"title": ""
}
] |
[
{
"docid": "0a4a124589dffca733fa9fa87dc94b35",
"text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.",
"title": ""
},
{
"docid": "35586c00530db3fd928512134b4927ec",
"text": "Basic definitions concerning the multi-layer feed-forward neural networks are given. The back-propagation training algorithm is explained. Partial derivatives of the objective function with respect to the weight and threshold coefficients are derived. These derivatives are valuable for an adaptation process of the considered neural network. Training and generalisation of multi-layer feed-forward neural networks are discussed. Improvements of the standard back-propagation algorithm are reviewed. Example of the use of multi-layer feed-forward neural networks for prediction of carbon-13 NMR chemical shifts of alkanes is given. Further applications of neural networks in chemistry are reviewed. Advantages and disadvantages of multilayer feed-forward neural networks are discussed.",
"title": ""
},
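As an illustration of the weight and threshold derivatives the passage refers to, here is a minimal sketch of one back-propagation update for a one-hidden-layer feed-forward network with sigmoid units and squared-error loss; the notation and network shape are assumptions for the example, not the article's own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, b1, W2, b2, lr=0.1):
    """One gradient-descent step for a one-hidden-layer network.
    x: (d,) input, y: (m,) target; W1, b1 and W2, b2 are the weight matrices
    and threshold (bias) vectors of the hidden and output layers."""
    # Forward pass
    h = sigmoid(W1 @ x + b1)          # hidden activations
    o = sigmoid(W2 @ h + b2)          # output activations
    # Backward pass: error terms (deltas) for output and hidden layers
    delta_o = (o - y) * o * (1.0 - o)
    delta_h = (W2.T @ delta_o) * h * (1.0 - h)
    # Update weights and thresholds with the derived partial derivatives
    W2 -= lr * np.outer(delta_o, h)
    b2 -= lr * delta_o
    W1 -= lr * np.outer(delta_h, x)
    b1 -= lr * delta_h
    return W1, b1, W2, b2
```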
{
"docid": "5addf869fb072fb047b9e4ff4f1dc3eb",
"text": "This paper presents type classes, a new approach to ad-hoc polymorphism. Type classes permit overloading of arithmetic operators such as multiplication, and generalise the “eqtype variables” of Standard ML. Type classes extend the Hindley/Milner polymorphic type system, and provide a new approach to issues that arise in object-oriented programming, bounded type quantification, and abstract data types. This paper provides an informal introduction to type classes, and defines them formally by means of type inference rules.",
"title": ""
},
{
"docid": "db964a7761ac16c63196ab32f4559e2e",
"text": "We present an end-to-end system that goes from video sequences to high resolution, editable, dynamically controllable face models. The capture system employs synchronized video cameras and structured light projectors to record videos of a moving face from multiple viewpoints. A novel spacetime stereo algorithm is introduced to compute depth maps accurately and overcome over-fitting deficiencies in prior work. A new template fitting and tracking procedure fills in missing data and yields point correspondence across the entire sequence without using markers. We demonstrate a data-driven, interactive method for inverse kinematics that draws on the large set of fitted templates and allows for posing new expressions by dragging surface points directly. Finally, we describe new tools that model the dynamics in the input sequence to enable new animations, created via key-framing or texture-synthesis techniques.",
"title": ""
},
{
"docid": "e30d9d9d9ed4b57a2b414d5b97c13bab",
"text": "This paper describes the design, implementation, and evaluation of MiMaze, a distributed multiplayer game on the Internet. The major contribution of this work is to have designed and implemented a completely distributed communication architecture based on IP multicast. MiMaze uses multicast communication system based on RTP/UDP/ IP, and a distributed synchronization mechanisms to guarantee the consistency of the game, regardless network delay. This paper provides evaluation results on the Mbone",
"title": ""
},
{
"docid": "eb92c76e00ed0970bbec416e49607394",
"text": "This paper proposes an air-core transformer integration method, which mounts the transformer straightly into the multi-layer PCB, and maintains the proper distance between the inner transformer and other components on the top layer. Compared with other 3D integration method, the air-core transformer is optimized and modeled carefully to avoid the electromagnetic interference (EMI) of the magnetic fields. The integration method reduces the PCB area significantly, ensuring higher power density and similar efficiency as the conventional planar layout because the air-core transformer magnetic field does not affect other components. Moreover, the converters with the integrated PCB transformer can be manufactured with high consistency. With the air-core transformer, the overall height is only the sum of twice the PCB thickness and components height. In addition, the proposed integration method reduces the power loop inductance by 64%. It is applied to two resonant flyback converters operating at 20 MHz with Si MOSFETs, and 30 MHz with eGaN HEMTs respectively. The full load efficiency of the 30 MHz prototype is 80.1% with 5 V input and 5 V/ 2 W output. It achieves the power density of 32 W/in3.",
"title": ""
},
{
"docid": "bc3f2f0c2e33668668714dcebe1365a2",
"text": "Our dexterous hand is a fundmanetal human feature that distinguishes us from other animals by enabling us to go beyond grasping to support sophisticated in-hand object manipulation. Our aim was the design of a dexterous anthropomorphic robotic hand that matches the human hand's 24 degrees of freedom, under-actuated by seven motors. With the ability to replicate human hand movements in a naturalistic manner including in-hand object manipulation. Therefore, we focused on the development of a novel thumb and palm articulation that would facilitate in-hand object manipulation while avoiding mechanical design complexity. Our key innovation is the use of a tendon-driven ball joint as a basis for an articulated thumb. The design innovation enables our under-actuated hand to perform complex in-hand object manipulation such as passing a ball between the fingers or even writing text messages on a smartphone with the thumb's end-point while holding the phone in the palm of the same hand. We then proceed to compare the dexterity of our novel robotic hand design to other designs in prosthetics, robotics and humans using simulated and physical kinematic data to demonstrate the enhanced dexterity of our novel articulation exceeding previous designs by a factor of two. Our innovative approach achieves naturalistic movement of the human hand, without requiring translation in the hand joints, and enables teleoperation of complex tasks, such as single (robot) handed messaging on a smartphone without the need for haptic feedback. Our simple, under-actuated design outperforms current state-of-the-art prostheses or robotic and prosthetic hands regarding abilities that encompass from grasps to activities of daily living which involve complex in-hand object manipulation.",
"title": ""
},
{
"docid": "a9121a1211704006dc8de14a546e3bdc",
"text": "This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LLGPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graphbased parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, and other parsers can achieve best performance when PAs are completed into full trees by LLGPar.",
"title": ""
},
{
"docid": "6f18b8e0a1e7c835dc6f94bfa8d96437",
"text": "Recent years have witnessed the rise of the gut microbiota as a major topic of research interest in biology. Studies are revealing how variations and changes in the composition of the gut microbiota influence normal physiology and contribute to diseases ranging from inflammation to obesity. Accumulating data now indicate that the gut microbiota also communicates with the CNS — possibly through neural, endocrine and immune pathways — and thereby influences brain function and behaviour. Studies in germ-free animals and in animals exposed to pathogenic bacterial infections, probiotic bacteria or antibiotic drugs suggest a role for the gut microbiota in the regulation of anxiety, mood, cognition and pain. Thus, the emerging concept of a microbiota–gut–brain axis suggests that modulation of the gut microbiota may be a tractable strategy for developing novel therapeutics for complex CNS disorders.",
"title": ""
},
{
"docid": "5cf71fc03658cd7210ac2a764f1425d7",
"text": "Most existing pose robust methods are too computational complex to meet practical applications and their performance under unconstrained environments are rarely evaluated. In this paper, we propose a novel method for pose robust face recognition towards practical applications, which is fast, pose robust and can work well under unconstrained environments. Firstly, a 3D deformable model is built and a fast 3D model fitting algorithm is proposed to estimate the pose of face image. Secondly, a group of Gabor filters are transformed according to the pose and shape of face image for feature extraction. Finally, PCA is applied on the pose adaptive Gabor features to remove the redundances and Cosine metric is used to evaluate the similarity. The proposed method has three advantages: (1) The pose correction is applied in the filter space rather than image space, which makes our method less affected by the precision of the 3D model, (2) By combining the holistic pose transformation and local Gabor filtering, the final feature is robust to pose and other negative factors in face recognition, (3) The 3D structure and facial symmetry are successfully used to deal with self-occlusion. Extensive experiments on FERET and PIE show the proposed method outperforms state-of-the-art methods significantly, meanwhile, the method works well on LFW.",
"title": ""
},
{
"docid": "6b527c906789f6e32cd5c28f684d9cc8",
"text": "This paper addresses an essential application of microkernels; its role in virtualization for embedded systems. Virtualization in embedded systems and microkernel-based virtualization are topics of intensive research today. As embedded systems specifically mobile phones are evolving to do everything that a PC does, employing virtualization in this case is another step to make this vision a reality. Hence, recently, much time and research effort have been employed to validate ways to host virtualization on embedded system processors i.e., the ARM processors. This paper reviews the research work that have had significant impact on the implementation approaches of virtualization in embedded systems and how these approaches additionally provide security features that are beneficial to equipment manufacturers, carrier service providers and end users.",
"title": ""
},
{
"docid": "c5443c3bdfed74fd643e7b6c53a70ccc",
"text": "Background\nAbsorbable suture suspension (Silhouette InstaLift, Sinclair Pharma, Irvine, CA) is a novel, minimally invasive system that utilizes a specially manufactured synthetic suture to help address the issues of facial aging, while minimizing the risks associated with historic thread lifting modalities.\n\n\nObjectives\nThe purpose of the study was to assess the safety, efficacy, and patient satisfaction of the absorbable suture suspension system in regards to facial rejuvenation and midface volume enhancement.\n\n\nMethods\nThe first 100 treated patients who underwent absorbable suture suspension, by the senior author, were critically evaluated. Subjects completed anonymous surveys evaluating their experience with the new modality.\n\n\nResults\nSurvey results indicate that absorbable suture suspension is a tolerable (96%) and manageable (89%) treatment that improves age related changes (83%), which was found to be in concordance with our critical review.\n\n\nConclusions\nAbsorbable suture suspension generates high patient satisfaction by nonsurgically lifting mid and lower face and neck skin and has the potential to influence numerous facets of aesthetic medicine. The study provides a greater understanding concerning patient selection, suture trajectory, and possible adjuvant therapies.\n\n\nLevel of Evidence 4",
"title": ""
},
{
"docid": "8eb161e363d55631148ed3478496bbd5",
"text": "This paper proposes a new power-factor-correction (PFC) topology, and explains its operation principle, its control mechanism, related application problems followed by experimental results. In this proposed topology, critical-conduction-mode (CRM) interleaved technique is applied to a bridgeless PFC in order to achieve high efficiency by combining benefits of each topology. This application is targeted toward low to middle power applications that normally employs continuous-conductionmode boost converter. key words: PFC, Interleaved, critical-conduction-mode, totem-pole",
"title": ""
},
{
"docid": "4f296caa2ee4621a8e0858bfba701a3b",
"text": "This paper considers the problem of assessing visual aesthetic quality with semantic information. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition offers the key to addressing this problem. Based on convolutional neural networks, we propose a general multi-task framework with four different structures. In each structure, aesthetic quality assessment task and semantic recognition task are leveraged, and different features are explored to improve the quality assessment. Moreover, an effective strategy of keeping a balanced effect between the semantic task and aesthetic task is developed to optimize the parameters of our framework. The correlation analysis among the tasks validates the importance of the semantic recognition in aesthetic quality assessment. Extensive experiments verify the effectiveness of the proposed multi-task framework, and further corroborate the",
"title": ""
},
{
"docid": "edaa08010e5399de62e255e86637d342",
"text": "Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program. In this paper, we develop the first concolic testing approach for Deep Neural Networks (DNNs). More specifically, we utilise quantified linear arithmetic over rationals to express test requirements that have been studied in the literature, and then develop a coherent method to perform concolic testing with the aim of better coverage. Our experimental results show the effectiveness of the concolic testing approach in both achieving high coverage and finding adversarial examples.",
"title": ""
},
{
"docid": "6483733f9cfd2eaacb5f368e454416db",
"text": "A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.",
"title": ""
},
{
"docid": "a88b2916f73dedabceda574f10a93672",
"text": "A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are based on laser range finders, sonar sensors or artificial landmarks. In this paper, we describe a vision-based mobile robot localization and mapping algorithm, which uses scale-invariant image features as natural landmarks in unmodified environments. The invariance of these features to image translation, scaling and rotation makes them suitable landmarks for mobile robot localization and map building. With our Triclops stereo vision system, these landmarks are localized and robot ego-motion is estimated by least-squares minimization of the matched landmarks. Feature viewpoint variation and occlusion are taken into account by maintaining a view direction for each landmark. Experiments show that these visual landmarks are robustly matched, robot pose is estimated and a consistent three-dimensional map is built. As image features are not noise-free, we carry out error analysis for the landmark positions and the robot pose. We use Kalman filters to track these landmarks in a dynamic environment, resulting in a database map with landmark positional uncertainty. KEY WORDS—localization, mapping, visual landmarks, mobile robot",
"title": ""
},
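The passage estimates robot ego-motion by least-squares minimization over matched landmarks. One standard way to solve that alignment step is the SVD-based (Kabsch) closed form sketched below; this is an illustrative assumption about the alignment sub-step, not necessarily the authors' exact formulation, and it ignores the landmark uncertainty weighting and Kalman filtering the passage also describes.

```python
import numpy as np

def least_squares_rigid_transform(P, Q):
    """Estimate the rigid transform (R, t) that best aligns matched 3D landmark
    sets P -> Q (both of shape (n, 3)) in the least-squares sense, so that
    R @ p + t ~= q for each matched pair."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of the centered points
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```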
{
"docid": "a48b7c679008235568d3d431081277b4",
"text": "This paper discusses the security aspects of a registration protocol in a mobile satellite communication system. We propose a new mobile user authentication and data encryption scheme for mobile satellite communication systems. The scheme can remedy a replay attack.",
"title": ""
},
{
"docid": "ffed6abc3134f30d267342e83931ee64",
"text": "This paper discusses General Random Utility Models (GRUMs). These are a class of parametric models that generate partial ranks over alternatives given attributes of agents and alternatives. We propose two preference elicitation scheme for GRUMs developed from principles in Bayesian experimental design, one for social choice and the other for personalized choice. We couple this with a general Monte-CarloExpectation-Maximization (MC-EM) based algorithm for MAP inference under GRUMs. We also prove uni-modality of the likelihood functions for a class of GRUMs. We examine the performance of various criteria by experimental studies, which show that the proposed elicitation scheme increases the precision of estimation.",
"title": ""
}
] |
scidocsrr
|
dffc068fc44ed963f45587de548e87aa
|
(Cross-)Browser Fingerprinting via OS and Hardware Level Features
|
[
{
"docid": "2b23a37f6047128e6c8a577e2f4343be",
"text": "Worldwide, the number of people and the time spent browsing the web keeps increasing. Accordingly, the technologies to enrich the user experience are evolving at an amazing pace. Many of these evolutions provide for a more interactive web (e.g., boom of JavaScript libraries, weekly innovations in HTML5), a more available web (e.g., explosion of mobile devices), a more secure web (e.g., Flash is disappearing, NPAPI plugins are being deprecated), and a more private web (e.g., increased legislation against cookies, huge success of extensions such as Ghostery and AdBlock). Nevertheless, modern browser technologies, which provide the beauty and power of the web, also provide a darker side, a rich ecosystem of exploitable data that can be used to build unique browser fingerprints. Our work explores the validity of browser fingerprinting in today's environment. Over the past year, we have collected 118,934 fingerprints composed of 17 attributes gathered thanks to the most recent web technologies. We show that innovations in HTML5 provide access to highly discriminating attributes, notably with the use of the Canvas API which relies on multiple layers of the user's system. In addition, we show that browser fingerprinting is as effective on mobile devices as it is on desktops and laptops, albeit for radically different reasons due to their more constrained hardware and software environments. We also evaluate how browser fingerprinting could stop being a threat to user privacy if some technological evolutions continue (e.g., disappearance of plugins) or are embraced by browser vendors (e.g., standard HTTP headers).",
"title": ""
}
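As a toy illustration of how attributes such as those described above are typically combined into a single fingerprint (the attribute names and the hashing choice are assumptions for the example, not the paper's collection pipeline):

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Combine collected attributes (e.g., user agent, timezone, font list,
    a canvas-rendering hash) into one stable identifier by hashing a
    canonical serialization of the attribute dictionary."""
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Example usage with hypothetical attribute values:
print(browser_fingerprint({
    "userAgent": "Mozilla/5.0 ...",
    "timezone": "UTC+1",
    "canvasHash": "9f2b...",
    "fonts": ["Arial", "Calibri"],
}))
```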
] |
[
{
"docid": "16ff5b993508f962550b6de495c9d651",
"text": "Finding similar procedures in stripped binaries has various use cases in the domains of cyber security and intellectual property. Previous works have attended this problem and came up with approaches that either trade throughput for accuracy or address a more relaxed problem.\n In this paper, we present a cross-compiler-and-architecture approach for detecting similarity between binary procedures, which achieves both high accuracy and peerless throughput. For this purpose, we employ machine learning alongside similarity by composition: we decompose the code into smaller comparable fragments, transform these fragments to vectors, and build machine learning-based predictors for detecting similarity between vectors that originate from similar procedures.\n We implement our approach in a tool called Zeek and evaluate it by searching similarities in open source projects that we crawl from the world-wide-web. Our results show that we perform 250X faster than state-of-the-art tools without harming accuracy.",
"title": ""
},
{
"docid": "edab0c2cc3f04bd56fa76d8e6b339525",
"text": "In this letter, a compact ultrathin quad-band polarization-insensitive metamaterial absorber with a wide angle of absorption is proposed. The unit cell of the proposed structure comprises conductive cross dipoles loaded with split-ring resonators. The proposed absorber exhibits simulated peak absorption of <inline-formula> <tex-math notation=\"LaTeX\">$\\text{96.15}\\% $</tex-math></inline-formula>, <inline-formula><tex-math notation=\"LaTeX\"> $\\text{99.17}\\% $</tex-math></inline-formula>, <inline-formula><tex-math notation=\"LaTeX\">$\\text{99.75}\\%,$</tex-math> </inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$\\text{98.75}\\% $</tex-math></inline-formula> at 3.68, 8.58, 10.17, and 14.93 GHz, respectively. The proposed multiband absorber is ultrathin and compact in configuration with a thickness of <inline-formula><tex-math notation=\"LaTeX\">$\\text{0.0122}\\,\\lambda $</tex-math> </inline-formula> and a unit cell size of 0.122 <inline-formula><tex-math notation=\"LaTeX\">$\\lambda$</tex-math> </inline-formula> (corresponding to the lowest frequency). Moreover, by understanding the interaction of the unit cell with incident electromagnetic radiation, a conceptual equivalent circuit model is developed, which is used to understand the influence of coupling on the quad band of absorption. The simulated response of the proposed design demonstrates that it has quad-band polarization-insensitive absorption characteristics. In addition, the proposed absorber shows high absorption for an oblique incidence angle up to <inline-formula><tex-math notation=\"LaTeX\">$6{0^ \\circ }$</tex-math></inline-formula> for both transverse-electric and transverse-magnetic polarizations.",
"title": ""
},
{
"docid": "f9580093dcf61a9d6905265cfb3a0d32",
"text": "The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytic to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance.",
"title": ""
},
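The label propagation step described above can be pictured with a minimal sketch on a generic similarity graph; the row-normalization, the clamping of known labels, and the parameter values are illustrative assumptions, and the heterogeneous patient-drug graph construction from the passage is not reproduced here.

```python
import numpy as np

def propagate_labels(W, Y0, labeled_mask, alpha=0.9, iters=100):
    """Iterative label propagation on a similarity graph.
    W: (n, n) nonnegative similarity matrix over nodes;
    Y0: (n, c) initial label scores (e.g., known drug effectiveness for some nodes);
    labeled_mask: boolean (n,) marking nodes whose labels stay clamped.
    Each step mixes neighbor-averaged scores with the initial labels."""
    # Row-normalize W so each node averages over its neighbors.
    S = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * (S @ Y) + (1.0 - alpha) * Y0
        Y[labeled_mask] = Y0[labeled_mask]  # keep the known labels fixed
    return Y
```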
{
"docid": "77da7651b0e924d363c859d926e8c9da",
"text": "Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons’ schedule and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating ‘task highlights’ which can give surgeons a more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data—sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Also, our proposed feature fusion strategy significantly improves performance for skill score predictions achieving up to 0.61 average spearman correlation coefficient. Moreover, we provide an analysis on how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.",
"title": ""
},
{
"docid": "f27cf894faef9a475b011f44fbf57777",
"text": "Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet’s feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model on unaugmented datasets.",
"title": ""
},
{
"docid": "807e008d5c7339706f8cfe71e9ced7ba",
"text": "Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations. The objectives of the research were: to investigate the extent of the usage of customerand market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers’ attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, the preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future",
"title": ""
},
{
"docid": "f71b1df36ee89cdb30a1dd29afc532ea",
"text": "Finite state machines are a standard tool to model event-based control logic, and dynamic programming is a staple of optimal decision-making. We combine these approaches in the context of radar resource management for Naval surface warfare. There is a friendly (Blue) force in the open sea, equipped with one multi-function radar and multiple ships. The enemy (Red) force consists of missiles that target the Blue force's radar. The mission of the Blue force is to foil the enemy's threat by careful allocation of radar resources. Dynamically composed finite state machines are used to formalize the model of the battle space and dynamic programming is applied to our dynamic state machine model to generate an optimal policy. To achieve this in near-real-time and a changing environment, we use approximate dynamic programming methods. Example scenario illustrating the model and simulation results are presented.",
"title": ""
},
{
"docid": "6fb23797eebcdcacf1805ef51af7557b",
"text": "Global Positioning System (GPS) is a satellite based navigation system developed and declared operational by the U.S department of defense in the year 1995. It provides position, velocity and time everywhere, on or near the surface of the earth. To achieve nation's security different countries are developing regional navigation satellite systems. In this context India also has developed its regional navigation satellite system called as Indian Regional Navigation Satellite System (IRNSS) with a constellation of seven satellites. The IRNSS is expected to provide positional accuracy of 10 m over Indian landmass and 20 m, over Indian Ocean. IRNSS is featured with highly accurate position, velocity and timing information for authorized users. Studying the satellite coverage area is very essential because it is an important parameter for the analysis of user positioning. In this paper, an algorithm to estimate coverage area of GPS and IRNSS is explained. Using this algorithm, earth's surface coverage of IRNSS 5 and 7 satellite vehicles (SVs) are investigated. The best and worst cases of IRNSS 5 and 7 SV's constellations are analyzed. It is observed that in the worst case the coverage is reduced to a large extent. Subsequently, IRNSS is augmented with GPS and the earth's coverage is estimated. Comparative analysis of IRNSS, GPS and IRNSS augmented with GPS is also performed in terms of surface coverage. The augmentation has caused improvement in the specified performance parameter.",
"title": ""
},
{
"docid": "16fa5c87b0877188b3b225458012df0f",
"text": "Segmentation is one of the essential tasks in image processing. Thresholding is one of the simplest techniques for performing image segmentation. Multilevel thresholding is a simple and effective technique. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine a best thresholding value. To achieve multilevel thresholding various techniques has been proposed. A study of some nature inspired metaheuristic algorithms for multilevel thresholding for image segmentation is conducted. Here, we study about Particle swarm optimization (PSO) algorithm, artificial bee colony optimization (ABC), Ant colony optimization (ACO) algorithm and Cuckoo search (CS) algorithm. Keywords—Ant colony optimization, Artificial bee colony optimization, Cuckoo search algorithm, Image segmentation, Multilevel thresholding, Particle swarm optimization.",
"title": ""
},
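The abstract above surveys metaheuristics such as PSO for multilevel thresholding. Below is a minimal sketch of PSO searching for two thresholds that maximise Otsu's between-class variance; the choice of Otsu's criterion as the objective, the swarm parameters and the synthetic histogram are illustrative assumptions, not the survey's specific setup.
```python
import numpy as np

def between_class_variance(hist, thresholds):
    p = hist / hist.sum()
    levels = np.arange(256)
    mu_total = (p * levels).sum()
    edges = [0] + sorted(int(t) for t in thresholds) + [256]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def pso_thresholds(hist, k=2, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(1, 255, size=(n_particles, k))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([between_class_variance(hist, p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 1, 255)
        vals = np.array([between_class_variance(hist, p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return np.sort(gbest.astype(int))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic image with three intensity populations around 60, 130 and 200.
    pixels = np.concatenate([rng.normal(m, 12, 4000) for m in (60, 130, 200)])
    hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
    print("thresholds:", pso_thresholds(hist, k=2))
```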
{
"docid": "00dc409a1dea3d6fe773b0262afe2392",
"text": "In this paper, we present a study of a novel problem, i.e. topic-based citation recommendation, which involves recommending papers to be referred to. Traditionally, this problem is usually treated as an engineering issue and dealt with using heuristics. This paper gives a formalization of topic-based citation recommendation and proposes a discriminative approach to this problem. Specifically, it proposes a two-layer Restricted Boltzmann Machine model, called RBMCS, which can discover topic distributions of paper content and citation relationship simultaneously. Experimental results demonstrate that RBM-CS can significantly outperform baseline methods for citation recommendation.",
"title": ""
},
{
"docid": "eeff4d71a0af418828d5783a041b466f",
"text": "In recent years, advances in hardware technology have facilitated ne w ways of collecting data continuously. In many applications such as network monitorin g, the volume of such data is so large that it may be impossible to store the data on disk. Furthermore, even when the data can be stored, the volume of th incoming data may be so large that it may be impossible to process any partic ular record more than once. Therefore, many data mining and database op erati ns such as classification, clustering, frequent pattern mining and indexing b ecome significantly more challenging in this context. In many cases, the data patterns may evolve continuously, as a result of which it is necessary to design the mining algorithms effectively in order to accou nt f r changes in underlying structure of the data stream. This makes the solution s of the underlying problems even more difficult from an algorithmic and computa tion l point of view. This book contains a number of chapters which are caref ully chosen in order to discuss the broad research issues in data streams. The purp ose of this chapter is to provide an overview of the organization of the stream proces sing and mining techniques which are covered in this book.",
"title": ""
},
{
"docid": "b7c0864be28d70d49ae4a28fb7d78f04",
"text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.",
"title": ""
},
{
"docid": "8a905d0abdc1a6a8daeb44137fa980ee",
"text": "In the mobile game industry, Free-to-Play games are dominantly released, and therefore player retention and purchases have become important issues. In this paper, we propose a game player model for predicting when players will leave a game. Firstly, we define player churn in the game and extract features that contain the properties of the player churn from the player logs. And then we tackle the problem of imbalanced datasets. Finally, we exploit classification algorithms from machine learning and evaluate the performance of the proposed prediction model using cross-validation. Experimental results show that the proposed model has high accuracy enough to predict churn for real-world application.",
"title": ""
},
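The abstract above outlines a pipeline of feature extraction, imbalance handling, classification and cross-validation. Below is a minimal sketch of that pipeline in scikit-learn; the synthetic data, the class-weighting strategy and the random-forest model are illustrative stand-ins, not the paper's actual features or classifier.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for features extracted from player logs (sessions, playtime, purchases, ...),
# with churners as the rare positive class.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)

# class_weight='balanced' is one simple way to counter the class imbalance;
# oversampling the minority class would be another option.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)

scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```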
{
"docid": "0e68fa08edfc2dcb52585b13d0117bf1",
"text": "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE’s code is available on GitHub at https://github.com/Mehran-k/SimplE.",
"title": ""
},
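The abstract above describes SimplE as learning two dependent embeddings per entity. Below is a minimal sketch of the SimplE-style scoring function for a single triple, averaging the forward and inverse CP terms; the embedding sizes and random data are illustrative, and no training loop is shown.
```python
import numpy as np

def simple_score(head, rel, tail, E_h, E_t, R, R_inv):
    """Score of the triple (head, rel, tail) under SimplE-style embeddings."""
    term_fwd = np.sum(E_h[head] * R[rel] * E_t[tail])      # <h_head, v_r, t_tail>
    term_inv = np.sum(E_h[tail] * R_inv[rel] * E_t[head])  # <h_tail, v_r_inv, t_head>
    return 0.5 * (term_fwd + term_inv)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_entities, n_relations, dim = 5, 2, 8
    E_h = rng.normal(size=(n_entities, dim))      # head-role entity embeddings
    E_t = rng.normal(size=(n_entities, dim))      # tail-role entity embeddings
    R = rng.normal(size=(n_relations, dim))       # relation vectors
    R_inv = rng.normal(size=(n_relations, dim))   # inverse-relation vectors
    print(round(simple_score(0, 1, 3, E_h, E_t, R, R_inv), 4))
```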
{
"docid": "bb8cf42ab1b066e4647ce53a6666af35",
"text": "This paper presents a high energy efficient, parasitic free and low complex readout integrated circuit for capacitive sensors. A very low power consumption is achieved by replacing a power hungry operation amplifier by a subthreshold inverter instead in a switched capacitor amplifier(SC-amp) and reducing the supply voltage of all digital circuits in the system. A fast respond finite gain compensation method is utilized to reduce the gain error of the SC-amp and increase the energy efficiency of the readout circuit. A two-step auto calibration is applied to eliminate the offset from nonideal effects of the SC-amp and comparator delay. The readout system is implemented and simulated in TSMC 90 nm CMOS technology. With supply voltage of 1 V, simulation shows that the circuit can achieve 10.4 bit resolution while consuming only 3 μW during 640 μs conversion time. The digital output code has little sensitivity to temperature variation.",
"title": ""
},
{
"docid": "bc06e1fe5064a2b68d6b181b2953b4e2",
"text": "Now, we come to offer you the right catalogues of book to open. hackers heroes of the computer revolution is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
},
{
"docid": "d34759a882df6bc482b64530999bcda3",
"text": "The Static Single Assignment (SSA) form is a program representation used in many optimizing compilers. The key step in converting a program to SSA form is called φ-placement. Many algorithms for φ-placement have been proposed in the literature, but the relationships between these algorithms are not well understood.In this article, we propose a framework within which we systematically derive (i) properties of the SSA form and (ii) φ-placement algorithms. This framework is based on a new relation called merge which captures succinctly the structure of a program's control flow graph that is relevant to its SSA form. The φ-placement algorithms we derive include most of the ones described in the literature, as well as several new ones. We also evaluate experimentally the performance of some of these algorithms on the SPEC92 benchmarks.Some of the algorithms described here are optimal for a single variable. However, their repeated application is not necessarily optimal for multiple variables. We conclude the article by describing such an optimal algorithm, based on the transitive reduction of the merge relation, for multi-variable φ-placement in structured programs. The problem for general programs remains open.",
"title": ""
},
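The abstract above concerns φ-placement algorithms for SSA construction. Below is a minimal sketch of the classical iterated dominance-frontier placement for a single variable, with a tiny hand-built diamond CFG as an illustrative example; this is the textbook approach, not any one of the article's specific algorithms.
```python
def place_phi(def_blocks, dominance_frontier):
    """def_blocks: blocks containing a definition of the variable.
    dominance_frontier: dict mapping block -> set of frontier blocks.
    Returns the set of blocks that need a phi-function for that variable."""
    phi_blocks = set()
    worklist = list(def_blocks)
    ever_on_worklist = set(def_blocks)
    while worklist:
        block = worklist.pop()
        for frontier_block in dominance_frontier.get(block, set()):
            if frontier_block not in phi_blocks:
                phi_blocks.add(frontier_block)
                # A phi-function is itself a definition, so it may force
                # further phi-functions downstream.
                if frontier_block not in ever_on_worklist:
                    ever_on_worklist.add(frontier_block)
                    worklist.append(frontier_block)
    return phi_blocks

if __name__ == "__main__":
    # Diamond CFG: entry -> {then, else} -> join, with 'join' in the
    # dominance frontier of both branches.
    df = {"entry": set(), "then": {"join"}, "else": {"join"}, "join": set()}
    print(place_phi({"then", "else"}, df))   # {'join'}
```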
{
"docid": "12b1f774967739ea12a1ddcfe43f2faf",
"text": "Herbal drug authentication is an important task in traditional medicine; however, it is challenged by the limitations of traditional authentication methods and the lack of trained experts. DNA barcoding is conspicuous in almost all areas of the biological sciences and has already been added to the British pharmacopeia and Chinese pharmacopeia for routine herbal drug authentication. However, DNA barcoding for the Korean pharmacopeia still requires significant improvements. Here, we present a DNA barcode reference library for herbal drugs in the Korean pharmacopeia and developed a species identification engine named KP-IDE to facilitate the adoption of this DNA reference library for the herbal drug authentication. Using taxonomy records, specimen records, sequence records, and reference records, KP-IDE can identify an unknown specimen. Currently, there are 6,777 taxonomy records, 1,054 specimen records, 30,744 sequence records (ITS2 and psbA-trnH) and 285 reference records. Moreover, 27 herbal drug materials were collected from the Seoul Yangnyeongsi herbal medicine market to give an example for real herbal drugs authentications. Our study demonstrates the prospects of the DNA barcode reference library for the Korean pharmacopeia and provides future directions for the use of DNA barcoding for authenticating herbal drugs listed in other modern pharmacopeias.",
"title": ""
},
{
"docid": "99c99f927c3c416ba8c01c15c0c2f28c",
"text": "Online Social Rating Networks (SRNs) such as Epinions and Flixter, allow users to form several implicit social networks, through their daily interactions like co-commenting on the same products, or similarly co-rating products. The majority of earlier work in Rating Prediction and Recommendation of products (e.g. Collaborative Filtering) mainly takes into account ratings of users on products. However, in SRNs users can also built their explicit social network by adding each other as friends. In this paper, we propose Social-Union, a method which combines similarity matrices derived from heterogeneous (unipartite and bipartite) explicit or implicit SRNs. Moreover, we propose an effective weighting strategy of SRNs influence based on their structured density. We also generalize our model for combining multiple social networks. We perform an extensive experimental comparison of the proposed method against existing rating prediction and product recommendation algorithms, using synthetic and two real data sets (Epinions and Flixter). Our experimental results show that our Social-Union algorithm is more effective in predicting rating and recommending products in SRNs.",
"title": ""
},
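The abstract above combines similarity matrices from several networks and weights each network by its density. Below is a minimal sketch of that idea: the density-based weighting formula is an assumption about the general approach, not the paper's exact strategy.
```python
import numpy as np

def combine_similarities(similarity_matrices):
    """Weight each network's similarity matrix by its (nonzero) density and sum them."""
    densities = np.array([np.count_nonzero(S) / S.size for S in similarity_matrices])
    weights = densities / densities.sum()        # denser networks contribute more
    combined = sum(w * S for w, S in zip(weights, similarity_matrices))
    return combined, weights

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    friends   = (rng.random((5, 5)) > 0.7).astype(float)   # sparse explicit network
    co_rating = rng.random((5, 5))                          # dense implicit network
    combined, weights = combine_similarities([friends, co_rating])
    print("weights:", weights.round(2))
    print(combined.round(2))
```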
{
"docid": "5d527ad4493860a8d96283a5c58c3979",
"text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.",
"title": ""
}
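The abstract above analyses alternating minimization for phase retrieval. Below is a minimal sketch for the real-valued Gaussian case y = |Ax|, alternating between a sign estimate and a least-squares solve, with a simple spectral initialisation; the resampling used in the paper's analysis and all tuning choices are omitted or assumed.
```python
import numpy as np

def altmin_phase(A, y, n_iter=100):
    m, n = A.shape
    # Simple spectral initialisation: leading eigenvector of (1/m) * A^T diag(y^2) A.
    M = (A.T * (y ** 2)) @ A / m
    _, eigvecs = np.linalg.eigh(M)
    x = eigvecs[:, -1] * np.sqrt(np.mean(y ** 2))
    for _ in range(n_iter):
        signs = np.sign(A @ x)                    # step 1: estimate the missing signs
        signs[signs == 0] = 1.0
        x, *_ = np.linalg.lstsq(A, signs * y, rcond=None)   # step 2: least squares
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, m = 20, 120                                # signal length, number of measurements
    x_true = rng.standard_normal(n)
    A = rng.standard_normal((m, n))
    y = np.abs(A @ x_true)
    x_hat = altmin_phase(A, y)
    # The global sign of x is unrecoverable, so compare up to a sign flip.
    err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
    print("relative error:", round(err / np.linalg.norm(x_true), 4))
```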
] |
scidocsrr
|
8a86788244d84c2b27191e6cd5a6570a
|
The first years in an L2-speaking environment: A comparison of Japanese children and adults learning American English
|
[
{
"docid": "1a8954d4cacde8eb4785a4192a3ed070",
"text": "This study examined the production and perception of English vowels by highly experienced native Italian speakers of English. The subjects were selected on the basis of the age at which they arrived in Canada and began to learn English, and how much they continued to use Italian. Vowel production accuracy was assessed through an intelligibility test in which native English-speaking listeners attempted to identify vowels spoken by the native Italian subjects. Vowel perception was assessed using a categorial discrimination test. The later in life the native Italian subjects began to learn English, the less accurately they produced and perceived English vowels. Neither of two groups of early Italian/English bilinguals differed significantly from native speakers of English either for production or perception. This finding is consistent with the hypothesis of the speech learning model [Flege, in Speech Perception and Linguistic Experience: Theoretical and Methodological Issues (York, Timonium, MD, 1995)] that early bilinguals establish new categories for vowels found in the second language (L2). The significant correlation observed to exist between the measures of L2 vowel production and perception is consistent with another hypothesis of the speech learning model, viz., that the accuracy with which L2 vowels are produced is limited by how accurately they are perceived.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "fe3afe69ec27189400e65e8bdfc5bf0b",
"text": "speech learning changes over the life span and to explain why \"earlier is better\" as far as learning to pronounce a second language (L2) is concerned. An assumption we make is that the phonetic systems used in the production and perception of vowels and consonants remain adaptive over the life span, and that phonetic systems reorganize in response to sounds encountered in an L2 through the addition of new phonetic categories, or through the modification of old ones. The chapter is organized in the following way. Several general hypotheses concerning the cause of foreign accent in L2 speech production are summarized in the introductory section. In the next section, a model of L2 speech learning that aims to account for age-related changes in L2 pronunciation is presented. The next three sections present summaries of empirical research dealing with the production and perception of L2 vowels, word-initial consonants, and word-final consonants. The final section discusses questions of general theoretical interest, with special attention to a featural (as opposed to a segmental) level of analysis. Although nonsegmental (Le., prosodic) dimensions are an important source of foreign accent, the present chapter focuses on phoneme-sized units of speech. Although many different languages are learned as an L2, the focus is on the acquisition of English.",
"title": ""
}
] |
[
{
"docid": "e9838d3c33d19bdd20a001864a878757",
"text": "FPGAs are increasingly popular as application-specific accelerators because they lead to a good balance between flexibility and energy efficiency, compared to CPUs and ASICs. However, the long routing time imposes a barrier on FPGA computing, which significantly hinders the design productivity. Existing attempts of parallelizing the FPGA routing either do not fully exploit the parallelism or suffer from an excessive quality loss. Massive parallelism using GPUs has the potential to solve this issue but faces non-trivial challenges.\n To cope with these challenges, this work presents Corolla, a GPU-accelerated FPGA routing method. Corolla enables applying the GPU-friendly shortest path algorithm in FPGA routing, leveraging the idea of problem size reduction by limiting the search in routing subgraphs. We maintain the convergence after problem size reduction using the dynamic expansion of the routing resource subgraphs. In addition, Corolla explores the fine-grained single-net parallelism and proposes a hybrid approach to combine the static and dynamic parallelism on GPU. To explore the coarse-grained multi-net parallelism, Corolla proposes an effective method to parallelize mutli-net routing while preserving the equivalent routing results as the original single-net routing. Experimental results show that Corolla achieves an average of 18.72x speedup on GPU with a tolerable loss in the routing quality and sustains a scalable speedup on large-scale routing graphs. To our knowledge, this is the first work to demonstrate the effectiveness of GPU-accelerated FPGA routing.",
"title": ""
},
{
"docid": "65d6b6c8316ced0328f367697ad8606e",
"text": "Smart devices equipped with powerful sensing, computing and networking capabilities have proliferated lately, ranging from popular smartphones and tablets to Internet appliances, smart TVs, and others that will soon appear (e.g., watches, glasses, and clothes). One key feature of such devices is their ability to incorporate third-party apps from a variety of markets. This poses strong security and privacy issues to users and infrastructure operators, particularly through software of malicious (or dubious) nature that can easily get access to the services provided by the device and collect sensory data and personal information. Malware in current smart devices -mostly smartphones and tablets- have rocketed in the last few years, in some cases supported by sophisticated techniques purposely designed to overcome security architectures currently in use by such devices. Even though important advances have been made on malware detection in traditional personal computers during the last decades, adopting and adapting those techniques to smart devices is a challenging problem. For example, power consumption is one major constraint that makes unaffordable to run traditional detection engines on the device, while externalized (i.e., cloud-based) techniques rise many privacy concerns. This article examines the problem of malware in smart devices and recent progress made in detection techniques. We first present a detailed analysis on how malware has evolved over the last years for the most popular platforms. We identify exhibited behaviors, pursued goals, infection and distribution strategies, etc. and provide numerous examples through case studies of the most relevant specimens. We next survey, classify and discuss efforts made on detecting both malware and other suspicious software (grayware), concentrating on the 20 most relevant techniques proposed between 2010 and 2013. Based on the conclusions extracted from this study, we finally provide constructive discussion on open research problems and areas where we believe that more work is needed.",
"title": ""
},
{
"docid": "67e7b542e876c213540c747934fd3557",
"text": "This paper presents preliminary work on musical instruments ontology design, and investigates heterogeneity and limitations in existing instrument classification schemes. Numerous research to date aims at representing information about musical instruments. The works we examined are based on the well known Hornbostel and Sach’s classification scheme. We developed representations using the Ontology Web Language (OWL), and compared terminological and conceptual heterogeneity using SPARQL queries. We found evidence to support that traditional designs based on taxonomy trees lead to ill-defined knowledge representation, especially in the context of an ontology for the Semantic Web. In order to overcome this issue, it is desirable to have an instrument ontology that exhibits a semantically rich structure.",
"title": ""
},
{
"docid": "a3345ad4a18be52b478d3e75cf05a371",
"text": "In the course of the routine use of NMR as an aid for organic chemistry, a day-to-day problem is the identification of signals deriving from common contaminants (water, solvents, stabilizers, oils) in less-than-analytically-pure samples. This data may be available in the literature, but the time involved in searching for it may be considerable. Another issue is the concentration dependence of chemical shifts (especially 1H); results obtained two or three decades ago usually refer to much more concentrated samples, and run at lower magnetic fields, than today’s practice. We therefore decided to collect 1H and 13C chemical shifts of what are, in our experience, the most popular “extra peaks” in a variety of commonly used NMR solvents, in the hope that this will be of assistance to the practicing chemist.",
"title": ""
},
{
"docid": "8621332351bd2af6148a891d183f3eae",
"text": "Recent researches on neural network have shown signicant advantage in machine learning over traditional algorithms based on handcraed features and models. Neural network is now widely adopted in regions like image, speech and video recognition. But the high computation and storage complexity of neural network inference poses great diculty on its application. CPU platforms are hard to oer enough computation capacity. GPU platforms are the rst choice for neural network process because of its high computation capacity and easy to use development frameworks. On the other hand, FPGA-based neural network inference accelerator is becoming a research topic. With specically designed hardware, FPGA is the next possible solution to surpass GPU in speed and energy eciency. Various FPGA-based accelerator designs have been proposed with soware and hardware optimization techniques to achieve high speed and energy eciency. In this paper, we give an overview of previous work on neural network inference accelerators based on FPGA and summarize the main techniques used. An investigation from soware to hardware, from circuit level to system level is carried out to complete analysis of FPGA-based neural network inference accelerator design and serves as a guide to future work.",
"title": ""
},
{
"docid": "16dd74e72700ce82502f75054b5c3fe6",
"text": "Multiple access (MA) technology is of most importance for 5G. Non-orthogonal multiple access (NOMA) utilizing power domain and advanced receiver has been considered as a promising candidate MA technology recently. In this paper, the NOMA concept is presented toward future enhancements of spectrum efficiency in lower frequency bands for downlink of 5G system. Key component technologies of NOMA are presented and discussed including multiuser transmission power allocation, scheduling algorithm, receiver design and combination of NOMA with multi-antenna technology. The performance gains of NOMA are evaluated by system-level simulations with very practical assumptions. Under multiple configurations and setups, the achievable system-level gains of NOMA are shown promising even when practical considerations were taken into account.",
"title": ""
},
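The abstract above mentions multiuser power allocation and advanced (SIC) receivers as key NOMA components. Below is a minimal sketch of two-user downlink power-domain NOMA rate computation; the channel gains, power split and noise level are illustrative numbers, and the OMA baseline is a simple half-bandwidth comparison, not the paper's simulation setup.
```python
import numpy as np

def noma_rates(g_near, g_far, power_near=0.2, power_far=0.8, noise=1e-2):
    """Achievable rates (bit/s/Hz) for a two-user power-domain NOMA downlink."""
    # Far user decodes its own signal, treating the near user's signal as interference.
    sinr_far = (power_far * g_far) / (power_near * g_far + noise)
    # Near user first cancels the far user's signal (SIC), then decodes interference-free.
    sinr_near = (power_near * g_near) / noise
    return np.log2(1 + sinr_near), np.log2(1 + sinr_far)

if __name__ == "__main__":
    r_near, r_far = noma_rates(g_near=1.0, g_far=0.05)
    print("NOMA near user: %.2f bit/s/Hz, far user: %.2f bit/s/Hz" % (r_near, r_far))
    # Orthogonal baseline for comparison: each user gets half the bandwidth, full power.
    oma_near = 0.5 * np.log2(1 + 1.0 / 1e-2)
    oma_far = 0.5 * np.log2(1 + 0.05 / 1e-2)
    print("OMA baseline: %.2f / %.2f bit/s/Hz" % (oma_near, oma_far))
```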
{
"docid": "1c9c3b03db8c453897cf9598ce794b34",
"text": "Contents Introduction 2 Chapter I. The geometry of curves on S 2 3 § 1. The elementary geometry of smooth curves and wavefronts 3 § 2. Contact manifolds, their Legendrian submanifolds and their fronts 9 § 3. Dual curves and derivative curves of fronts 10 § 4. The caustic and the derivatives of fronts 12 Chapter II. Quaternions and the triality theorem 13 § 5. Quaternions and the standard contact structures on the sphere S 3 13 § 6. Quaternions and contact elements of the sphere 5? 15 § 7. The action of quaternions on the contact elements of the sphere 5| 18 § 8. The action of right shifts on left-invariant fields 20 § 9. The duality of j-fronts and fc-fronts of «-Legendrian curves 20 Chapter III. Quaternions and curvatures 22 § 10. The spherical radii of curvature of fronts 22 § 11. Quaternions and caustics 23 § 12. The geodesic curvature of the derivative curve 24 § 13. The derivative of a small curve and the derivative of curvature of the curve 28 Chapter IV. The characteristic chain and spherical indices of a hyper-surface 30 § 14. The characteristic 2-chain 31 § 15. The indices of hypersurfaces on a sphere 33 § 16. Indices as linking coefficients 35 § 17. The indices of hypersurfaces on a sphere as intersection indices 36 § 18. Proofs of the index theorems 38 § 19. The indices of fronts of Legendrian submanifolds on an even-dimensional sphere 40 Chapter V. Exact Lagrangian curves on a sphere and their Maslov indices 44 § 20. Exact Lagrangian curves and their Legendrian lifts 45 V. I. Arnol'd § 21. The integral of a horizontal form as the area of the characteristic chain 48 §22. A horizontal contact form as a Levi-Civita connection and a generalized Gauss-Bonnet formula 49 § 23. Proof of the formula for the Maslov index 52 § 24. The area-length duality 54 §25. The parities of fronts and caustics 56 Chapter VI. The Bennequin invariant and the spherical invariant J + 57 § 26. The spherical invariant J + 58 § 27. The topological meaning of the invariant SJ + 59 Chapter VII. Pseudo-functions 60 §28. The quasi-functions of Chekanov 61 § 29. From quasi-functions on the cylinder to pseudo-functions on the sphere, and conversely 62 § 30. Conjectures concerning pseudo-functions 63 §31. Space curves and Sturm's theorem 66 Bibliography 68",
"title": ""
},
{
"docid": "ec06587bff3d5c768ab9083bd480a875",
"text": "Wireless sensor networks are an emerging technology for low-cost, unattended monitoring of a wide range of environments, and their importance has been enforced by the recent delivery of the IEEE 802.15.4 standard for the physical and MAC layers and the forthcoming Zigbee standard for the network and application layers. The fast progress of research on energy efficiency, networking, data management and security in wireless sensor networks, and the need to compare with the solutions adopted in the standards motivates the need for a survey on this field.",
"title": ""
},
{
"docid": "6c03036f1b5af68fbaa9f516f850f94f",
"text": "Although initially introduced and studied in the late 1960s and early 1970s, statistical methods of Markov source or hidden Markov modeling have become increasingly popular in the last several years. There are two strong reasons why this has occurred. First the models are very rich in mathematical structure and hence can form the theoretical basis for use in a wide range of applications. Second the models, when applied properly, work very well in practice for several important applications. In this paper we attempt to carefully and methodically review the theoretical aspects of this type of statistical modeling and show how they have been applied to selected problems in machine recognition of speech.",
"title": ""
},
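The abstract above reviews hidden Markov modeling for speech recognition. Below is a minimal sketch of the forward algorithm, the core recursion for evaluating the likelihood of an observation sequence given a model (A, B, pi); the two-state toy model and observation sequence are illustrative only.
```python
import numpy as np

def forward(obs, A, B, pi):
    """obs: observation indices; A: state transitions; B: emission probs; pi: initial."""
    alpha = pi * B[:, obs[0]]                     # initialisation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]             # induction step
    return alpha.sum()                            # termination: P(obs | model)

if __name__ == "__main__":
    A = np.array([[0.7, 0.3],
                  [0.4, 0.6]])                    # state transition matrix
    B = np.array([[0.9, 0.1],
                  [0.2, 0.8]])                    # emission probabilities
    pi = np.array([0.5, 0.5])                     # initial state distribution
    print("P(obs):", round(forward([0, 1, 0], A, B, pi), 5))
```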
{
"docid": "dec1296463199214ef67c1c9f5b848be",
"text": "The scope of this second edition of the introduction to fundamental distributed programming abstractions has been extended to cover 'Byzantine fault tolerance'. It includes algorithms to Whether rgui and function or matrix. Yes no plotting commands the same dim. For scenarios such as is in which are available packages still! The remote endpoint the same model second example in early. Variables are omitted the one way, datagram transports inherently support which used by swayne cook. The sense if you do this is somewhat. Under which they were specified by declaring the vector may make. It as not be digitally signed like the binding configuration. The states and unordered factors the printing of either rows. In the appropriate interpreter if and that locale. In this and can be ignored, for has. Values are used instead of choice the probability density. There are recognized read only last two http the details see below specify. One mode namely this is used. Look at this will contain a vector of multiple. Wilks you will look at this is quite hard. The character expansion are copied when character. For fitting function takes an expression, so called the object. However a parameter data analysis and, rbind or stem and qqplot. The result is in power convenience and the outer true as many. Functions can reduce the requester. In that are vectors or, data into a figure five values for linear regressions. Like structures are the language and stderr would fit hard to rules. Messages for reliable session concretely, ws rm standard bindings the device will launch a single. Consider the users note that device this. Alternatively ls can remove directory say consisting. The common example it has gone into groups ws rm support whenever you. But the previous commands can be used graphical parameters to specified. Also forms of filepaths and all the receiver. For statistical methods require some rather inflexible.",
"title": ""
},
{
"docid": "e624a94c8440c1a8f318f5a56c353632",
"text": "“Neural coding” is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the implied communication channel consists of both the experimental and biological system. Thus, the terms “neural code” are used inappropriately when “neuroexperimental code” would be more accurate, although less insightful. Second, the brain cannot be presumed to decode neural messages into objective properties of the world, since it never gets to observe those properties. To avoid dualism, codes must relate not to external properties but to internal sensorimotor models. Because this requires structured representations, neural assemblies cannot be the basis of such codes. Third, a message is informative to the extent that the reader understands its language. But the neural code is private to the encoder since only the message is communicated: each neuron speaks its own language. It follows that in the neural coding metaphor, the brain is a Tower of Babel. Finally, the relation between input signals and actions is circular; that inputs do not preexist to outputs makes the coding paradigm problematic. I conclude that the view that spikes are messages is generally not tenable. An alternative proposition is that action potentials are actions on other neurons and the environment, and neurons interact with each other rather than exchange messages. . CC-BY 4.0 International license not peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was . http://dx.doi.org/10.1101/168237 doi: bioRxiv preprint first posted online Jul. 27, 2017;",
"title": ""
},
{
"docid": "9096a4dac61f8a87da4f5cbfca5899a8",
"text": "OBJECTIVE\nTo evaluate the CT findings of ruptured corpus luteal cysts.\n\n\nMATERIALS AND METHODS\nSix patients with a surgically proven ruptured corpus luteal cyst were included in this series. The prospective CT findings were retrospectively analyzed in terms of the size and shape of the cyst, the thickness and enhancement pattern of its wall, the attenuation of its contents, and peritoneal fluid.\n\n\nRESULTS\nThe mean diameter of the cysts was 2.8 (range, 1.5-4.8) cm; three were round and three were oval. The mean thickness of the cyst wall was 4.7 (range, 1-10) mm; in all six cases it showed strong enhancement, and in three was discontinuous. In five of six cases, the cystic contents showed high attenuation. Peritoneal fluid was present in all cases, and its attenuation was higher, especially around the uterus and adnexa, than that of urine present in the bladder.\n\n\nCONCLUSION\nIn a woman in whom CT reveals the presence of an ovarian cyst with an enhancing rim and highly attenuated contents, as well as highly attenuated peritoneal fluid, a ruptured corpus luteal cyst should be suspected. Other possible evidence of this is focal interruption of the cyst wall and the presence of peritoneal fluid around the adnexa.",
"title": ""
},
{
"docid": "640fd96e02d8aa69be488323f77b40ba",
"text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT device are also provided.",
"title": ""
},
{
"docid": "0d57c3d4067d94f867e7e06becd48519",
"text": "This thesis investigates the evolutionary plausibility of the Minimalist Program. Is such a theory of language reasonable given the assumption that the human linguistic capacity has been subject to the usual forces and processes of evolution? More generally, this thesis is a comment on the manner in which theories of language can and should be constrained. What are the constraints that must be taken into account when constructing a theory of language? These questions are addressed by applying evidence gathered in evolutionary biology to data from linguistics. The development of generative syntactic theorising in the late 20th century has led to a much redesigned conception of the human language faculty. The driving question ‘why is language the way it is?’ has prompted assumptions of simplicity, perfection, optimality, and economy for language; a minimal system operating in an economic fashion to fit into the larger cognitive architecture in a perfect manner. Studies in evolutionary linguistics, on the other hand, have been keen to demonstrate that language is complex, redundant, and adaptive, Pinker & Bloom’s (1990) seminal paper being perhaps the prime example of this. The question is whether these opposing views can be married in any way. Interdisciplinary evidence is brought to bear on this problem, demonstrating that any reconciliation is impossible. Evolutionary biology shows that perfection, simplicity, and economy do not arise in typically evolving systems, yet the Minimalist Program attaches these characteristics to language. It shows that evolvable systems exhibit degeneracy, modularity, and robustness, yet the Minimalist Program must rule these features out for language. It shows that evolution exhibits a trend towards complexity, yet the Minimalist Program excludes such a depiction of language.",
"title": ""
},
{
"docid": "b83e537a2c8dcd24b096005ef0cb3897",
"text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.",
"title": ""
},
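The abstract above trains speaker embeddings with a triplet loss based on cosine similarity. Below is a minimal sketch of that loss for a single (anchor, positive, negative) triple; the embeddings and margin are illustrative, and no network or training loop is shown.
```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_cosine_loss(anchor, positive, negative, margin=0.1):
    sim_pos = cosine_similarity(anchor, positive)   # same speaker
    sim_neg = cosine_similarity(anchor, negative)   # different speaker
    # Hinge: penalise triplets where the negative is not at least `margin` less similar.
    return max(0.0, sim_neg - sim_pos + margin)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    anchor = rng.standard_normal(64)
    positive = anchor + 0.1 * rng.standard_normal(64)   # close to the anchor
    negative = rng.standard_normal(64)                   # unrelated utterance
    print("loss:", round(triplet_cosine_loss(anchor, positive, negative), 4))
```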
{
"docid": "a00cc13a716439c75a5b785407b02812",
"text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.",
"title": ""
},
{
"docid": "4c4a28724bf847de8e57765f869c4f3f",
"text": "Emotional sensitivity, emotion regulation and impulsivity are fundamental topics in research of borderline personality disorder (BPD). Studies using fMRI examining the neural correlates concerning these topics is growing and has just begun understanding the underlying neural correlates in BPD. However, there are strong similarities but also important differences in results of different studies. It is therefore important to know in more detail what these differences are and how we should interpret these. In present review a critical light is shed on the fMRI studies examining emotional sensitivity, emotion regulation and impulsivity in BPD patients. First an outline of the methodology and the results of the studies will be given. Thereafter important issues that remained unanswered and topics to improve future research are discussed. Future research should take into account the limited power of previous studies and focus more on BPD specificity with regard to time course responses, different regulation strategies, manipulation of self-regulation, medication use, a wider range of stimuli, gender effects and the inclusion of a clinical control group.",
"title": ""
},
{
"docid": "6fa6a26b351c45ac5f33f565bc9c01e8",
"text": "Transfer learning, or inductive transfer, refers to the transfer of knowledge from a source task to a target task. In the context of convolutional neural networks (CNNs), transfer learning can be implemented by transplanting the learned feature layers from one CNN (derived from the source task) to initialize another (for the target task). Previous research has shown that the choice of the source CNN impacts the performance of the target task. In the current literature, there is no principled way for selecting a source CNN for a given target task despite the increasing availability of pre-trained source CNNs. In this paper we investigate the possibility of automatically ranking source CNNs prior to utilizing them for a target task. In particular, we present an information theoretic framework to understand the source-target relationship and use this as a basis to derive an approach to automatically rank source CNNs in an efficient, zero-shot manner. The practical utility of the approach is thoroughly evaluated using the PlacesMIT dataset, MNIST dataset and a real-world MRI database. Experimental results demonstrate the efficacy of the proposed ranking method for transfer learning.",
"title": ""
},
{
"docid": "6230a799c42909009835e99884cc7319",
"text": "This study investigates relationships between privacy concerns, uncertainty reduction behaviors, and self-disclosure among online dating participants, drawing on uncertainty reduction theory and the warranting principle. The authors propose a conceptual model integrating privacy concerns, self-efficacy, and Internet experience with uncertainty reduction strategies and amount of self-disclosure and then test this model on a nationwide sample of online dating participants (N = 562). The study findings confirm that the frequency of use of uncertainty reduction strategies is predicted by three sets of online dating concerns—personal security, misrepresentation, and recognition—as well as selfefficacy in online dating. Furthermore, the frequency of uncertainty reduction strategies mediates the relationship between these variables and amount of self-disclosure with potential online dating partners. The authors explore the theoretical implications of these findings for our understanding of uncertainty reduction, warranting, and self-disclosure processes in online contexts.",
"title": ""
},
{
"docid": "cc3d14ebbba039241634d45dad8bfb03",
"text": "Digital humanities scholars strongly need a corpus exploration method that provides topics easier to interpret than standard LDA topic models. To move towards this goal, here we propose a combination of two techniques, called Entity Linking and Labeled LDA. Our method identifies in an ontology a series of descriptive labels for each document in a corpus. Then it generates a specific topic for each label. Having a direct relation between topics and labels makes interpretation easier; using an ontology as background knowledge limits label ambiguity. As our topics are described with a limited number of clear-cut labels, they promote interpretability and support the quantitative evaluation of the obtained results. We illustrate the potential of the approach by applying it to three datasets, namely the transcription of speeches from the European Parliament fifth mandate, the Enron Corpus and the Hillary Clinton Email Dataset. While some of these resources have already been adopted by the natural language processing community, they still hold a large potential for humanities scholars, part of which could be exploited in studies that will adopt the fine-grained exploration method presented in this paper.",
"title": ""
}
] |
scidocsrr
|
2666c4c32aaf0714fee99f0a31981ca8
|
Learning Named Entity Recognition from Wikipedia
|
[
{
"docid": "d95ee6cd088919de0df4087f5413eda5",
"text": "Wikipedia provides a knowledge base for computing word relatedness in a more structured fashion than a search engine and with more coverage than WordNet. In this work we present experiments on using Wikipedia for computing semantic relatedness and compare it to WordNet on various benchmarking datasets. Existing relatedness measures perform better using Wikipedia than a baseline given by Google counts, and we show that Wikipedia outperforms WordNet when applied to the largest available dataset designed for that purpose. The best results on this dataset are obtained by integrating Google, WordNet and Wikipedia based measures. We also show that including Wikipedia improves the performance of an NLP application processing naturally occurring texts.",
"title": ""
}
] |
[
{
"docid": "e5020601a6e4b2c07868ffc0f84498ae",
"text": "We describe a combined nonlinear acoustic echo cancellation and residual echo suppression system. The echo canceler uses parallel Hammerstein branches consisting of fixed nonlinear basis functions and linear adaptive filters. The residual echo suppressor uses an Artificial Neural Network for modeling of the residual echo spectrum from spectral features computed from the far-end signal. We show that modeling nonlinear effects both in the echo canceler and in the echo suppressor leads to an increased performance of the combined system.",
"title": ""
},
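The abstract above uses parallel Hammerstein branches, fixed nonlinearities followed by linear adaptive filters, for nonlinear echo cancellation. Below is a minimal sketch of that structure with simple power basis functions and an NLMS update; the basis choice, filter length, step size and the synthetic echo path are illustrative assumptions, not the paper's configuration, and the residual echo suppressor is not shown.
```python
import numpy as np

def hammerstein_aec(x, d, n_taps=16, mu=0.5, eps=1e-6):
    """x: far-end signal, d: microphone signal; returns the echo-reduced error signal."""
    basis = [lambda v: v, lambda v: v ** 2, lambda v: v ** 3]   # fixed nonlinearities
    W = np.zeros((len(basis), n_taps))                          # one FIR per branch
    e = np.zeros_like(d)
    for n in range(n_taps, len(x)):
        frame = x[n - n_taps + 1:n + 1][::-1]                   # x[n], x[n-1], ...
        U = np.array([f(frame) for f in basis])                 # branch input vectors
        y = np.sum(W * U)                                       # echo estimate
        e[n] = d[n] - y
        W += mu * e[n] * U / (np.sum(U * U) + eps)              # NLMS update
    return e

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    x = rng.standard_normal(5000)                               # far-end speech stand-in
    echo_path = 0.3 * rng.standard_normal(16)
    # Loudspeaker distortion modelled as a mild cubic nonlinearity before the room.
    d = np.convolve(x + 0.2 * x ** 3, echo_path, mode="full")[:len(x)]
    e = hammerstein_aec(x, d)
    print("residual/microphone power: %.3f"
          % (np.mean(e[1000:] ** 2) / np.mean(d[1000:] ** 2)))
```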
{
"docid": "7193757d4791a170ad3286288657c6d9",
"text": "The Facebook News Feed prioritizes posts for display by ranking them more prominently in the News Feed, based on users’ past interactions with the system. This study investigated constraints imposed on social interactions by the algorithm, by triggering participants’ awareness of “missed posts” in their Friends’ Timelines that they did not remember seeing before. If the algorithm prioritizes posts from people that users feel closer to and want to stay in touch with, participants should be less likely to report missed posts from close Friends. However, the results showed that relationship closeness had no effect on the likelihood of noticing a missed post, after controlling for how many Facebook Friends participants had and the accuracy of participants’ memories for their Friends’ Facebook activity. Also, missed posts from close Friends were more surprising, even when participants believed that the actions of the system caused the missed posts, indicating that these instances represent participants’ unmet expectations for the behavior of their News Feeds. Because Facebook posts present opportunities for feedback important for social support and maintaining social ties, this could indicate bias in the way the algorithm promotes content that could affect users’ ability to maintain relationships on Facebook. These findings have implications for approaches to improve user control and increase transparency in systems that use algorithmic filtering.",
"title": ""
},
{
"docid": "43a84d7fc14e52e93ab2df5db6660a2b",
"text": "The advent of regenerative medicine has brought us the opportunity to regenerate, modify and restore human organs function. Stem cells, a key resource in regenerative medicine, are defined as clonogenic, self-renewing, progenitor cells that can generate into one or more specialized cell types. Stem cells have been classified into three main groups: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs) and adult/postnatal stem cells (ASCs). The present review focused the attention on ASCs, which have been identified in many perioral tissues such as dental pulp, periodontal ligament, follicle, gingival, alveolar bone and papilla. Human dental pulp stem cells (hDPSCs) are ectodermal-derived stem cells, originating from migrating neural crest cells and possess mesenchymal stem cell properties. During last decade, hDPSCs have received extensive attention in the field of tissue engineering and regenerative medicine due to their accessibility and ability to differentiate in several cell phenotypes. In this review, we have carefully described the potential of hDPSCs to differentiate into odontoblasts, osteocytes/osteoblasts, adipocytes, chondrocytes and neural cells.",
"title": ""
},
{
"docid": "453f41a114c7ea73289df070bf31f3ff",
"text": "In this work, we present a new algorithm and benchmark dataset for stain separation in histology images. Histology is a critical and ubiquitous task in medical practice and research, serving as a gold standard of diagnosis for many diseases. Automating routine histology analysis tasks could reduce health care costs and improve diagnostic accuracy. One challenge in automation is that histology slides vary in their stain intensity and color; we therefore seek a digital method to normalize the appearance of histology images. As histology slides often have multiple stains on them that must be normalized independently, stain separation must occur before normalization. We propose a new digital stain separation method for the universally-used hematoxylin and eosin stain; this method improves on the state-of-the-art by adjusting the contrast of its eosin-only estimate and including a notion of stain interaction. To validate this method, we have collected a new benchmark dataset via chemical destaining containing ground truth images for stain separation, which we release publicly. Our experiments show that our method achieves more accurate stain separation than two comparison methods and that this improvement in separation accuracy leads to improved normalization.",
"title": ""
},
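The abstract above addresses separating hematoxylin and eosin stains. Below is a minimal sketch of the classical colour-deconvolution baseline in optical-density space, which methods like the one described aim to improve on; the reference H&E stain vectors and the random patch are purely illustrative, and none of the paper's contrast adjustment or stain-interaction modelling is included.
```python
import numpy as np

def separate_stains(rgb, stain_matrix):
    """rgb: float image in (0, 1], shape (H, W, 3); stain_matrix: rows are unit RGB
    optical-density vectors, one per stain. Returns per-pixel stain concentrations."""
    od = -np.log(np.clip(rgb, 1e-6, 1.0))                 # Beer-Lambert: OD = -log(I/I0)
    pinv = np.linalg.pinv(stain_matrix)                   # least-squares unmixing
    concentrations = od.reshape(-1, 3) @ pinv
    return concentrations.reshape(rgb.shape[:2] + (stain_matrix.shape[0],))

if __name__ == "__main__":
    # Commonly quoted H&E optical-density vectors (hematoxylin, eosin), normalised.
    he = np.array([[0.65, 0.70, 0.29],
                   [0.07, 0.99, 0.11]])
    he = he / np.linalg.norm(he, axis=1, keepdims=True)
    rng = np.random.default_rng(8)
    img = np.clip(rng.random((4, 4, 3)), 0.05, 1.0)       # stand-in for a tissue patch
    C = separate_stains(img, he)
    print("hematoxylin channel:\n", C[..., 0].round(2))
```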
{
"docid": "140d81bc2d9d125ed43946ddee94d2e4",
"text": "Cluster analysis plays an important role in decision-making process for many knowledge-based systems. There exist a wide variety of different approaches for clustering applications including the heuristic techniques, probabilistic models, and traditional hierarchical algorithms. In this paper, a novel heuristic approach based on big bang–big crunch algorithm is proposed for clustering problems. The proposed method not only takes advantage of heuristic nature to alleviate typical clustering algorithms such as k-means, but it also benefits from the memory-based scheme as compared to its similar heuristic techniques. Furthermore, the performance of the proposed algorithm is investigated based on several benchmark test functions as well as on the well-known datasets. The experimental results show the significant superiority of the proposed method over the similar algorithms.",
"title": ""
},
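The abstract above applies the big bang–big crunch heuristic to clustering. Below is a minimal sketch of that idea: candidate centroid sets are scattered around a centre of mass (big bang), then contracted to a fitness-weighted average (big crunch), with a shrinking search radius. The population size, shrink schedule, inverse-fitness weighting and toy data are illustrative assumptions, not the paper's exact algorithm.
```python
import numpy as np

def sse(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)                     # within-cluster sum of squares

def bbbc_cluster(X, k=2, pop=30, n_iter=40, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    centre = rng.uniform(lo, hi, size=(k, X.shape[1]))    # initial centre of mass
    best, best_fit = centre, sse(centre, X)
    for it in range(1, n_iter + 1):
        spread = (hi - lo) / it                           # shrinking search radius
        # Big bang: scatter candidate centroid sets around the current centre of mass.
        candidates = centre + rng.normal(scale=spread, size=(pop, k, X.shape[1]))
        fitness = np.array([sse(c, X) for c in candidates])
        # Big crunch: centre of mass weighted by inverse fitness (better = heavier).
        weights = 1.0 / (fitness + 1e-12)
        centre = np.tensordot(weights, candidates, axes=1) / weights.sum()
        i = fitness.argmin()
        if fitness[i] < best_fit:
            best, best_fit = candidates[i], fitness[i]
    return best, best_fit

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)), rng.normal([3, 3], 0.3, (50, 2))])
    centroids, fit = bbbc_cluster(X, k=2)
    print("centroids:\n", centroids.round(2), "\nSSE:", round(fit, 2))
```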
{
"docid": "d8761988a5ea8af6617410acd2c38709",
"text": "Cyberbullying is a new form of violence that is expressed through electronic media and has given rise to concern for parents, educators and researchers. In this paper, an association between cyberbullying and adolescent mental health will be assessed through a systematic review of two databases: PubMed and Virtual Health Library (BVS). The prevalence of cyberbullying ranged from 6.5% to 35.4%. Previous or current experiences of traditional bullying were associated with victims and perpetrators of cyberbullying. Daily use of three or more hours of Internet, web camera, text messages, posting personal information and harassing others online were associated with cyberbullying. Cybervictims and cyberbullies had more emotional and psychosomatic problems, social difficulties and did not feel safe and cared for in school. Cyberbullying was associated with moderate to severe depressive symptoms, substance use, ideation and suicide attempts. Health professionals should be aware of the violent nature of interactions occurring in the virtual environment and its harm to the mental health of adolescents.",
"title": ""
},
{
"docid": "f85a890925cd52411775f76430549dde",
"text": "This article examines the surgical techniques of rhinoplasty in relation to aesthetic considerations of various ethnic groups. Rhinoplasty in general is challenging, particularly in the ethnic population. When considering rhinoplasty in ethnic patients one must determine their aesthetic goals, which in many cases might deviate from the so-called norm of the \"North European nose.\" An experienced rhinoplastic surgeon should be able to navigate his or her way through the nuances of the various ethnic subsets. Keeping this in mind and following the established tenets in rhinoplasty, one can expect a pleasing and congruous nose without radically violating ethnicity.",
"title": ""
},
{
"docid": "f25c0b1fef38b7322197d61dd5dcac41",
"text": "Hepatocellular carcinoma (HCC) is one of the most common malignancies worldwide and one of the few malignancies with an increasing incidence in the USA. While the relationship between HCC and its inciting risk factors (e.g., hepatitis B, hepatitis C and alcohol liver disease) is well defined, driving genetic alterations are still yet to be identified. Clinically, HCC tends to be hypervascular and, for that reason, transarterial chemoembolization has proven to be effective in managing many patients with localized disease. More recently, angiogenesis has been targeted effectively with pharmacologic strategies, including monoclonal antibodies against VEGF and the VEGF receptor, as well as small-molecule kinase inhibitors of the VEGF receptor. Targeting angiogenesis with these approaches has been validated in several different solid tumors since the initial approval of bevacizumab for advanced colon cancer in 2004. In HCC, only sorafenib has been shown to extend survival in patients with advanced HCC and has opened the door for other anti-angiogenic strategies. Here, we will review the data supporting the targeting of the VEGF axis in HCC and the preclinical and early clinical development of bevacizumab.",
"title": ""
},
{
"docid": "65f06fff6bc896bfabc8ad22eda486f4",
"text": "Emotions can be evoked in humans by images. Most previous works on image emotion analysis mainly used the elements-of-art-based low-level visual features. However, these features are vulnerable and not invariant to the different arrangements of elements. In this paper, we investigate the concept of principles-of-art and its influence on image emotions. Principles-of-art-based emotion features (PAEF) are extracted to classify and score image emotions for understanding the relationship between artistic principles and emotions. PAEF are the unified combination of representation features derived from different principles, including balance, emphasis, harmony, variety, gradation, and movement. Experiments on the International Affective Picture System (IAPS), a set of artistic photography and a set of peer rated abstract paintings, demonstrate the superiority of PAEF for affective image classification and regression (with about 5% improvement on classification accuracy and 0.2 decrease in mean squared error), as compared to the state-of-the-art approaches. We then utilize PAEF to analyze the emotions of master paintings, with promising results.",
"title": ""
},
{
"docid": "a5f78c3708a808fd39c4ced6152b30b8",
"text": "Building ontology for wireless network intrusion detection is an emerging method for the purpose of achieving high accuracy, comprehensive coverage, self-organization and flexibility for network security. In this paper, we leverage the power of Natural Language Processing (NLP) and Crowdsourcing for this purpose by constructing lightweight semi-automatic ontology learning framework which aims at developing a semantic-based solution-oriented intrusion detection knowledge map using documents from Scopus. Our proposed framework uses NLP as its automatic component and Crowdsourcing is applied for the semi part. The main intention of applying both NLP and Crowdsourcing is to develop a semi-automatic ontology learning method in which NLP is used to extract and connect useful concepts while in uncertain cases human power is leveraged for verification. This heuristic method shows a theoretical contribution in terms of lightweight and timesaving ontology learning model as well as practical value by providing solutions for detecting different types of intrusions.",
"title": ""
},
{
"docid": "e48dae70582d949a60a5f6b5b05117a7",
"text": "Background: Multiple-Valued Logic (MVL) is the non-binary-valued system, in which more than two levels of information content are available, i.e., L>2. In modern technologies, the dual level binary logic circuits have normally been used. However, these suffer from several significant issues such as the interconnection considerations including the parasitics, area and power dissipation. The MVL circuits have been proved to be consisting of reduced circuitry and increased efficiency in terms of higher utilization of the circuit resources through multiple levels of voltage. Innumerable algorithms have been developed for designing such MVL circuits. Extended form is one of the algebraic techniques used in designing these MVL circuits. Voltage mode design has also been employed for constructing various types of MVL circuits. Novelty: This paper proposes a novel MVLTRANS inverter, designed using conventional CMOS and pass transistor logic based MVLPTL inverter. Binary to MVL Converter/Encoder and MVL to binary Decoder/Converter are also presented in the paper. In addition to the proposed decoder circuit, a 4-bit novel MVL Binary decoder circuit is also proposed. Tools Used: All these circuits are designed, implemented and verified using Cadence® Virtuoso tools using 180 nm technology library.",
"title": ""
},
{
"docid": "bdd56cd8b9ec6dcdc6ff87fa5bed80ac",
"text": "The battery is a fundamental component of electric vehicles, which represent a step forward towards sustainable mobility. Lithium chemistry is now acknowledged as the technology of choice for energy storage in electric vehicles. However, several research points are still open. They include the best choice of the cell materials and the development of electronic circuits and algorithms for a more effective battery utilization. This paper initially reviews the most interesting modeling approaches for predicting the battery performance and discusses the demanding requirements and standards that apply to ICs and systems for battery management. Then, a general and flexible architecture for battery management implementation and the main techniques for state-of-charge estimation and charge balancing are reported. Finally, we describe the design and implementation of an innovative BMS, which incorporates an almost fully-integrated active charge equalizer.",
"title": ""
},
{
"docid": "5e14b45ea93a6fcecd8c654802b80208",
"text": "Generative adversarial learning is a popular new approach to training generative models which has been proven successful for other related problems as well. The general idea is to maintain an oracle D that discriminates between the expert’s data distribution and that of the generative model G. The generative model is trained to capture the expert’s distribution by maximizing the probability of D misclassifying the data it generates. Overall, the system is differentiable end-toend and is trained using basic backpropagation. This type of learning was successfully applied to the problem of policy imitation in a model-free setup. However, a model-free approach does not allow the system to be differentiable, which requires the use of high-variance gradient estimations. In this paper we introduce the Model based Adversarial Imitation Learning (MAIL) algorithm. A model-based approach for the problem of adversarial imitation learning. We show how to use a forward model to make the system fully differentiable, which enables us to train policies using the (stochastic) gradient of D. Moreover, our approach requires relatively few environment interactions, and fewer hyper-parameters to tune. We test our method on the MuJoCo physics simulator and report initial results that surpass the current state-of-the-art.",
"title": ""
},
{
"docid": "d8976ced1eeda885c3083f847fbbbb41",
"text": "We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data. The human-elicitation protocol employed in the construction of the SNLI makes it prone to amplifying bias and stereotypical associations, which we demonstrate statistically (using pointwise mutual information) and with qualitative examples.",
"title": ""
},
{
"docid": "1facd226c134b22f62613073deffce60",
"text": "We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, an additional second experiment with 8 participants revealed such a significant increase in performance of long-term spatial memory: The results of a recall task administered after a 15-minute distractor task indicate a significant advantage of 27% for egocentric body movements in spatial memory. Furthermore, a questionnaire about the subjects' workload revealed that the physical demand of the egocentric navigation was significantly higher but there was less mental demand.",
"title": ""
},
{
"docid": "9489643bf8bfa3659b9b09f5716a7d3b",
"text": "We show that the gradient descent algorithm provides an implicit regularization effect in the learning of over-parameterized matrix factorization models and one-hidden-layer neural networks with quadratic activations. Concretely, we show that given Õ(dr) random linear measurements of a rank r positive semidefinite matrix X, we can recover X by parameterizing it by UU> with U ∈ Rd×d and minimizing the squared loss, even if r d. We prove that starting from a small initialization, gradient descent recovers X in Õ( √ r) iterations approximately. The results solve the conjecture of Gunasekar et al. Gunasekar et al. (2017) under the restricted isometry property. The technique can be applied to analyzing neural networks with one-hidden-layer quadratic activations with some technical modifications.",
"title": ""
},
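A small numerical illustration of the result described above: gradient descent on the over-parameterized factorization X = U U^T with a tiny random initialization, applied to random linear measurements of a low-rank PSD matrix. The problem sizes, learning rate and number of steps below are arbitrary illustrative choices and are not tuned to reproduce the paper's rates.

```python
import numpy as np

def matrix_sensing_gd(d=20, r=2, m=300, steps=1500, lr=0.05, init_scale=1e-3, seed=0):
    """Gradient descent on the over-parameterized factorization X = U U^T.

    U is a full d x d matrix, yet with a tiny random initialization gradient
    descent tends to recover the rank-r ground truth from the measurements.
    """
    rng = np.random.default_rng(seed)
    # Ground-truth rank-r PSD matrix (rescaled) and Gaussian measurements A_i.
    B = rng.standard_normal((d, r)) / np.sqrt(d)
    X_star = B @ B.T
    A = rng.standard_normal((m, d, d))
    A = (A + A.transpose(0, 2, 1)) / 2            # symmetrize for a simple gradient
    y = np.einsum("mij,ij->m", A, X_star)

    U = init_scale * rng.standard_normal((d, d))  # over-parameterized, tiny init
    for _ in range(steps):
        resid = np.einsum("mij,ij->m", A, U @ U.T) - y        # <A_i, UU^T> - y_i
        grad = (2.0 / m) * np.einsum("m,mij,jk->ik", resid, A, U)
        U -= lr * grad
    return np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)

if __name__ == "__main__":
    print("relative recovery error:", matrix_sensing_gd())
```

The only "regularizer" here is the small initialization scale; no rank constraint or explicit norm penalty is ever imposed on U.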
{
"docid": "59d57e31357eb72464607e89ba4ba265",
"text": "Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks, kernels, and e-Science workloads. We also compare using long-term traces the performance characteristics and cost models of clouds with those of other platforms accessible to scientists. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community. Wp 1 http://www.pds.ewi.tudelft.nl/∼iosup/ S. Ostermann et al. Wp Early Cloud Computing EvaluationWp PDS",
"title": ""
},
{
"docid": "4ce2afb5c21d9d78bdf8ffb45eec5ded",
"text": "CONTEXT\nSurvival estimates help individualize goals of care for geriatric patients, but life tables fail to account for the great variability in survival. Physical performance measures, such as gait speed, might help account for variability, allowing clinicians to make more individualized estimates.\n\n\nOBJECTIVE\nTo evaluate the relationship between gait speed and survival.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nPooled analysis of 9 cohort studies (collected between 1986 and 2000), using individual data from 34,485 community-dwelling older adults aged 65 years or older with baseline gait speed data, followed up for 6 to 21 years. Participants were a mean (SD) age of 73.5 (5.9) years; 59.6%, women; and 79.8%, white; and had a mean (SD) gait speed of 0.92 (0.27) m/s.\n\n\nMAIN OUTCOME MEASURES\nSurvival rates and life expectancy.\n\n\nRESULTS\nThere were 17,528 deaths; the overall 5-year survival rate was 84.8% (confidence interval [CI], 79.6%-88.8%) and 10-year survival rate was 59.7% (95% CI, 46.5%-70.6%). Gait speed was associated with survival in all studies (pooled hazard ratio per 0.1 m/s, 0.88; 95% CI, 0.87-0.90; P < .001). Survival increased across the full range of gait speeds, with significant increments per 0.1 m/s. At age 75, predicted 10-year survival across the range of gait speeds ranged from 19% to 87% in men and from 35% to 91% in women. Predicted survival based on age, sex, and gait speed was as accurate as predicted based on age, sex, use of mobility aids, and self-reported function or as age, sex, chronic conditions, smoking history, blood pressure, body mass index, and hospitalization.\n\n\nCONCLUSION\nIn this pooled analysis of individual data from 9 selected cohorts, gait speed was associated with survival in older adults.",
"title": ""
},
{
"docid": "e571f38e03ac5d13eef8e6e44b3dd62e",
"text": "Stability problems of continuous-time recurrent neural networks have been extensively studied, and many papers have been published in the literature. The purpose of this paper is to provide a comprehensive review of the research on stability of continuous-time recurrent neural networks, including Hopfield neural networks, Cohen-Grossberg neural networks, and related models. Since time delay is inevitable in practice, stability results of recurrent neural networks with different classes of time delays are reviewed in detail. For the case of delay-dependent stability, the results on how to deal with the constant/variable delay in recurrent neural networks are summarized. The relationship among stability results in different forms, such as algebraic inequality forms, M-matrix forms, linear matrix inequality forms, and Lyapunov diagonal stability forms, is discussed and compared. Some necessary and sufficient stability conditions for recurrent neural networks without time delays are also discussed. Concluding remarks and future directions of stability analysis of recurrent neural networks are given.",
"title": ""
},
{
"docid": "31d9ba4a6ba6f6d0742476f0677740ba",
"text": "Typically, a machine learning model of automatic music emotion recognition is trained to learn the relationship between music features and perceived emotion values. However, simply assigning an emotion value to a clip in the training phase does not work well because the perceived emotion of a clip varies from person to person. To resolve this problem, we propose a novel approach that represents the perceived emotion of a clip as a probability distribution in the emotion plane. In addition, we develop a methodology that predicts the emotion distribution of a clip by estimating the emotion mass at discrete samples of the emotion plane. We also develop model fusion algorithms to integrate different perceptual dimensions of music listening and to enhance the modeling of emotion perception. The effectiveness of the proposed approach is validated through an extensive performance study. An average R2 statistics of 0.5439 for emotion prediction is achieved. We also show how this approach can be applied to enhance our understanding of music emotion.",
"title": ""
}
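One hedged way to realize the "emotion mass at discrete samples of the emotion plane" idea from the passage above is to discretize the valence-arousal plane, histogram the per-annotator ratings of each clip into a ground-truth distribution, and fit one regressor per grid cell. The grid size, ridge regressor and synthetic data below are assumptions for illustration only, not the paper's models.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

GRID = 8  # discretize the valence-arousal plane into GRID x GRID cells

def annotations_to_distribution(points, grid=GRID):
    """Turn per-annotator (valence, arousal) points in [-1, 1]^2 into a
    normalized emotion-mass histogram over the discretized emotion plane."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=grid,
                                range=[[-1, 1], [-1, 1]])
    return (hist / hist.sum()).ravel()

def fit_distribution_model(features, annotation_sets):
    """One (multi-output) ridge regressor predicts the mass in every grid cell."""
    Y = np.stack([annotations_to_distribution(a) for a in annotation_sets])
    return Ridge(alpha=1.0).fit(features, Y)

def predict_distribution(model, features):
    pred = np.clip(model.predict(features), 0, None)
    return pred / pred.sum(axis=1, keepdims=True)   # renormalize to a distribution

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_clips, n_feats = 200, 20
    X = rng.standard_normal((n_clips, n_feats))          # stand-in audio features
    centers = np.tanh(X[:, :2])                          # synthetic "true" emotion
    ann = [c + 0.2 * rng.standard_normal((15, 2)) for c in centers]
    model = fit_distribution_model(X[:150], ann[:150])
    pred = predict_distribution(model, X[150:])
    truth = np.stack([annotations_to_distribution(a) for a in ann[150:]])
    print("per-cell R^2:", round(r2_score(truth.ravel(), pred.ravel()), 3))
```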
] |
scidocsrr
|
ebfbb3d720a4fcc0e8f642fd02b0cb6e
|
Large-scale Semantic Parsing without Question-Answer Pairs
|
[
{
"docid": "eede682da157ac788a300e9c3080c460",
"text": "We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.",
"title": ""
},
{
"docid": "59c24fb5b9ac9a74b3f89f74b332a27c",
"text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.",
"title": ""
}
] |
[
{
"docid": "b5372d4cad87aab69356ebd72aed0e0b",
"text": "Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. Fifty percent of survey users indicated that they found generated videos enjoyable and 42 % of them indicated that they would like to use our system to consume Web content on their TVs.",
"title": ""
},
{
"docid": "8604589b2c45d6190fdbc50073dfda23",
"text": "Many real world, complex phenomena have an underlying structure of evolving networks where nodes and links are added and removed over time. A central scientific challenge is the description and explanation of network dynamics, with a key test being the prediction of short and long term changes. For the problem of short-term link prediction, existing methods attempt to determine neighborhood metrics that correlate with the appearance of a link in the next observation period. Here, we provide a novel approach to predicting future links by applying an evolutionary algorithm (Covariance Matrix Evolution) to weights which are used in a linear combination of sixteen neighborhood and node similarity indices. We examine reciprocal reply networks of Twitter users constructed at the time scale of weeks, both as a test of our general method and as a problem of scientific interest in itself. Our evolved predictors exhibit a thousand-fold improvement over random link prediction, to our knowledge strongly outperforming all extant methods. Based on our findings, we suggest possible factors which may be driving the evolution of Twitter reciprocal reply networks.",
"title": ""
},
{
"docid": "364455a6985047e1935f490c77fca0e0",
"text": "We address how to measure the information propagation probability between users given certain contents. In sharp contrast to existing works that oversimplify the propagation model as predefined distributions, our approach fundamentally attempts to answer why users are influenced (e.g., by content or relations) and whether the corresponding influential features (e.g., hidden factors) can be inferred from the propagation in the entire network. In particular, we propose a novel method to deeply learn the unified feature representations for both user pair and content, where the homogeneous feature similarity can be used to estimate the propagation probability between users with given content. The features are dubbed content–social influential feature since we consider not only the content of the propagation information but also how it propagates over the social network. We design a fast asynchronous parallel algorithm for the feature learning. Through extensive experiments on a real-world social network with 53 million users and 838 million tweets, we show significantly improved performance as compared to other state-of-the-art methods on various social influence analysis tasks.",
"title": ""
},
{
"docid": "d7bc62e7fca922f9b97e42deff85d010",
"text": "In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary.",
"title": ""
},
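The system described above selects content with an ILP over concept weights and refines those weights from user feedback. The sketch below substitutes a plain greedy approximation of the same concept-coverage objective (so it is not the paper's solver) and simulates feedback by zeroing the weights of rejected concepts; the bigram "concepts", the word budget and the toy sentences are illustrative.

```python
import re

def concepts(sentence):
    # Content-word bigrams as a cheap stand-in for the "concepts" of the ILP model.
    words = [w for w in re.findall(r"[a-z']+", sentence.lower()) if len(w) > 3]
    return set(zip(words, words[1:]))

def summarize(sentences, weights, budget=25):
    """Greedy approximation of concept-coverage summarization.

    Repeatedly add the sentence with the best newly-covered concept weight per
    word until the length budget is spent. User feedback is injected simply by
    raising or zeroing concept weights before re-summarizing.
    """
    summary, covered, length = [], set(), 0
    while True:
        def gain(s):
            return sum(weights.get(c, 1.0) for c in concepts(s) - covered)
        candidates = [s for s in sentences if s not in summary
                      and length + len(s.split()) <= budget and gain(s) > 0]
        if not candidates:
            return summary
        best = max(candidates, key=lambda s: gain(s) / len(s.split()))
        summary.append(best)
        covered |= concepts(best)
        length += len(best.split())

if __name__ == "__main__":
    docs = ["The storm closed major highways across the region on Monday.",
            "Officials said the storm closed highways and delayed flights.",
            "Local schools announced closures for Tuesday morning.",
            "Flight delays stranded hundreds of passengers overnight."]
    weights = {}                                  # start with uniform concept weights
    print("initial:", summarize(docs, weights))
    # Simulated feedback: the user rejects school-related content.
    weights.update({c: 0.0 for c in concepts(docs[2])})
    print("after feedback:", summarize(docs, weights))
```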
{
"docid": "159fcd866264df1d4c100f4da32d93b6",
"text": "Understanding the correlation between two different scores for the same set of items is a common problem in graph analysis and information retrieval. The most commonly used statistics that quantifies this correlation is Kendall's tau; however, the standard definition fails to capture that discordances between items with high rank are more important than those between items with low rank. Recently, a new measure of correlation based on average precision has been proposed to solve this problem, but like many alternative proposals in the literature it assumes that there are no ties in the scores. This is a major deficiency in a number of contexts, and in particular when comparing centrality scores on large graphs, as the obvious baseline, indegree, has a very large number of ties in social networks and web graphs. We propose to extend Kendall's definition in a natural way to take into account weights in the presence of ties. We prove a number of interesting mathematical properties of our generalization and describe an O(n\\log n) algorithm for its computation. We also validate the usefulness of our weighted measure of correlation using experimental data on social networks and web graphs.",
"title": ""
},
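A tie-aware, rank-weighted Kendall tau along the lines sketched in the passage can be written as a quadratic-time reference implementation (the paper's contribution includes an O(n log n) algorithm, and SciPy appears to ship one as scipy.stats.weightedtau). The hyperbolic weighting, the choice to rank by the first score, and the toy indegree/PageRank example are assumptions made for illustration.

```python
import numpy as np

def hyperbolic_weight(rank_i, rank_j):
    # Additive hyperbolic weighting: pairs of important (low-rank) items count more.
    return 1.0 / (rank_i + 1) + 1.0 / (rank_j + 1)

def weighted_tau(x, y):
    """O(n^2) reference implementation of a rank-weighted, tie-aware Kendall tau.

    Items are weighted by their rank under x (rank 0 = highest score), so
    disagreements among top items are penalized more than those in the tail.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    # rank 0 for the largest x; ties broken arbitrarily (a reference choice).
    rank = np.empty(n, dtype=int)
    rank[np.argsort(-x, kind="stable")] = np.arange(n)

    num = den_x = den_y = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            w = hyperbolic_weight(rank[i], rank[j])
            sx, sy = np.sign(x[i] - x[j]), np.sign(y[i] - y[j])
            num += w * sx * sy            # tied pairs contribute 0 here
            den_x += w * sx * sx          # ...and are excluded from the norm,
            den_y += w * sy * sy          # giving a tau-b style treatment of ties
    return num / np.sqrt(den_x * den_y)

if __name__ == "__main__":
    indegree = [50, 50, 50, 10, 9, 3, 3, 1]      # many ties, as in real networks
    pagerank = [0.30, 0.25, 0.20, 0.08, 0.07, 0.05, 0.03, 0.02]
    print("weighted tau:", round(weighted_tau(pagerank, indegree), 3))
```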
{
"docid": "0cf1c430d24a93f5d4da9200fbda41d4",
"text": "For some time I have been involved in efforts to develop computer-controlled systems for instruction. One such effort has been a computer-assistedinstruction (CAI) program for teaching reading in the primary grades (Atkinson, 1974) and another for teaching computer science at the college level (Atkinson, in press). The goal has been to use psychological theory to devise optimal instructional procedures—procedures that make moment-by-moment decisions based on the student's unique response history. To help guide some of the theoretical aspects of this work, research has also been done on the restricted but well-defined problem of optimizing the teaching of a foreign language vocabulary. This is an area in which mathematical models provide an accurate description of learning, and these models can be used in conjunction with the methods of control theory to develop precise algorithms for sequencing instruction among vocabulary items. Some of this work has been published, and those who have read about it know that the optimization schemes are quite effective—far more effective than procedures that permit the learner to make his own instructional decisions (Atkinson, 1972a, 1972b; Atkinson & Paulson, 1972). In conducting these vocabulary learning experiments, I have been struck by the incredible variability in learning rates across subjects. Even Stanford University students, who are a fairly select sample, display impressively large betweensubject differences. These differences may reflect differences in fundamental abilities, but it is easy to demonstrate that they also depend on the strategies that subjects bring to bear on the task. Good learners can introspect with ease about a \"bag of tricks\" for learning vocabulary items, whereas poor",
"title": ""
},
{
"docid": "0c7636279e14e75ce44e01f3cbd90de6",
"text": "Neural abstractive summarization has been increasingly studied, where the prior work mainly focused on summarizing single-speaker documents (news, scientific publications, etc). In dialogues, there are diverse interactive patterns between speakers, which are usually defined as dialogue acts. The interactive signals may provide informative cues for better summarizing dialogues. This paper proposes to explicitly leverage dialogue acts in a neural summarization model, where a sentence-gated mechanism is designed for modeling the relationships between dialogue acts and the summary. The experiments show that our proposed model significantly improves the abstractive summarization performance compared to the state-of-the-art baselines on the AMI meeting corpus, demonstrating the usefulness of the interactive signal provided by dialogue acts.1",
"title": ""
},
{
"docid": "7153e58a0f4b89be0ac6d4a97237317e",
"text": "When trying to learn a model for the prediction of an outcome given a set of covariates, a statistician has many estimation procedures in their toolbox. A few examples of these candidate learners are: least squares, least angle regression, random forests, and spline regression. Previous articles (van der Laan and Dudoit (2003); van der Laan et al. (2006); Sinisi et al. (2007)) theoretically validated the use of cross validation to select an optimal learner among many candidate learners. Motivated by this use of cross validation, we propose a new prediction method for creating a weighted combination of many candidate learners to build the super learner. This article proposes a fast algorithm for constructing a super learner in prediction which uses V-fold cross-validation to select weights to combine an initial set of candidate learners. In addition, this paper contains a practical demonstration of the adaptivity of this so called super learner to various true data generating distributions. This approach for construction of a super learner generalizes to any parameter which can be defined as a minimizer of a loss function.",
"title": ""
},
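A minimal sketch of the super learner idea above: build the level-one matrix of V-fold cross-validated predictions from each candidate learner, then choose non-negative weights (normalized to sum to one) that minimize cross-validated squared error. Using SciPy's non-negative least squares for the weight fit and these particular candidate learners are simplifying assumptions, not the exact procedure of the article.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

def super_learner(X, y, learners, n_splits=5, seed=0):
    """V-fold cross-validated predictions combined with non-negative weights
    (normalized to sum to one) chosen to minimize cross-validated squared error."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    Z = np.zeros((len(y), len(learners)))          # level-one design matrix
    for train, test in kf.split(X):
        for j, make in enumerate(learners):
            Z[test, j] = make().fit(X[train], y[train]).predict(X[test])
    w, _ = nnls(Z, y)                              # non-negative least squares
    w = w / w.sum() if w.sum() > 0 else np.full(len(learners), 1 / len(learners))
    fitted = [make().fit(X, y) for make in learners]   # refit on all data
    predict = lambda Xnew: np.column_stack([m.predict(Xnew) for m in fitted]) @ w
    return w, predict

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(400, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(400)
    learners = [LinearRegression,
                lambda: DecisionTreeRegressor(max_depth=5),
                lambda: KNeighborsRegressor(n_neighbors=10)]
    w, predict = super_learner(X[:300], y[:300], learners)
    mse = np.mean((predict(X[300:]) - y[300:]) ** 2)
    print("weights:", np.round(w, 3), "held-out MSE:", round(mse, 4))
```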
{
"docid": "2fb3eac8622f512d1acc75874a9e25de",
"text": "DuraCap is a solar-powered energy harvesting system that stores harvested energy in supercapacitors and is voltage-compatible with lithium-ion batteries. The use of supercapacitors instead of batteries enables DuraCap to extend the operational life time from tens of months to tens of years. DuraCap addresses two additional problems with micro-solar systems: inefficient operation of supercapacitors during cold booting, and maximum power point tracking (MPPT) over a variety of solar panels. Our approach is to dedicate a smaller supercapacitor to cold booting before handing over to the array of larger-value supercapacitors. For MPPT, we designed a bound-control circuit for PFM regulator switching and an I-V tracer to enable self-configuring over the panel's aging process and replacement. Experimental results show the DuraCap system to achieve high conversion efficiency and minimal downtime.",
"title": ""
},
{
"docid": "d9a99642b106ad3f63134916bd75329b",
"text": "We extend Convolutional Neural Networks (CNNs) on flat and regular domains (e.g. 2D images) to curved 2D manifolds embedded in 3D Euclidean space that are discretized as irregular surface meshes and widely used to represent geometric data in Computer Vision and Graphics. We define surface convolution on tangent spaces of a surface domain, where the convolution has two desirable properties: 1) the distortion of surface domain signals is locally minimal when being projected to the tangent space, and 2) the translation equi-variance property holds locally, by aligning tangent spaces for neighboring points with the canonical torsion-free parallel transport that preserves tangent space metric. To implement such a convolution, we rely on a parallel N -direction frame field on the surface that minimizes the field variation and therefore is as compatible as possible to and approximates the parallel transport. On the tangent spaces equipped with parallel frames, the computation of surface convolution becomes standard routine. The tangential frames have N rotational symmetry that must be disambiguated, which we resolve by duplicating the surface domain to construct its covering space induced by the parallel frames and grouping the feature maps into N sets accordingly; each surface convolution is computed on the N branches of the cover space with their respective feature maps while the kernel weights are shared. To handle the irregular data points of a discretized surface mesh while being able to share trainable kernel weights, we make the convolution semi-discrete, i.e. the convolution kernels are smooth polynomial functions, and their convolution with discrete surface data points becomes discrete sampling and weighted summation. In addition, pooling and unpooling operations for surface CNNs on a mesh are computed along the mesh hierarchy built through simplification. The presented surface-based CNNs allow us to do effective deep learning on surface meshes using network structures very similar to those for flat and regular domains. In particular, we show that for various tasks, including classification, segmentation and non-rigid registration, surface CNNs using only raw input signals achieve superior performances than other neural network models using sophisticated pre-computed input features, and enable a simple non-rigid human-body registration procedure by regressing to restpose positions directly.",
"title": ""
},
{
"docid": "764a65489d21db9fc0c004b8e0532167",
"text": "A concept of using planar circuit resonance to disable 3D cavity resonance inside a rectangular waveguide filter is demonstrated. Switchable RF MEMS planar resonators are introduced inside the resonant cavities of a high Q-factor iris bandpass filter to turn the filter ON and OFF. The measurement confirms that this high Q-factor filter with insertion loss better than 0.1 dB can be converted to a bandstop filter with an isolation better than 30 dB for the same frequency and bandwidth.",
"title": ""
},
{
"docid": "f63dc3a5ceb6df8596410f1fdc7047c3",
"text": "This paper presents an energy management system (EMS) for a stand-alone droop-controlled microgrid, which adjusts generators output power to minimize fuel consumption and also ensures stable operation. It has previously been shown that frequency-droop gains have a significant effect on stability in such microgrids. Relationship between these parameters and stability margins are therefore identified, using qualitative analysis and small-signal techniques. This allows them to be selected to ensure stability. Optimized generator outputs are then implemented in real-time by the EMS, through adjustments to droop characteristics within this constraint. Experimental results from a laboratory-sized microgrid confirm the EMS function.",
"title": ""
},
{
"docid": "f555d9c9aeb28059138527bc190a1a10",
"text": "This paper presents a novel method for entity disambiguation in anonymized graphs using local neighborhood structure. Most existing approaches leverage node information, which might not be available in several contexts due to privacy concerns, or information about the sources of the data. We consider this problem in the supervised setting where we are provided only with a base graph and a set of nodes labelled as ambiguous or unambiguous. We characterize the similarity between two nodes based on their local neighborhood structure using graph kernels; and solve the resulting classification task using SVMs. We give empirical evidence on two real-world datasets, comparing our approach to a state-of-the-art method, highlighting the advantages of our approach. We show that using less information, our method is significantly better in terms of either speed or accuracy or both. We also present extensions of two existing graphs kernels, namely, the direct product kernel and the shortest-path kernel, with significant improvements in accuracy. For the direct product kernel, our extension also provides significant computational benefits. Moreover, we design and implement the algorithms of our method to work in a distributed fashion using the GraphLab framework, ensuring high scalability.",
"title": ""
},
{
"docid": "428697d3ec6992c3158f3f0b2690c155",
"text": "Severe infections represent the main cause of neonatal mortality accounting for more than one million neonatal deaths worldwide every year. Antibiotics are the most commonly prescribed medications in neonatal intensive care units. The benefits of antibiotic therapy when indicated are clearly enormous, but the continued and widespread use of antibiotics has generated over the years a strong selective pressure on microorganisms, favoring the emergence of resistant strains. Health agencies worldwide are galvanizing attention toward antibiotic resistance in gram-positive and gram-negative bacteria. Infections in neonatal units due to multidrug and extensively multidrug resistant bacteria are rising and are already seriously challenging antibiotic treatment options. While there is a growing choice of agents against multi-resistant gram-positive bacteria, new options for multi-resistant gram-negative bacteria in the clinical practice have decreased significantly in the last 20 years making the treatment of infections caused by multidrug-resistant pathogens challenging mostly in neonates. Treatment options are currently limited and will be some years before any new treatment for neonates become available for clinical use, if ever. The aim of the review is to highlight the current knowledge on antibiotic resistance in the neonatal population, the possible therapeutic choices, and the prevention strategies to adopt in order to reduce the emergency and spread of resistant strains.",
"title": ""
},
{
"docid": "316ead33d0313804b7aa95570427e375",
"text": "We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markovswitching jump-diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman’s optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton-Jacobi-Belman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumptioninvestment problem for a jump-diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous time finite state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities.",
"title": ""
},
{
"docid": "1d0027a2e7778a4d6f8206c9b7c4ebed",
"text": "Code smells represent symptoms of poor implementation choices. Previous studies found that these smells make source code more difficult to maintain, possibly also increasing its fault-proneness. There are several approaches that identify smells based on code analysis techniques. However, we observe that many code smells are intrinsically characterized by how code elements change over time. Thus, relying solely on structural information may not be sufficient to detect all the smells accurately. We propose an approach to detect five different code smells, namely Divergent Change, Shotgun Surgery, Parallel Inheritance, Blob, and Feature Envy, by exploiting change history information mined from versioning systems. We applied approach, coined as HIST (Historical Information for Smell deTection), to eight software projects written in Java, and wherever possible compared with existing state-of-the-art smell detectors based on source code analysis. The results indicate that HIST's precision ranges between 61% and 80%, and its recall ranges between 61% and 100%. More importantly, the results confirm that HIST is able to identify code smells that cannot be identified through approaches solely based on code analysis.",
"title": ""
},
{
"docid": "8cdd54a8bd288692132b57cb889b2381",
"text": "This research deals with the soft computing methodology of fuzzy cognitive map (FCM). Here a mathematical description of FCM is presented and a new methodology based on fuzzy logic techniques for developing the FCM is examined. The capability and usefulness of FCM in modeling complex systems and the application of FCM to modeling and describing the behavior of a heat exchanger system is presented. The applicability of FCM to model the supervisor of complex systems is discussed and the FCM-supervisor for evaluating the performance of a system is constructed; simulation results are presented and discussed.",
"title": ""
},
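For readers unfamiliar with fuzzy cognitive maps, the sketch below iterates a standard FCM update (sigmoid of the current activations plus their weighted causal influences) on a hypothetical four-concept map loosely inspired by the heat-exchanger example; the weight matrix and concept names are invented for illustration and are not taken from the paper.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def run_fcm(W, a0, steps=30, lam=1.0):
    """Iterate a fuzzy cognitive map: A(t+1) = f(A(t) + A(t) @ W).

    W[i, j] is the causal weight of concept i on concept j in [-1, 1];
    activations stay in [0, 1] thanks to the sigmoid threshold function.
    """
    a = np.asarray(a0, dtype=float)
    history = [a.copy()]
    for _ in range(steps):
        a = sigmoid(a + a @ W, lam)
        history.append(a.copy())
    return np.array(history)

if __name__ == "__main__":
    # Hypothetical 4-concept map for a heat-exchanger-like process:
    # 0 inlet flow, 1 wall temperature, 2 outlet temperature, 3 control valve.
    W = np.array([[ 0.0, -0.4, -0.5,  0.0],
                  [ 0.0,  0.0,  0.7,  0.0],
                  [ 0.0,  0.0,  0.0,  0.8],   # hot outlet -> open valve
                  [ 0.6,  0.0,  0.0,  0.0]])  # open valve -> more flow
    a0 = [0.5, 0.6, 0.4, 0.3]
    traj = run_fcm(W, a0)
    print("steady-state activations:", np.round(traj[-1], 3))
```

Whether the map settles to a fixed point, a cycle, or chaotic behavior depends on the weight matrix and the steepness parameter, which is exactly what such simulations are used to inspect.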
{
"docid": "aecc5e00e4be529c76d6d629310c8b5c",
"text": "For a user to perceive continuous interactive response time in a visualization tool, the rule of thumb is that it must process, deliver, and display rendered results for any given interaction in under 100 milliseconds. In many visualization systems, successive interactions trigger independent queries and caching of results. Consequently, computationally expensive queries like multidimensional clustering cannot keep up with rapid sequences of interactions, precluding visual benefits such as motion parallax. In this paper, we describe a heuristic prefetching technique to improve the interactive response time of KMeans clustering in dynamic query visualizations of multidimensional data. We address the tradeoff between high interaction and intense query computation by observing how related interactions on overlapping data subsets produce similar clustering results, and characterizing these similarities within a parameter space of interaction. We focus on the two-dimensional parameter space defined by the minimum and maximum values of a time range manipulated by dragging and stretching a one-dimensional filtering lens over a plot of time series data. Using calculation of nearest neighbors of interaction points in parameter space, we reuse partial query results from prior interaction sequences to calculate both an immediate best-effort clustering result and to schedule calculation of an exact result. The method adapts to user interaction patterns in the parameter space by reprioritizing the interaction neighbors of visited points in the parameter space. A performance study on Mesonet meteorological data demonstrates that the method is a significant improvement over the baseline scheme in which interaction triggers on-demand, exact-range clustering with LRU caching. We also present initial evidence that approximate, temporary clustering results are sufficiently accurate (compared to exact results) to convey useful cluster structure during rapid and protracted interaction.",
"title": ""
},
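The prefetching idea above can be approximated with a small cache keyed by the (t_min, t_max) lens parameters: the centroids computed for the nearest previously visited point in parameter space serve both as an immediate best-effort answer and as a warm start for the exact K-Means run. The cache policy, scikit-learn K-Means and synthetic time series below are illustrative stand-ins for the paper's system.

```python
import numpy as np
from sklearn.cluster import KMeans

class ClusteringCache:
    """Reuse clustering results from nearby points in the (t_min, t_max)
    interaction parameter space to warm-start K-Means on the new selection."""

    def __init__(self, k=4):
        self.k = k
        self.cache = {}                       # (t_min, t_max) -> fitted centroids

    def cluster(self, data, t_min, t_max):
        mask = (data[:, 0] >= t_min) & (data[:, 0] <= t_max)
        subset = data[mask, 1:]
        init = "k-means++"
        if self.cache:
            # Nearest previously-visited point in parameter space.
            nearest = min(self.cache,
                          key=lambda p: (p[0] - t_min) ** 2 + (p[1] - t_max) ** 2)
            init = self.cache[nearest]        # approximate, immediately usable centroids
        km = KMeans(n_clusters=self.k, init=init, n_init=1).fit(subset)
        self.cache[(t_min, t_max)] = km.cluster_centers_
        return km.labels_, km.cluster_centers_

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 100, 5000))
    values = np.column_stack([t,
                              np.sin(t / 5.0) + 0.1 * rng.standard_normal(t.size),
                              np.cos(t / 7.0) + 0.1 * rng.standard_normal(t.size)])
    cache = ClusteringCache(k=4)
    for lens in [(10, 40), (12, 42), (14, 44)]:   # a drag of the time-range lens
        labels, _ = cache.cluster(values, *lens)
        print(lens, "cluster sizes:", np.bincount(labels))
```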
{
"docid": "03fa5f5f6b6f307fc968a2b543e331a1",
"text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed and analyze and compare the above mentioned KGs. Furthermore, we propose a framework for finding the most suitable KG for a given setting.",
"title": ""
}
] |
scidocsrr
|
85964a0b9565026d2ee4a8c651be74aa
|
A Location-Based Mobile Crowdsensing Framework Supporting a Massive Ad Hoc Social Network Environment
|
[
{
"docid": "8663eba079098e2e5635e4a274a7b036",
"text": "Mobile crowdsensing can enable numerous attractive novel sensing applications due to the prominent advantages such as wide spatiotemporal coverage, low cost, good scalability, pervasive application scenarios, etc. In mobile crowdsensing applications, incentive mechanisms are necessary to stimulate more potential smartphone users and to achieve good service quality. In this paper, we focus on exploring truthful incentive mechanisms for a novel and practical scenario where the tasks are time window dependent, and the platform has strong requirement of data integrity. We present a universal system model for this scenario based on reverse auction framework and formulate the problem as the Social Optimization User Selection (SOUS) problem. We design two incentive mechanisms, MST and MMT. In single time window case, we design an optimal algorithm based on dynamic programming to select users. Then we determine the payment for each user by VCG auction; while in multiple time window case, we show the general SOUS problem is NP-hard, and we design MMT based on greedy approach, which approximates the optimal solution within a factor of In|W| + 1, where |W| is the length of sensing time window defined by the platform. Through both rigorous theoretical analysis and extensive simulations, we demonstrate that the proposed mechanisms achieve high computation efficiency, individual rationality and truthfulness.",
"title": ""
}
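The ln|W| + 1 factor quoted for MMT is the classic guarantee of greedy weighted set cover, which the sketch below applies to time-slot coverage: repeatedly pick the user with the lowest cost per newly covered slot. Payment computation and the truthfulness argument are not shown, and the user/slot data are invented for illustration.

```python
import math

def greedy_select(users, window):
    """Greedy user selection for full coverage of a sensing time window.

    `window` is a set of required time slots; each user offers a set of slots
    and asks a cost (its bid). Picking the user with the cheapest cost per
    newly covered slot gives the classic ln|W| + 1 approximation of the
    minimum-cost full-coverage selection.
    """
    uncovered, chosen, total_cost = set(window), [], 0.0
    while uncovered:
        best, best_ratio = None, math.inf
        for uid, (slots, cost) in users.items():
            gain = len(slots & uncovered)
            if gain and uid not in chosen and cost / gain < best_ratio:
                best, best_ratio = uid, cost / gain
        if best is None:
            raise ValueError("window cannot be fully covered by the given users")
        chosen.append(best)
        uncovered -= users[best][0]
        total_cost += users[best][1]
    return chosen, total_cost

if __name__ == "__main__":
    window = set(range(8))                       # 8 required sensing slots
    users = {                                    # user id -> (slots offered, claimed cost)
        "u1": ({0, 1, 2, 3}, 4.0),
        "u2": ({3, 4, 5}, 2.5),
        "u3": ({5, 6, 7}, 2.0),
        "u4": ({0, 1, 2, 3, 4, 5, 6, 7}, 9.5),
    }
    print(greedy_select(users, window))
```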
] |
[
{
"docid": "8e43c1db8273796fbdcb53420b65664b",
"text": "Psychological differences between women and men, far from being invariant as a biological explanation would suggest, fluctuate in magnitude across cultures. Moreover, contrary to the implications of some theoretical perspectives, gender differences in personality, values, and emotions are not smaller, but larger, in American and European cultures, in which greater progress has been made toward gender equality. This research on gender differences in self-construals involving 950 participants from 5 nations/cultures (France, Belgium, the Netherlands, the United States, and Malaysia) illustrates how variations in social comparison processes across cultures can explain why gender differences are stronger in Western cultures. Gender differences in the self are a product of self-stereotyping, which occurs when between-gender social comparisons are made. These social comparisons are more likely, and exert a greater impact, in Western nations. Both correlational and experimental evidence supports this explanation.",
"title": ""
},
{
"docid": "97ea0397c2bff1af7ae9de457cce6b79",
"text": "The behavior of a glide-symmetric holey periodic structure as electromagnetic bandgap is studied in this letter. A number of numerical simulations have been carried out in order to define the importance of each constituent parameter of the unit cell. Our proposed structure finds potential application in antennas and circuits based on gap waveguide technology for the millimeter band. The experimental verifications confirm the effects previously analyzed with the numerical studies.",
"title": ""
},
{
"docid": "1b22c3d5bb44340fcb66a1b44b391d71",
"text": "The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.",
"title": ""
},
{
"docid": "80ac2373b3a01ab0f1f2665f0e070aa4",
"text": "This paper presents an overview of the state of the art control strategies specifically designed to coordinate distributed energy storage (ES) systems in microgrids. Power networks are undergoing a transition from the traditional model of centralised generation towards a smart decentralised network of renewable sources and ES systems, organised into autonomous microgrids. ES systems can provide a range of services, particularly when distributed throughout the power network. The introduction of distributed ES represents a fundamental change for power networks, increasing the network control problem dimensionality and adding long time-scale dynamics associated with the storage systems’ state of charge levels. Managing microgrids with many small distributed ES systems requires new scalable control strategies that are robust to power network and communication network disturbances. This paper reviews the range of services distributed ES systems can provide, and the control challenges they introduce. The focus of this paper is a presentation of the latest decentralised, centralised and distributed multi-agent control strategies designed to coordinate distributed microgrid ES systems. Finally, multi-agent control with agents satisfying Wooldridge’s definition of intelligence is proposed as a promising direction for future research.",
"title": ""
},
{
"docid": "10d380b25a03c608c11fe5dde545f4b4",
"text": "The increasing complexity and diversity of technical products plus the massive amount of product-related data overwhelms humans dealing with them at all stages of the life-cycle. We present a novel architecture for building smart products that are able to interact with humans in a natural and proactive way, and assist and guide them in performing their tasks. Further, we show how communication capabilities of smart products are used to account for the limited resources of individual products by leveraging resources provided by the environment or other smart products for storage and natural interaction.",
"title": ""
},
{
"docid": "b29269803892fd88a03857ef0f050eb5",
"text": "This paper presents the ultracapacitors and the fuel cell (FC) connection for hybrid electric vehicles (HEVs) applications. An original method for the embedded energy management is proposed. This method is used to share the energetic request of the HEV between the ultracapacitors and the FC. The ultracapacitors are linked to dc-bus through the buck-boost converter, and the FC is connected to dc-bus via a boost converter. An asynchronous machine is used like traction motor or generator, and it is connected to dc-bus through an inverter. A dc-motor is used to drive the asynchronous machine during the decelerations and the braking operations. The main contribution of this paper is focused on the embedded energy management based on the new European drive cycle (NEDC), using polynomial control technique. The performances of the proposed control method are evaluated through some simulations and the experimental tests dedicated to HEVs applications.",
"title": ""
},
{
"docid": "3aae793e9abb72b35709851fe9f1ac43",
"text": "In this paper, a CPW-fed antenna-in-package (AiP) operating at millimeter wave (mmWave) based on a wafer-level packaging technology with through silicon via (TSV) interconnections is proposed, designed, and measured. The designed antenna consists of two-stacked high-resistivity silicon (HRSi) substrates. One is the bottom HRSi substrate with thickness of 750 μm, which carries the slot radiator and the CPW feeding. The other one is the top HRSi substrate with thickness of 200 μm carrying a patch, which is placed on the radiating element for antenna gain and efficiency improvement. The vertical interconnects in this structure are designed using the TSVs built on a HRSi wafer, which are designed to carry the radio frequency (RF) signals up to mmWave. RF path transitions are carefully designed to minimize the return loss within 10 dB in the frequency band of concern. The designed AiP is fabricated and measured, and the measured results basically match the simulation results. It is demonstrated that a wider bandwidth and less-sensitive input impedance versus the fabrication process accuracy are obtained with the designed structure in this paper. The measured results show the radiation in the broadside of the structure with gain around 2.4 dBi from 76 to 93 GHz.",
"title": ""
},
{
"docid": "e92523a656b96996d72db0c8697a46aa",
"text": "For many of the world’s languages, the Bible is the only significant bilingual, or even monolingual, text, making it a unique training resource for tasks such as translation, named entity analysis, and transliteration. Given the Bible’s small size, however, the output of standard word alignment tools can be extremely noisy, making downstream tasks difficult. In this work, we develop and release a novel resource of 1129 aligned Bible person and place names across 591 languages, which was constructed and improved using several approaches including weighted edit distance, machine-translation-based transliteration models, and affixal induction and transformation models. Our models outperform a widely used word aligner on 97% of test words, showing the particular efficacy of our approach on the impactful task of broadly multilingual named-entity alignment and translation across a remarkably large number of world languages. We further illustrate the utility of our translation matrix for the multilingual learning of name-related affixes and their semantics as well as transliteration of named entities.",
"title": ""
},
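The "weighted edit distance" ingredient mentioned above can be illustrated with a standard dynamic program in which substitution costs are lowered for character pairs that commonly alternate across orthographies; the specific cheap pairs and example names below are assumptions, not the learned costs of the paper.

```python
def weighted_edit_distance(a, b, sub_cost=None, indel_cost=1.0):
    """Dynamic-programming edit distance with per-pair substitution weights.

    `sub_cost` maps unordered character pairs to a cost in [0, 1]; pairs that
    commonly alternate across scripts/orthographies (e.g. 'b'/'v', 'c'/'k')
    can be made cheap so that transliterated name variants align.
    """
    sub_cost = sub_cost or {}
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            else:
                sub = sub_cost.get(frozenset((a[i - 1], b[j - 1])), 1.0)
            d[i][j] = min(d[i - 1][j] + indel_cost,      # deletion
                          d[i][j - 1] + indel_cost,      # insertion
                          d[i - 1][j - 1] + sub)         # (weighted) substitution
    return d[m][n]

if __name__ == "__main__":
    cheap = {frozenset(("b", "v")): 0.2, frozenset(("c", "k")): 0.2,
             frozenset(("f", "p")): 0.3}
    print(weighted_edit_distance("jacob", "yakov", sub_cost=cheap))
    print(weighted_edit_distance("jacob", "london", sub_cost=cheap))
```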
{
"docid": "691d326a4d59a530f5142d4c15a8467b",
"text": "Previous open Relation Extraction (open RE) approaches mainly rely on linguistic patterns and constraints to extract important relational triples from large-scale corpora. However, they lack of abilities to cover diverse relation expressions or measure the relative importance of candidate triples within a sentence. It is also challenging to name the relation type of a relational triple merely based on context words, which could limit the usefulness of open RE in downstream applications. We propose a novel importancebased open RE approach by exploiting the global structure of a dependency tree to extract salient triples. We design an unsupervised method to name relation types by grounding relational triples to a large-scale Knowledge Base (KB) schema, leveraging KB triples and weighted context words associated with relational triples. Experiments on the English Slot Filling 2013 dataset demonstrate that our approach achieves 8.1% higher F-score over stateof-the-art open RE methods.",
"title": ""
},
{
"docid": "1af04b15ee299c2f83dfb645f3f8e499",
"text": "In this paper, we show that convolutional neural networks can be directly applied to temporal low-level acoustic features to identify emotionally salient regions without the need for defining or applying utterance-level statistics. We show how a convolutional neural network can be applied to minimally hand-engineered features to obtain competitive results on the IEMOCAP and MSP-IMPROV datasets. In addition, we demonstrate that, despite their common use across most categories of acoustic features, utterance-level statistics may obfuscate emotional information. Our results suggest that convolutional neural networks with Mel Filterbanks (MFBs) can be used as a replacement for classifiers that rely on features obtained from applying utterance-level statistics.",
"title": ""
},
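A minimal sketch of the idea in the passage, written with PyTorch: convolutions are applied directly to the frame-level Mel filterbank map and a global pooling layer replaces hand-crafted utterance-level statistics. The layer sizes, number of emotion classes and random input are illustrative and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class MFBEmotionCNN(nn.Module):
    """Tiny CNN applied directly to frame-level Mel filterbank (MFB) features:
    convolutions scan the time-frequency map and a global pooling layer
    summarizes it, so no hand-made utterance-level statistics (means,
    extremes, ...) are ever computed."""

    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(5, 5), padding=2), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
            nn.Conv2d(16, 32, kernel_size=(5, 5), padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # collapses variable-length utterances
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, time, n_mels)
        h = self.features(x).flatten(1)
        return self.classifier(h)

if __name__ == "__main__":
    model = MFBEmotionCNN()
    fake_batch = torch.randn(8, 1, 300, 40)   # 8 utterances, 300 frames, 40 MFB bands
    logits = model(fake_batch)
    print(logits.shape)                        # torch.Size([8, 4])
```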
{
"docid": "96d2e884c65205ef458214594f8b64f5",
"text": "The weak methods occur pervasively in AI systems and may form die basic methods for all intelligent systems. The purpose of this paper is to characterize die weak methods and to explain how and why they arise in intelligent systems. We propose an organization, called a universal weak method that provides functionality of all the weak methods.* A universal weak method is an organizational scheme for knowledge that produces the appropriate search behavior given the available task-domain knowledge. We present a problem solving architecture, called SOAR, in which we realize a universal weak method. We then demonstrate the universal weak method with a variety of weak methods on a set of tasks. This research was sponsored by die Defense Advanced Research Projects Agency (DOD), ARPA Order No: 3597, monitored by die Air Force Avionics Laboratory Under Contract F33515-78-C-155L The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of die Defense Advanced Research Projects Agency or the US Government.",
"title": ""
},
{
"docid": "a63de6665b453dc7ca11bb85f51d314b",
"text": "The state of the art in and the future of robotics are discussed. The potential paths to the long-term vision of robots that work alongside people in homes and workplaces as useful, capable collaborators are discussed. Robot manipulation in human environments is expected to grow in the coming years as more researchers seek to create robots that actively help in the daily lives of people",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "accb879062cf9c2e6fa3fb636f33b333",
"text": "The CLEF eRisk 2018 challenge focuses on early detection of signs of depression or anorexia using posts or comments over social media. The eRisk lab has organized two tasks this year and released two different corpora for the individual tasks. The corpora are developed using the posts and comments over Reddit, a popular social media. The machine learning group at Ramakrishna Mission Vivekananda Educational and Research Institute (RKMVERI), India has participated in this challenge and individually submitted five results to accomplish the objectives of these two tasks. The paper presents different machine learning techniques and analyze their performance for early risk prediction of anorexia or depression. The techniques involve various classifiers and feature engineering schemes. The simple bag of words model has been used to perform ada boost, random forest, logistic regression and support vector machine classifiers to identify documents related to anorexia or depression in the individual corpora. We have also extracted the terms related to anorexia or depression using metamap, a tool to extract biomedical concepts. Theerefore, the classifiers have been implemented using bag of words features and metamap features individually and subsequently combining these features. The performance of the recurrent neural network is also reported using GloVe and Fasttext word embeddings. Glove and Fasttext are pre-trained word vectors developed using specific corpora e.g., Wikipedia. The experimental analysis on the training set shows that the ada boost classifier using bag of words model outperforms the other methods for task1 and it achieves best score on the test set in terms of precision over all the runs in the challenge. Support vector machine classifier using bag of words model outperforms the other methods in terms of fmeasure for task2. The results on the test set submitted to the challenge suggest that these framework achieve reasonably good performance.",
"title": ""
},
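The bag-of-words baselines discussed above are easy to reproduce in outline with scikit-learn; the toy documents and labels below are invented stand-ins for the Reddit user histories of the eRisk corpora, and the uni+bigram vectorizer with logistic regression is just one of the classifier/feature combinations the passage mentions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import precision_score, f1_score

# Toy stand-ins for concatenated user writings; the real task uses Reddit
# post histories labelled for signs of depression or anorexia.
train_docs = ["i feel hopeless and empty most days",
              "great ride with friends, training for the marathon",
              "i have been skipping meals and counting every calorie",
              "new recipe turned out great, family loved dinner"]
train_labels = [1, 0, 1, 0]          # 1 = at-risk, 0 = control
test_docs = ["cannot sleep, everything feels pointless",
             "weekend hike photos are up on the blog"]
test_labels = [1, 0]

# Bag-of-words + logistic regression, one of the simple baselines that the
# passage reports as competitive for early risk detection.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2), min_df=1),
                      LogisticRegression(max_iter=1000))
model.fit(train_docs, train_labels)
pred = model.predict(test_docs)
print("precision:", precision_score(test_labels, pred),
      "F1:", f1_score(test_labels, pred))
```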
{
"docid": "629c6c7ca3db9e7cad2572c319ec52f0",
"text": "Recent research on pornography suggests that perception of addiction predicts negative outcomes above and beyond pornography use. Research has also suggested that religious individuals are more likely to perceive themselves to be addicted to pornography, regardless of how often they are actually using pornography. Using a sample of 686 unmarried adults, this study reconciles and expands on previous research by testing perceived addiction to pornography as a mediator between religiosity and relationship anxiety surrounding pornography. Results revealed that pornography use and religiosity were weakly associated with higher relationship anxiety surrounding pornography use, whereas perception of pornography addiction was highly associated with relationship anxiety surrounding pornography use. However, when perception of pornography addiction was inserted as a mediator in a structural equation model, pornography use had a small indirect effect on relationship anxiety surrounding pornography use, and perception of pornography addiction partially mediated the association between religiosity and relationship anxiety surrounding pornography use. By understanding how pornography use, religiosity, and perceived pornography addiction connect to relationship anxiety surrounding pornography use in the early relationship formation stages, we hope to improve the chances of couples successfully addressing the subject of pornography and mitigate difficulties in romantic relationships.",
"title": ""
},
{
"docid": "11d1978a3405f63829e02ccb73dcd75f",
"text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.",
"title": ""
},
{
"docid": "ca79ff3c016e4e8221088117a7604e3b",
"text": "There are about 150 million jellyfish stings every year. \"Portuguese man of war\" is responsible for substantial proportion of stings worldwide. The biggest risk from a jellyfish stings may come from incorrect management. A 42-year-old woman was severely stung by venomous marine animal while bathing in waters of the Thai Gulf. It was most likely \"Portuguese man of war\". The patient didn't remember while being rescued. Looking at damages it seems that first aid was incorrect. Inappropriate and delayed management caused disfiguring scars. On the ground of this case, first aid for \"Portuguese man of war\" stings is reminded.",
"title": ""
},
{
"docid": "a95400eda4b42c0e1dcf02cefd945787",
"text": "Kajal Rai Research Scholar, Department of Computer Science and Applications, Panjab University, Chandigarh, India Email: [email protected] M. Syamala Devi Professor, Department of Computer Science and Applications, Panjab University, Chandigarh, India Email: [email protected] Ajay Guleria System Manager, Computer Center, Panjab University, Chandigarh, India Email: [email protected] -----------------------------------------------------ABSTRACT-----------------------------------------------------An Intrusion Detection System (IDS) is a defense measure that supervises activities of the computer network and reports the malicious activities to the network administrator. Intruders do many attempts to gain access to the network and try to harm the organization’s data. Thus the security is the most important aspect for any type of organization. Due to these reasons, intrusion detection has been an important research issue. An IDS can be broadly classified as Signature based IDS and Anomaly based IDS. In our proposed work, the decision tree algorithm is developed based on C4.5 decision tree approach. Feature selection and split value are important issues for constructing a decision tree. In this paper, the algorithm is designed to address these two issues. The most relevant features are selected using information gain and the split value is selected in such a way that makes the classifier unbiased towards most frequent values. Experimentation is performed on NSL-KDD (Network Security Laboratory Knowledge Discovery and Data Mining) dataset based on number of features. The time taken by the classifier to construct the model and the accuracy achieved is analyzed. It is concluded that the proposed Decision Tree Split (DTS) algorithm can be used for signature based intrusion detection.",
"title": ""
},
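A small sketch of the ranking-then-split idea highlighted above: score features by an information-gain-style criterion and grow an entropy-based tree on the top-ranked ones. This is a generic illustration, not the DTS algorithm itself; the random matrix stands in for NSL-KDD records, and mutual information is used as an information-gain proxy.

```python
# Sketch of information-gain-style feature ranking feeding a decision tree
# (illustration only; the random data stands in for NSL-KDD, which is not bundled here).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))                    # 200 fake connection records, 10 numeric features
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)    # synthetic "attack" label

# Rank features by estimated mutual information with the label.
scores = mutual_info_classif(X, y, random_state=0)
top_features = np.argsort(scores)[::-1][:5]
print("top features:", top_features)

# Train an entropy-criterion tree (C4.5-like splitting) on the selected features.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X[:, top_features], y)
print("training accuracy:", tree.score(X[:, top_features], y))
```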
{
"docid": "b70716877c23701d0897ab4a42a5beba",
"text": "We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.",
"title": ""
},
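To illustrate the kind of operation such a mesh-deformation network applies per vertex, here is a toy single graph-convolution step over mesh vertices. It is not the paper's architecture; the ring "mesh", feature dimensions, and weights are all invented.

```python
# Toy graph-convolution step over mesh vertices (illustration only; not the actual
# network, losses, or coarse-to-fine schedule).
import numpy as np

def graph_conv(vertex_feats, adjacency, weight_self, weight_neigh):
    """Each vertex mixes its own features with the mean of its neighbours',
    then applies a ReLU."""
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = (adjacency @ vertex_feats) / deg
    return np.maximum(vertex_feats @ weight_self + neigh_mean @ weight_neigh, 0.0)

rng = np.random.default_rng(0)
num_vertices, feat_dim = 6, 8
feats = rng.normal(size=(num_vertices, feat_dim))          # per-vertex features (hypothetical)
adj = np.zeros((num_vertices, num_vertices))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:  # a tiny ring "mesh"
    adj[i, j] = adj[j, i] = 1.0

w_self = rng.normal(size=(feat_dim, 3)) * 0.1
w_neigh = rng.normal(size=(feat_dim, 3)) * 0.1
vertex_offsets = graph_conv(feats, adj, w_self, w_neigh)   # e.g. predicted 3D offsets
print(vertex_offsets.shape)                                  # (6, 3)
```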
{
"docid": "26d5237c912977223e0ba45c0f949e3d",
"text": "Generally speaking, ‘Education’ is utilized in three senses: Knowledge, Subject and a Process. When a person achieves degree up to certain level we do not call it education .As for example if a person has secured Masters degree then we utilize education it a very narrower sense and call that the person has achieved education up to Masters Level. In the second sense, education is utilized in a sense of discipline. As for example if a person had taken education as a paper or as a discipline during his study in any institution then we utilize education as a subject. In the third sense, education is utilized as a process. In fact when we talk of education, we talk in the third sense i.e. education as a process. Thus, we talk what is education as a process? What are their importances etc.? The following debate on education will discuss education in this sense and we will talk education as a process.",
"title": ""
}
] |
scidocsrr
|
da1563156ce7d278080ea0e68841333d
|
Illuminant estimation and detection using near-infrared
|
[
{
"docid": "5824a316f20751183676850c119c96cd",
"text": " Proposed method – Max-RGB & Gray-World • Instantiations of Minkowski norm – Optimal illuminant estimate • L6 norm: Working best overall",
"title": ""
}
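The bullets above summarise a Minkowski-norm (shades-of-gray style) view of illuminant estimation in which Gray-World and Max-RGB are the p = 1 and p -> infinity special cases. A small sketch follows, with a synthetic image standing in for real data; the choice of p = 6 mirrors the passage's "L6 works best" remark.

```python
# Minkowski-norm illuminant estimate: p=1 reduces to Gray-World, large p approaches
# Max-RGB, and p=6 is the setting the passage reports as best (synthetic image only).
import numpy as np

def estimate_illuminant(image, p=6):
    """image: H x W x 3 array; returns a unit-norm RGB illuminant estimate."""
    channels = image.reshape(-1, 3).astype(float)
    est = (channels ** p).mean(axis=0) ** (1.0 / p)   # per-channel Minkowski mean
    return est / np.linalg.norm(est)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3)) * np.array([0.9, 0.7, 0.5])  # synthetic reddish cast
for p in (1, 6, 50):   # Gray-World, L6, near Max-RGB
    print(p, estimate_illuminant(img, p))
```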
] |
[
{
"docid": "49740b1faa60a212297926fec63de0ce",
"text": "In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problemempirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children’s fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a na ı̈ve baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions.",
"title": ""
},
{
"docid": "e8d2bad4083a4a6cf5f96aedd5112f3f",
"text": "Mechanic's hands is a poorly defined clinical finding that has been reported in a variety of rheumatologic diseases. Morphologic descriptions include hyperkeratosis on the sides of the digits that sometimes extends to the distal tips, diffuse palmar scale, and (more recently observed) linear discrete scaly papules in a similar lateral distribution. The association of mechanic's hands with dermatomyositis, although recognized, is still debatable. In this review, most studies have shown that mechanic's hands is commonly associated with dermatomyositis and displays histopathologic findings of interface dermatitis, colloid bodies, and interstitial mucin, which are consistent with a cutaneous connective tissue disease. A more specific definition of this entity would help to determine its usefulness in classifying and clinically identifying patients with dermatomyositis, with implications related to subsequent screening for associated comorbidities in this setting.",
"title": ""
},
{
"docid": "bceaae2a05d673bc576f365d6a0254ee",
"text": "OBJECTIVE\nResults from a recent series of surveys from 9 states and the District of Columbia by the Community Childhood Hunger Identification Project (CCHIP) provide an estimate that 4 million American children experience prolonged periodic food insufficiency and hunger each year, 8% of the children under the age of 12 in this country. The same studies show that an additional 10 million children are at risk for hunger. The current study examined the relationship between hunger as defined by the CCHIP measure (food insufficiency attributable to constrained resources) and variables reflecting the psychosocial functioning of low-income, school-aged children.\n\n\nMETHODS\nThe study group included 328 parents and children from a CCHIP study of families with at least 1 child under the age of 12 years living in the city of Pittsburgh and the surrounding Allegheny County. A two-stage area probability sampling design with standard cluster techniques was used. All parents whose child was between the ages of 6 and 12 years at the time of interview were asked to complete a Pediatric Symptom Checklist, a brief parent-report questionnaire that assesses children's emotional and behavioral symptoms. Hunger status was defined by parent responses to the standard 8 food-insufficiency questions from the CCHIP survey that are used to classify households and children as \"hungry,\" \"at-risk for hunger,\" or \"not hungry.\"\n\n\nRESULTS\nIn an area probability sample of low-income families, those defined as hungry on the CCHIP measure were significantly more likely to have clinical levels of psychosocial dysfunction on the Pediatric Symptom Checklist than children defined as at-risk for hunger or not hungry. Analysis of individual items and factor scores on the Pediatric Symptom Checklist showed that virtually all behavioral, emotional, and academic problems were more prevalent in hungry children, but that aggression and anxiety had the strongest degree of association with experiences of hunger.\n\n\nCONCLUSION\nChildren from families that report multiple experiences of food insufficiency and hunger are more likely to show behavioral, emotional, and academic problems on a standardized measure of psychosocial dysfunction than children from the same low-income communities whose families do not report experiences of hunger. Although causality cannot be determined from a cross-sectional design, the strength of these findings suggests the importance of greater awareness on the part of health care providers and public health officials of the role of food insufficiency and hunger in the lives of poor children.",
"title": ""
},
{
"docid": "7c287295e022480314d8a2627cd12cef",
"text": "The causal role of human papillomavirus infections in cervical cancer has been documented beyond reasonable doubt. The association is present in virtually all cervical cancer cases worldwide. It is the right time for medical societies and public health regulators to consider this evidence and to define its preventive and clinical implications. A comprehensive review of key studies and results is presented.",
"title": ""
},
{
"docid": "b3bcf4d5962cd2995d21cfbbe9767b9d",
"text": "In computer, Cloud of Things (CoT) it is a Technique came by integrated two concepts Internet of Things(IoT) and Cloud Computing. Therefore, Cloud of Things is a currently a wide area of research and development. This paper discussed the concept of Cloud of Things (CoT) in detail and explores the challenges, open research issues, and various tools that can be used with Cloud of Things (CoT). As a result, this paper gives a knowledge and platform to explore Cloud of Things (CoT), and it gives new ideas for researchers to find the open research issues and solution to challenges.",
"title": ""
},
{
"docid": "ca8389d51dfd4941a1924037496cad6e",
"text": "The TREC question answering (QA) track was the first large-sc ale evaluation of open-domain question answering systems. In addition to successfully fostering research on the QA task, the track h s also been used to investigate appropriate evaluation me thodologies for question answering systems. This paper gives a brief histor y of the TREC QA track, motivating the decisions made in its im plementation and summarizing the results. The lessons learned from the tr ack will be used to evolve new QA evaluations for both the trac k nd the ARDA AQUAINT program.",
"title": ""
},
{
"docid": "ea4e0cb8ac63a26319e5567e53b1a053",
"text": "Markov chains are widely used in the context of performance and reliability evaluation of systems of various nature. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both the discrete [17, 6] and the continuous time setting [4, 8]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen–Twente Markov Chain Checker (E T MC), where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore we report on first successful applications of the tool to non-trivial examples, highlighting lessons learned during development and application of E T MC.",
"title": ""
},
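The numerical core of checking a "probability of eventually reaching some target states" property on a discrete-time Markov chain can be sketched as a fixed-point computation. This is a generic illustration of that computation, not the checker's actual engine or logic syntax; the 4-state chain is invented.

```python
# Reachability probabilities in a small discrete-time Markov chain (generic sketch).
import numpy as np

P = np.array([             # transition matrix of a 4-state DTMC (rows sum to 1)
    [0.5, 0.3, 0.2, 0.0],
    [0.0, 0.2, 0.4, 0.4],
    [0.0, 0.0, 1.0, 0.0],  # state 2: absorbing target
    [0.0, 0.0, 0.0, 1.0],  # state 3: absorbing failure
])
target = {2}

# Fixed point: x[s] = 1 for targets, otherwise x[s] = sum_t P[s, t] * x[t].
x = np.zeros(len(P))
for s in target:
    x[s] = 1.0
for _ in range(1000):
    new_x = P @ x
    for s in target:
        new_x[s] = 1.0
    if np.max(np.abs(new_x - x)) < 1e-12:
        break
    x = new_x
print(x)   # x[s] = probability of eventually reaching the target from state s
```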
{
"docid": "2855a1f420ed782317c1598c9d9c185e",
"text": "Ranking authors is vital for identifying a researcher’s impact and his standing within a scientific field. There are many different ranking methods (e.g., citations, publications, h-index, PageRank, and weighted PageRank), but most of them are topic-independent. This paper proposes topic-dependent ranks based on the combination of a topic model and a weighted PageRank algorithm. The Author-Conference-Topic (ACT) model was used to extract topic distribution of individual authors. Two ways for combining the ACT model with the PageRank algorithm are proposed: simple combination (I_PR) or using a topic distribution as a weighted vector for PageRank (PR_t). Information retrieval was chosen as the test field and representative authors for different topics at different time phases were identified. Principal Component Analysis (PCA) was applied to analyze the ranking difference between I_PR and PR_t.",
"title": ""
},
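A generic sketch of PageRank with a topic distribution used as a teleportation weight, in the spirit of combining a topic model with a weighted PageRank. The ACT topic model is not implemented here, and the author graph and per-author topic masses are invented.

```python
# Topic-weighted PageRank over a tiny author graph (illustrative only).
import numpy as np

def topic_pagerank(adj, topic_weight, damping=0.85, iters=100):
    """adj[i, j] = 1 if author i links to (e.g. cites) author j.
    topic_weight[i] = author i's probability mass on the topic of interest."""
    n = len(adj)
    out_deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    transition = adj / out_deg
    teleport = topic_weight / topic_weight.sum()   # topic-biased teleportation
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) * teleport + damping * (transition.T @ rank)
    return rank

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
topic_weight = np.array([0.7, 0.1, 0.6, 0.2])   # hypothetical per-author topic mass
print(topic_pagerank(adj, topic_weight))
```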
{
"docid": "bf56462f283d072c4157d5c5665eead3",
"text": "Various scientific computations have become so complex, and thus computation tools play an important role. In this paper, we explore the state-of-the-art framework providing high-level matrix computation primitives with MapReduce through the case study approach, and demonstrate these primitives with different computation engines to show the performance and scalability. We believe the opportunity for using MapReduce in scientific computation is even more promising than the success to date in the parallel systems literature.",
"title": ""
},
{
"docid": "5d673d1b6755e3e1d451ca17644cf3ec",
"text": "The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm’s key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.",
"title": ""
},
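A bare-bones novelty-search loop with a pluggable behaviour descriptor, illustrating the replace-the-objective-with-novelty idea described above. The deep-network behaviour descriptor of an Innovation Engine is replaced by a trivial stand-in, and all parameters are invented.

```python
# Bare-bones novelty search over 2-D "behaviours" (idea illustration only).
import numpy as np

rng = np.random.default_rng(0)

def behaviour(genome):
    return genome            # stand-in: a learned DNN embedding would go here

def novelty(b, archive, k=5):
    """Mean distance to the k nearest archived behaviours."""
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(b - a) for a in archive)
    return float(np.mean(dists[:k]))

archive, population = [], [rng.normal(size=2) for _ in range(10)]
for generation in range(20):
    scored = [(novelty(behaviour(g), archive), g) for g in population]
    scored.sort(key=lambda t: t[0], reverse=True)          # most novel first
    archive.extend(behaviour(g) for _, g in scored[:2])    # archive the most novel
    parents = [g for _, g in scored[:5]]
    population = [p + rng.normal(scale=0.1, size=2) for p in parents for _ in range(2)]
print("archive size:", len(archive))
```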
{
"docid": "30e8c48f6995c177f9a9e88b2642cdae",
"text": "In this paper, we evaluate the capability of the high spatial resolution airborne Digital Airborne Imaging System (DAIS) imagery for detailed vegetation classification at the alliance level with the aid of ancillary topographic data. Image objects as minimum classification units were generated through the Fractal Net Evolution Approach (FNEA) segmentation using eCognition software. For each object, 52 features were calculated including spectral features, textures, topographic features, and geometric features. After statistically ranking the importance of these features with the classification and regression tree algorithm (CART), the most effective features for classification were used to classify the vegetation. Due to the uneven sample size for each class, we chose a non-parametric (nearest neighbor) classifier. We built a hierarchical classification scheme and selected features for each of the broadest categories to carry out the detailed classification, which significantly improved the accuracy. Pixel-based maximum likelihood classification (MLC) with comparable features was used as a benchmark in evaluating our approach. The objectbased classification approach overcame the problem of saltand-pepper effects found in classification results from traditional pixel-based approaches. The method takes advantage of the rich amount of local spatial information present in the irregularly shaped objects in an image. This classification approach was successfully tested at Point Reyes National Seashore in Northern California to create a comprehensive vegetation inventory. Computer-assisted classification of high spatial resolution remotely sensed imagery has good potential to substitute or augment the present ground-based inventory of National Park lands. Introduction Remote sensing provides a useful source of data from which updated land-cover information can be extracted for assessing and monitoring vegetation changes. In the past several decades, airphoto interpretation has played an important role in detailed vegetation mapping (Sandmann and Lertzman, 2003), while applications of coarser spatial resolution satellite Object-based Detailed Vegetation Classification with Airborne High Spatial Resolution Remote Sensing Imagery Qian Yu, Peng Gong, Nick Clinton, Greg Biging, Maggi Kelly, and Dave Schirokauer imagery such as Landsat Thematic Mapper (TM) and SPOT High Resolution Visible (HRV) alone have often proven insufficient or inadequate for differentiating species-level vegetation in detailed vegetation studies (Kalliola and Syrjanen, 1991; Harvey and Hill, 2001). Classification accuracy is reported to be only 40 percent or less for thematic information extraction at the species-level with these image types (Czaplewski and Patterson, 2003). However, high spatial resolution remote sensing is becoming increasingly available; airborne and spaceborne multispectral imagery can be obtained at spatial resolutions at or better than 1 m. The utility of high spatial resolution for automated vegetation composition classification needs to be evaluated (Ehlers et al., 2003). High spatial resolution imagery initially thrives on the application of urban-related feature extraction has been used (Jensen and Cowen, 1999; Benediktsson et al., 2003; Herold et al., 2003a), but there has not been as much work in detailed vegetation mapping using high spatial resolution imagery. 
This preference for urban areas is partly due to the proximity of the spectral signatures for different species and the difficulties in capturing texture features for vegetation (Carleer and Wolff, 2004). While high spatial resolution remote sensing provides more information than coarse resolution imagery for detailed observation on vegetation, increasingly smaller spatial resolution does not necessarily benefit classification performance and accuracy (Marceau et al., 1990; Gong and Howarth, 1992b; Hay et al., 1996; Hsieh et al., 2001). With the increase in spatial resolution, single pixels no longer capture the characteristics of classification targets. The increase in intra-class spectral variability causes a reduction of statistical separability between classes with traditional pixel-based classification approaches. Consequently, classification accuracy is reduced, and the classification results show a salt-and-pepper effect, with individual pixels classified differently from their neighbors. To overcome this so-called H-resolution problem, some pixel-based methods have already been implemented, mainly consisting of three categories: (a) image pre-processing, such as low-pass filter and texture analysis (Gong et al., 1992; Hill and Foody, 1994), (b) contextual classification (Gong and Howarth, 1992a), and (c) post-classification processing, such as mode filtering, morphological filtering, rule-based processing, and probabilistic relaxation (Gong and Howarth, 1989; Shackelford and Davis, 2003; Sun et al., 2003). A common aspect of these methods is that they incorporate spatial information to characterize each class using neighborhood relationships.",
"title": ""
},
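A compact sketch of the rank-then-classify pipeline the abstract describes: tree-based (CART-style) feature ranking followed by a non-parametric nearest-neighbour classifier over per-object features. The segmentation step and the 52 real object features are not reproduced; random vectors stand in for them.

```python
# "Rank features with a tree, classify objects with nearest neighbour" sketch
# (per-object features are synthetic; no image segmentation is performed here).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.random((300, 52))                         # 300 fake image objects, 52 features
y = (2 * X[:, 0] + X[:, 10] > 1.4).astype(int)    # synthetic vegetation class label

# Rank feature importance with a CART-style tree, keep the most useful ones.
cart = DecisionTreeClassifier(random_state=0).fit(X, y)
keep = np.argsort(cart.feature_importances_)[::-1][:10]

# Non-parametric nearest-neighbour classification on the reduced feature set.
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:, keep], y)
print("training accuracy:", knn.score(X[:, keep], y))
```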
{
"docid": "4d297680cd342f46a5a706c4969273b8",
"text": "Theory on passwords has lagged practice, where large providers use back-end smarts to survive with imperfect technology.",
"title": ""
},
{
"docid": "976064ba00f4eb2020199f264d29dae2",
"text": "Social network analysis is a large and growing body of research on the measurement and analysis of relational structure. Here, we review the fundamental concepts of network analysis, as well as a range of methods currently used in the field. Issues pertaining to data collection, analysis of single networks, network comparison, and analysis of individual-level covariates are discussed, and a number of suggestions are made for avoiding common pitfalls in the application of network methods to substantive questions.",
"title": ""
},
{
"docid": "456d376029d594170c81dbe455a4086a",
"text": "Long range, low power networks are rapidly gaining acceptance in the Internet of Things (IoT) due to their ability to economically support long-range sensing and control applications while providing multi-year battery life. LoRa is a key example of this new class of network and is being deployed at large scale in several countries worldwide. As these networks move out of the lab and into the real world, they expose a large cyber-physical attack surface. Securing these networks is therefore both critical and urgent. This paper highlights security issues in LoRa and LoRaWAN that arise due to the choice of a robust but slow modulation type in the protocol. We exploit these issues to develop a suite of practical attacks based around selective jamming. These attacks are conducted and evaluated using commodity hardware. The paper concludes by suggesting a range of countermeasures that can be used to mitigate the attacks.",
"title": ""
},
{
"docid": "14acb15d79002d7c5b6b50daa082425a",
"text": "Recent conceptualizations of trends in the structure of U.S. industry have focused on the relative importance of markets, hierarchies, and hybrid intermediate forms. This paper advances the discussion by distinguishing three ideal–typical forms of organization and their corresponding coordination mechanisms: market/price, hierarchy/authority, and community/trust. Different institutions combine the three forms/mechanisms in different proportions. Economic and organizational theory have shown that, compared to trust, price and authority are relatively ineffective means of dealing with knowledge-based assets. Therefore, as knowledge becomes increasingly important in our economy, we should expect high-trust institutional forms to proliferate. A review of trends in employment relations, interdivisional relations, and interfirm relations finds evidence suggesting that the effect of growing knowledge-intensity may indeed be a trend toward greater reliance on trust. There is also reason to believe that the form of trust most effective in this context is a distinctively modern kind—‘‘reflective trust’’—as opposed to traditionalistic, ‘‘blind’’ trust. Such a trend to reflective trust appears to threaten the privileges of currently dominant social actors, and these actors’ resistance, in combination with the complex interdependencies between price, authority, and trust mechanisms, imparts a halting character to the trend. But the momentum of this trend nevertheless appears to be selfreinforcing, which suggests that it may ultimately challenge the foundations of our capitalist form of society while simultaneously creating the foundations of a new, postcapitalist form. (Knowledge; Trust; Market; Hierarchy; Capitalism) Introduction Considerable attention has been focused recently on data suggesting that the secular trend toward larger firms and establishments has stalled and may be reversing (Brynjolfsson et al. 1994). Some observers argue that the underlying new trend is toward the disintegration of large hierarchical firms and their replacement by small entrepreneurial firms coordinated by markets (Birch 1987). This argument, however, understates the persistence of large firms, ignores transformations underway within these firms, and masks the growth of network relations among firms. How, then, should one interpret the current wave of changes in organizational forms? Zenger and Hesterly (1997) propose that the underlying trend is a progressive swelling of the zone between hierarchy and market. They point to a proliferation of hybrid organizational forms that introduce high-powered marketlike incentives into firms and hierarchical controls into markets (Holland and Lockett 1997, make a similar argument). This proposition is more valid empirically than a one-sided characterization of current trends as a shift from hierarchy to market. The ‘‘swelling-middle’’ thesis is also a step beyond Williamson’s (1991) unjustified assertion that such hybrid forms are infeasible or inefficient. However, this paper argues that Zenger and Hesterlys’ thesis, too, is fundamentally flawed in that it ignores a third increasingly significant coordination mechanism: trust. In highlighting the importance of trust, this essay adds to a burgeoning literature (e.g. Academy of Management Review 1998; further references below); my goal is to pull together several strands of this literature to advance a line of reflection that positions trust as a central construct in a broader argument. 
In outline, the argument is, first, that alongside the market ideal-typical form of organization which relies on the price mechanism, and the hierarchy form which relies on authority, there is a third form, the community form which relies on trust. Empirically observed arrangements typically embody a mix of the three ideal-typical organization forms and rely on a corresponding mix of price, hierarchy, and trust mechanisms. Second, based on a well-established body of economic and sociological theory, I argue that trust has uniquely effective properties for the coordination of knowledge-intensive activities within and between organizations. Third, given a broad consensus that modern economies are becoming increasingly knowledge intensive, the first two premises imply that trust is likely to become increasingly important in the mechanism mix. I present indices of such a knowledge-driven trend to trust within and between firms, specifically in the employment relationship, in interdivisional relations, and in interfirm relations. Fourth, I discuss the difficulties encountered by the trust mechanism in a capitalist society and the resulting mutation of trust itself. Finally, the concluding section discusses the broader effects of this intra- and interfirm trend to trust, and argues that this trend progressively undermines the legitimacy of the capitalist form of society, and simultaneously lays the foundations for a new form. Both the theory and the data underlying these conclusions are subject to debate: I will summarize the key points of contention, and it will become obvious that we are far from theoretical or empirical consensus. In the form of an essay rather than a scientific paper, my argument will be speculative and buttressed by only suggestive rather than compelling evidence. My goal, however, is to enrich organizational research by enhancing its engagement with debates in the broader field of social theory. The Limits of Market and Hierarchy. Knowledge is a remarkable substance. Unlike other resources, most forms of knowledge grow rather than diminish with use. Knowledge tends, therefore, to play an increasingly central role in economic development over time. Increasing knowledge-intensity takes two forms: the rising education level of the workforce (living or subjective knowledge) and the growing scientific and technical knowledge materialized in new equipment and new products (embodied or objectified knowledge). Recapitulating a long tradition of scholarship in economics and organization theory, this section argues that neither market nor hierarchy, nor any combination of the two, is particularly well suited to the challenges of the knowledge economy. To draw out the implications of this argument, I will assume that real institutions, notably empirically observed markets and firms, embody varying mixes of three ideal-typical organizational forms and their corresponding coordination mechanisms: (a) the hierarchy form relies on the authority mechanism, (b) the market form relies on price, and (c) the community form relies on trust. For brevity’s sake, an organizational form and its corresponding mechanism will be referred to as an organizing “mode.” Modes typically appear in varying proportions in different institutions. For example, interfirm relations in real markets embody and rely on varying degrees of trust and hierarchical authority, even if their primary mechanism is price.
Similarly, real firms’ internal operations typically rely to some extent on both trust and price signals, even if their primary coordination mechanism is authority. Hierarchy uses authority (legitimate power) to create and coordinate a horizontal and vertical division of labor. Under hierarchy, knowledge is treated as a scarce resource and is therefore concentrated, along with the corresponding decision rights, in specialized functional units and at higher levels of the organization. A large body of organizational research has shown that an institution structured by this mechanism may be efficient in the performance of routine partitioned tasks but encounters enormous difficulty in the performance of innovation tasks requiring the generation of new knowledge (e.g., Burns and Stalker 1961, Bennis and Slater 1964, Mintzberg 1979, Scott 1992, Daft 1998). When specialized units are told to cooperate in tasks that typically encounter unanticipated problems requiring novel solutions, tasks such as the development of a new product, the hierarchical form gives higher-level managers few levers with which to ensure that the participating units will collaborate. By their nonroutine nature, such tasks cannot be preprogrammed, and the creative collaboration they require cannot be simply commanded. Similarly, the vertical differentiation of hierarchy is effective for routine tasks, facilitating downward communication of explicit knowledge and commands, but less effective when tasks are nonroutine, because lower levels lack both the knowledge needed to create new knowledge and the incentives to transmit new ideas upward. Firms thus invariably supplement their primary organizational mode, hierarchy/authority, with other modes that can mitigate the hierarchy/ authority mode’s weaknesses. The market form, as distinct from the actual functioning of most real markets, relies on the price mechanism to coordinate competing suppliers and anonymous buyers. With standard goods and strong property rights, marginal pricing promises to optimize production and allocation jointly. The dynamics of competition, supply, and demand lead to a price at which social welfare is Pareto optimal (that is, no one’s welfare can be increased without reducing someone else’s). A substantial body of modern economic theory has shown, however, that the price mechanism fails to optimize the production and allocation of knowledge (Arrow 1962, Stiglitz 1994). Knowledge is a ‘‘public good’’; that is, like radio transmission, its availability to one consumer is not diminished by its use by PAUL S. ADLER Market, Hierarchy, and Trust ORGANIZATION SCIENCE/Vol. 12, No. 2, March–April 2001 217 another. With knowledge, as with other public goods, reliance on the market/price mode forces a trade-off between production and allocation. On the one hand, production of new knowledge would be optimized by establishing strong intellectual property rights tha",
"title": ""
},
{
"docid": "eab3dff1aecb9cec903e0bbe67b5a66d",
"text": "With a pace of about twice the observed rate of global warming, the temperature on the Qinghai-Tibetan Plateau (Earth's 'third pole') has increased by 0.2 °C per decade over the past 50 years, which results in significant permafrost thawing and glacier retreat. Our review suggested that warming enhanced net primary production and soil respiration, decreased methane (CH(4)) emissions from wetlands and increased CH(4) consumption of meadows, but might increase CH(4) emissions from lakes. Warming-induced permafrost thawing and glaciers melting would also result in substantial emission of old carbon dioxide (CO(2)) and CH(4). Nitrous oxide (N(2)O) emission was not stimulated by warming itself, but might be slightly enhanced by wetting. However, there are many uncertainties in such biogeochemical cycles under climate change. Human activities (e.g. grazing, land cover changes) further modified the biogeochemical cycles and amplified such uncertainties on the plateau. If the projected warming and wetting continues, the future biogeochemical cycles will be more complicated. So facing research in this field is an ongoing challenge of integrating field observations with process-based ecosystem models to predict the impacts of future climate change and human activities at various temporal and spatial scales. To reduce the uncertainties and to improve the precision of the predictions of the impacts of climate change and human activities on biogeochemical cycles, efforts should focus on conducting more field observation studies, integrating data within improved models, and developing new knowledge about coupling among carbon, nitrogen, and phosphorus biogeochemical cycles as well as about the role of microbes in these cycles.",
"title": ""
},
{
"docid": "18defc8666f7fea7ae89ff3d5d833e0a",
"text": "[1] We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, since the coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet-domain, using a cross-validated, regularized least squares inversion. We include a model-resolution-based regularization, in order to more heavily damp the model during periods of sparse SAR acquisitions, compared to during times of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.",
"title": ""
},
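The temporal inversion step described above can be sketched as a regularized least-squares fit of a parametrized deformation history to interferogram-style difference observations. The wavelet decomposition and the model-resolution-based damping of the full method are omitted, and the data below are synthetic.

```python
# Regularized least-squares fit of a parametrized deformation history to
# interferogram-like date-difference observations (synthetic sketch only).
import numpy as np

rng = np.random.default_rng(0)
dates = np.sort(rng.uniform(0, 16, size=12))             # SAR acquisition times (years)
def true(t):                                             # "true" LOS deformation (mm)
    return 2.0 * t + 5.0 * np.log(1 + t / 3.0)

# Each interferogram observes the deformation difference between two dates.
pairs = [(i, j) for i in range(len(dates)) for j in range(i + 1, len(dates)) if j - i <= 3]
d = np.array([true(dates[j]) - true(dates[i]) for i, j in pairs])
d += rng.normal(scale=0.5, size=len(d))                   # observation noise

# Design matrix for a simple temporal basis: linear + logarithmic terms.
def basis(t):
    return np.array([t, np.log(1 + t / 3.0)])
G = np.array([basis(dates[j]) - basis(dates[i]) for i, j in pairs])

lam = 0.1                                                 # damping (regularization) weight
m = np.linalg.solve(G.T @ G + lam * np.eye(2), G.T @ d)   # regularized normal equations
print("estimated coefficients:", m)                        # compare against (2.0, 5.0)
```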
{
"docid": "29097a62fcfa349cdd9be06e86098014",
"text": "Metaphor is a pervasive feature of human language that enables us to conceptualize and communicate abstract concepts using more concrete terminology. Unfortunately, it is also a feature that serves to confound a computer’s ability to comprehend natural human language. We present a method to detect linguistic metaphors by inducing a domainaware semantic signature for a given text and compare this signature against a large index of known metaphors. By training a suite of binary classifiers using the results of several semantic signature-based rankings of the index, we are able to detect linguistic metaphors in unstructured text at a significantly higher precision as compared to several baseline approaches.",
"title": ""
},
{
"docid": "58c2f9f5f043f87bc51d043f70565710",
"text": "T strategic use of first-party content by two-sided platforms is driven by two key factors: the nature of buyer and seller expectations (favorable versus unfavorable) and the nature of the relationship between first-party content and third-party content (complements or substitutes). Platforms facing unfavorable expectations face an additional constraint: their prices and first-party content investment need to be such that low (zero) participation equilibria are eliminated. This additional constraint typically leads them to invest more (less) in first-party content relative to platforms facing favorable expectations when firstand third-party content are substitutes (complements). These results hold with both simultaneous and sequential entry of the two sides. With two competing platforms—incumbent facing favorable expectations and entrant facing unfavorable expectations— and multi-homing on one side of the market, the incumbent always invests (weakly) more in first-party content relative to the case in which it is a monopolist.",
"title": ""
},
{
"docid": "7f897e5994685f0b158da91cef99c855",
"text": "Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing that results in significant inefficiencies in local resource allocation for individual datacenters leading to unfairness in revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows cloud service providers to establish resource sharing contracts with individual datacenters apriori for defined time intervals during a 24 hour time period. Based on the established contracts, individual cloud service providers employ a cost-aware job scheduling and provisioning algorithm that enables tasks to complete and meet their response time requirements. The proposed techniques are evaluated through extensive experiments using realistic workloads and the results demonstrate the effectiveness, scalability and resource sharing efficiency of the proposed model.",
"title": ""
}
] |
scidocsrr
|
b7dcd778b44e844d7976d2aeef5d3224
|
Use Fewer Instances of the Letter "i": Toward Writing Style Anonymization
|
[
{
"docid": "21384ea8d80efbf2440fb09a61b03be2",
"text": "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.",
"title": ""
},
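The core idea of matching a small, noisy auxiliary record against anonymized records can be sketched with a weighted similarity score that rewards agreement on rare items. This is a generic illustration, not the paper's exact scoring function or statistical guarantees; the tiny dataset is invented.

```python
# Matching an auxiliary record of (item, rating) pairs against anonymized records,
# weighting rare items more heavily (generic sketch; invented toy data).
import math
from collections import Counter

records = {                      # anonymized records: user -> {item: rating}
    "u1": {"A": 5, "B": 3, "C": 4},
    "u2": {"A": 4, "D": 2},
    "u3": {"B": 3, "C": 4, "E": 5},
}
aux = {"C": 4, "E": 5}           # noisy background knowledge about a target

item_counts = Counter(item for r in records.values() for item in r)

def score(candidate):
    """Sum of rarity weights over items whose ratings roughly agree."""
    s = 0.0
    for item, rating in aux.items():
        if item in candidate and abs(candidate[item] - rating) <= 1:
            s += 1.0 / math.log(1 + item_counts[item])   # rarer items count more
    return s

best = max(records, key=lambda u: score(records[u]))
print(best, {u: round(score(r), 2) for u, r in records.items()})
```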
{
"docid": "986f58dd52542107b094a2142a1e4243",
"text": "We investigate the degree to which modern web browsers are subject to “device fingerprinting” via the version and configuration information that they will transmit to websites upon request. We implemented one possible fingerprinting algorithm, and collected these fingerprints from a large sample of browsers that visited our test side, panopticlick.eff.org. We observe that the distribution of our fingerprint contains at least 18.1 bits of entropy, meaning that if we pick a browser at random, at best we expect that only one in 286,777 other browsers will share its fingerprint. Among browsers that support Flash or Java, the situation is worse, with the average browser carrying at least 18.8 bits of identifying information. 94.2% of browsers with Flash or Java were unique in our sample. By observing returning visitors, we estimate how rapidly browser fingerprints might change over time. In our sample, fingerprints changed quite rapidly, but even a simple heuristic was usually able to guess when a fingerprint was an “upgraded” version of a previously observed browser’s fingerprint, with 99.1% of guesses correct and a false positive rate of only 0.86%. We discuss what privacy threat browser fingerprinting poses in practice, and what countermeasures may be appropriate to prevent it. There is a tradeoff between protection against fingerprintability and certain kinds of debuggability, which in current browsers is weighted heavily against privacy. Paradoxically, anti-fingerprinting privacy technologies can be selfdefeating if they are not used by a sufficient number of people; we show that some privacy measures currently fall victim to this paradox, but others do not.",
"title": ""
},
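The entropy figures quoted above come down to measuring how uneven the distribution of observed fingerprints is. The sketch below computes per-fingerprint surprisal and overall Shannon entropy in bits on a toy sample; the real study used a much larger sample of visitors.

```python
# Surprisal and entropy (in bits) of a browser-fingerprint distribution (toy sample).
import math
from collections import Counter

fingerprints = ["fp_a", "fp_b", "fp_b", "fp_c", "fp_d", "fp_d", "fp_d", "fp_e"]
counts = Counter(fingerprints)
n = len(fingerprints)

# Surprisal of one observed fingerprint: -log2 of its empirical frequency.
for fp, c in counts.items():
    print(fp, f"{-math.log2(c / n):.2f} bits")

# Shannon entropy of the whole distribution.
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print("entropy:", round(entropy, 2), "bits")
```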
{
"docid": "c0e70347999c028516eb981a15b8a6c8",
"text": "Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last. fm, Library Thing, and Amazon.",
"title": ""
},
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "7f652be9bde8f47d166e7bbeeb3a535b",
"text": "One of the problems often associated with online anonymity is that it hinders social accountability, as substantiated by the high levels of cybercrime. Although identity cues are scarce in cyberspace, individuals often leave behind textual identity traces. In this study we proposed the use of stylometric analysis techniques to help identify individuals based on writing style. We incorporated a rich set of stylistic features, including lexical, syntactic, structural, content-specific, and idiosyncratic attributes. We also developed the Writeprints technique for identification and similarity detection of anonymous identities. Writeprints is a Karhunen-Loeve transforms-based technique that uses a sliding window and pattern disruption algorithm with individual author-level feature sets. The Writeprints technique and extended feature set were evaluated on a testbed encompassing four online datasets spanning different domains: email, instant messaging, feedback comments, and program code. Writeprints outperformed benchmark techniques, including SVM, Ensemble SVM, PCA, and standard Karhunen-Loeve transforms, on the identification and similarity detection tasks with accuracy as high as 94% when differentiating between 100 authors. The extended feature set also significantly outperformed a baseline set of features commonly used in previous research. Furthermore, individual-author-level feature sets generally outperformed use of a single group of attributes.",
"title": ""
}
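A small sketch of the general stylometric idea the positive passages rely on: extract writing-style features such as character and function-word frequencies, then compare texts by similarity. The full Writeprints feature set, Karhunen-Loeve transform, and pattern-disruption algorithm are not reproduced, and the example texts are invented.

```python
# Minimal stylometric comparison: character-frequency + function-word features,
# compared by cosine similarity (invented texts; not the Writeprints method itself).
import string
import numpy as np

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "i", "that", "it", "is", "was"]

def style_vector(text):
    text_l = text.lower()
    total = max(len(text_l), 1)
    letters = [text_l.count(c) / total for c in string.ascii_lowercase]
    words = text_l.split()
    n_words = max(len(words), 1)
    func = [words.count(w) / n_words for w in FUNCTION_WORDS]
    return np.array(letters + func)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_a = "I think that it is the best of the options, and I was sure of it."
doc_b = "I was sure that it is the best choice, and I think it is."
doc_c = "Gradient descent converges when the step size is tuned to the curvature."
v_a, v_b, v_c = map(style_vector, (doc_a, doc_b, doc_c))
print("a~b", round(cosine(v_a, v_b), 3), "a~c", round(cosine(v_a, v_c), 3))
```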
] |
[
{
"docid": "928eb797289d2630ff2e701ced782a14",
"text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "ba4ffbb6c3dc865f803cbe31b52919c5",
"text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.",
"title": ""
},
{
"docid": "78d73915513030c9bc553b3694d91915",
"text": "This paper reports on a flexible micro planar coil for an MR (Magnetic Resonance) catheter. High resolution images can be obtained by using the MR catheter. Two types of coils of 5 mm and 20 mm in diameter were fabricated. Both of the coils were designed to have inductance of 1.0 muH at 8.5 MHz. The proto-type MR catheters were made by attaching the micro coils to the tip of acrylic pipes. In order to evaluate the MR catheters, the sensitive area and SNR (Signal to Noise Ratio) were measured. SNRs of the MR catheters were about three to eight times higher than that of a standard medical coil. Furthermore, we demonstrated that high resolution MR images can be achieved by using the MR catheters. MR images of a gumbo (Abelmoschus esculentus) were acquired with 0.5 x 0.5 x 1.0 mm3 resolution. While clear MR images were not able to be taken by the standard medical coil due to its low SNR, we were able to observe the gumbo and distinguished its seeds one by one in the MR images taken by the MR catheters.",
"title": ""
},
{
"docid": "5e63c7f6d86b634d8a2b7e0746eaa0d2",
"text": "A famous theorem of Szemerédi asserts that given any density 0 < δ ≤ 1 and any integer k ≥ 3, any set of integers with density δ will contain infinitely many proper arithmetic progressions of length k. For general k there are essentially four known proofs of this fact; Szemerédi’s original combinatorial proof using the Szemerédi regularity lemma and van der Waerden’s theorem, Furstenberg’s proof using ergodic theory, Gowers’ proof using Fourier analysis and the inverse theory of additive combinatorics, and the more recent proofs of Gowers and Rödl-Skokan using a hypergraph regularity lemma. Of these four, the ergodic theory proof is arguably the shortest, but also the least elementary, requiring passage (via the Furstenberg correspondence principle) to an infinitary measure preserving system, and then decomposing a general ergodic system relative to a tower of compact extensions. Here we present a quantitative, self-contained version of this ergodic theory proof, and which is “elementary” in the sense that it does not require the axiom of choice, the use of infinite sets or measures, or the use of the Fourier transform or inverse theorems from additive combinatorics. It also gives explicit (but extremely poor) quantitative bounds.",
"title": ""
},
{
"docid": "80cee0fa7114113732febe7f55b18a16",
"text": "A novel paradigm that changes the scene for the modern communication and computation systems is the Edge Computing. It is not a coincidence that terms like Mobile Cloud Computing, Cloudlets, Fog Computing, and Mobile-Edge Computing are gaining popularity both in academia and industry. In this paper, we embrace all these terms under the umbrella concept of “Edge Computing” to name the trend where computational infrastructures hence the services themselves are getting closer to the end user. However, we observe that bringing computational infrastructures to the proximity of the user does not magically solve all technical challenges. Moreover, it creates complexities of its own when not carefully handled. In this paper, these challenges are discussed in depth and categorically analyzed. As a solution direction, we propose that another major trend in networking, namely software-defined networking (SDN), should be taken into account. SDN, which is not proposed specifically for Edge Computing, can in fact serve as an enabler to lower the complexity barriers involved and let the real potential of Edge Computing be achieved. To fully demonstrate our ideas, initially, we put forward a clear collaboration model for the SDN-Edge Computing interaction through practical architectures and show that SDN related mechanisms can feasibly operate within the Edge Computing infrastructures. Then, we provide a detailed survey of the approaches that comprise the Edge Computing domain. A comparative discussion elaborates on where these technologies meet as well as how they differ. Later, we discuss the capabilities of SDN and align them with the technical shortcomings of Edge Computing implementations. We thoroughly investigate the possible modes of operation and interaction between the aforementioned technologies in all directions and technically deduce a set of “Benefit Areas” which is discussed in detail. Lastly, as SDN is an evolving technology, we give the future directions for enhancing the SDN development so that it can take this collaboration to a further level.",
"title": ""
},
{
"docid": "4ae0df0ab2ff49391561f7014b0f3648",
"text": "Multi-task Inverse Reinforcement Learning (IRL) is the problem of inferring multiple reward functions from expert demonstrations. Prior work, built on Bayesian IRL, is unable to scale to complex environments due to computational constraints. This paper contributes a formulation of multi-task IRL in the more computationally efficient Maximum Causal Entropy (MCE) IRL framework. Experiments show our approach can perform one-shot imitation learning in a gridworld environment that single-task IRL algorithms need hundreds of demonstrations to solve. We outline preliminary work using meta-learning to extend our method to the function approximator setting of modern MCE IRL algorithms. Evaluating on multi-task variants of common simulated robotics benchmarks, we discover serious limitations of these IRL algorithms, and conclude with suggestions for further work.",
"title": ""
},
{
"docid": "abe5bdf6a17cf05b49ac578347a3ca5d",
"text": "To realize the broad vision of pervasive computing, underpinned by the “Internet of Things” (IoT), it is essential to break down application and technology-based silos and support broad connectivity and data sharing; the cloud being a natural enabler. Work in IoT tends toward the subsystem, often focusing on particular technical concerns or application domains, before offloading data to the cloud. As such, there has been little regard given to the security, privacy, and personal safety risks that arise beyond these subsystems; i.e., from the wide-scale, cross-platform openness that cloud services bring to IoT. In this paper, we focus on security considerations for IoT from the perspectives of cloud tenants, end-users, and cloud providers, in the context of wide-scale IoT proliferation, working across the range of IoT technologies (be they things or entire IoT subsystems). Our contribution is to analyze the current state of cloud-supported IoT to make explicit the security considerations that require further work.",
"title": ""
},
{
"docid": "b200a40d95e184e486a937901c606e12",
"text": "0749-5978/$ see front matter 2008 Elsevier Inc. A doi:10.1016/j.obhdp.2008.06.003 * Corresponding author. E-mail address: [email protected] (S. Thau). Based on uncertainty management theory [Lind, E. A., & Van den Bos, K., (2002). When fairness works: Toward a general theory of uncertainty management. In Staw, B. M., & Kramer, R. M. (Eds.), Research in organizational behavior (Vol. 24, pp. 181–223). Greenwich, CT: JAI Press.], two studies tested whether a management style depicting situational uncertainty moderates the relationship between abusive supervision and workplace deviance. Study 1, using survey data from 379 subordinates of various industries, found that the positive relationship between abusive supervision and organizational deviance was stronger when authoritarian management style was low (high situational uncertainty) rather than high (low situational uncertainty). No significant interaction effect was found on interpersonal deviance. Study 2, using survey data from 1477 subordinates of various industries, found that the positive relationship between abusive supervision and supervisor-directed and organizational deviance was stronger when employees’ perceptions of their organization’s management style reflected high rather than low situational uncertainty. 2008 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "c5e472baf4c0304c8c5ab1f4ef3cf2e7",
"text": "The products of nonenzymatic glycation and oxidation of proteins and lipids, the advanced glycation end products (AGEs), accumulate in a wide variety of environments. AGEs may be generated rapidly or over long times stimulated by a range of distinct triggering mechanisms, thereby accounting for their roles in multiple settings and disease states. A critical property of AGEs is their ability to activate receptor for advanced glycation end products (RAGE), a signal transduction receptor of the immunoglobulin superfamily. It is our hypothesis that due to such interaction, AGEs impart a potent impact in tissues, stimulating processes linked to inflammation and its consequences. We hypothesize that AGEs cause perturbation in a diverse group of diseases, such as diabetes, inflammation, neurodegeneration, and aging. Thus, we propose that targeting this pathway may represent a logical step in the prevention/treatment of the sequelae of these disorders.",
"title": ""
},
{
"docid": "aa278c7ee719b877c946cf9e0dd383f5",
"text": "Recent popularization of camera devices, including action cams and smartphones, enables us to record videos in everyday life and share them through the Internet. Video blog is a recent approach for sharing videos, in which users enjoy expressing themselves in blog posts with attractive videos. Generating such videos, however, requires users to review vast amount of raw videos and edit them appropriately, which keeps users away from doing so. In this paper, we propose a novel video summarization method for helping users to create a video blog post. Unlike typical video summarization methods, the proposed method utilizes the text, which is written for a video blog post, and makes the video summary consistent with the content of the text. For this, we perform video summarization by solving an optimization problem, in which an objective function involves the content similarity between the summarized video and the text. Our user study with 20 participants has demonstrated that our proposed method is suitable to create video blog posts compared with conventional methods for video summarization.",
"title": ""
},
{
"docid": "61e7b3c7de15f87ed86ffb355d1b126c",
"text": "Temporal action detection is a very important yet challenging problem, since videos in real applications are usually long, untrimmed and contain multiple action instances. This problem requires not only recognizing action categories but also detecting start time and end time of each action instance. Many state-of-the-art methods adopt the \"detection by classification\" framework: first do proposal, and then classify proposals. The main drawback of this framework is that the boundaries of action instance proposals have been fixed during the classification step. To address this issue, we propose a novel Single Shot Action Detector (SSAD) network based on 1D temporal convolutional layers to skip the proposal generation step via directly detecting action instances in untrimmed video. On pursuit of designing a particular SSAD network that can work effectively for temporal action detection, we empirically search for the best network architecture of SSAD due to lacking existing models that can be directly adopted. Moreover, we investigate into input feature types and fusion strategies to further improve detection accuracy. We conduct extensive experiments on two challenging datasets: THUMOS 2014 and MEXaction2. When setting Intersection-over-Union threshold to 0.5 during evaluation, SSAD significantly outperforms other state-of-the-art systems by increasing mAP from $19.0%$ to $24.6%$ on THUMOS 2014 and from 7.4% to $11.0%$ on MEXaction2.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "02da1ba3f2370e3840caf68497c39b02",
"text": "Myiasis is an infestation of human tissue by the larvae of certain flies. There are many forms of myiasis, including localized furuncular myiasis, creeping dermal myiasis and wound and body cavity myiasis.1 Cordylobia anthropophaga (the Tumbu fly) and Dermatobia hominis (the human botfly) are the most common causes of myiasis in Africa and tropical America respectively. The genus Cordylobia also contains two less common species, C. ruandae and C. rodhaini. The usual hosts of C. rodhaini are various mammals (particularly rodents), and and humans are accidentally infested. Figure 1 shows the life cycle of C. rodhaini, which occurs over 55 to -67 days.3 The female fly deposits her eggs on dry sand polluted with the excrement of animals or on human clothing. In about 3 days, the larva is activated by the warm body of the host, hatches and invades the skin. As the larva matures, it induces a furuncular swelling. In 12 to -15 days, the larva reaches a length of about 23 mm, exits the skin and falls to the ground to pupate. The adult fly emerges in 23 to -26 days, and the life cycle resumes. In humans, the skin lesion starts as a red papule that gradually enlarges and develops into a furuncle. In the center of the lesion an opening forms, through which the larva breaths and discharges its serosanguinous feces. The lesion is associated with increasing pain until the larva exits the skin. The disease is usually uncomplicated and self-limiting.",
"title": ""
},
{
"docid": "1623c4b3dad0caf250df0cbe32af3f63",
"text": "This paper describes and evaluates a high-fidelity, low-cost haptic interface for tele-operation. The interface is a wearable vibrotactile glove containing miniature voice coils that provides continuous, proportional force information to the user's finger-tips. In psychophysical experiments, correlated variations in the frequency and amplitude of the stimulators extended the user's perceptual response range compared to varying amplitude or frequency alone. In an adaptive, force-limited, pick-and-place manipulation task, the interface allowed users to control the grip forces more effectively than no feedback or binary feedback, which produced equivalent performance. A sorting experiment established that proportional tactile feedback enhances the user's ability to discriminate the relative properties of objects, such as weight. We conclude that correlated amplitude and frequency signals, simulating force in a remote environment, substantially improve teleoperation.",
"title": ""
},
{
"docid": "9e259cafd152ad35dcd04e6a9c7d65ab",
"text": "Second-order pooling, a.k.a. bilinear pooling, has proven effective for deep learning based visual recognition. However, the resulting second-order networks yield a final representation that is orders of magnitude larger than that of standard, first-order ones, making them memory-intensive and cumbersome to deploy. Here, we introduce a general, parametric compression strategy that can produce more compact representations than existing compression techniques, yet outperform both compressed and uncompressed second-order models. Our approach is motivated by a statistical analysis of the network’s activations, relying on operations that lead to a Gaussian-distributed final representation, as inherently used by first-order deep networks. As evidenced by our experiments, this lets us outperform the state-of-the-art first-order and second-order models on several benchmark recognition datasets.",
"title": ""
},
{
"docid": "a06274d9bf6dba90ea0178ec11a20fb6",
"text": "Osteoporosis has become one of the most prevalent and costly diseases in the world. It is a metabolic disease characterized by reduction in bone mass due to an imbalance between bone formation and resorption. Osteoporosis causes fractures, prolongs bone healing, and impedes osseointegration of dental implants. Its pathological features include osteopenia, degradation of bone tissue microstructure, and increase of bone fragility. In traditional Chinese medicine, the herb Rhizoma Drynariae has been commonly used to treat osteoporosis and bone nonunion. However, the precise underlying mechanism is as yet unclear. Osteoprotegerin is a cytokine receptor shown to play an important role in osteoblast differentiation and bone formation. Hence, activators and ligands of osteoprotegerin are promising drug targets and have been the focus of studies on the development of therapeutics against osteoporosis. In the current study, we found that naringin could synergistically enhance the action of 1α,25-dihydroxyvitamin D3 in promoting the secretion of osteoprotegerin by osteoblasts in vitro. In addition, naringin can also influence the generation of osteoclasts and subsequently bone loss during organ culture. In conclusion, this study provides evidence that natural compounds such as naringin have the potential to be used as alternative medicines for the prevention and treatment of osteolysis.",
"title": ""
},
{
"docid": "4c7c455b644180fbca5a0abf032153ed",
"text": "Robots that solve complex tasks in environments too dangerous for humans to enter are desperately needed, e.g., for search and rescue applications. We describe our mobile manipulation robot Momaro, with which we participated successfully in the DARPA Robotics Challenge. It features a unique locomotion design with four legs ending in steerable wheels, which allows it both to drive omnidirectionally and to step over obstacles or climb. Furthermore, we present advanced communication and teleoperation approaches, which include immersive 3D visualization, and 6D tracking of operator head and arm motions. The proposed system is evaluated in the DARPA Robotics Challenge, the DLR SpaceBot Cup Qualification and lab experiments. We also discuss the lessons learned from the competitions.",
"title": ""
},
{
"docid": "92a00453bc0c2115a8b37e5acc81f193",
"text": "Choosing the appropriate software development methodology is something which continues to occupy the minds of many IT professionals. The introduction of “Agile” development methodologies such as XP and SCRUM held the promise of improved software quality and reduced delivery times. Combined with a Lean philosophy, there would seem to be potential for much benefit. While evidence does exist to support many of the Lean/Agile claims, we look here at how such methodologies are being adopted in the rigorous environment of safety-critical embedded software development due to its high regulation. Drawing on the results of a systematic literature review we find that evidence is sparse for Lean/Agile adoption in these domains. However, where it has been trialled, “out-of-the-box” Agile practices do not seem to fully suit these environments but rather tailored Agile versions combined with more planbased practices seem to be making inroads.",
"title": ""
},
{
"docid": "18233af1857390bff51d2e713bc766d9",
"text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. Using a published dataset, we obtained the highest K-measure which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.",
"title": ""
},
{
"docid": "2f7ba7501fcf379b643867c7d5a9d7bf",
"text": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow-minimum-cut theorem.",
"title": ""
}
] |
scidocsrr
|
f7457a883318e55b7afe570ef16d2751
|
Testing Docker Performance for HPC Applications
|
[
{
"docid": "0d95f43ba40942b83e5f118b01ebf923",
"text": "Containers are a lightweight virtualization method for running multiple isolated Linux systems under a common host operating system. Container-based computing is revolutionizing the way applications are developed and deployed. A new ecosystem has emerged around the Docker platform to enable container based computing. However, this revolution has yet to reach the HPC community. In this paper, we provide background on Linux Containers and Docker, and how they can be of value to the scientific and HPC community. We will explain some of the use cases that motivate the need for user defined images and the uses of Docker. We will describe early work in deploying and integrating Docker into an HPC environment, and some of the pitfalls and challenges we encountered. We will discuss some of the security implications of using Docker and how we have addressed those for a shared user system typical of HPC centers. We will also provide performance measurements to illustrate the low overhead of containers. While our early work has been on cluster-based/CS-series systems, we will describe some preliminary assessment of supporting Docker on Cray XC series supercomputers, and a potential partnership with Cray to explore the feasibility and approaches to using Docker on large systems. Keywords-Docker; User Defined Images; containers; HPC systems",
"title": ""
}
] |
[
{
"docid": "5b4def6b0a13152578198b41da0cdecf",
"text": "For autonomous vehicles, the ability to detect and localize surrounding vehicles is critical. It is fundamental for further processing steps like collision avoidance or path planning. This paper introduces a convolutional neural network- based vehicle detection and localization method using point cloud data acquired by a LIDAR sensor. Acquired point clouds are transformed into bird's eye view elevation images, where each pixel represents a grid cell of the horizontal x-y plane. We intentionally encode each pixel using three channels, namely the maximal, median and minimal height value of all points within the respective grid. A major advantage of this three channel representation is that it allows us to utilize common RGB image-based detection networks without modification. The bird's eye view elevation images are processed by a two stage detector. Due to the nature of the bird's eye view, each pixel of the image represent ground coordinates, meaning that the bounding box of detected vehicles correspond directly to the horizontal position of the vehicles. Therefore, in contrast to RGB-based detectors, we not just detect the vehicles, but simultaneously localize them in ground coordinates. To evaluate the accuracy of our method and the usefulness for further high-level applications like path planning, we evaluate the detection results based on the localization error in ground coordinates. Our proposed method achieves an average precision of 87.9% for an intersection over union (IoU) value of 0.5. In addition, 75% of the detected cars are localized with an absolute positioning error of below 0.2m.",
"title": ""
},
{
"docid": "d049a1779a8660f689f1da5daada69dc",
"text": "Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers.",
"title": ""
},
{
"docid": "aa74bb5c6dbb758e0a68e10b1f35f3c9",
"text": "College students differ in their approaches to challenging course assignments. While some prefer to begin their assignments early, others postpone their work until the last minute. The present study adds to the procrastination literature by examining the links among self-compassionate attitudes, motivation, and procrastination tendency. A sample of college undergraduates completed four online surveys. Individuals with low, moderate, and high levels of self-compassion were compared on measures of motivation anxiety, achievement goal orientation, and procrastination tendency. Data analyses revealed that individuals with high self-compassion reported dramatically less motivation anxiety and procrastination tendency than those with low or moderate self-compassion. The practical importance of studying self-views as potential triggers for procrastination behavior and directions for future research are discussed.",
"title": ""
},
{
"docid": "54ef7c7dae7a8ff508c45b192d975c2b",
"text": "In order to realize performance gain of a robot or an artificial arm, the end-effector which exhibits the same function as human beings and can respond to various objects and environment needs to be realized. Then, we developed the new hand which paid its attention to the structure of human being's hand which realize operation in human-like manipulation (called TUAT/Karlsruhe Humanoid Hand). Since this humanoid hand has the structure of adjusting grasp shape and grasp force automatically, it does not need a touch sensor and feedback control. It is designed for the humanoid robot which has to work autonomously or interactively in cooperation with humans and for an artificial arm for handicapped persons. The ideal end-effectors for such an artificial arm or a humanoid would be able to use the tools and objects that a person uses when working in the same environment. If this humanoid hand can operate the same tools, a machine and furniture, it may be possible to work under the same environment as human beings. As a result of adopting a new function of a palm and the thumb, the robot hand could do the operation which was impossible until now. The humanoid hand realized operations which hold a kitchen knife, grasping a fan, a stick, uses the scissors and uses chopsticks.",
"title": ""
},
{
"docid": "11068c7b8ce924c7d83736f23475c30a",
"text": "Both oxytocin and serotonin modulate affiliative responses to partners and offspring. Animal studies suggest a crucial role of oxytocin in mammalian parturition and lactation but also in parenting and social interactions with offspring. The serotonergic system may also be important through its influence on mood and the release of oxytocin. We examined the role of serotonin transporter (5-HTT) and oxytocin receptor (OXTR) genes in explaining differences in sensitive parenting in a community sample of 159 Caucasian, middle-class mothers with their 2-year-old toddlers at risk for externalizing behavior problems, taking into account maternal educational level, maternal depression and the quality of the marital relationship. Independent genetic effects of 5-HTTLPR SCL6A4 and OXTR rs53576 on observed maternal sensitivity were found. Controlling for differences in maternal education, depression and marital discord, parents with the possibly less efficient variants of the serotonergic (5-HTT ss) and oxytonergic (AA/AG) system genes showed lower levels of sensitive responsiveness to their toddlers. Two-way and three-way interactions with marital discord or depression were not significant. This first study on the role of both OXTR and 5-HTT genes in human parenting points to molecular genetic differences that may be implicated in the production of oxytocin explaining differences in sensitive parenting.",
"title": ""
},
{
"docid": "1298ddbeea84f6299e865708fd9549a6",
"text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.",
"title": ""
},
{
"docid": "6f95d8bcaefcc99209279dadb1beb0a6",
"text": "Public cloud software marketplaces already offer users a wealth of choice in operating systems, database management systems, financial software, and virtual networking, all deployable and configurable at the click of a button. Unfortunately, this level of customization has not extended to emerging hypervisor-level services, partly because traditional virtual machines (VMs) are fully controlled by only one hypervisor at a time. Currently, a VM in a cloud platform cannot concurrently use hypervisorlevel services from multiple third-parties in a compartmentalized manner. We propose the notion of a multihypervisor VM, which is an unmodified guest that can simultaneously use services from multiple coresident, but isolated, hypervisors. We present a new virtualization architecture, called Span virtualization, that leverages nesting to allow multiple hypervisors to concurrently control a guest’s memory, virtual CPU, and I/O resources. Our prototype of Span virtualization on the KVM/QEMU platform enables a guest to use services such as introspection, network monitoring, guest mirroring, and hypervisor refresh, with performance comparable to traditional nested VMs.",
"title": ""
},
{
"docid": "a325d0761491f814d3f5743e44868c74",
"text": "This paper reviews the literature on child neglect with respect to child outcomes, prevention and intervention, and implications for policy. First, the prevalence of the problem is discussed and then potential negative outcomes for neglected children, including behavior problems, low self-esteem, poor school performance, and maladjustment/psychopathology, are discussed. Risk factors and current child neglect interventions are then reviewed. Popular family support programs, such as family preservation, have mixed success rates for preventing child neglect. The successes and shortcomings of other programs are also examined with a focus on implications for future research and policy. Overall, the research supports a multidisciplinary approach to assessment, intervention, and research on child neglect. Furthermore, the need for a combined effort among parents, community members, professionals, and policymakers to increase awareness and prevention endeavors is discussed. Targeted attempts to educate all involved parties should focus on early intervention during specific encounters with atrisk families via medical settings, school settings, and parent education programs.",
"title": ""
},
{
"docid": "ba533a610f95d44bf5416e17b07348dd",
"text": "It is argued that, hidden within the flow of signals from typical cameras, through image processing, to display media, is a homomorphic filter. While homomorphic filtering is often desirable, there are some occasions where it is not. Thus, cancellation of this implicit homomorphic filter is proposed, through the introduction of an antihomomorphic filter. This concept gives rise to the principle of quantigraphic image processing, wherein it is argued that most cameras can be modeled as an array of idealized light meters each linearly responsive to a semi-monotonic function of the quantity of light received, integrated over a fixed spectral response profile. This quantity depends only on the spectral response of the sensor elements in the camera. A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. These are fundamental to the analysis and processing of multiple images differing only in exposure. The \"gamma correction\" of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin. Thus, it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided. These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the \"amplitude domain\". The theoretical framework presented in this paper is applicable to the processing of images from nearly all types of modern cameras. This paper is a much revised draft of a 1992 peer-reviewed but unpublished report by the author, entitled \"Lightspace and the Wyckoff principle.\"",
"title": ""
},
{
"docid": "fb836666c993b27b99f6c789dd0aae05",
"text": "Software transactions have received significant attention as a way to simplify shared-memory concurrent programming, but insufficient focus has been given to the precise meaning of software transactions or their interaction with other language features. This work begins to rectify that situation by presenting a family of formal languages that model a wide variety of behaviors for software transactions. These languages abstract away implementation details of transactional memory, providing high-level definitions suitable for programming languages. We use small-step semantics in order to represent explicitly the interleaved execution of threads that is necessary to investigate pertinent issues.\n We demonstrate the value of our core approach to modeling transactions by investigating two issues in depth. First, we consider parallel nesting, in which parallelism and transactions can nest arbitrarily. Second, we present multiple models for weak isolation, in which nontransactional code can violate the isolation of a transaction. For both, type-and-effect systems let us soundly and statically restrict what computation can occur inside or outside a transaction. We prove some key language-equivalence theorems to confirm that under sufficient static restrictions, in particular that each mutable memory location is used outside transactions or inside transactions (but not both), no program can determine whether the language implementation uses weak isolation or strong isolation.",
"title": ""
},
{
"docid": "b52312f9fbf86ce0dbf475623b472d8d",
"text": "The vascular pattern of the supraspinatus tendon was studied in 18 human anatomic specimens. The ages of the specimens ranged from 26 to 84 years. Selective vascular injection with a silicon-rubber compound allowed visualization of the vascular bed of the rotator cuff and humeral head. The presence of a hypovascular or critical zone close to the insertion of the supraspinatus tendon into the humeral head was confirmed. However, only a uniformly sparse vascular distribution was found at the articular side, as opposed to the well-vascularized bursal side. This was also confirmed with histologic sections of the tendon. The poor vascularity of the tendon in this area could be a significant factor in the pathogenesis of degenerative rotator cuff tears.",
"title": ""
},
{
"docid": "48a476d5100f2783455fabb6aa566eba",
"text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].",
"title": ""
},
{
"docid": "953997d170fa1a4aafe643c328802a30",
"text": "Recently we have developed a new algorithm, PROVEAN (<u>Pro</u>tein <u>V</u>ariation <u>E</u>ffect <u>An</u>alyzer), for predicting the functional effect of protein sequence variations, including single amino acid substitutions and small insertions and deletions [2]. The prediction is based on the change, caused by a given variation, in the similarity of the query sequence to a set of its related protein sequences. For this prediction, the algorithm is required to compute a semi-global pairwise sequence alignment score between the query sequence and each of the related sequences. Using dynamic programming, it takes O(n · m) time to compute alignment score between the query sequence Q of length n and a related sequence S of length m. Thus given l different variations in Q, in a naive way it would take O(l · n · m) time to compute the alignment scores between each of the variant query sequences and S. In this paper, we present a new approach to efficiently compute the pairwise alignment scores for l variations, which takes O((n + l) · m) time when the length of variations is bounded by a constant. In this approach, we further utilize the solutions of overlapping subproblems, which are already used by dynamic programming approach. Our algorithm has been used to build a new database for precomputed prediction scores for all possible single amino acid substitutions, single amino acid insertions, and up to 10 amino acids deletions in about 91K human proteins (including isoforms), where l becomes very large, that is, l = O(n). The PROVEAN source code and web server are available at http://provean.jcvi.org.",
"title": ""
},
{
"docid": "4fa9db557f53fa3099862af87337cfa9",
"text": "With the rapid development of E-commerce, recent years have witnessed the booming of online advertising industry, which raises extensive concerns of both academic and business circles. Among all the issues, the task of Click-through rates (CTR) prediction plays a central role, as it may influence the ranking and pricing of online ads. To deal with this task, the Factorization Machines (FM) model is designed for better revealing proper combinations of basic features. However, the sparsity of ads transaction data, i.e., a large proportion of zero elements, may severely disturb the performance of FM models. To address this problem, in this paper, we propose a novel Sparse Factorization Machines (SFM) model, in which the Laplace distribution is introduced instead of traditional Gaussian distribution to model the parameters, as Laplace distribution could better fit the sparse data with higher ratio of zero elements. Along this line, it will be beneficial to select the most important features or conjunctions with the proposed SFM model. Furthermore, we develop a distributed implementation of our SFM model on Spark platform to support the prediction task on mass dataset in practice. Comprehensive experiments on two large-scale real-world datasets clearly validate both the effectiveness and efficiency of our SFM model compared with several state-of-the-art baselines, which also proves our assumption that Laplace distribution could be more suitable to describe the online ads transaction data.",
"title": ""
},
{
"docid": "cdb87a9db48b78e193d9229282bd3b67",
"text": "While large-scale automatic grading of student programs for correctness is widespread, less effort has focused on automating feedback for good programming style:} the tasteful use of language features and idioms to produce code that is not only correct, but also concise, elegant, and revealing of design intent. We hypothesize that with a large enough (MOOC-sized) corpus of submissions to a given programming problem, we can observe a range of stylistic mastery from naïve to expert, and many points in between, and that we can exploit this continuum to automatically provide hints to learners for improving their code style based on the key stylistic differences between a given learner's submission and a submission that is stylistically slightly better. We are developing a methodology for analyzing and doing feature engineering on differences between submissions, and for learning from instructor-provided feedback as to which hints are most relevant. We describe the techniques used to do this in our prototype, which will be deployed in a residential software engineering course as an alpha test prior to deploying in a MOOC later this year.",
"title": ""
},
{
"docid": "5f1269a603d68ab4faeadfcf9478fa0e",
"text": "A simple and inexpensive approach for extracting the threedimensional shape of objects is presented. It is based on `weak structured lighting'; it di ers from other conventional structured lighting approaches in that it requires very little hardware besides the camera: a desk-lamp, a pencil and a checkerboard. The camera faces the object, which is illuminated by the desk-lamp. The user moves a pencil in front of the light source casting a moving shadow on the object. The 3D shape of the object is extracted from the spatial and temporal location of the observed shadow. Experimental results are presented on three di erent scenes demonstrating that the error in reconstructing the surface is less than 1%.",
"title": ""
},
{
"docid": "9afc7d1d90b9ee67f4dcbca1f8feebea",
"text": "Ontology-Based Data Access has been studied so far for relational structures and deployed on top of relational databases. This paradigm enables a uniform access to heterogeneous data sources, also coping with incomplete information. Whether OBDA is suitable also for non-relational structures, like those shared by increasingly popular NOSQL languages, is still an open question. In this paper, we study the problem of answering ontology-mediated queries on top of key-value stores. We formalize the data model and core queries of these systems, and introduce a rule language to express lightweight ontologies on top of data. We study the decidability and data complexity of query answering in this setting.",
"title": ""
},
{
"docid": "3adc09c401d6faccd116fc8b1ff654de",
"text": "Super-refractory status epilepticus is defined as status epilepticus that continues or recurs 24 h or more after the onset of anaesthetic therapy, including those cases where status epilepticus recurs on the reduction or withdrawal of anaesthesia. It is an uncommon but important clinical problem with high mortality and morbidity rates. This article reviews the treatment approaches. There are no controlled or randomized studies, and so therapy has to be based on clinical reports and opinion. The published world literature on the following treatments was critically evaluated: anaesthetic agents, anti-epileptic drugs, magnesium infusion, pyridoxine, steroids and immunotherapy, ketogenic diet, hypothermia, emergency resective neurosurgery and multiple subpial transection, transcranial magnetic stimulation, vagal nerve stimulation, deep brain stimulation, electroconvulsive therapy, drainage of the cerebrospinal fluid and other older drug therapies. The importance of treating the identifying cause is stressed. A protocol and flowchart for managing super-refractory status epilepticus is suggested. In view of the small number of published reports, there is an urgent need for the establishment of a database of outcomes of individual therapies.",
"title": ""
},
{
"docid": "b630a6b346edfb073c120cb70169b884",
"text": "Image tracing is a foundational component of the workflow in graphic design, engineering, and computer animation, linking hand-drawn concept images to collections of smooth curves needed for geometry processing and editing. Even for clean line drawings, modern algorithms often fail to faithfully vectorize junctions, or points at which curves meet; this produces vector drawings with incorrect connectivity. This subtle issue undermines the practical application of vectorization tools and accounts for hesitance among artists and engineers to use automatic vectorization software. To address this issue, we propose a novel image vectorization method based on state-of-the-art mathematical algorithms for frame field processing. Our algorithm is tailored specifically to disambiguate junctions without sacrificing quality.",
"title": ""
}
] |
scidocsrr
|
10c905db6afb0cb7281c2375da025be7
|
Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks
|
[
{
"docid": "7b232b0ac1a4e7249b33bd54ddeba2b3",
"text": "We present an analysis of how the generalization performance (expected test set error) relates to the expected training set error for nonlinear learning systems, such as multilayer perceptrons and radial basis functions. The principal result is the following relationship (computed to second order) between the expected test set and tlaining set errors: (1) Here, n is the size of the training sample e, u;f f is the effective noise variance in the response variable( s), ,x is a regularization or weight decay parameter, and Peff(,x) is the effective number of parameters in the nonlinear model. The expectations ( ) of training set and test set errors are taken over possible training sets e and training and test sets e' respectively. The effective number of parameters Peff(,x) usually differs from the true number of model parameters P for nonlinear or regularized models; this theoretical conclusion is supported by Monte Carlo experiments. In addition to the surprising result that Peff(,x) ;/; p, we propose an estimate of (1) called the generalized prediction error (GPE) which generalizes well established estimates of prediction risk such as Akaike's F P E and AI C, Mallows Cp, and Barron's PSE to the nonlinear setting.! lCPE and Peff(>\") were previously introduced in Moody (1991). 847",
"title": ""
}
] |
[
{
"docid": "b31ebdbd7edc0b30b0529a85fab0b612",
"text": "In this paper, we present RFMS, the real-time flood monitoring system with wireless sensor networks, which is deployed in two volcanic islands Ulleung-do and Dok-do located in the East Sea near to the Korean peninsula and developed for flood monitoring. RFMS measures river and weather conditions through wireless sensor nodes equipped with different sensors. Measured information is employed for early-warning via diverse types of services such as SMS (short message service) and a Web service.",
"title": ""
},
{
"docid": "df0560468320fae679974bfb828b27e9",
"text": "Two advanced modelling approaches, Multi-Level Models and Artificial Neural Networks are employed to model house prices. These approaches and the standard Hedonic Price Model are compared in terms of predictive accuracy, capability to capture location information, and their explanatory power. These models are applied to 2001-2013 house prices in the Greater Bristol area, using secondary data from the Land Registry, the Population Census and Neighbourhood Statistics so that these models could be applied nationally. The results indicate that MLM offers good predictive accuracy with high explanatory power, especially if neighbourhood effects are explored at multiple spatial scales.",
"title": ""
},
{
"docid": "2a58189812fe0f585794bed8734c632a",
"text": "China has become one of the largest entertainment markets in the world in recent years. Due to the success of Xiaomi, many Chinese pop music industry entrepreneurs believe \"Fans Economy\" works in the pop music industry. \"Fans Economy\" is based on the assumption that pop music consumer market could be segmented based on artists. Each music artist has its own exclusive loyal fans. In this paper, we provide an insightful study of the pop music artists and fans social network. Particularly, we segment the pop music consumer market and pop music artists respectively. Our results show that due to the Matthew Effect and limited diversity of consumer market, \"Fans Economy\" does not work for the Chinese pop music industry.",
"title": ""
},
{
"docid": "c460ac78bb06e7b5381506f54200a328",
"text": "Efficient virtual machine (VM) management can dramatically reduce energy consumption in data centers. Existing VM management algorithms fall into two categories based on whether the VMs' resource demands are assumed to be static or dynamic. The former category fails to maximize the resource utilization as they cannot adapt to the dynamic nature of VMs' resource demands. Most approaches in the latter category are heuristical and lack theoretical performance guarantees. In this work, we formulate dynamic VM management as a large-scale Markov Decision Process (MDP) problem and derive an optimal solution. Our analysis of real-world data traces supports our choice of the modeling approach. However, solving the large-scale MDP problem suffers from the curse of dimensionality. Therefore, we further exploit the special structure of the problem and propose an approximate MDP-based dynamic VM management method, called MadVM. We prove the convergence of MadVM and analyze the bound of its approximation error. Moreover, MadVM can be implemented in a distributed system, which should suit the needs of real data centers. Extensive simulations based on two real-world workload traces show that MadVM achieves significant performance gains over two existing baseline approaches in power consumption, resource shortage and the number of VM migrations. Specifically, the more intensely the resource demands fluctuate, the more MadVM outperforms.",
"title": ""
},
{
"docid": "d05e4998114dd485a3027f2809277512",
"text": "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.",
"title": ""
},
{
"docid": "98cb849504f344253bc879704c698f1e",
"text": "Serverless computing provides a small runtime container to execute lines of codes without infrastructure management which is similar to Platform as a Service (PaaS) but a functional level. Amazon started the event-driven compute named Lambda functions in 2014 with a 25 concurrent limitation, but it now supports at least a thousand of concurrent invocation to process event messages generated by resources like databases, storage and system logs. Other providers, i.e., Google, Microsoft, and IBM offer a dynamic scaling manager to handle parallel requests of stateless functions in which additional containers are provisioning on new compute nodes for distribution. However, while functions are often developed for microservices and lightweight workload, they are associated with distributed data processing using the concurrent invocations. We claim that the current serverless computing environments can support dynamic applications in parallel when a partitioned task is executable on a small function instance. We present results of throughput, network bandwidth, a file I/O and compute performance regarding the concurrent invocations. We deployed a series of functions for distributed data processing to address the elasticity and then demonstrated the differences between serverless computing and virtual machines for cost efficiency and resource utilization.",
"title": ""
},
{
"docid": "cdef5f6a50c1f427e8f37be3c6ebbccf",
"text": "In this article, we summarize the 5G mobile communication requirements and challenges. First, essential requirements for 5G are pointed out, including higher traffic volume, indoor or hotspot traffic, and spectrum, energy, and cost efficiency. Along with these changes of requirements, we present a potential step change for the evolution toward 5G, which shows that macro-local coexisting and coordinating paths will replace one macro-dominated path as in 4G and before. We hereafter discuss emerging technologies for 5G within international mobile telecommunications. Challenges and directions in hardware, including integrated circuits and passive components, are also discussed. Finally, a whole picture for the evolution to 5G is predicted and presented.",
"title": ""
},
{
"docid": "3fa21ebc002a40b4558b3b0820d5cde9",
"text": "We present the first ontology-based Vietnamese QA system KbQAS where a new knowledge acquisition approach for analyzing English and Vietnamese questions is integrated.",
"title": ""
},
{
"docid": "ceca6669dd871dc97ead2c2a1f16dbd7",
"text": "Location-Based Services have become increasingly popular due to the prevalence of smart devices and location-sharing applications such as Facebook and Foursquare. The protection of people’s sensitive location data in such applications is an important requirement. Conventional location privacy protection methods, however, such as manually defining privacy rules or asking users to make decisions each time they enter a new location may be overly complex, intrusive or unwieldy. An alternative is to use machine learning to predict people’s privacy preferences and automatically configure settings. Model-based machine learning classifiers may be too computationally complex to be used in real-world applications, or suffer from poor performance when training data are insufficient. In this paper we propose a location-privacy recommender that can provide people with recommendations of appropriate location privacy settings through user-user collaborative filtering. Using a realworld location-sharing dataset, we show that the prediction accuracy of our scheme (73.08%) is similar to the best performance of model-based classifiers (75.30%), and at the same time causes fewer privacy leaks (11.75% vs 12.70%). Our scheme further outperforms model-based classifiers when there are insufficient training data. Since privacy preferences are innately private, we make our recommender privacy-aware by obfuscating people’s preferences. Our results show that obfuscation leads to a minimal loss of prediction accuracy (0.76%).",
"title": ""
},
{
"docid": "e7bb89000329245bccdecbc80549109c",
"text": "This paper presents a tutorial overview of the use of coupling between nonadjacent resonators to produce transmission zeros at real frequencies in microwave filters. Multipath coupling diagrams are constructed and the relative phase shifts of multiple paths are observed to produce the known responses of the cascaded triplet and quadruplet sections. The same technique is also used to explore less common nested cross-coupling structures and to predict their behavior. A discussion of the effects of nonzero electrical length coupling elements is presented. Finally, a brief categorization of the various synthesis and implementation techniques available for these types of filters is given.",
"title": ""
},
{
"docid": "fa9650513e6d1c73b64a282c62e0f379",
"text": "In monocular 3D human pose estimation a common setup is to first detect 2D positions and then lift the detection into 3D coordinates. Many algorithms suffer from overfitting to camera positions in the training set. We propose a siamese architecture that learns a rotation equivariant hidden representation to reduce the need for data augmentation. Our method is evaluated on multiple databases with different base networks and shows a consistent improvement of error metrics. It achieves state-of-the-art cross-camera error rate among algorithms that use estimated 2D joint coordinates only.",
"title": ""
},
{
"docid": "25ddc3ec356593af0bc1498dc958e746",
"text": "Analysis of social network data is often hampered by non-response and missing data. Recent studies show the negative effects of missing actors and ties on the structural properties of social networks. This means that the results of social network analyses can be severely biased if missing ties were ignored and only complete cases were analyzed. To overcome the problems created by missing data, several treatment methods are proposed in the literature: model-based methods within the framework of exponential random graph models, and imputation methods. In this paper we focus on the latter group of methods, and investigate the use of some simple imputation procedures to handle missing network data. The results of a simulation study show that ignoring the missing data can have large negative effects on structural properties of the network. Missing data treatment based on simple imputation procedures, however, does also have large negative effects and simple imputations can only successfully correct for non-response in a few specific situations.",
"title": ""
},
{
"docid": "f0af945042c44b20d6bd9f81a0b21b6b",
"text": "We investigate a technique to adapt unsupervised word embeddings to specific applications, when only small and noisy labeled datasets are available. Current methods use pre-trained embeddings to initialize model parameters, and then use the labeled data to tailor them for the intended task. However, this approach is prone to overfitting when the training is performed with scarce and noisy data. To overcome this issue, we use the supervised data to find an embedding subspace that fits the task complexity. All the word representations are adapted through a projection into this task-specific subspace, even if they do not occur on the labeled dataset. This approach was recently used in the SemEval 2015 Twitter sentiment analysis challenge, attaining state-of-the-art results. Here we show results improving those of the challenge, as well as additional experiments in a Twitter Part-Of-Speech tagging task.",
"title": ""
},
{
"docid": "40bdadc044f5342534ba5387c47c6456",
"text": "A numerical study of atmospheric turbulence effects on wind-turbine wakes is presented. Large-eddy simulations of neutrally-stratified atmospheric boundary layer flows through stand-alone wind turbines were performed over homogeneous flat surfaces with four different aerodynamic roughness lengths. Emphasis is placed on the structure and characteristics of turbine wakes in the cases where the incident flows to the turbine have the same mean velocity at the hub height but different mean wind shears and turbulence intensity levels. The simulation results show that the different turbulence intensity levels of the incoming flow lead to considerable influence on the spatial distribution of the mean velocity deficit, turbulence intensity, and turbulent shear stress in the wake region. In particular, when the turbulence intensity level of the incoming flow is higher, the turbine-induced wake (velocity deficit) recovers faster, and the locations of the maximum turbulence intensity and turbulent stress are closer to the turbine. A detailed analysis of the turbulence kinetic energy budget in the wakes reveals also an important effect of the incoming flow turbulence level on the magnitude and spatial distribution of the shear production and transport terms.",
"title": ""
},
{
"docid": "0d2b905bc0d7f117d192a8b360cc13f0",
"text": "We investigate a previously unknown phase of phosphorus that shares its layered structure and high stability with the black phosphorus allotrope. We find the in-plane hexagonal structure and bulk layer stacking of this structure, which we call \"blue phosphorus,\" to be related to graphite. Unlike graphite and black phosphorus, blue phosphorus displays a wide fundamental band gap. Still, it should exfoliate easily to form quasi-two-dimensional structures suitable for electronic applications. We study a likely transformation pathway from black to blue phosphorus and discuss possible ways to synthesize the new structure.",
"title": ""
},
{
"docid": "b132b6aedba7415f2ccaa3783fafd271",
"text": "Recent technologies enable electronic and RF circuits in communication devices and radar to be miniaturized and become physically smaller in size. Antenna design has been one of the key limiting constraints to the development of small communication terminals and also in meeting next generation and radar requirements. Multiple antenna technologies (MATs) have gained much attention in the last few years because of the huge gain. MATs can enhance the reliability and the channel capacity levels. Furthermore, multiple antenna systems can have a big contribution to reduce the interference both in the uplink and the downlink. To increase the communication systems reliability, multiple antennas can be installed at the transmitter or/and at the receiver. The idea behind multiple antenna diversity is to supply the receiver by multiple versions of the same signal transmitted via independent channels. In modern communication transceiver and radar systems, primary aims are to direct high power RF signal from transmitter to antenna while preventing leakage of that large signal into more sensitive frontend of receiver. So, a Single-Pole Double-Throw (SPDT) Transmitter/Receiver (T/R) Switch plays an important role. In this paper, design of smart distributed subarray MIMO (DS-MIMO) microstrip antenna system with controller unit and frequency agile has been introduced and investigated. All the entire proposed antenna system has been evaluated using a commercial software. The final proposed design has been fabricated and the radiation characteristics have been illustrated using network analyzer to meet the requirements for communication and radar applications.",
"title": ""
},
{
"docid": "54bf44e04920bdaa7388dbbbbd34a1a8",
"text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.",
"title": ""
},
{
"docid": "c534935b7ba93e32d8138ecc2046f4e9",
"text": "This paper reviews the findings of several studies and surveys that address the increasing popularity and usage of so-called fitness “gamification.” Fitness gamification is used as an overarching and information term for the use of video game elements in non-gaming systems to improve user experience and user engagement. In this usage, game components such as a scoreboard, competition amongst friends, and awards and achievements are employed to motivate users to achieve personal health goals. The rise in smartphone usage has also increased the number of mobile fitness applications that utilize gamification principles. The most popular and successful fitness applications are the ones that feature an assemblage of workout tracking, social sharing, and achievement systems. This paper provides an overview of gamification, a description of gamification characteristics, and specific examples of how fitness gamification applications function and is used.",
"title": ""
},
{
"docid": "e5e4349bb677bb128dcf1385c34cdf41",
"text": "The occurrence of eight phosphorus flame retardants (PFRs) was investigated in 53 composite food samples from 12 food categories, collected in 2015 for a Swedish food market basket study. 2-ethylhexyl diphenyl phosphate (EHDPHP), detected in most food categories, had the highest median concentrations (9 ng/g ww, pastries). It was followed by triphenyl phosphate (TPHP) (2.6 ng/g ww, fats/oils), tris(1,3-dichloro-2-propyl) phosphate (TDCIPP) (1.0 ng/g ww, fats/oils), tris(2-chloroethyl) phosphate (TCEP) (1.0 ng/g ww, fats/oils), and tris(1-chloro-2-propyl) phosphate (TCIPP) (0.80 ng/g ww, pastries). Tris(2-ethylhexyl) phosphate (TEHP), tri-n-butyl phosphate (TNBP), and tris(2-butoxyethyl) phosphate (TBOEP) were not detected in the analyzed food samples. The major contributor to the total dietary intake was EHDPHP (57%), and the food categories which contributed the most to the total intake of PFRs were processed food, such as cereals (26%), pastries (10%), sugar/sweets (11%), and beverages (17%). The daily per capita intake of PFRs (TCEP, TPHP, EHDPHP, TDCIPP, TCIPP) from food ranged from 406 to 3266 ng/day (or 6-49 ng/kg bw/day), lower than the health-based reference doses. This is the first study reporting PFR intakes from other food categories than fish (here accounting for 3%). Our results suggest that the estimated human dietary exposure to PFRs may be equally important to the ingestion of dust.",
"title": ""
}
] |
scidocsrr
|
882bd9a62aed06de9101606095c055ad
|
Predicting Complete 3D Models of Indoor Scenes
|
[
{
"docid": "e77dc44a5b42d513bdbf4972d62a74f9",
"text": "Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to parse the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks (paper doll item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms state of the art in parsing accuracy.",
"title": ""
},
{
"docid": "8ea35692d8d57d321faf7b79be464f42",
"text": "We introduce a novel approach to the problem of localizing objects in an image and estimating their fine-pose. Given exact CAD models, and a few real training images with aligned models, we propose to leverage the geometric information from CAD models and appearance information from real images to learn a model that can accurately estimate fine pose in real images. Specifically, we propose FPM, a fine pose parts-based model, that combines geometric information in the form of shared 3D parts in deformable part based models, and appearance information in the form of objectness to achieve both fast and accurate fine pose estimation. Our method significantly outperforms current state-ofthe-art algorithms in both accuracy and speed.",
"title": ""
}
] |
[
{
"docid": "fc441aa80879bbc47a44fb3bd6e37393",
"text": "Hypodense liver lesions are commonly detected in CT, so their segmentation and characterization are essential for diagnosis and treatment. Methods for automatic detection and segmentation of liver lesions were developed to support this task. The detection algorithm uses an object-based image analysis approach, allowing for effectively integrating domain knowledge and reasoning processes into the detection logic. The method is intended to succeed in cases typically difficult for computer-aided detection systems, especially low contrast of hypodense lesions relative to healthy tissue. The detection stage is followed by a dedicated segmentation algorithm needed to synthesize 3D segmentations for all true-positive findings. The automated method provides an overall detection rate of 77.8% with a precision of 0.53 and performs better than other related methods. The final lesion segmentation delivers appropriate quality in 89% of the detected cases, as evaluated by two radiologists. A new automated liver lesion detection algorithm employs the strengths of an object-based image analysis approach. The combination of automated detection and segmentation provides promising results with potential to improve diagnostic liver lesion evaluation.",
"title": ""
},
{
"docid": "9a04006d0328b838b9360a381401e436",
"text": "In this paper, a novel approach for two-loop control of the DC-DC flyback converter in discontinuous conduction mode is presented by using sliding mode controller. The proposed controller can regulate output of the converter in wide range of input voltage and load resistance. In order to verify accuracy and efficiency of the developed sliding mode controller, proposed method is simulated in MATLAB/Simulink. It is shown that the developed controller has faster dynamic response compared with standard integrated circuit (MIC38C42-5) based regulators.",
"title": ""
},
{
"docid": "40301ce30d9fe4846baa6a2d2052eefd",
"text": "Object detection from repository of images is challenging task in the area of computer vision and image processing in this work we present object classification and detection using cifar-10 data set with intended classification and detection of airplain images. So we used convolutional neural network on keras with tensorflow support the experimental results shows the time required to train, test and create the model in limited computing system. We train the system with 60,000 images with 25 epochs each epoch is taking 722to760 seconds in training step on tensorflow cpu system. At the end of 25 epochs the training accuracy is 96 percentage and the system can recognition input images based on train model and the output is respective label of images.",
"title": ""
},
{
"docid": "a50763db7b9c73ab5e29389d779c343d",
"text": "Near to real-time emotion recognition is a promising task for human-computer interaction (HCI) and human-robot interaction (HRI). Using knowledge about the user's emotions depends upon the possibility to extract information about users' emotions during HCI or HRI without explicitly asking users about the feelings they are experiencing. To be able to sense the user's emotions without interrupting the HCI, we present a new method applied to the emotional experience of the user for extracting semantic information from the autonomic nervous system (ANS) signals associated with emotions. We use the concepts of 1st person - where the subject consciously (and subjectively) extracts the semantic meaning of a given lived experience, (e.g. `I felt amused') - and 3rd person approach - where the experimenter interprets the semantic meaning of the subject's experience from a set of externally (and objectively) measured variables (e.g. galvanic skin response measures). Based on the 3rd person approach, our technique aims at psychologically interpreting physiological parameters (skin conductance and heart rate), and at producing a continuous extraction of the user's affective state during HCI or HRI. We also combine it with the 1st person approach measure which allows a tailored interpretation of the physiological measure closely related to the user own emotional experience",
"title": ""
},
{
"docid": "c451c403cd151e483b54ffc7e35a8083",
"text": "This paper presents a graph-based visual simultaneous localization and mapping (SLAM) system using straight lines as features. Compared with point features, lines provide far richer information about the structure of the environment and make it possible to infer spatial semantics from the map. Using a stereo rig as the sole sensor, our proposed system utilizes many advanced techniques, such as motion estimation, pose optimization, and bundle adjustment. We use two different representations to parameterize 3-D lines in this paper: Plücker line coordinates for efficient initialization of newly observed line features and projection of 3-D lines, and orthonormal representation for graph optimization. The proposed system is tested with indoor and outdoor sequences, and it exhibits better reconstruction performance against a point-based SLAM system in line-rich environments.",
"title": ""
},
{
"docid": "69986adaf1759ce9111f3f582ef35b65",
"text": "Bazila Akbar Kahn, 2013. Interaction of Physical Activity, Mental Health, Health Locus of Control and Quality of Life: A Study on University Students in Pakistan. Department of Sport Sciences. University of Jyväskylä. Master’s Thesis of Sport and Exercise Psychology. 66 pages Physical activity involvement is considered as beneficial both for physiological and psychological health. In Pakistani society an elevated level of physical inactivity has been identified lately. Nevertheless, studies examining the association between physical activity and psychological health are limited to the young population of university students in Pakistan. University students are considered to be at a risk stage due to academic stress and physiological changes Therefore, the purpose of this study was to explore the associations between physical activity, quality of life and psychological health related variables to university students in Pakistan. Participants (N=378) of the current study were from seven universities in Pakistan (265 female, 112 males). General Health Questionnaire, SF-36 quality of life matrix, multidimensional health locus of control and international physical activity questionnaire were administered. Results reveal a large number of students as physically inactive (37.6%). t-Test revealed male students were more active and having a better quality of life in comparison to the female. The high prevalence of psychological distress (25%) has also been identified by using correlation. Results indicated a linear positive relationship of physical activity with mental component summary and a negative association with psychological distress. Conversely, psychological distress was negatively related overall health related quality of life and PA. Results also demonstrated that students with a better internal locus of control were discovered to be more physically active. Findings were discussed in comparison with studies from other countries e.g. US, UK, Norway, Poland, Turkey and Australia However, the results suggest the replication of the study with a larger sample size. Additionally, it is also imperative to explore the barriers to PA among the student population in Pakistan. Keys words: Physical activity, mental health, health locus of control, psychological distress, university students in Pakistan.",
"title": ""
},
{
"docid": "8174a4a425dc7f097be101a8461268a0",
"text": "One of the problems with mobile media devices is that they may distract users during critical everyday tasks, such as navigating the streets of a busy city. We addressed this issue in the design of eyeLook: a platform for attention sensitive mobile computing. eyeLook appliances use embedded low cost eyeCONTACT sensors (ECS) to detect when the user looks at the display. We discuss two eyeLook applications, seeTV and seeTXT, that facilitate courteous media consumption in mobile contexts by using the ECS to respond to user attention. seeTV is an attentive mobile video player that automatically pauses content when the user is not looking. seeTXT is an attentive speed reading application that flashes words on the display, advancing text only when the user is looking. By making mobile media devices sensitive to actual user attention, eyeLook allows applications to gracefully transition users between consuming media, and managing life.",
"title": ""
},
{
"docid": "ddbf68174da624f4d2f19fc25cafc870",
"text": "Large scale streaming systems aim to provide high throughput and low latency. They are often used to run mission-critical applications, and must be available 24x7. Thus such systems need to adapt to failures and inherent changes in workloads, with minimal impact on latency and throughput. Unfortunately, existing solutions require operators to choose between achieving low latency during normal operation and incurring minimal impact during adaptation. Continuous operator streaming systems, such as Naiad and Flink, provide low latency during normal execution but incur high overheads during adaptation (e.g., recovery), while micro-batch systems, such as Spark Streaming and FlumeJava, adapt rapidly at the cost of high latency during normal operations.\n Our key observation is that while streaming workloads require millisecond-level processing, workload and cluster properties change less frequently. Based on this, we develop Drizzle, a system that decouples the processing interval from the coordination interval used for fault tolerance and adaptability. Our experiments on a 128 node EC2 cluster show that on the Yahoo Streaming Benchmark, Drizzle can achieve end-to-end record processing latencies of less than 100ms and can get 2-3x lower latency than Spark. Drizzle also exhibits better adaptability, and can recover from failures 4x faster than Flink while having up to 13x lower latency during recovery.",
"title": ""
},
{
"docid": "7f09bdd6a0bcbed0d9525c5d20cf8cbb",
"text": "Distributed are increasing being thought of as a platform for decentralised applications — DApps — and the the focus for many is shifting from Bitcoin to Smart Contracts. It’s thought that encoding contracts and putting them “on the blockchain” will result in a new generation of organisations that are leaner and more efficient than their forebears (“Capps”?”), disrupting these forebears in the process. However, the most interesting aspect of Bitcoin and blockchain is that it involved no new technology, no new math. Their emergence was due to changes in the environment: the priceperformance and penetration of broadband networks reached a point that it was economically viable for a decentralised solution, such as Bitcoin to compete with traditional payment (international remittance) networks. This is combining with another trend — the shift from monolithic firms to multi-sided markets such as AirBnb et al and the rise of “platform businesses” — to enable a new class of solution to emerge. These new solutions enable firms to interact directly, without the need for a facilitator such as a market, exchange, or even a blockchain. In the past these facilitators were firms. More recently they have been “platform businesses.” In the future they may not exist at all. The shift to a distributed environment enables us to reconsider many of the ideas from distributed AI and linked data. Where are the opportunities? How can we avoid the mistakes of the past?",
"title": ""
},
{
"docid": "065417a0c2e82cbd33798de1be98042f",
"text": "Deep neural networks usually require large labeled datasets to construct accurate models; however, in many real-world scenarios, such as medical image segmentation, labeling data are a time-consuming and costly human (expert) intelligent task. Semi-supervised methods leverage this issue by making use of a small labeled dataset and a larger set of unlabeled data. In this paper, we present a flexible framework for semi-supervised learning that combines the power of supervised methods that learn feature representations using state-of-the-art deep convolutional neural networks with the deeply embedded clustering algorithm that assigns data points to clusters based on their probability distributions and feature representations learned by the networks. Our proposed semi-supervised learning algorithm based on deeply embedded clustering (SSLDEC) learns feature representations via iterations by alternatively using labeled and unlabeled data points and computing target distributions from predictions. During this iterative procedure, the algorithm uses labeled samples to keep the model consistent and tuned with labeling, as it simultaneously learns to improve feature representation and predictions. The SSLDEC requires a few hyper-parameters and thus does not need large labeled validation sets, which addresses one of the main limitations of many semi-supervised learning algorithms. It is also flexible and can be used with many state-of-the-art deep neural network configurations for image classification and segmentation tasks. To this end, we implemented and tested our approach on benchmark image classification tasks as well as in a challenging medical image segmentation scenario. In benchmark classification tasks, the SSLDEC outperformed several state-of-the-art semi-supervised learning methods, achieving 0.46% error on MNIST with 1000 labeled points and 4.43% error on SVHN with 500 labeled points. In the iso-intense infant brain MRI tissue segmentation task, we implemented SSLDEC on a 3D densely connected fully convolutional neural network where we achieved significant improvement over supervised-only training as well as a semi-supervised method based on pseudo-labeling. Our results show that the SSLDEC can be effectively used to reduce the need for costly expert annotations, enhancing applications, such as automatic medical image segmentation.",
"title": ""
},
{
"docid": "1050845816f29b50360eb6f2277071be",
"text": "Natural language interactive narratives are a variant of traditional branching storylines where player actions are expressed in natural language rather than by selecting among choices. Previous efforts have handled the richness of natural language input using machine learning technologies for text classification, bootstrapping supervised machine learning approaches with human-in-the-loop data acquisition or by using expected player input as fake training data. This paper explores a third alternative, where unsupervised text classifiers are used to automatically route player input to the most appropriate storyline branch. We describe the Data-driven Interactive Narrative Engine (DINE), a web-based tool for authoring and deploying natural language interactive narratives. To compare the performance of different algorithms for unsupervised text classification, we collected thousands of user inputs from hundreds of crowdsourced participants playing 25 different scenarios, and hand-annotated them to create a goldstandard test set. Through comparative evaluations, we identified an unsupervised algorithm for narrative text classification that approaches the performance of supervised text classification algorithms. We discuss how this technology supports authors in the rapid creation and deployment of interactive narrative experiences, with authorial burdens similar to that of traditional branching storylines.",
"title": ""
},
{
"docid": "113c3cd96356d966f35af94d7606cd52",
"text": "Statistical learning of relations between entities is a popular approach to address the problem of missing data in Knowledge Graphs. In this work we study how relational learning can be enhanced with background of a special kind: event logs, that are sequences of entities that may occur in the graph. Events naturally appear in many important applications as background. We propose various embedding models that combine entities of a Knowledge Graph and event logs. Our evaluation shows that our approach outperforms state-of-the-art baselines on real-world manufacturing and road traffic Knowledge Graphs, as well as in a controlled scenario that mimics manufacturing processes.",
"title": ""
},
{
"docid": "a3148ce66c9cd871df7f3ec008d7666c",
"text": "This priming study investigates the role of conceptual structure during language production, probing whether English speakers are sensitive to the structure of the event encoded by a prime sentence. In two experiments, participants read prime sentences aloud before describing motion events. Primes differed in 1) syntactic frame, 2) degree of lexical and conceptual overlap with target events, and 3) distribution of event components within frames. Results demonstrate that conceptual overlap between primes and targets led to priming of (a) the information that speakers chose to include in their descriptions of target events, (b) the way that information was mapped to linguistic elements, and (c) the syntactic structures that were built to communicate that information. When there was no conceptual overlap between primes and targets, priming was not successful. We conclude that conceptual structure is a level of representation activated during priming, and that it has implications for both Message Planning and Linguistic Formulation.",
"title": ""
},
{
"docid": "d04e975e48bd385a69fdf58c93103fd3",
"text": "In this paper we will present a low-phase-noise wide-tuning-range oscillator suitable for scaled CMOS processes. It switches between the two resonant modes of a high-order LC resonator that consists of two identical LC tanks coupled by capacitor and transformer. The mode switching method does not add lossy switches to the resonator and thus doubles frequency tuning range without degrading phase noise performance. Moreover, the coupled resonator leads to 3 dB lower phase noise than a single LC tank, which provides a way of achieving low phase noise in scaled CMOS process. Finally, the novel way of using inductive and capacitive coupling jointly decouples frequency separation and tank impedances of the two resonant modes, and makes it possible to achieve balanced performance. The proposed structure is verified by a prototype in a low power 65 nm CMOS process, which covers all cellular bands with a continuous tuning range of 2.5-5.6 GHz and meets all stringent phase noise specifications of cellular standards. It uses a 0.6 V power supply and achieves excellent phase noise figure-of-merit (FoM) of 192.5 dB at 3.7 GHz and >; 188 dB across the entire tuning range. This demonstrates the possibility of achieving low phase noise and wide tuning range at the same time in scaled CMOS processes.",
"title": ""
},
{
"docid": "7d278c1a5359ccd0dfcc236ba3a47614",
"text": "Humanoid robots may require a degree of compliance at joint level for improving efficiency, shock tolerance, and safe interaction with humans. The presence of joint elasticity, however, complexifies the control design of humanoid robots. This paper proposes a control framework to extend momentum based controllers developed for stiff actuation to the case of series elastic actuators. The key point is to consider the motor velocities as an intermediate control input, and then apply high-gain control to stabilise the desired motor velocities achieving momentum control. Simulations carried out on a model of the robot iCub verify the soundness of the proposed approach.",
"title": ""
},
{
"docid": "a0ffe6a1e991a7e34b3256560f11889f",
"text": "This paper presents a GPU-based stereo matching system with good performance in both accuracy and speed. The matching cost volume is initialized with an AD-Census measure, aggregated in dynamic cross-based regions, and updated in a scanline optimization framework to produce the disparity results. Various errors in the disparity results are effectively handled in a multi-step refinement process. Each stage of the system is designed with parallelism considerations such that the computations can be accelerated with CUDA implementations. Experimental results demonstrate the accuracy and the efficiency of the system: currently it is the top performer in the Middlebury benchmark, and the results are achieved on GPU within 0.1 seconds. We also provide extra examples on stereo video sequences and discuss the limitations of the system.",
"title": ""
},
{
"docid": "a53935e12b0a18d6555315149fdb4563",
"text": "With the prevalence of mobile devices such as smartphones and tablets, the ways people access to the Internet have changed enormously. In addition to the information that can be recorded by traditional Web-based e-commerce like frequent online shopping stores and browsing histories, mobile devices are capable of tracking sophisticated browsing behavior. The aim of this study is to utilize users' browsing behavior of reading hotel reviews on mobile devices and subsequently apply text-mining techniques to construct user interest profiles to make personalized hotel recommendations. Specifically, we design and implement an app where the user can search hotels and browse hotel reviews, and every gesture the user has performed on the touch screen when reading the hotel reviews is recorded. We then identify the paragraphs of hotel reviews that a user has shown interests based on the gestures the user has performed. Text mining techniques are applied to construct the interest profile of the user according to the review content the user has seriously read. We collect more than 5,000 reviews of hotels in Taipei, the largest metropolitan area of Taiwan, and recruit 18 users to participate in the experiment. Experimental results demonstrate that the recommendations made by our system better match the user's hotel selections than previous approaches.",
"title": ""
},
{
"docid": "0635201161de0266c7d658edf15fe8d9",
"text": "We present a zero-knowledge argument for NP with low communication complexity, low concrete cost for both the prover and the verifier, and no trusted setup, based on standard cryptographic assumptions. Communication is proportional to d log G (for d the depth and G the width of the verifying circuit) plus the square root of the witness size. When applied to batched or data-parallel statements, the prover's runtime is linear and the verifier's is sub-linear in the verifying circuit size, both with good constants. In addition, witness-related communication can be reduced, at the cost of increased verifier runtime, by leveraging a new commitment scheme for multilinear polynomials, which may be of independent interest. These properties represent a new point in the tradeoffs among setup, complexity assumptions, proof size, and computational cost. We apply the Fiat-Shamir heuristic to this argument to produce a zero-knowledge succinct non-interactive argument of knowledge (zkSNARK) in the random oracle model, based on the discrete log assumption, which we call Hyrax. We implement Hyrax and evaluate it against five state-of-the-art baseline systems. Our evaluation shows that, even for modest problem sizes, Hyrax gives smaller proofs than all but the most computationally costly baseline, and that its prover and verifier are each faster than three of the five baselines.",
"title": ""
},
{
"docid": "444bcff9a7fdcb80041aeb01b8724eed",
"text": "The morphologic anatomy of the liver is described as 2 main and 2 accessory lobes. The more recent functional anatomy of the liver is based on the distribution of the portal pedicles and the location of the hepatic veins. The liver is divided into 4 sectors, some of them composed of 2 segments. In all, there are 8 segments. According to the anatomy, typical hepatectomies (or “réglées”) are those which are performed along anatomical scissurae. The 2 main technical conceptions of typical hepatectomies are those with preliminary vascular control (Lortat-Jacob's technique) and hepatectomies with primary parenchymatous transection (Ton That Tung's technique). A good knowledge of the anatomy of the liver is a prerequisite for anatomical surgery of this organ. L'anatomie morphologique du foie permet d'individualiser 2 lobes principaux et 2 lobes accessoires. L'anatomie fonctionnelle du foie, plus récemment décrite, est fondée sur la distribution des pédicules portaux et sur la localisation des veines sus-hépatiques. Le foie est divisé en 4 secteurs, eux-mÊmes composés en général de 2 segments. Au total, il y a 8 segments. Selon les données anatomiques, les hépatectomies typiques (ou réglées) sont celles qui sont réalisées le long des scissures anatomiques. Les deux conceptions principales des exérèses hépatiques typiques sont, du point de vue technique, les hépatectomies avec contrÔle vasculaire préalable (technique de Lortat-Jacob) et les hépatectomies avec abord transparenchymateux premier (technique de Ton That Tung). Une connaissance approfondie de l'anatomie du foie est une condition préalable à la réalisation d'une chirurgie anatomique de cet organe.",
"title": ""
},
{
"docid": "f492f0121eba327778151a462e32e7b4",
"text": "We describe the instructional software JFLAP 4.0 and how it can be used to provide a hands-on formal languages and automata theory course. JFLAP 4.0 doubles the number of chapters worth of material from JFLAP 3.1, now covering topics from eleven of thirteen chapters for a semester course. JFLAP 4.0 has easier interactive approaches to previous topics and covers many new topics including three parsing algorithms, multi-tape Turing machines, L-systems, and grammar transformations.",
"title": ""
}
] |
scidocsrr
|
b45cfc372948de69bc9ca002282f769b
|
Multi-range Reasoning for Machine Comprehension
|
[
{
"docid": "9387c02974103731846062b549022819",
"text": "Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al. (2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al. (2016) using logistic regression and manually crafted features.",
"title": ""
},
{
"docid": "da2f99dd979a1c4092c22ed03537bbe8",
"text": "Several large cloze-style context-questionanswer datasets have been introduced recently: the CNN and Daily Mail news data and the Children’s Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Our model outperforms models previously proposed for these tasks by a large margin.",
"title": ""
},
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
},
{
"docid": "5487ee527ef2a9f3afe7f689156e7e4d",
"text": "Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a general “compare-aggregate” framework that performs word-level matching followed by aggregation using Convolutional Neural Networks. We particularly focus on the different comparison functions we can use to match two vectors. We use four different datasets to evaluate the model. We find that some simple comparison functions based on element-wise operations can work better than standard neural network and neural tensor network.",
"title": ""
}
] |
[
{
"docid": "b8f6411673d866c6464509b6fa7e9498",
"text": "In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"title": ""
},
{
"docid": "3f68dbc9b9de4627e39c0c8a57fecde9",
"text": "The mitigation of crop loss due to damaging freezes requires accurate air temperature prediction models. Previous work established that the Ward-style artificial neural network (ANN) is a suitable tool for developing such models. The current research focused on developing ANN models with reduced average prediction error by increasing the number of distinct observations used in training, adding additional input terms that describe the date of an observation, increasing the duration of prior weather data included in each observation, and reexamining the number of hidden nodes used in the network. Models were created to predict air temperature at hourly intervals from one to 12 hours ahead. Each ANN model, consisting of a network architecture and set of associated parameters, was evaluated by instantiating and training 30 networks and calculating the mean absolute error (MAE) of the resulting networks for some set of input patterns. The inclusion of seasonal input terms, up to 24 hours of prior weather information, and a larger number of processing nodes were some of the improvements that reduced average prediction error compared to previous research across all horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or 12.5%, less than the previous model. Prediction MAEs eight and 12 hours ahead improved by 0.17°C and 0.16°C, respectively, improvements of 7.4% and 5.9% over the existing model at these horizons. Networks instantiating the same model but with different initial random weights often led to different prediction errors. These results strongly suggest that ANN model developers should consider instantiating and training multiple networks with different initial weights to establish preferred model parameters. Keywords—Decision support systems, frost protection, fruit, time-series prediction, weather modeling",
"title": ""
},
{
"docid": "5db1e7db73ae18802d04ed122ace42b0",
"text": "Phishing is an online identity theft that aims to steal sensitive information such as username, password and online banking details from its victims. Phishing education needs to be considered as a means to combat this threat. This paper reports on a design and development of a mobile game prototype as an educational tool helping computer users to protect themselves against phishing attacks. The elements of a game design framework for avoiding phishing attacks were used to address the game design issues. Our mobile game design aimed to enhance the users' avoidance behaviour through motivation to protect themselves against phishing threats. A think-aloud study was conducted, along with a preand post-test, to assess the game design framework though the developed mobile game prototype. The study results showed a significant improvement of participants' phishing avoidance behaviour in their post-test assessment. Furthermore, the study findings suggest that participants' threat perception, safeguard effectiveness, self-efficacy, perceived severity and perceived susceptibility elements positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2e6c14ef1fe5c643a19e8c0e759e086b",
"text": "Deafblind people have a severe degree of combined visual and auditory impairment resulting in problems with communication, (access to) information and mobility. Moreover, in order to interact with other people, most of them need the constant presence of a caregiver who plays the role of an interpreter with an external world organized for hearing and sighted people. As a result, they usually live behind an invisible wall of silence, in a unique and inexplicable condition of isolation.\n In this paper, we describe DB-HAND, an assistive hardware/software system that supports users to autonomously interact with the environment, to establish social relationships and to gain access to information sources without an assistant. DB-HAND consists of an input/output wearable peripheral (a glove equipped with sensors and actuators) that acts as a natural interface since it enables communication using a language that is easily learned by a deafblind: Malossi method. Interaction with DB-HAND is managed by a software environment, whose purpose is to translate text into sequences of tactile stimuli (and vice-versa), to execute commands and to deliver messages to other users. It also provides multi-modal feedback on several standard output devices to support interaction with the hearing and the sighted people.",
"title": ""
},
{
"docid": "1d7d3a52e059a256434556c405c0e1fa",
"text": "Page segmentation is still a challenging problem due to the large variety of document layouts. Methods examining both foreground and background regions are among the most effective to solve this problem. However, their performance is influenced by the implementation of two key steps: the extraction and selection of background regions, and the grouping of background regions into separators. This paper proposes an efficient hybrid method for page segmentation. The method extracts white space rectangles based on connected component analysis, and filters white space rectangles progressively incorporating foreground and background information such that the remaining rectangles are likely to form column separators. Experimental results on the ICDAR2009 page segmentation competition test set demonstrate the effectiveness and superiority of the proposed method.",
"title": ""
},
{
"docid": "c64cfef80a4d49870894cd5f910896b6",
"text": "Digital music has become prolific in the web in recent decades. Automated recommendation systems are essential for users to discover music they love and for artists to reach appropriate audience. When manual annotations and user preference data is lacking (e.g. for new artists) these systems must rely on content based methods. Besides powerful machine learning tools for classification and retrieval, a key component for successful recommendation is the audio content representation. Good representations should capture informative musical patterns in the audio signal of songs. These representations should be concise, to enable efficient (low storage, easy indexing, fast search) management of huge music repositories, and should also be easy and fast to compute, to enable real-time interaction with a user supplying new songs to the system. Before designing new audio features, we explore the usage of traditional local features, while adding a stage of encoding with a pre-computed codebook and a stage of pooling to get compact vectorial representations. We experiment with different encoding methods, namely the LASSO, vector quantization (VQ) and cosine similarity (CS). We evaluate the representations' quality in two music information retrieval applications: query-by-tag and query-by-example. Our results show that concise representations can be used for successful performance in both applications. We recommend using top-τ VQ encoding, which consistently performs well in both applications, and requires much less computation time than the LASSO.",
"title": ""
},
{
"docid": "faec1a6b42cfdd303309c69c4185c9fe",
"text": "The currency which is imitated with illegal sanction of state and government is counterfeit currency. Every country incorporates a number of security features for its currency security. Currency counterfeiting is always been a challenging term for financial system of any country. The problem of counterfeiting majorly affects the economical as well as financial growth of a country. In view of the problem various studies about counterfeit detection has been conducted using various techniques and variety of tools. This paper focuses on the researches and studies that have been conducted by various researchers. The paper highlighted the methodologies used and the particular characteristics features considered for counterfeit money detection.",
"title": ""
},
{
"docid": "d35c44a54eaa294a60379b00dd0ce270",
"text": "Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep believe networks (DBNs) and CNNs.",
"title": ""
},
{
"docid": "5bf25699dc9d808e539f316689d0214c",
"text": "Multiobjective evolutionary algorithms (MOEAs) have been widely used in real-world applications. However, most MOEAs based on Pareto-dominance handle many-objective problems (MaOPs) poorly due to a high proportion of incomparable and thus mutually nondominated solutions. Recently, a number of many-objective evolutionary algorithms (MaOEAs) have been proposed to deal with this scalability issue. In this article, a survey of MaOEAs is reported. According to the key ideas used, MaOEAs are categorized into seven classes: relaxed dominance based, diversity-based, aggregation-based, indicator-based, reference set based, preference-based, and dimensionality reduction approaches. Several future research directions in this field are also discussed.",
"title": ""
},
{
"docid": "edea2ca381ac3115a1c2218425ff9b55",
"text": "Reconfigurable hardware is by far the most dominant implementation platform in terms of the number of designs per year. During the past decade, security has emerged as a premier design metrics with an ever increasing scope. Our objective is to identify and survey the most important issues related to FPGA security. Instead of insisting on comprehensiveness, we focus on a number of techniques that have the highest potential for conceptual breakthroughs or for the practical widespread adoption. Our emphasis is on security primitives (PUFs and TRNGs), analysis of potential vulnerabilities of FPGA synthesis flow, digital rights management, and FPGA-based applied algorithmic cryptography. We also discuss the most popular and a selection of recent research directions related to FPGA-based security platforms. Specifically, we identify and discuss a number of classical and emerging exciting FPGA-based security research and development directions.",
"title": ""
},
{
"docid": "1e1bafd8f06a4f80415b338a949624db",
"text": "Commercial polypropylene pelvic mesh products were characterized in terms of their chemical compositions and molecular weight characteristics before and after implantation. These isotactic polypropylene mesh materials showed clear signs of oxidation by both Fourier-transform infrared spectroscopy and scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM/EDS). The oxidation was accompanied by a decrease in both weight-average and z-average molecular weights and narrowing of the polydispersity index relative to that of the non-implanted material. SEM revealed the formation of transverse cracking of the fibers which generally, but with some exceptions, increased with implantation time. Collectively these results, as well as the loss of flexibility and embrittlement of polypropylene upon implantation as reported by other workers, may only be explained by in vivo oxidative degradation of polypropylene.",
"title": ""
},
{
"docid": "5edaa2ed52f29eeb9576ebdaeb819997",
"text": "Alzheimer's disease (AD) is the most common neurodegenerative disorder characterized by cognitive and intellectual deficits and behavior disturbance. The electroencephalogram (EEG) has been used as a tool for diagnosing AD for several decades. The hallmark of EEG abnormalities in AD patients is a shift of the power spectrum to lower frequencies and a decrease in coherence of fast rhythms. These abnormalities are thought to be associated with functional disconnections among cortical areas resulting from death of cortical neurons, axonal pathology, cholinergic deficits, etc. This article reviews main findings of EEG abnormalities in AD patients obtained from conventional spectral analysis and nonlinear dynamical methods. In particular, nonlinear alterations in the EEG of AD patients, i.e. a decreased complexity of EEG patterns and reduced information transmission among cortical areas, and their clinical implications are discussed. For future studies, improvement of the accuracy of differential diagnosis and early detection of AD based on multimodal approaches, longitudinal studies on nonlinear dynamics of the EEG, drug effects on the EEG dynamics, and linear and nonlinear functional connectivity among cortical regions in AD are proposed to be investigated. EEG abnormalities of AD patients are characterized by slowed mean frequency, less complex activity, and reduced coherences among cortical regions. These abnormalities suggest that the EEG has utility as a valuable tool for differential and early diagnosis of AD.",
"title": ""
},
{
"docid": "4ca7e1893c0ab71d46af4954f7daf58e",
"text": "Identifying coordinate transformations that make strongly nonlinear dynamics approximately linear has the potential to enable nonlinear prediction, estimation, and control using linear theory. The Koopman operator is a leading data-driven embedding, and its eigenfunctions provide intrinsic coordinates that globally linearize the dynamics. However, identifying and representing these eigenfunctions has proven challenging. This work leverages deep learning to discover representations of Koopman eigenfunctions from data. Our network is parsimonious and interpretable by construction, embedding the dynamics on a low-dimensional manifold. We identify nonlinear coordinates on which the dynamics are globally linear using a modified auto-encoder. We also generalize Koopman representations to include a ubiquitous class of systems with continuous spectra. Our framework parametrizes the continuous frequency using an auxiliary network, enabling a compact and efficient embedding, while connecting our models to decades of asymptotics. Thus, we benefit from the power of deep learning, while retaining the physical interpretability of Koopman embeddings. It is often advantageous to transform a strongly nonlinear system into a linear one in order to simplify its analysis for prediction and control. Here the authors combine dynamical systems with deep learning to identify these hard-to-find transformations.",
"title": ""
},
{
"docid": "556e496bd716f46e27c8378066c91521",
"text": "A study is being done into the psychology of crowd behaviour during emergencies, and ways of ensuring safety during mass evacuations by encouraging more altruistic behaviour. Crowd emergencies have previously been understood as involving panic and selfish behaviour. The present study tests the claims that (1) co-operation and altruistic behaviour rather than panic will predominate in mass responses to emergencies, even in situations where there is a clear threat of death; and that this is the case not only because (2) everyday norms and social roles continue to exert an influence, but also because (3) the external threat can create a sense of solidarity amongst strangers. Qualitative analysis of interviews with survivors of different emergencies supports these claims. A second study of the July 7 London bombings is on-going and also supports these claims. While these findings provide support for some existing models of mass emergency evacuation, it also points to the necessity of a new theoretical approach to the phenomena, using Self-Categorization Theory. Practical applications for the future management of crowd emergencies are also considered.",
"title": ""
},
{
"docid": "e998a25bce6c92d00f71b7453444cb97",
"text": "Modeling and simulation tools are being increasingly acclaimed in the research field of autonomous vehicles systems, as they provide suitable test beds for the development and evaluation of such complex systems. However, these tools still do not account for some integration capabilities amongst several state-of-the-art Intelligent Transportation Systems, e.g. to study autonomous driving behaviors in human-steered urban traffic scenarios, which are crucial to the Future Urban Transport paradigm.\n In this paper we describe the modeling and implementation of an integration architecture of two types of simulators, namely a robotics and a traffic simulator. This integration should enable autonomous vehicles to be deployed in a rather realistic traffic flow as an agent entity (on the traffic simulator), at the same time it simulates all its sensors and actuators (on the robotics counterpart). Also, the statistical tools available in the traffic simulator will allow practitioners to infer what kind of advantages such a novel technology will bring to our everyday's lives. Furthermore, an architecture for the integration of the aforementioned simulators is proposed and implemented in the light of the most desired features of such software environments.\n To assess the usefulness of the platform architecture towards the expected realistic simulation facility, a comprehensive system evaluation is performed and critically reviewed, leveraging the feasibility of the integration. Further developments and future perspectives are also suggested.",
"title": ""
},
{
"docid": "8986220451741c4dd977ec2106e2a2eb",
"text": "The Database of Interacting Proteins (DIP: http://dip.doe-mbi.ucla.edu) is a database that documents experimentally determined protein-protein interactions. It provides the scientific community with an integrated set of tools for browsing and extracting information about protein interaction networks. As of September 2001, the DIP catalogs approximately 11 000 unique interactions among 5900 proteins from >80 organisms; the vast majority from yeast, Helicobacter pylori and human. Tools have been developed that allow users to analyze, visualize and integrate their own experimental data with the information about protein-protein interactions available in the DIP database.",
"title": ""
},
{
"docid": "01a4b2be52e379db6ace7fa8ed501805",
"text": "The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.",
"title": ""
},
{
"docid": "ec593c78e3b2bc8f9b8a657093daac49",
"text": "Analyses of 3-D seismic data in predominantly basin-floor settings offshore Indonesia, Nigeria, and the Gulf of Mexico, reveal the extensive presence of gravity-flow depositional elements. Five key elements were observed: (1) turbidity-flow leveed channels, (2) channeloverbank sediment waves and levees, (3) frontal splays or distributarychannel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets. Each depositional element displays a unique morphology and seismic expression. The reservoir architecture of each of these depositional elements is a function of the interaction between sedimentary process, sea-floor morphology, and sediment grain-size distribution. (1) Turbidity-flow leveed-channel widths range from greater than 3 km to less than 200 m. Sinuosity ranges from moderate to high, and channel meanders in most instances migrate down-system. The highamplitude reflection character that commonly characterizes these features suggests the presence of sand within the channels. In some instances, high-sinuosity channels are associated with (2) channel-overbank sediment-wave development in proximal overbank levee settings, especially in association with outer channel bends. These sediment waves reach heights of 20 m and spacings of 2–3 km. The crests of these sediment waves are oriented normal to the inferred transport direction of turbidity flows, and the waves have migrated in an upflow direction. Channel-margin levee thickness decreases systematically down-system. Where levee thickness can no longer be resolved seismically, high-sinuosity channels feed (3) frontal splays or low-sinuosity, distributary-channel complexes. Low-sinuosity distributary-channel complexes are expressed as lobate sheets up to 5–10 km wide and tens of kilometers long that extend to the distal edges of these systems. They likely comprise sheet-like sandstone units consisting of shallow channelized and associated sand-rich overbank deposits. Also observed are (4) crevasse-splay deposits, which form as a result of the breaching of levees, commonly at channel bends. Similar to frontal splays, but smaller in size, these deposits commonly are characterized by sheet-like turbidites. (5) Debris-flow deposits comprise low-sinuosity channel fills, narrow elongate lobes, and sheets and are characterized seismically by contorted, chaotic, low-amplitude reflection patterns. These deposits commonly overlie striated or grooved pavements that can be up to tens of kilometers long, 15 m deep, and 25 m wide. Where flows are unconfined, striation patterns suggest that divergent flow is common. Debris-flow deposits extend as far basinward as turbidites, and individual debris-flow units can reach 80 m in thickness and commonly are marked by steep edges. Transparent to chaotic seismic reflection character suggest that these deposits are mud-rich. Stratigraphically, deep-water basin-floor successions commonly are characterized by mass-transport deposits at the base, overlain by turbidite frontal-splay deposits and subsequently by leveed-channel deposits. Capping this succession is another mass-transport unit ultimately overlain and draped by condensed-section deposits. This succession can be related to a cycle of relative sea-level change and associated events at the corresponding shelf edge. Commonly, deposition of a deep-water sequence is initiated with the onset of relative sea-level fall and ends with subsequent rapid relative sea-level rise. 
INTRODUCTION The understanding of deep-water depositional systems has advanced significantly in recent years. In the past, much understanding of deep-water sedimentation came from studies of outcrops, recent fan systems, and 2D reflection seismic data (Bouma 1962; Mutti and Ricci Lucchi 1972; Normark 1970, 1978; Walker 1978; Posamentier et al. 1991; Weimer 1991; Mutti and Normark 1991). However, in recent years this knowledge has advanced significantly because of (1) the interest by petroleum companies in deep-water exploration (e.g., Pirmez et al. 2000), and the advent of widely available high-quality 3D seismic data across a broad range of deep-water environments (e.g., Beaubouef and Friedman 2000; Posamentier et al. 2000), (2) the recent drilling and coring of both near-surface and reservoir-level deep-water systems (e.g., Twichell et al. 1992), and (3) the increasing utilization of deep-tow side-scan sonar and other imaging devices (e.g., Twichell et al. 1992; Kenyon and Millington 1995). It is arguably the first factor that has had the most significant impact on our understanding of deep-water systems. Three-dimensional seismic data afford an unparalleled view of the deep-water depositional environment, in some instances with vertical resolution down to 2–3 m. Seismic time slices, horizon-datum time slices, and interval attributes provide images of deep-water depositional systems in map view that can then be analyzed from a geomorphologic perspective. Geomorphologic analyses lead to the identification of depositional elements, which, when integrated with seismic profiles, can yield significant stratigraphic insight. Finally, calibration by correlation with borehole data, including logs, conventional core, and biostratigraphic samples, can provide the interpreter with an improved understanding of the geology of deep-water systems. The focus of this study is the deep-water component of a depositional sequence. We describe and discuss only those elements and stratigraphic successions that are present in deep-water depositional environments. The examples shown in this study largely are Pleistocene in age and most are encountered within the uppermost 400 m of substrate. These relatively shallowly buried features represent the full range of lowstand deep-water depositional sequences from early and late lowstand through transgressive and highstand deposits. Because they are not buried deeply, these stratigraphic units commonly are well-imaged on 3D seismic data. It is also noteworthy that although the examples shown here largely are of Pleistocene age, the age of these deposits should not play a significant role in subsequent discussion. What determines the architecture of deep-water deposits are the controlling parameters of flow discharge, sand-to-mud ratio, slope length, slope gradient, and rugosity of the seafloor, and not the age of the deposits. It does not matter whether these deposits are Pleistocene, Carboniferous, or Precambrian; the physical ‘‘first principles’’ of sediment gravity flow apply without distinguishing between when these deposits formed. However, from the perspective of studying deep-water turbidites it is advantageous that the Pleistocene was such an active time in the deep-water environment, resulting in deposition of numerous shallowly buried, well-imaged, deep-water systems. Depositional Elements Approach This study is based on the grouping of similar geomorphic features referred to as depositional elements. Depositional elements are defined by
Mutti and Normark (1991) as the basic mappable components of both modern and ancient turbidite systems and stages that can be recognized in marine, outcrop, and subsurface studies. These features are the building blocks of landscapes. The focus of this study is to use 3D seismic data to characterize the geomorphology and stratigraphy of deep-water depositional elements and infer process of deposition where appropriate. Depositional elements can vary from place to place and in the same place through time with changes of environmental parameters such as sand-to-mud ratio, flow discharge, and slope gradient. In some instances, systematic changes in these environmental parameters can be tied back to changes of relative sea level. The following depositional elements will be discussed: (1) turbidity-flow leveed channels, (2) overbank sediment waves and levees, (3) frontal splays or distributary-channel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets (Fig. 1). Each element is described and depositional processes are discussed. Finally, the exploration significance of each depositional element is reviewed. Examples are drawn from three deep-water slope and basin-floor settings: the Gulf of Mexico, offshore Nigeria, and offshore eastern Kalimantan, Indonesia. We utilized various visualization techniques, including 3D perspective views, horizon slices, and horizon and interval attribute displays, to bring out the detailed characteristics of depositional elements and their respective geologic settings. The deep-water depositional elements we present here are commonly characterized by peak seismic frequencies in excess of 100 Hz. The vertical resolution at these shallow depths of burial is in the range of 3–4 m, thus affording high-resolution images of depositional elements. We hope that our study, based on observations from the shallow subsurface, will provide general insights into the reservoir architecture of deep-water depositional elements, which can be extrapolated to more poorly resolved deep-water systems encountered at deeper exploration depths. DEPOSITIONAL ELEMENTS The following discussion focuses on five depositional elements in deep-water environments. These include turbidity-flow leveed channels, overbank or levee deposits, frontal splays or distributary-channel complexes, crevasse splays, and debris-flow sheets, lobes, and channels (Fig. 1). Turbidity-Flow Leveed Channels Leveed channels are common depositional elements in slope and basin-floor environments. Leveed channels observed in this study range in width from 3 km to less than 250 m and in sinuosity (i.e., the ratio of channel-axis length to channel-belt length) between 1.2 and 2.2. Some leveed channels are internally characterized by complex cut-and-fill architecture. Many leveed channels show evidence ",
"title": ""
},
{
"docid": "93dd0ad4eb100d4124452e2f6626371d",
"text": "The role of background music in audience responses to commercials (and other marketing elements) has received increasing attention in recent years. This article extends the discussion of music’s influence in two ways: (1) by using music theory to analyze and investigate the effects of music’s structural profiles on consumers’ moods and emotions and (2) by examining the relationship between music’s evoked moods that are congruent versus incongruent with the purchase occasion and the resulting effect on purchase intentions. The study reported provides empirical support for the notion that when music is used to evoke emotions congruent with the symbolic meaning of product purchase, the likelihood of purchasing is enhanced. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "35792db324d1aaf62f19bebec6b1e825",
"text": "Keyphrases: Global Vectors for Word Representation (GloVe). Intrinsic and extrinsic evaluations. Effect of hyperparameters on analogy evaluation tasks. Correlation of human judgment with word vector distances. Dealing with ambiguity in word using contexts. Window classification. This set of notes first introduces the GloVe model for training word vectors. Then it extends our discussion of word vectors (interchangeably called word embeddings) by seeing how they can be evaluated intrinsically and extrinsically. As we proceed, we discuss the example of word analogies as an intrinsic evaluation technique and how it can be used to tune word embedding techniques. We then discuss training model weights/parameters and word vectors for extrinsic tasks. Lastly we motivate artificial neural networks as a class of models for natural language processing tasks.",
"title": ""
}
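As an illustrative aside to the preceding record on intrinsic evaluation of word vectors (not part of the record itself), the sketch below shows the standard vector-arithmetic analogy test ("a is to b as c is to ?") using cosine similarity. The toy vocabulary and random vectors are assumptions for the example; with real trained embeddings the expected answer would be recovered.

```python
import numpy as np

# Toy embedding matrix: one row per word, unit-normalised so dot products are cosines.
vocab = ["king", "queen", "man", "woman", "paris", "france"]
emb = np.random.default_rng(0).normal(size=(len(vocab), 50))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
idx = {w: i for i, w in enumerate(vocab)}

def analogy(a, b, c):
    """Return the word d maximising cos(d, b - a + c), excluding a, b, c."""
    target = emb[idx[b]] - emb[idx[a]] + emb[idx[c]]
    target /= np.linalg.norm(target)
    scores = emb @ target
    for w in (a, b, c):              # the query words themselves are excluded
        scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]

# Accuracy over a tiny, hypothetical analogy set; random vectors only exercise the code path.
questions = [("man", "king", "woman", "queen")]
acc = np.mean([analogy(a, b, c) == d for a, b, c, d in questions])
print(f"analogy accuracy: {acc:.2f}")
```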
] |
scidocsrr
|
5be9898b575aea72f21ab39dc6244897
|
Extracting Structured Scholarly Information from the Machine Translation Literature
|
[
{
"docid": "a9015698a5df36a2557b97838e6e05f9",
"text": "The evaluation of whole-sentence semantic structures plays an important role in semantic parsing and large-scale semantic structure annotation. However, there is no widely-used metric to evaluate wholesentence semantic structures. In this paper, we present smatch, a metric that calculates the degree of overlap between two semantic feature structures. We give an efficient algorithm to compute the metric and show the results of an inter-annotator agreement study.",
"title": ""
},
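The smatch record above scores the overlap between two whole-sentence semantic structures. As a rough illustration only, the sketch below computes precision, recall, and F1 over relation triples under a fixed variable naming; the real smatch metric additionally searches over variable mappings, which is omitted here, and the example graphs are invented.

```python
from __future__ import annotations

def triple_f1(pred: set[tuple[str, str, str]],
              gold: set[tuple[str, str, str]]) -> tuple[float, float, float]:
    """Precision/recall/F1 over (source, relation, target) triples.
    Unlike real smatch, this assumes both graphs already share variable names."""
    matched = len(pred & gold)
    p = matched / len(pred) if pred else 0.0
    r = matched / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hypothetical AMR-style graphs flattened into triples.
gold = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "boy")}
pred = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "girl")}
print(triple_f1(pred, gold))  # roughly (0.67, 0.67, 0.67)
```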
{
"docid": "659deeead04953483a3ed6c5cc78cd76",
"text": "We describe ParsCit, a freely available, open-source imple entation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label th token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference string s from a plain text file, and to retrieve the citation contexts . The package comes with utilities to run it as a web service or as a standalone uti lity. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.",
"title": ""
}
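ParsCit, described above, labels the tokens of a reference string with a trained CRF. The sketch below is a minimal stand-in for that idea using the third-party sklearn-crfsuite package, which is an assumption (ParsCit ships its own CRF models and feature set); the tokens, labels, and features are toy examples.

```python
import sklearn_crfsuite  # assumed available: pip install sklearn-crfsuite

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_capitalised": tok[:1].isupper(),
        "has_period": "." in tok,
        "position": i / len(tokens),   # rough position within the reference string
    }

# One toy reference string with per-token field labels (author / date / title).
tokens = ["Kan", ",", "M.", "2008", ".", "ParsCit", ":", "an", "open-source", "CRF", "parser", "."]
labels = ["author", "author", "author", "date", "date",
          "title", "title", "title", "title", "title", "title", "title"]

X = [[token_features(tokens, i) for i in range(len(tokens))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])  # per-token field labels for the reference string
```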
] |
[
{
"docid": "bf53216a95c20d5f41b7821b05418919",
"text": "Bowlby's attachment theory is a theory of psychopathology as well as a theory of normal development. It contains clear and specific propositions regarding the role of early experience in developmental psychopathology, the importance of ongoing context, and the nature of the developmental process underlying pathology. In particular, Bowlby argued that adaptation is always the joint product of developmental history and current circumstances (never either alone). Early experience does not cause later pathology in a linear way; yet, it has special significance due to the complex, systemic, transactional nature of development. Prior history is part of current context, playing a role in selection, engagement, and interpretation of subsequent experience and in the use of available environmental supports. Finally, except in very extreme cases, early anxious attachment is not viewed as psychopathology itself or as a direct cause of psychopathology but as an initiator of pathways probabilistically associated with later pathology.",
"title": ""
},
{
"docid": "3bf3546e686763259b953b31674e3cdc",
"text": "In this paper, we concentrate on the automatic recognition of Egyptian Arabic speech using syllables. Arabic spoken digits were described by showing their constructing phonemes, triphones, syllables and words. Speaker-independent hidden markov models (HMMs)-based speech recognition system was designed using Hidden markov model toolkit (HTK). The database used for both training and testing consists from forty-four Egyptian speakers. Experiments show that the recognition rate using syllables outperformed the rate obtained using monophones, triphones and words by 2.68%, 1.19% and 1.79% respectively. A syllable unit spans a longer time frame, typically three phones, thereby offering a more parsimonious framework for modeling pronunciation variation in spontaneous speech. Moreover, syllable-based recognition has relatively smaller number of used units and runs faster than word-based recognition. Key-Words: Speech recognition, syllables, Arabic language, HMMs.",
"title": ""
},
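The record above describes an HMM-based recognizer built with HTK over syllable units. As a loose illustration in Python rather than HTK, the sketch below trains one Gaussian HMM per class on feature sequences and classifies by maximum log-likelihood. The hmmlearn package, the random stand-in features, and the class labels are all assumptions for the example.

```python
import numpy as np
from hmmlearn import hmm  # assumed available: pip install hmmlearn

rng = np.random.default_rng(0)

def fake_sequences(n_seq, dim=13):
    """Stand-in for MFCC feature sequences belonging to one syllable/digit class."""
    return [rng.normal(size=(rng.integers(20, 40), dim)) for _ in range(n_seq)]

train = {"sifr": fake_sequences(5), "waahid": fake_sequences(5)}  # illustrative class labels

models = {}
for label, seqs in train.items():
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=10)
    m.fit(X, lengths)                      # Baum-Welch training per class
    models[label] = m

test = fake_sequences(1)[0]
pred = max(models, key=lambda lbl: models[lbl].score(test))  # max log-likelihood decision
print("predicted class:", pred)
```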
{
"docid": "3b07476ebb8b1d22949ec32fc42d2d05",
"text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.",
"title": ""
},
{
"docid": "d8683a777be0027f60e2ab8b2291fb92",
"text": "This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, where each updates one, or a small block of, variables while fixing others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, as well as convex and nonconvex problems. In addition, they are easy to parallelize. The great performance of coordinate update methods depends on solving simple subproblems. To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate friendly operators that perform low-cost coordinate updates. Based on the discovered coordinate friendly operators, as well as operator splitting techniques, we obtain new coordinate update algorithms for a variety of problems in machine learning, image processing, as well as sub-areas of optimization. Several problems are treated with coordinate update for the first time in history. The obtained algorithms are scalable to large instances through parallel and even asynchronous computing. We present numerical examples to illustrate how effective these algorithms are.",
"title": ""
},
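The preceding record surveys coordinate update methods that repeatedly solve cheap one-variable subproblems. As a self-contained instance of that idea (not the paper's general operator framework), the sketch below runs cyclic coordinate descent for ridge-regularised least squares, where each coordinate update has a closed form; the problem data are synthetic.

```python
import numpy as np

def ridge_coordinate_descent(A, b, lam=0.1, n_iters=100):
    """Cyclic coordinate descent for min_x 0.5*||Ax - b||^2 + 0.5*lam*||x||^2."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                                   # running residual
    col_sq = (A ** 2).sum(axis=0)                   # precomputed ||a_j||^2
    for _ in range(n_iters):
        for j in range(n):
            r += A[:, j] * x[j]                     # remove coordinate j's contribution
            x[j] = A[:, j] @ r / (col_sq[j] + lam)  # closed-form 1-D subproblem
            r -= A[:, j] * x[j]                     # add it back
    return x

rng = np.random.default_rng(0)
A, x_true = rng.normal(size=(50, 10)), rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = ridge_coordinate_descent(A, b)
print(np.linalg.norm(x_hat - x_true))               # should be small
```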
{
"docid": "d477e2a2678de720c57895bf1d047c4b",
"text": "Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature’s assigned importance when the true impact of that feature actually increases. This is a fundamental problem that casts doubt on any comparison between features. To address it we turn to recent applications of game theory and develop fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values. We then extend SHAP values to interaction effects and define SHAP interaction values. We propose a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique “supervised” clustering (clustering based on feature attributions). We demonstrate better agreement with human intuition through a user study, exponential improvements in run time, improved clustering performance, and better identification of influential features. An implementation of our algorithm has also been merged into XGBoost and LightGBM, see http://github.com/slundberg/shap for details. ACM Reference Format: Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. 2018. Consistent Individualized Feature Attribution for Tree Ensembles. In Proceedings of ACM (KDD’18). ACM, New York, NY, USA, 9 pages. https://doi.org/none",
"title": ""
},
{
"docid": "d87abfd50876da09bce301831f71605f",
"text": "Recent advances in topic models have explored complicated structured distributions to represent topic correlation. For example, the pachinko allocation model (PAM) captures arbitrary, nested, and possibly sparse correlations between topics using a directed acyclic graph (DAG). While PAM provides more flexibility and greater expressive power than previous models like latent Dirichlet allocation (LDA), it is also more difficult to determine the appropriate topic structure for a specific dataset. In this paper, we propose a nonparametric Bayesian prior for PAM based on a variant of the hierarchical Dirichlet process (HDP). Although the HDP can capture topic correlations defined by nested data structure, it does not automatically discover such correlations from unstructured data. By assuming an HDP-based prior for PAM, we are able to learn both the number of topics and how the topics are correlated. We evaluate our model on synthetic and real-world text datasets, and show that nonparametric PAM achieves performance matching the best of PAM without manually tuning the number of topics.",
"title": ""
},
{
"docid": "50e7e02f9a4b8b65cf2bce212314e77c",
"text": "Over the past few years, massive amounts of world knowledge have been accumulated in publicly available knowledge bases, such as Freebase, NELL, and YAGO. Yet despite their seemingly huge size, these knowledge bases are greatly incomplete. For example, over 70% of people included in Freebase have no known place of birth, and 99% have no known ethnicity. In this paper, we propose a way to leverage existing Web-search-based question-answering technology to fill in the gaps in knowledge bases in a targeted way. In particular, for each entity attribute, we learn the best set of queries to ask, such that the answer snippets returned by the search engine are most likely to contain the correct value for that attribute. For example, if we want to find Frank Zappa's mother, we could ask the query `who is the mother of Frank Zappa'. However, this is likely to return `The Mothers of Invention', which was the name of his band. Our system learns that it should (in this case) add disambiguating terms, such as Zappa's place of birth, in order to make it more likely that the search results contain snippets mentioning his mother. Our system also learns how many different queries to ask for each attribute, since in some cases, asking too many can hurt accuracy (by introducing false positives). We discuss how to aggregate candidate answers across multiple queries, ultimately returning probabilistic predictions for possible values for each attribute. Finally, we evaluate our system and show that it is able to extract a large number of facts with high confidence.",
"title": ""
},
{
"docid": "d07416d917175d6bf809c4cefeeb44a3",
"text": "Extracting relevant information in multilingual context from massive amounts of unstructured, structured and semi-structured data is a challenging task. Various theories have been developed and applied to ease the access to multicultural and multilingual resources. This papers describes a methodology for the development of an ontology-based Cross-Language Information Retrieval (CLIR) application and shows how it is possible to achieve the translation of Natural Language (NL) queries in any language by means of a knowledge-driven approach which allows to semi-automatically map natural language to formal language, simplifying and improving in this way the human-computer interaction and communication. The outlined research activities are based on Lexicon-Grammar (LG), a method devised for natural language formalization, automatic textual analysis and parsing. Thanks to its main characteristics, LG is independent from factors which are critical for other approaches, i.e. interaction type (voice or keyboard-based), length of sentences and propositions, type of vocabulary used and restrictions due to users' idiolects. The feasibility of our knowledge-based methodological framework, which allows mapping both data and metadata, will be tested for CLIR by implementing a domain-specific early prototype system.",
"title": ""
},
{
"docid": "54757e4760194a299218b060026be47c",
"text": "OBJECTIVE\nObesity and overweight are well known risk factors for coronary artery disease (CAD), and are expected to be increasing in the Kingdom of Saudi Arabia (KSA) particularly among females. Therefore, we designed this study with the objective to determine the prevalence of obesity and overweight among Saudis of both gender, between the ages of 30-70 years in rural as well as in urban communities. This work is part of a major national project called Coronary Artery Disease in Saudis Study (CADISS) that is designed to look at CAD and its risk factors in Saudi population.\n\n\nMETHODS\nThis study is a community-based national epidemiological health survey, conducted by examining Saudi subjects in the age group of 30-70 years of selected households over a 5-year period between 1995 and 2000 in KSA. Data were obtained from body mass index (BMI) and were analyzed to classify individuals with overweight (BMI = 25-29.9 kg/m2), obesity (BMI >/=30 kg/m2) and severe (gross) obesity (BMI >/=40 kg/m2) to provide the prevalence of overweight and obesity in KSA.\n\n\nRESULTS\nData were obtained by examining 17,232 Saudi subjects from selected households who participated in the study. The prevalence of overweight was 36.9%. Overweight is significantly more prevalent in males (42.4%) compared to 31.8% of females (p<0.0001). The age-adjusted prevalence of obesity was 35.5% in KSA with an overall prevalence of 35.6% [95% CI: 34.9-36.3], while severe (gross) obesity was 3.2%. Females are significantly more obese with a prevalence of 44% than males 26.4% (p<0.0001).\n\n\nCONCLUSION\nObesity and overweight are increasing in KSA with an overall obesity prevalence of 35.5%. Reduction in overweight and obesity are of considerable importance to public health. Therefore, we recommend a national obesity prevention program at community level to be implemented sooner to promote leaner and consequently healthier community.",
"title": ""
},
{
"docid": "470265e6acd60a190401936fb7121c75",
"text": "Synesthesia is a conscious experience of systematically induced sensory attributes that are not experienced by most people under comparable conditions. Recent findings from cognitive psychology, functional brain imaging and electrophysiology have shed considerable light on the nature of synesthesia and its neurocognitive underpinnings. These cognitive and physiological findings are discussed with respect to a neuroanatomical framework comprising hierarchically organized cortical sensory pathways. We advance a neurobiological theory of synesthesia that fits within this neuroanatomical framework.",
"title": ""
},
{
"docid": "3142c3d3089f6f0e24a72b6baef3a81a",
"text": "Cloud Computing is the key technology of today's cyber world which provides online provisioning of resources on demand and pay per use basis. Malware attacks such as virus, worm and rootkits etc. are some threats to virtual machines (VMs) in cloud environment. In this paper, we present a system call analysis approach to detect malware attacks which maliciously affect the legitimate programs running in Virtual Machines (VMs) and modify their behavior. Our approach is named as 'Malicious System Call Sequence Detection (MSCSD)' which is based on analysis of short sequence of system calls (n-grams). MSCSD employs an efficient feature representation method for system call patterns to improve the accuracy of attack detection and reduce the cost of storage with reduced false positives. MSCSD applies Machine Learning (Decision Tree C 4.5) over the collected n-gram patterns for learning the behavior of monitored programs and detecting malicious system call patterns in future. We have analyzed the performance of some other classifiers and compared our work with the existing work for securing virtual machine in cloud. A prototype implementation of the approach is carried out over UNM dataset and results seem to be promising.",
"title": ""
},
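The MSCSD record above turns short system-call sequences (n-grams) into features and learns benign versus malicious behaviour with a C4.5 decision tree. Below is a rough sketch of that pipeline with scikit-learn; the DecisionTreeClassifier with an entropy criterion is only a stand-in for C4.5, and the traces and labels are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# Invented system-call traces; label 1 = malicious behaviour, 0 = benign.
traces = [
    "open read mmap close open read close",
    "open read write close exit",
    "socket connect write ptrace mprotect execve",
    "fork ptrace mprotect write execve",
]
labels = [0, 0, 1, 1]

# Represent each trace by its bag of system-call 3-grams (the "short sequences" idea).
vectorizer = CountVectorizer(analyzer="word", ngram_range=(3, 3))
X = vectorizer.fit_transform(traces)

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)  # C4.5-like, not exact C4.5
clf.fit(X, labels)

probe = vectorizer.transform(["open read ptrace mprotect execve close"])
print("malicious" if clf.predict(probe)[0] == 1 else "benign")
```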
{
"docid": "f560be243747927a7d6873ca0f87d9c6",
"text": "Hydrophobic interaction chromatography-high performance liquid chromatography (HIC-HPLC) is a powerful analytical method used for the separation of molecular variants of therapeutic proteins. The method has been employed for monitoring various post-translational modifications, including proteolytic fragments and domain misfolding in etanercept (Enbrel®); tryptophan oxidation, aspartic acid isomerization, the formation of cyclic imide, and α amidated carboxy terminus in recombinant therapeutic monoclonal antibodies; and carboxy terminal heterogeneity and serine fucosylation in Fc and Fab fragments. HIC-HPLC is also a powerful analytical technique for the analysis of antibody-drug conjugates. Most current analytical columns, methods, and applications are described, and critical method parameters and suitability for operation in regulated environment are discussed, in this review.",
"title": ""
},
{
"docid": "a3fe3b92fe53109888b26bb03c200180",
"text": "Using Artificial Neural Networh (A\".) in critical applications can be challenging due to the often experimental nature of A\" construction and the \"black box\" label that is fiequently attached to A\".. Wellaccepted process models exist for algorithmic sofhyare development which facilitate software validation and acceptance. The sojiware development process model presented herein is targeted specifically toward artificial neural networks in crik-al appliicationr. 7% model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use AMVs and need to maintain or achieve a Capability Maturity Model (CM&?I or IS0 sofhyare development rating. Further, while this model is aimed directly at neural network development, with minor moda&ations, the model could be applied to any technique wherein knowledge is extractedfiom existing &ka, such as other numeric approaches or knowledge-based systems.",
"title": ""
},
{
"docid": "a57bdfa9c48a76d704258f96874ea700",
"text": "BACKGROUND\nPrevious state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text \"feature engineering\" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently heavily time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word \"embeddings\".\n\n\nOBJECTIVES\n(i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets.\n\n\nMETHODS\nTwo deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models.\n\n\nRESULTS\nWe have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset.\n\n\nCONCLUSIONS\nWe present a state-of-the-art system for DNR and CCE. Automated word embeddings has allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary.",
"title": ""
},
{
"docid": "196868f85571b16815127d2bd87b98ff",
"text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "2a81d56c89436b3379c7dec082d19b17",
"text": "We present a fast, efficient, and automatic method for extracting vessels from retinal images. The proposed method is based on the second local entropy and on the gray-level co-occurrence matrix (GLCM). The algorithm is designed to have flexibility in the definition of the blood vessel contours. Using information from the GLCM, a statistic feature is calculated to act as a threshold value. The performance of the proposed approach was evaluated in terms of its sensitivity, specificity, and accuracy. The results obtained for these metrics were 0.9648, 0.9480, and 0.9759, respectively. These results show the high performance and accuracy that the proposed method offers. Another aspect evaluated in this method is the elapsed time to carry out the segmentation. The average time required by the proposed method is 3 s for images of size 565 9 584 pixels. To assess the ability and speed of the proposed method, the experimental results are compared with those obtained using other existing methods.",
"title": ""
},
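The record above derives a global threshold for retinal vessel segmentation from the gray-level co-occurrence matrix (GLCM). The sketch below is only a loose, hypothetical illustration: it builds a horizontal-neighbour GLCM with numpy and uses an entropy-weighted mean gray level as the threshold, which is not the exact statistic of the cited method, and the random image is a stand-in for a fundus image.

```python
import numpy as np

def glcm_entropy_threshold(img: np.ndarray, levels: int = 256) -> int:
    """Toy GLCM-based threshold; not the exact statistic used in the cited paper."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()            # horizontal neighbour pairs
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (a, b), 1.0)
    glcm /= glcm.sum()
    p_row = glcm.sum(axis=1)                                   # marginal gray-level distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        h_row = np.nansum(np.where(glcm > 0, -glcm * np.log2(glcm), 0.0), axis=1)
    weights = h_row / h_row.sum() if h_row.sum() > 0 else p_row
    return int(np.round(np.sum(weights * np.arange(levels))))  # entropy-weighted mean level

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in for a fundus image
t = glcm_entropy_threshold(image)
vessels = image < t                                            # darker-than-threshold pixels
print("threshold:", t, "vessel fraction:", vessels.mean())
```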
{
"docid": "146f1cd30a8f99e692cbd3e11d7245b0",
"text": "Record linkage has received significant attention in recent years due to the plethora of data sources that have to be integrated to facilitate data analyses. In several cases, such an integration involves disparate data sources containing huge volumes of records and must be performed in near real-time in order to support critical applications. In this paper, we propose the first summarization algorithms for speeding up online record linkage tasks. Our first method, called SkipBloom, summarizes efficiently the participating data sets, using their blocking keys, to allow for very fast comparisons among them. The second method, called BlockSketch, summarizes a block to achieve a constant number of comparisons for a submitted query record, during the matching phase. Additionally, we extend BlockSketch to adapt its functionality to streaming data, where the objective is to use a constant amount of main memory to handle potentially unbounded data sets. Through extensive experimental evaluation, using three real-world data sets, we demonstrate the superiority of our methods against two state-of-the-art algorithms for online record linkage.",
"title": ""
},
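SkipBloom, in the record above, summarises the blocking keys of a data set so that incoming query records can be matched against the summary very quickly. Below is a generic Bloom-filter membership sketch in that spirit, not the authors' exact data structure; the hash scheme, sizes, and example keys are arbitrary choices for the illustration.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter over blocking keys; a stand-in, not the paper's SkipBloom."""
    def __init__(self, n_bits: int = 1 << 16, n_hashes: int = 4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, key: str):
        for i in range(self.n_hashes):
            digest = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n_bits

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

# Blocking keys (e.g. phonetic codes of surnames) summarised once per data set.
bf = BloomFilter()
for key in ["S530", "J525", "M460"]:
    bf.add(key)
print(bf.might_contain("S530"), bf.might_contain("K400"))  # True, (almost certainly) False
```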
{
"docid": "ea3814910d338b633553f4be643efdf2",
"text": "The constant growth in the use of computer networks has demanded some concerns regarding disponibility, vulnerability and security. Intrusion Detection Systems (IDS) have been considered essential in keeping network security and therefore have been commonly adopted by network administrators. A possible disadvantage is the fact that such systems are usually based on signature systems, which make them strongly dependent on updated database and consequently inefficient against novel attacks (unknown attacks). The research presented in this paper proposes an IDS system based on artificial neural network (ANN) and the KDDCUP'99 dataset. Experimental results clearly show that the proposed system can reach an overall accuracy of 99.9% regarding the classification of pre-defined classes of intrusion attacks with, which is a very satisfactory result when compared to traditional methods.",
"title": ""
},
{
"docid": "2baa441b3daf9736154dd19864ec2497",
"text": "In some stochastic environments the well-known reinforcement learning algorithm Q-learning performs very poorly. This poor performance is caused by large overestimations of action values. These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value. We introduce an alternative way to approximate the maximum expected value for any set of random variables. The obtained double estimator method is shown to sometimes underestimate rather than overestimate the maximum expected value. We apply the double estimator to Q-learning to construct Double Q-learning, a new off-policy reinforcement learning algorithm. We show the new algorithm converges to the optimal policy and that it performs well in some settings in which Q-learning performs poorly due to its overestimation.",
"title": ""
}
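The record above introduces Double Q-learning, which keeps two action-value tables and uses one table to choose the maximising action and the other to evaluate it, removing Q-learning's overestimation bias. Below is a minimal tabular sketch of that update rule; the gym-like environment interface in the usage comment is assumed, not provided.

```python
import random
from collections import defaultdict

def double_q_update(QA, QB, s, a, r, s2, alpha=0.1, gamma=0.99, n_actions=4):
    """One Double Q-learning step: with prob. 0.5 update QA using QB's evaluation, else vice versa."""
    if random.random() < 0.5:
        a_star = max(range(n_actions), key=lambda x: QA[(s2, x)])  # argmax under QA
        target = r + gamma * QB[(s2, a_star)]                       # evaluated under QB
        QA[(s, a)] += alpha * (target - QA[(s, a)])
    else:
        b_star = max(range(n_actions), key=lambda x: QB[(s2, x)])
        target = r + gamma * QA[(s2, b_star)]
        QB[(s, a)] += alpha * (target - QB[(s, a)])

def epsilon_greedy(QA, QB, s, n_actions=4, eps=0.1):
    if random.random() < eps:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: QA[(s, a)] + QB[(s, a)])  # act on the sum

QA, QB = defaultdict(float), defaultdict(float)
# Usage, assuming a gym-like `env` with discrete states and 4 actions:
#   s = env.reset()
#   a = epsilon_greedy(QA, QB, s)
#   s2, r, done, _ = env.step(a)
#   double_q_update(QA, QB, s, a, r, s2)
```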
] |
scidocsrr
|
a208dd5012766403247ca029e9b15e1f
|
MIMO State Feedback Controller for a Flexible Joint Robot with Strong Joint Coupling
|
[
{
"docid": "81b03da5e09cb1ac733c966b33d0acb1",
"text": "Abstrud In the last two years a third generation of torque-controlled light weight robots has been developed in DLR‘s robotics and mechatronics lab which is based on all the experiences that have been made with the first two generations. It aims at reaching the limits of what seems achievable with present day technologies not only with respect to light-weight, but also with respect to minimal power consumption and losses. One of the main gaps we tried to close in version III was the development of a new, robot-dedicated high energy motor designed with the best available techniques of concurrent engineering, and the renewed efforts to save weight in the links by using ultralight carbon fibres.",
"title": ""
}
] |
[
{
"docid": "012b42c01cebf0840a429ab0e7db2914",
"text": "Silicon single-photon avalanche diodes (SPADs) are nowadays a solid-state alternative to photomultiplier tubes (PMTs) in single-photon counting (SPC) and time-correlated single-photon counting (TCSPC) over the visible spectral range up to 1-mum wavelength. SPADs implemented in planar technology compatible with CMOS circuits offer typical advantages of microelectronic devices (small size, ruggedness, low voltage, low power, etc.). Furthermore, they have inherently higher photon detection efficiency, since they do not rely on electron emission in vacuum from a photocathode as do PMTs, but instead on the internal photoelectric effect. However, PMTs offer much wider sensitive area, which greatly simplifies the design of optical systems; they also attain remarkable performance at high counting rate, and offer picosecond timing resolution with microchannel plate models. In order to make SPAD detectors more competitive in a broader range of SPC and TCSPC applications, it is necessary to face several issues in the semiconductor device design and technology. Such issues will be discussed in the context of the two possible approaches to such a challenge: employing a standard industrial high-voltage CMOS technology or developing a dedicated CMOS-compatible technology. Advances recently attained in the development of SPAD detectors will be outlined and discussed with reference to both single-element detectors and integrated detector arrays.",
"title": ""
},
{
"docid": "d041b33794a14d07b68b907d38f29181",
"text": "This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called \"Constant Load\" and \"Constant Number of Records\", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.",
"title": ""
},
{
"docid": "221453714bad4567c034ac0a8e316c0f",
"text": "Even in busy online communities usually only a small fraction of members post messages. Why do so many people prefer not to contribute publicly? From an online survey that generated 1188 responses from posters and lurkers from 375 MSN bulletin board communities, 219 lurkers spoke out about their reasons for not posting. While lurkers did not participate publicly, they did seek answers to questions. However, lurkers’ satisfaction with their community experience was lower than those who post. Data from 19 checkbox items and over 490 open-ended responses were analyzed. From this analysis the main reasons why lurkers lurk were concerned with: not needing to post; needing to find out more about the group before participating; thinking that they were being helpful by not posting; not being able to make the software work (i.e., poor usability); and not liking the group dynamics or the community was a poor fit for them. Two key conclusions can be drawn from this analysis. First, there are many reasons why people lurk in online discussion communities. Second, and most important, most lurkers are not selfish free-riders. From these findings it is clear that there are many ways to 1 University of Maryland, Baltimore County, USA. (410) 455 3795/1217 (fax) [email protected] 2 University of Guelph, Canada. [email protected] 3 University of Baltimore, USA. [email protected] Draft: Preece, J., Nonnecke, B., Andrews, D. (2004) The top 5 reasons for lurking: Improving community experiences for everyone. Computers in Human Behavior, 2, 1 (in press) 2 improve online community experiences for both posters and lurkers. Some solutions require improved software and better tools, but moderation and better interaction support will produce dramatic improvements.",
"title": ""
},
{
"docid": "e7f9783eeeebd1550c0299ccff2eab15",
"text": "In recent years, document clustering has been receiving more and more attentions as an important and fundamental technique for unsupervised document organization, automatictopic extraction, and fast information retrieval or filtering. In this paper, we propose a novel method for clustering documents using regularization. Unlike traditional globally regularized clustering methods, our method first construct a local regularized linear label predictor for each document vector, and then combine all those local regularizers with a global smoothness regularizer. So we call our algorithm Clustering with Local and Global Regularization (CLGR). We will show that the cluster memberships of the documents can be achieved by eigenvalue decomposition of a sparse symmetric matrix, which can be efficiently solved by iterative methods. Finally our experimental evaluations on several datasets are presented to show the superiorities of CLGR over traditional document clustering methods.",
"title": ""
},
{
"docid": "34e544af5158850b7119ac4f7c0b7b5e",
"text": "Over the last decade, the surprising fact has emerged that machines can possess therapeutic power. Due to the many healing qualities of touch, one route to such power is through haptic emotional interaction, which requires sophisticated touch sensing and interpretation. We explore the development of touch recognition technologies in the context of a furry artificial lap-pet, with the ultimate goal of creating therapeutic interactions by sensing human emotion through touch. In this work, we build upon a previous design for a new type of fur-based touch sensor. Here, we integrate our fur sensor with a piezoresistive fabric location/pressure sensor, and adapt the combined design to cover a curved creature-like object. We then use this interface to collect synchronized time-series data from the two sensors, and perform machine learning analysis to recognize 9 key affective touch gestures. In a study of 16 participants, our model averages 94% recognition accuracy when trained on individuals, and 86% when applied to the combined set of all participants. The model can also recognize which participant is touching the prototype with 79% accuracy. These results promise a new generation of emotionally intelligent machines, enabled by affective touch gesture recognition.",
"title": ""
},
{
"docid": "1675208fd7adefb20784a7708d655763",
"text": "The number of crime incidents that is reported per day in India is increasing dramatically. The criminals today use various advanced technologies and commit crimes in really tactful ways. This makes crime investigation a more complicated process. Thus the police officers have to perform a lot of manual tasks to get a thread for investigation. This paper deals with the study of data mining based systems for analyzing crime information and thus automates the crime investigation procedure of the police officers. The majority of these frameworks utilize a blend of data mining methods such as clustering and classification for the effective investigation of the criminal acts.",
"title": ""
},
{
"docid": "b77bef86667caed885fee95c79dc2292",
"text": "In this work, we propose a novel method for vocabulary selection to automatically adapt automatic speech recognition systems to the diverse topics that occur in educational and scientific lectures. Utilizing materials that are available before the lecture begins, such as lecture slides, our proposed framework iteratively searches for related documents on the web and generates a lecture-specific vocabulary based on the resulting documents. In this paper, we propose a novel method for vocabulary selection where we first collect documents similar to an initial seed document and then rank the resulting vocabulary based on a score which is calculated using a combination of word features. This is a critical component for adaptation that has typically been overlooked in prior works. On the inter ACT German-English simultaneous lecture translation system our proposed approach significantly improved vocabulary coverage, reducing the out-of-vocabulary rate, on average by 57.0% and up to 84.9%, compared to a lecture-independent baseline. Furthermore, our approach reduced the word error rate, by 12.5% on average and up to 25.3%, compared to a lecture-independent baseline.",
"title": ""
},
{
"docid": "3fa70c2667c6dbe179a7e17e44571727",
"text": "A~tract--For the past decade, many image segmentation techniques have been proposed. These segmentation techniques can be categorized into three classes, (I) characteristic feature thresholding or clustering, (2) edge detection, and (3) region extraction. This survey summarizes some of these techniques, in the area of biomedical image segmentation, most proposed techniques fall into the categories of characteristic feature thresholding or clustering and edge detection.",
"title": ""
},
{
"docid": "089c003534670cf6ab296828bf2604a3",
"text": "The development of ultra-low power LSIs is a promising area of research in microelectronics. Such LSIs would be suitable for use in power-aware LSI applications such as portable mobile devices, implantable medical devices, and smart sensor networks [1]. These devices have to operate with ultra-low power, i.e., a few microwatts or less, because they will probably be placed under conditions where they have to get the necessary energy from poor energy sources such as microbatteries or energy scavenging devices [2]. As a step toward such LSIs, we first need to develop voltage and current reference circuits that can operate with an ultra-low current, several tens of nanoamperes or less, i.e., sub-microwatt operation. To achieve such low-power operation, the circuits have to be operated in the subthreshold region, i.e., a region at which the gate-source voltage of MOSFETs is lower than the threshold voltage [3; 4]. Voltage and current reference circuits are important building blocks for analog, digital, and mixed-signal circuit systems in microelectronics, because the performance of these circuits is determined mainly by their bias voltages and currents. The circuits generate a constant reference voltage and current for various other components such as operational amplifiers, comparators, AD/DA converters, oscillators, and PLLs. For this purpose, bandgap reference circuits with CMOS-based vertical bipolar transistors are conventionally used in CMOS LSIs [5; 6]. However, they need resistors with a high resistance of several hundred megaohms to achieve low-current, subthreshold operation. Such a high resistance needs a large area to be implemented, and this makes conventional bandgap references unsuitable for use in ultra-low power LSIs. Therefore, modified voltage and current reference circuits for lowpower LSIs have been reported (see [7]-[12], [14]-[17]). However, these circuits have various problems. For example, their power dissipations are still large, their output voltages and currents are sensitive to supply voltage and temperature variations, and they have complex circuits with many MOSFETs; these problems are inconvenient for practical use in ultra-low power LSIs. Moreover, the effect of process variations on the reference signal has not been discussed in detail. To solve these problems, I and my colleagues reported new voltage and current reference circuits [13; 18] that can operate with sub-microwatt power dissipation and with low sensitivity to temperature and supply voltage. Our circuits consist of subthreshold MOSFET circuits and use no resistors.",
"title": ""
},
{
"docid": "15518edc9bde13f55df3192262c3a9bf",
"text": "Under the framework of the argumentation scheme theory (Walton, 1996), we developed annotation protocols for an argumentative writing task to support identification and classification of the arguments being made in essays. Each annotation protocol defined argumentation schemes (i.e., reasoning patterns) in a given writing prompt and listed questions to help evaluate an argument based on these schemes, to make the argument structure in a text explicit and classifiable. We report findings based on an annotation of 600 essays. Most annotation categories were applied reliably by human annotators, and some categories significantly contributed to essay score. An NLP system to identify sentences containing scheme-relevant critical questions was developed based on the human annotations.",
"title": ""
},
{
"docid": "254c0fa363a1eb83901ae16da531f5c2",
"text": "The recently developed variational autoencoders (VAEs) have proved to be an effective confluence of the rich representational power of neural networks with Bayesian methods. However, most work on VAEs use a rather simple prior over the latent variables such as standard normal distribution, thereby restricting its applications to relatively simple phenomena. In this work, we propose hierarchical non-parametric variational autoencoders, which combines tree-structured Bayesian nonparametric priors with VAEs, to enable infinite flexibility of the latent representation space. Both the neural parameters and Bayesian priors are learned jointly using tailored variational inference. The resulting model induces a hierarchical structure of latent semantic concepts underlying the data corpus, and infers accurate representations of data instances. We apply our model in video representation learning. Our method is able to discover highly interpretable activity hierarchies, and obtain improved clustering accuracy and generalization capacity based on the learned rich representations.",
"title": ""
},
{
"docid": "29f8b647d8f8de484f2b8f164b9e5add",
"text": "is the latest release of a versatile and very well optimized package for molecular simulation. Much effort has been devoted to achieving extremely high performance on both workstations and parallel computers. The design includes an extraction of vi-rial and periodic boundary conditions from the loops over pairwise interactions, and special software routines to enable rapid calculation of x –1/2. Inner loops are generated automatically in C or Fortran at compile time, with optimizations adapted to each architecture. Assembly loops using SSE and 3DNow! Multimedia instructions are provided for x86 processors, resulting in exceptional performance on inexpensive PC workstations. The interface is simple and easy to use (no scripting language), based on standard command line arguments with self-explanatory functionality and integrated documentation. All binary files are independent of hardware endian and can be read by versions of GROMACS compiled using different floating-point precision. A large collection of flexible tools for trajectory analysis is included, with output in the form of finished Xmgr/Grace graphs. A basic trajectory viewer is included, and several external visualization tools can read the GROMACS trajectory format. Starting with version 3.0, GROMACS is available under the GNU General Public License from",
"title": ""
},
{
"docid": "347ffb672490b9cfd0e6bf0901ba0efb",
"text": "Nature-inspired algorithms attract many researchers worldwide for solving the hardest optimization problems. One of the newest members of this extensive family is the bat algorithm. To date, many variants of this algorithm have emerged for solving continuous as well as combinatorial problems. One of the more promising variants, a self-adaptive bat algorithm, has recently been proposed that enables a self-adaptation of its control parameters. In this paper, we have hybridized this algorithm using different DE strategies and applied these as a local search heuristics for improving the current best solution directing the swarm of a solution towards the better regions within a search space. The results of exhaustive experiments were promising and have encouraged us to invest more efforts into developing in this direction.",
"title": ""
},
{
"docid": "31bb5687b284844596f437774b8b11ce",
"text": "In this paper, a new algorithm for calculating the QR decomposition (QRD) of a polynomial matrix is introduced. This algorithm amounts to transforming a polynomial matrix to upper triangular form by application of a series of paraunitary matrices such as elementary delay and rotation matrices. It is shown that this algorithm can also be used to formulate the singular value decomposition (SVD) of a polynomial matrix, which essentially amounts to diagonalizing a polynomial matrix again by application of a series of paraunitary matrices. Example matrices are used to demonstrate both types of decomposition. Mathematical proofs of convergence of both decompositions are also outlined. Finally, a possible application of such decompositions in multichannel signal processing is discussed.",
"title": ""
},
{
"docid": "0816dce7bc85621fd55933badcd7414e",
"text": "A novel dual-mode resonator with square-patch or corner-cut elements located at four corners of a conventional microstrip loop resonator is proposed. One of these patches or corner cuts is called the perturbation element, while the others are called reference elements. In the proposed design method, the transmission zeros are created or eliminated without sacrificing the passband response by changing the perturbation's size depending on the size of the reference elements. A simple transmission-line model is used to calculate the frequencies of the two transmission zeros. It is shown that the nature of the coupling between the degenerate modes determines the type of filter characteristic, whether it is Chebyshev or elliptic. Finally, two dual-mode microstrip bandpass filters are designed and realized using degenerate modes of the novel dual-mode resonator. The filters are evaluated by experiment and simulation with very good agreement.",
"title": ""
},
{
"docid": "af12d1794a65cb3818f1561384e069b2",
"text": " Multi-Criteria Decision Making (MCDM) methods have evolved to accommodate various types of applications. Dozens of methods have been developed, with even small variations to existing methods causing the creation of new branches of research. This paper performs a literature review of common Multi-Criteria Decision Making methods, examines the advantages and disadvantages of the identified methods, and explains how their common applications relate to their relative strengths and weaknesses. The analysis of MCDM methods performed in this paper provides a clear guide for how MCDM methods should be used in particular situations.",
"title": ""
},
{
"docid": "c0484f3055d7e7db8dfea9d4483e1e06",
"text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.",
"title": ""
},
{
"docid": "9bf76a33c500c692c84f7f99a4c54c93",
"text": "Table detection is always an important task of document analysis and recognition. In this paper, we propose a novel and effective table detection method via visual separators and geometric content layout information, targeting at PDF documents. The visual separators refer to not only the graphic ruling lines but also the white spaces to handle tables with or without ruling lines. Furthermore, we detect page columns in order to assist table region delimitation in complex layout pages. Evaluations of our algorithm on an e-Book dataset and a scientific document dataset show competitive performance. It is noteworthy that the proposed method has been successfully incorporated into a commercial software package for large-scale Chinese e-Book production.",
"title": ""
},
{
"docid": "8b3431783f1dc699be1153ad80348d3e",
"text": "Quality Function Deployment (QFD) was conceived in Japan in the late 1960's, and introduced to America and Europe in 1983. This paper will provide a general overview of the QFD methodology and approach to product development. Once familiarity with the tool is established, a real-life application of the technique will be provided in a case study. The case study will illustrate how QFD was used to develop a new tape product and provide counsel to those that may want to implement the QFD process. Quality function deployment (QFD) is a “method to transform user demands into design quality, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process.”",
"title": ""
},
{
"docid": "ddae1c6469769c2c7e683bfbc223ad1a",
"text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments1 show2 that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"title": ""
}
] |
scidocsrr
|
1126f4133228cdb754a475b00a2e422d
|
The use of environmental, health and safety research in nanotechnology research.
|
[
{
"docid": "5c52b6c7f35f0b075d338e8b17b88a97",
"text": "The ISI subject categories classify journals included in the Science Citation Index (SCI). The aggregated journal-journal citation matrix contained in the Journal Citation Reports can be aggregated on the basis of these categories. This leads to an asymmetrical transaction matrix (citing versus cited) which is much more densely populated than the underlying matrix at the journal level. Exploratory factor analysis leads us to opt for a fourteen-factor solution. This solution can easily be interpreted as the disciplinary structure of science. The nested maps of science (corresponding to 14 factors, 172 categories, and 6,164 journals) are brought online at http://www.leydesdorff.net/map06/index.htm. An analysis of interdisciplinary relations is pursued at three levels of aggregation using the newly added ISI subject category of “Nanoscience & nanotechnology.” The journal level provides the finer grained perspective. Errors in the attribution of journals to the ISI subject categories are averaged out so that the factor analysis can reveal the main structures. The mapping of science can, therefore, be comprehensive at the level of ISI subject categories.",
"title": ""
}
] |
[
{
"docid": "0b2fe2ec2168a88d187e8fc85250d30c",
"text": "At present readers of English have still limited access to Vygotsky's writings. Existing translations are marred by mistakes and outright falsifications. Analyses of Vygotsky's work tend to downplay the collaborative and experimental nature of his research. Several suggestions are made to improve this situation. New translations are certainly needed and new analyses should pay attention to the contextual nature of Vygotsky's thinking and research practice.",
"title": ""
},
{
"docid": "835072343b919fa76c54c6ba59b79dd3",
"text": "Electronic markers can be used to link physical representations and virtual content for tangible interaction, such as visual markers commonly used for tabletops. Another possibility is to leverage capacitive touch inputs of smartphones, tablets and notebooks. However, existing approaches either do not couple physical and virtual representations or require significant post-processing. This paper presents and evaluates a novel approach using a coding scheme for the automatic identification of tangibles by touch inputs when they are touched and shifted. The codes can be generated automatically and integrated into a great variety of existing 3D models from the internet. The resulting models can then be printed completely in one cycle by off-the-shelf 3D printers; post processing is not needed. Besides the identification, the object's position and orientation can be tracked by touch devices. Our evaluation examined multiple variables and showed that the CapCodes can be integrated into existing 3D models and the approach could also be applied to untouched use for larger tangibles.",
"title": ""
},
{
"docid": "ad02d315182c1b6181c6dda59185142c",
"text": "Fact checking is an essential part of any investigative work. For linguistic, psychological and social reasons, it is an inherently human task. Yet, modern media make it increasingly difficult for experts to keep up with the pace at which information is produced. Hence, we believe there is value in tools to assist them in this process. Much of the effort on Web data research has been focused on coping with incompleteness and uncertainty. Comparatively, dealing with context has received less attention, although it is crucial in judging the validity of a claim. For instance, what holds true in a US state, might not in its neighbors, e.g., due to obsolete or superseded laws. In this work, we address the problem of checking the validity of claims in multiple contexts. We define a language to represent and query facts across different dimensions. The approach is non-intrusive and allows relatively easy modeling, while capturing incompleteness and uncertainty. We describe the syntax and semantics of the language. We present algorithms to demonstrate its feasibility, and we illustrate its usefulness through examples.",
"title": ""
},
{
"docid": "fa0883f4adf79c65a6c13c992ae08b3f",
"text": "Being able to keep the graph scale small while capturing the properties of the original social graph, graph sampling provides an efficient, yet inexpensive solution for social network analysis. The challenge is how to create a small, but representative sample out of the massive social graph with millions or even billions of nodes. Several sampling algorithms have been proposed in previous studies, but there lacks fair evaluation and comparison among them. In this paper, we analyze the state-of art graph sampling algorithms and evaluate their performance on some widely recognized graph properties on directed graphs using large-scale social network datasets. We evaluate not only the commonly used node degree distribution, but also clustering coefficient, which quantifies how well connected are the neighbors of a node in a graph. Through the comparison we have found that none of the algorithms is able to obtain satisfied sampling results in both of these properties, and the performance of each algorithm differs much in different kinds of datasets.",
"title": ""
},
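The abstract above evaluates graph sampling algorithms by how well a sample preserves the degree distribution and clustering coefficient of the original directed graph. Below is a small, hedged sketch of that evaluation loop using random node sampling as a stand-in for the surveyed algorithms; the synthetic graph, sampling fraction, and helper names are assumptions, and networkx is assumed to be available.

```python
import random
import networkx as nx

def random_node_sample(graph, fraction, seed=0):
    """Induced-subgraph sample obtained by keeping a random fraction of nodes."""
    rng = random.Random(seed)
    kept = rng.sample(list(graph.nodes()), int(fraction * graph.number_of_nodes()))
    return graph.subgraph(kept).copy()

def degree_distribution(graph):
    """Normalized in-degree histogram as a dict {degree: fraction of nodes}."""
    counts = {}
    for _, deg in graph.in_degree():
        counts[deg] = counts.get(deg, 0) + 1
    n = graph.number_of_nodes()
    return {d: c / n for d, c in sorted(counts.items())}

if __name__ == "__main__":
    g = nx.gnp_random_graph(2000, 0.005, seed=1, directed=True)  # stand-in for a social graph
    s = random_node_sample(g, fraction=0.2)
    print("original clustering:", nx.average_clustering(g.to_undirected()))
    print("sample clustering:  ", nx.average_clustering(s.to_undirected()))
    print("original degrees:", list(degree_distribution(g).items())[:5])
    print("sample degrees:  ", list(degree_distribution(s).items())[:5])
```

Comparing the two printed distributions (and clustering values) for several samplers is exactly the kind of side-by-side evaluation the abstract reports.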
{
"docid": "1d1f93011e83bcefd207c845b2edafcd",
"text": "Although single dialyzer use and reuse by chemical reprocessing are both associated with some complications, there is no definitive advantage to either in this respect. Some complications occur mainly at the first use of a dialyzer: a new cellophane or cuprophane membrane may activate the complement system, or a noxious agent may be introduced to the dialyzer during production or generated during storage. These agents may not be completely removed during the routine rinsing procedure. The reuse of dialyzers is associated with environmental contamination, allergic reactions, residual chemical infusion (rebound release), inadequate concentration of disinfectants, and pyrogen reactions. Bleach used during reprocessing causes a progressive increase in dialyzer permeability to larger molecules, including albumin. Reprocessing methods without the use of bleach are associated with progressive decreases in membrane permeability, particularly to larger molecules. Most comparative studies have not shown differences in mortality between centers reusing and those not reusing dialyzers, however, the largest cluster of dialysis-related deaths occurred with single-use dialyzers due to the presence of perfluorohydrocarbon introduced during the manufacturing process and not completely removed during preparation of the dialyzers before the dialysis procedure. The cost savings associated with reuse is substantial, especially with more expensive, high-flux synthetic membrane dialyzers. With reuse, some dialysis centers can afford to utilize more efficient dialyzers that are more expensive; consequently they provide a higher dose of dialysis and reduce mortality. Some studies have shown minimally higher morbidity with chemical reuse, depending on the method. Waste disposal is definitely decreased with the reuse of dialyzers, thus environmental impacts are lessened, particularly if reprocessing is done by heat disinfection. It is safe to predict that dialyzer reuse in dialysis centers will continue because it also saves money for the providers. Saving both time for the patient and money for the provider were the main motivations to design a new machine for daily home hemodialysis. The machine, developed in the 1990s, cleans and heat disinfects the dialyzer and lines in situ so they do not need to be changed for a month. In contrast, reuse of dialyzers in home hemodialysis patients treated with other hemodialysis machines is becoming less popular and is almost extinct.",
"title": ""
},
{
"docid": "ab07e92f052a03aac253fabadaea4ab3",
"text": "As news is increasingly accessed on smartphones and tablets, the need for personalising news app interactions is apparent. We report a series of three studies addressing key issues in the development of adaptive news app interfaces. We first surveyed users' news reading preferences and behaviours; analysis revealed three primary types of reader. We then implemented and deployed an Android news app that logs users' interactions with the app. We used the logs to train a classifier and showed that it is able to reliably recognise a user according to their reader type. Finally we evaluated alternative, adaptive user interfaces for each reader type. The evaluation demonstrates the differential benefit of the adaptation for different users of the news app and the feasibility of adaptive interfaces for news apps.",
"title": ""
},
{
"docid": "e65c5458a27fc5367be4fd6024e8eb43",
"text": "The aims of this article are to review low-voltage vs high-voltage electrical burn complications in adults and to identify novel areas that are not recognized to improve outcomes. An extensive literature search on electrical burn injuries was performed using OVID MEDLINE, PubMed, and EMBASE databases from 1946 to 2015. Studies relating to outcomes of electrical injury in the adult population (≥18 years of age) were included in the study. Forty-one single-institution publications with a total of 5485 electrical injury patients were identified and included in the present study. Fourty-four percent of these patients were low-voltage injuries (LVIs), 38.3% high-voltage injuries (HVIs), and 43.7% with voltage not otherwise specified. Forty-four percentage of studies did not characterize outcomes according to LHIs vs HVIs. Reported outcomes include surgical, medical, posttraumatic, and others (long-term/psychological/rehabilitative), all of which report greater incidence rates in HVI than in LVI. Only two studies report on psychological outcomes such as posttraumatic stress disorder. Mortality rates from electrical injuries are 2.6% in LVI, 5.2% in HVI, and 3.7% in not otherwise specified. Coroner's reports revealed a ratio of 2.4:1 for deaths caused by LVI compared with HVI. HVIs lead to greater morbidity and mortality than LVIs. However, the results of the coroner's reports suggest that immediate mortality from LVI may be underestimated. Furthermore, on the basis of this analysis, we conclude that the majority of studies report electrical injury outcomes; however, the majority of them do not analyze complications by low vs high voltage and often lack long-term psychological and rehabilitation outcomes after electrical injury indicating that a variety of central aspects are not being evaluated or assessed.",
"title": ""
},
{
"docid": "deb637eb087f817485fc5a56d7a1e87f",
"text": "The Internet continues to grow as a medium to support commerce. Economic analysis of Internet commerce is still in a nascent stage while Internet technology and use has rapidly advanced. The result is an Internet marketplace brimming with entrepreneurs and major corporations experimenting with business strategies and technology advances even though the economics of Internet commerce is not well understood. This thesis responds to this need by exploring how the Internet reduces the market friction common in physical commerce. The intermediaries who help reduce market friction in physical markets may be eliminated, when suppliers and consumers increasingly rely on the Internet as a transaction medium. An intermediary in any market may reduce transaction costs by performing four roles: aggregation, pricing, search, and trust. The intermediary roles of aggregation and pricing may provide little or no value as the Internet becomes the medium for commerce because the technology, not the intermediary, reduces transaction costs. This thesis examines the possible elimination of aggregation and pricing intermediaries in Internet commerce. It does so by extending the theory of the economics of intermediation and electronic markets; developing a methodology for the analysis of Internet price competition; analyzing exploratory empirical data of the book, compact disc, and software markets to test a subset of this theory; and exploring the public policy implications of Internet price discrimination. The approach is interdisciplinary because this thesis integrates the technology, policy, and economics that underpin the role of aggregation and pricing intermediaries in Internet commerce. The thesis shows that Internet commerce may not reduce market friction because prices are higher when consumers buy homogeneous products on the Internet, and price dispersion for homogenous products among Internet retailers is greater than the price dispersion among physical retailers. Internet retailers—even those selling homogenous goods—can develop pricing strategies to differentiate themselves from their competitors and to price discriminate. The ability for the Internet to become a medium for price discrimination is an area that requires the attention of public policy makers. While self-regulation of Internet price discrimination may be the most appropriate policy, monitoring by the United States government through the Federal Trade Commission and international organizations such as the World Trade Organization helps establish a trusted transaction environment for future Internet commerce growth. Thesis Committee: Lee W. McKnight Lecturer, Technology and Policy Program Erik Brynjolfsson Associate Professor, MIT Sloan School of Management David D. Clark Senior Research Scientist, Laboratory for Computer Science",
"title": ""
},
{
"docid": "a789965702706ae8c8e82eb34ee5decc",
"text": "Pregnancy presents a great challenge to the maternal immune system. Given that maternal alloreactive lymphocytes are not depleted during pregnancy, local and/or systemic mechanisms have to serve a central function in altering the maternal immune responses. Regulatory T cells (Tregs) and the PD-1/PD-L1 pathway are both critical in controlling the immune responses. Recent studies have proved the critical function of the PD-1/PD-L1 pathway in regulating the T-cell homeostasis and the peripheral tolerance through promoting the development and function of Tregs, and inhibiting the activation of effector T cells. The function of the PD-1/PD-L1 pathway in feto-maternal interface and pregnancy has been investigated in human and animal models of pregnancy. In this review, we provide recent insight into the role of the PD-1/PD-L1 pathway in regulating T-cell homeostasis, maternal tolerance, and pregnancy-related complications as well as its possible applicability in clinical immunotherapy.",
"title": ""
},
{
"docid": "12818095167dbf85d5d717121f00f533",
"text": "Sarmento, H, Figueiredo, A, Lago-Peñas, C, Milanovic, Z, Barbosa, A, Tadeu, P, and Bradley, PS. Influence of tactical and situational variables on offensive sequences during elite football matches. J Strength Cond Res 32(8): 2331-2339, 2018-This study examined the influence of tactical and situational variables on offensive sequences during elite football matches. A sample of 68 games and 1,694 offensive sequences from the Spanish La Liga, Italian Serie A, German Bundesliga, English Premier League, and Champions League were analyzed using χ and logistic regression analyses. Results revealed that counterattacks (odds ratio [OR] = 1.44; 95% confidence interval [CI]: 1.13-1.83; p < 0.01) and fast attacks (OR = 1.43; 95% CI: 1.11-1.85; p < 0.01) increased the success of an offensive sequence by 40% compared with positional attacks. The chance of an offensive sequence ending effectively in games from the Spanish, Italian, and English Leagues were higher than that in the Champions League. Offensive sequences that started in the preoffensive or offensive zones were more successful than those started in the defensive zones. An increase of 1 second in the offensive sequence duration and an extra pass resulted in a decrease of 2% (OR = 0.98; 95% CI: 0.98-0.99; p < 0.001) and 7% (OR = 0.93; 95% CI: 0.91-0.96; p < 0.001), respectively, in the probability of its success. These findings could assist coaches in designing specific training situations that improve the effectiveness of the offensive process.",
"title": ""
},
{
"docid": "1ef814163a5c91155a2d7e1b4b19f4d7",
"text": "In this article, a frequency reconfigurable fractal patch antenna using pin diodes is proposed and studied. The antenna structure has been designed on FR-4 low-cost substrate material of relative permittivity εr = 4.4, with a compact volume of 30×30×0.8 mm3. The bandwidth and resonance frequency of the antenna design will be increased when we exploit the fractal iteration on the patch antenna. This antenna covers some service bands such as: WiMAX, m-WiMAX, WLAN, C-band and X band applications. The simulation of the proposed antenna is carried out using CST microwave studio. The radiation pattern and S parameter are further presented and discussed.",
"title": ""
},
{
"docid": "20e13726ebc2430f7305c75d70761a18",
"text": "The procedure of pancreaticoduodenectomy consists of three parts: resection, lymph node dissection, and reconstruction. A transection of the pancreas is commonly performed after a maneuver of the pancreatic head, exposing of the portal vein or lymph node dissection, and it should be confirmed as a safe method for pancreatic transection for decreasing the incidence of pancreatic fistula. However, there are only a few clinical trials with high levels of evidence for pancreatic surgery. In this report, we discuss the following issues: dissection of peripancreatic tissue, exposing the portal vein, pancreatic transection, dissection of the right hemicircle of the peri-superior mesenteric artery including plexus and lymph nodes, and dissection of the pancreatic parenchyma.",
"title": ""
},
{
"docid": "b7a6adb1eee3fe1f0a9abd4508d57828",
"text": "As part of a complete software stack for autonomous driving, NVIDIA has created a neural-network-based system, known as PilotNet, which outputs steering angles given images of the road ahead. PilotNet is trained using road images paired with the steering angles generated by a human driving a data-collection car. It derives the necessary domain knowledge by observing human drivers. This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving. Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, regardless of whether lane markings are present or not. The goal of the work described here is to explain what PilotNet learns and how it makes its decisions. To this end we developed a method for determining which elements in the road image most influence PilotNet’s steering decision. Results show that PilotNet indeed learns to recognize relevant objects on the road. In addition to learning the obvious features such as lane markings, edges of roads, and other cars, PilotNet learns more subtle features that would be hard to anticipate and program by engineers, for example, bushes lining the edge of the road and atypical vehicle classes.",
"title": ""
},
{
"docid": "1d33c03bc877acd1d04ca7aeb58a4af4",
"text": "We state and analyze the first active learning algorithm which works in the presence of arbitrary forms of noise. The algorithm, A2 (for Agnostic Active), relies only upon the assumption that the samples are drawn i.i.d. from a fixed distribution. We show that A2 achieves an exponential improvement (i.e., requires only O (ln 1/ε) samples to find an ε-optimal classifier) over the usual sample complexity of supervised learning, for several settings considered before in the realizable case. These include learning threshold classifiers and learning homogeneous linear separators with respect to an input distribution which is uniform over the unit sphere.",
"title": ""
},
{
"docid": "ff3359fe51ed275de1f3b61eee833045",
"text": "Opinion target extraction is a fundamental task in opinion mining. In recent years, neural network based supervised learning methods have achieved competitive performance on this task. However, as with any supervised learning method, neural network based methods for this task cannot work well when the training data comes from a different domain than the test data. On the other hand, some rule-based unsupervised methods have shown to be robust when applied to different domains. In this work, we use rule-based unsupervised methods to create auxiliary labels and use neural network models to learn a hidden representation that works well for different domains. When this hidden representation is used for opinion target extraction, we find that it can outperform a number of strong baselines with a large margin.",
"title": ""
},
{
"docid": "5f57fdeba1afdfb7dcbd8832f806bc48",
"text": "OBJECTIVES\nAdolescents spend increasingly more time on electronic devices, and sleep deficiency rising in adolescents constitutes a major public health concern. The aim of the present study was to investigate daytime screen use and use of electronic devices before bedtime in relation to sleep.\n\n\nDESIGN\nA large cross-sectional population-based survey study from 2012, the youth@hordaland study, in Hordaland County in Norway.\n\n\nSETTING\nCross-sectional general community-based study.\n\n\nPARTICIPANTS\n9846 adolescents from three age cohorts aged 16-19. The main independent variables were type and frequency of electronic devices at bedtime and hours of screen-time during leisure time.\n\n\nOUTCOMES\nSleep variables calculated based on self-report including bedtime, rise time, time in bed, sleep duration, sleep onset latency and wake after sleep onset.\n\n\nRESULTS\nAdolescents spent a large amount of time during the day and at bedtime using electronic devices. Daytime and bedtime use of electronic devices were both related to sleep measures, with an increased risk of short sleep duration, long sleep onset latency and increased sleep deficiency. A dose-response relationship emerged between sleep duration and use of electronic devices, exemplified by the association between PC use and risk of less than 5 h of sleep (OR=2.70, 95% CI 2.14 to 3.39), and comparable lower odds for 7-8 h of sleep (OR=1.64, 95% CI 1.38 to 1.96).\n\n\nCONCLUSIONS\nUse of electronic devices is frequent in adolescence, during the day as well as at bedtime. The results demonstrate a negative relation between use of technology and sleep, suggesting that recommendations on healthy media use could include restrictions on electronic devices.",
"title": ""
},
{
"docid": "f5182ad077b1fdaa450d16544d63f01b",
"text": "This article paves the knowledge about the next generation Bluetooth Standard-BT 5 that will bring some mesmerizing upgrades including increased range, speed, and broadcast messaging capacity. Further, three relevant queries such as what is better about BT 5, why does that matter, and how will it affect IoT have been explained to gather related information so that developers, practitioners, and naive people could formulate BT 5 into IoT based applications while assimilating the need of short range communication in true sense.",
"title": ""
},
{
"docid": "2910fe6ac9958d9cbf9014c5d3140030",
"text": "We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both data term and regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.",
"title": ""
},
{
"docid": "953d1b368a4a6fb09e6b34e3131d7804",
"text": "The activation of the Deep Convolutional Neural Networks hidden layers can be successfully used as features, often referred as Deep Features, in generic visual similarity search tasks. Recently scientists have shown that permutation-based methods offer very good performance in indexing and supporting approximate similarity search on large database of objects. Permutation-based approaches represent metric objects as sequences (permutations) of reference objects, chosen from a predefined set of data. However, associating objects with permutations might have a high cost due to the distance calculation between the data objects and the reference objects. In this work, we propose a new approach to generate permutations at a very low computational cost, when objects to be indexed are Deep Features. We show that the permutations generated using the proposed method are more effective than those obtained using pivot selection criteria specifically developed for permutation-based methods.",
"title": ""
},
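The abstract above concerns permutation-based indexing, where each object is represented by the ranking of a set of reference objects, and proposes generating such permutations cheaply for deep features. The sketch below is a hedged illustration, not the paper's method: it shows the classical pivot-distance permutation, a low-cost alternative that ranks the feature's own components by activation (an assumption inspired by the abstract's goal of avoiding distance computations), and a Spearman footrule comparison between permutations; all array sizes are illustrative.

```python
import numpy as np

def pivot_permutation(feature, pivots):
    """Classical permutation: rank reference objects (pivots) by distance to the feature."""
    dists = np.linalg.norm(pivots - feature, axis=1)
    return np.argsort(dists)

def activation_permutation(feature):
    """Low-cost alternative sketched here: rank the feature's own components by
    activation value, so no distances to reference objects are computed."""
    return np.argsort(-feature)

def spearman_footrule(p, q):
    """Distance between two permutations: sum of absolute rank displacements."""
    pos_p = np.empty_like(p); pos_p[p] = np.arange(len(p))
    pos_q = np.empty_like(q); pos_q[q] = np.arange(len(q))
    return int(np.abs(pos_p - pos_q).sum())

rng = np.random.default_rng(0)
feats = rng.random((3, 128))    # stand-ins for deep features
pivots = rng.random((32, 128))  # stand-ins for reference objects
print(spearman_footrule(pivot_permutation(feats[0], pivots),
                        pivot_permutation(feats[1], pivots)))
print(spearman_footrule(activation_permutation(feats[0]),
                        activation_permutation(feats[1])))
```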
{
"docid": "0752210c380591aca1017d8796cd70a3",
"text": "For robots to coexist with humans in a social world like ours, it is crucial that they possess human-like social interaction skills. Programming a robot to possess such skills is a challenging task. In this paper, we propose a Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like interaction skills through a trial and error method. This paper aims to develop a robot that gathers data during its interaction with a human, and learns human interaction behavior from the high dimensional sensory information using end-to-end reinforcement learning. This paper demonstrates that the robot was able to learn basic interaction skills successfully, after 14 days of interacting with people.",
"title": ""
}
] |
scidocsrr
|
c8983bfb8c5a99dee65e22755e50089d
|
ARCTIC: metadata extraction from scientific papers in pdf using two-layer CRF
|
[
{
"docid": "46ae500eb0f9c67cd1e384aef90db032",
"text": "The task of assigning label sequences to a set of observation sequences arises in many fields, including bioinformatics, computational linguistics and speech recognition [6, 9, 12]. For example, consider the natural language processing task of labeling the words in a sentence with their corresponding part-of-speech (POS) tags. In this task, each word is labeled with a tag indicating its appropriate part of speech, resulting in annotated text, such as:",
"title": ""
},
{
"docid": "bbcd2673c6c24043b9bedb281ce4a447",
"text": "We introduce Enlil, an information extraction system that discovers the institutional affiliations of authors in scholarly papers. Enlil consists of two steps: one that first identifies authors and affiliations using a conditional random field; and a second support vector machine that connects authors to their affiliations. We benchmark Enlil in three separate experiments drawn from three different sources: the ACL Anthology Corpus, the ACM Digital Library, and a set of cross-disciplinary scientific journal articles acquired by querying Google Scholar. Against a state-of-the-art production baseline, Enlil reports a statistically significant improvement in F_1 of nearly 10% (p << 0.01). In the case of multidisciplinary articles from Google Scholar, Enlil is benchmarked over both clean input (F_1 > 90%) and automatically-acquired input (F_1 > 80%).\n We have deployed Enlil in a case study involving Asian genomics research publication patterns to understand how government sponsored collaborative links evolve. Enlil has enabled our team to construct and validate new metrics to quantify the facilitation of research as opposed to direct publication.",
"title": ""
},
{
"docid": "436657862080e0c37966ddba3df0c4b5",
"text": "Scholarly digital libraries increasingly provide analytics to information within documents themselves. This includes information about the logical document structure of use to downstream components, such as search, navigation, and summarization. In this paper, the authors describe SectLabel, a module that further develops existing software to detect the logical structure of a document from existing PDF files, using the formalism of conditional random fields. While previous work has assumed access only to the raw text representation of the document, a key aspect of this work is to integrate the use of a richer representation of the document that includes features from optical character recognition (OCR), such as font size and text position. Experiments reveal that using such rich features improves logical structure detection by a significant 9 F1 points, over a suitable baseline, motivating the use of richer document representations in other digital library applications. DOI: 10.4018/978-1-4666-0900-6.ch014",
"title": ""
},
{
"docid": "f5d82708cda91d48920dd0b39cfe9227",
"text": "This paper evaluates the performance of tools for the extraction of metadata from scientific articles. Accurate metadata extraction is an important task for automating the management of digital libraries. This comparative study is a guide for developers looking to integrate the most suitable and effective metadata extraction tool into their software. We shed light on the strengths and weaknesses of seven tools in common use. In our evaluation using papers from the arXiv collection, GROBID delivered the best results, followed by Mendeley Desktop. SciPlore Xtract, PDFMeat, and SVMHeaderParse also delivered good results depending on the metadata type to be extracted.",
"title": ""
}
] |
[
{
"docid": "d0d7016430b55ae6dec0edf3b5e1b1fd",
"text": "• Our goal is to extend the Julia static analyzer, based on abstract interpretation, to perform formally correct analyses of Android programs. This article is an in depth description of such an extension,of the difficulties that we faced and of the results that we obtained. • We have extended the class analysis of the Julia analyzer, which lies at the heart of many other analyses, by considering some Android key specific features • Classcast, dead code, nullness and termination analysis are done. • Formally correct results in at most 7 min and on standard hardware. • As a language, Android is Java with an extended library for mobile and interactive applications, hence based on an eventdriven architecture. (WRONG)",
"title": ""
},
{
"docid": "ab7db4c786d2f5b084bf9dd2529baed6",
"text": "New protocols for Internet inter-domain routing struggle to get widely adopted. Because the Internet consists of more than 50,000 autonomous systems (ASes), deployment of a new routing protocol has to be incremental. In this work, we study such incremental deployment. We first formulate the routing problem in regard to a metric of routing cost. Then, the paper proposes and rigorously defines a statistical notion of protocol ignorance that quantifies the inability of a routing protocol to accurately determine routing prices with respect to the metric of interest. The proposed protocol-ignorance model of a routing protocol is fairly generic and can be applied to routing in both inter-domain and intra-domain settings, as well as to transportation and other types of networks. Our model of protocol deployment makes our study specific to Internet interdomain routing. Through a combination of mathematical analysis and simulation, we demonstrate that the benefits from adopting a new inter-domain protocol accumulate smoothly during its incremental deployment. In particular, the simulation shows that decreasing the routing price by 25% requires between 43% and 53% of all nodes to adopt the new protocol. Our findings elucidate the deployment struggle of new inter-domain routing protocols and indicate that wide deployment of such a protocol necessitates involving a large number of relevant ASes into a coordinated effort to adopt the new protocol.",
"title": ""
},
{
"docid": "f008e38cd63db0e4cf90705cc5e8860e",
"text": "6 Abstract— The purpose of this paper is to propose a MATLAB/ Simulink simulators for PV cell/module/array based on the Two-diode model of a PV cell.This model is known to have better accuracy at low irradiance levels which allows for more accurate prediction of PV systems performance.To reduce computational time , the input parameters are reduced as the values of Rs and Rp are estimated by an efficient iteration method. Furthermore ,all of the inputs to the simulators are information available on a standard PV module datasheet. The present paper present first abrief introduction to the behavior and functioning of a PV device and write the basic equation of the two-diode model,without the intention of providing an indepth analysis of the photovoltaic phenomena and the semicondutor physics. The introduction on PV devices is followed by the modeling and simulation of PV cell/PV module/PV array, which is the main subject of this paper. A MATLAB Simulik based simulation study of PV cell/PV module/PV array is carried out and presented .The simulation model makes use of the two-diode model basic circuit equations of PV solar cell, taking the effect of sunlight irradiance and cell temperature into consideration on the output current I-V characteristic and output power P-V characteristic . A particular typical 50W solar panel was used for model evaluation. The simulation results , compared with points taken directly from the data sheet and curves pubblished by the manufacturers, show excellent correspondance to the model.",
"title": ""
},
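The record above builds a simulator around the two-diode model of a PV cell, whose output current is defined implicitly by I = Ipv - I01[exp((V + I*Rs)/(a1*VT)) - 1] - I02[exp((V + I*Rs)/(a2*VT)) - 1] - (V + I*Rs)/Rp. As a hedged, stand-alone illustration of that equation, not the paper's Simulink model or its Rs/Rp estimation method, the Python sketch below solves it numerically at a few voltages; every parameter value is a made-up placeholder rather than a datasheet value.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical module parameters for illustration only (not from any datasheet).
IPV, I01, I02 = 3.11, 4.2e-10, 4.2e-10   # photocurrent and diode saturation currents [A]
A1, A2 = 1.0, 1.2                         # diode ideality factors
RS, RP = 0.37, 160.0                      # series and parallel resistances [ohm]
NS, K, Q, T = 36, 1.3806e-23, 1.602e-19, 298.15
VT = NS * K * T / Q                       # module thermal voltage [V]

def two_diode_current(v):
    """Output current at voltage v from the implicit two-diode equation,
    solved numerically with a bracketing root finder."""
    def residual(i):
        vd = v + i * RS
        return (IPV
                - I01 * (np.exp(vd / (A1 * VT)) - 1.0)
                - I02 * (np.exp(vd / (A2 * VT)) - 1.0)
                - vd / RP
                - i)
    return brentq(residual, -1.0, IPV + 1.0)

for v in np.linspace(0.0, 21.0, 8):
    print(f"V = {v:5.2f} V  ->  I = {two_diode_current(v):6.3f} A")
```

Sweeping the voltage like this yields the I-V and P-V curves the abstract compares against datasheet points.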
{
"docid": "6cf7fb67afbbc7d396649bb3f05dd0ca",
"text": "This paper details a methodology for using structured light laser imaging to create high resolution bathymetric maps of the sea floor. The system includes a pair of stereo cameras and an inclined 532nm sheet laser mounted to a remotely operated vehicle (ROV). While a structured light system generally requires a single camera, a stereo vision set up is used here for in-situ calibration of the laser system geometry by triangulating points on the laser line. This allows for quick calibration at the survey site and does not require precise jigs or a controlled environment. A batch procedure to extract the laser line from the images to sub-pixel accuracy is also presented. The method is robust to variations in image quality and moderate amounts of water column turbidity. The final maps are constructed using a reformulation of a previous bathymetric Simultaneous Localization and Mapping (SLAM) algorithm called incremental Smoothing and Mapping (iSAM). The iSAM framework is adapted from previous applications to perform sub-mapping, where segments of previously visited terrain are registered to create relative pose constraints. The resulting maps can be gridded at one centimeter and have significantly higher sample density than similar surveys using high frequency multibeam sonar or stereo vision. Results are presented for sample surveys at a submerged archaeological site and sea floor rock outcrop.",
"title": ""
},
{
"docid": "447399fb4b6c059c58b1b49a8c94330f",
"text": "Learning with imbalanced data is one of the recent challenges in machine learning. Various solutions have been proposed in order to find a treatment for this problem, such as modifying methods or the application of a preprocessing stage. Within the preprocessing focused on balancing data, two tendencies exist: reduce the set of examples (undersampling) or replicate minority class examples (oversampling). Undersampling with imbalanced datasets could be considered as a prototype selection procedure with the purpose of balancing datasets to achieve a high classification rate, avoiding the bias toward majority class examples. Evolutionary algorithms have been used for classical prototype selection showing good results, where the fitness function is associated to the classification and reduction rates. In this paper, we propose a set of methods called evolutionary undersampling that take into consideration the nature of the problem and use different fitness functions for getting a good trade-off between balance of distribution of classes and performance. The study includes a taxonomy of the approaches and an overall comparison among our models and state of the art undersampling methods. The results have been contrasted by using nonparametric statistical procedures and show that evolutionary undersampling outperforms the nonevolutionary models when the degree of imbalance is increased.",
"title": ""
},
{
"docid": "db4ea0aca8add80d8674abb2ecf2276f",
"text": "We combine polynomial techniques with some geometric arguments to obtain restrictions of the structure of spherical designs with fixed odd strength and odd cardinality. Our bounds for the extreme inner products of such designs allow us to prove nonexistence results in many cases. Applications are shown for 7-designs. DOI: 10.1134/S0032946009020033",
"title": ""
},
{
"docid": "fd26e9e1d054bd76da28fb792dc88040",
"text": "Both strength and endurance training have several positive effects on aging muscle and physical performance of middle-aged and older adults, but their combination may compromise optimal adaptation. This study examined the possible interference of combined strength and endurance training on neuromuscular performance and skeletal muscle hypertrophy in previously untrained 40-67-year-old men. Maximal strength and muscle activation in the upper and lower extremities, maximal concentric power, aerobic capacity and muscle fiber size and distribution in the vastus lateralis muscle were measured before and after a 21-week training period. Ninety-six men [mean age 56 (SD 7) years] completed high-intensity strength training (S) twice a week, endurance training (E) twice a week, combined training (SE) four times per week or served as controls (C). SE and S led to similar gains in one repetition maximum strength of the lower extremities [22 (9)% and 21 (8)%, P<0.001], whereas E and C showed minor changes. Cross-sectional area of type II muscle fibers only increased in S [26 (22)%, P=0.002], while SE showed an inconsistent, non-significant change [8 (35)%, P=0.73]. Combined training may interfere with muscle hypertrophy in aging men, despite similar gains in maximal strength between the strength and the combined training groups.",
"title": ""
},
{
"docid": "0160ef86512929e91fc3e5bb3902514e",
"text": "In this paper we propose a clustering method based on combination of the particle swarm optimization (PSO) and the k-mean algorithm. PSO algorithm was showed to successfully converge during the initial stages of a global search, but around global optimum, the search process will become very slow. On the contrary, k-means algorithm can achieve faster convergence to optimum solution. At the same time, the convergent accuracy for k-means can be higher than PSO. So in this paper, a hybrid algorithm combining particle swarm optimization (PSO) algorithm with k-means algorithm is proposed we refer to it as PSO-KM algorithm. The algorithm aims to group a given set of data into a user specified number of clusters. We evaluate the performance of the proposed algorithm using five datasets. The algorithm performance is compared to K-means and PSO clustering.",
"title": ""
},
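The abstract above proposes PSO-KM: particle swarm optimization explores centroid positions globally, and k-means then refines the best particle for fast final convergence. The following sketch is a minimal, hedged rendition of that idea, not the authors' exact algorithm; the inertia and acceleration constants, swarm size, and the use of scikit-learn's KMeans for the refinement step are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def sse(centroids, X):
    """Sum of squared distances of each point to its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

def pso_kmeans(X, k, particles=20, iters=50, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Hedged sketch of a PSO+k-means hybrid: PSO searches centroid positions
    globally, then k-means refines the best particle (parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.uniform(X.min(0), X.max(0), (particles, k, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([sse(p, X) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([sse(p, X) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()
    # k-means refinement seeded with the best PSO solution.
    km = KMeans(n_clusters=k, init=gbest, n_init=1).fit(X)
    return km.cluster_centers_, km.labels_

X, _ = make_blobs(n_samples=300, centers=4, random_state=1)
centers, labels = pso_kmeans(X, k=4)
print(centers)
```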
{
"docid": "0dd2596342ecb90099f70b800ac4ea47",
"text": "This letter presents a broadband transition between microstrip and CPW located at the opposite lawyer of the substrate. Basically, the transition is based on two couples of microstrip-to-slotline transitions. In order to widen bandwidth of the transition, a short-ended parallel microstrip stub is added. A demonstrator transition has been designed, fabricated and measured. Results show that a frequency range of 2.05 to 9.96 GHz (referred to return loss of 10 dB) is obtained.",
"title": ""
},
{
"docid": "fd814dde88ba181758efa131f5185526",
"text": "This paper describes a translator called Java PathFinder (Jpf), which translates from Java to Promela, the modeling language of the Spin model checker. Jpf translates a given Java program into a Promela model, which then can be model checked using Spin. The Java program may contain assertions, which are translated into similar assertions in the Promela model. The Spin model checker will then look for deadlocks and violations of any stated assertions. Jpf generates a Promela model with the same state space characteristics as the Java program. Hence, the Java program must have a finite and tractable state space. This work should be seen in a broader attempt to make formal methods applicable within NASA’s areas such as space, aviation, and robotics. The work is a continuation of an effort to formally analyze, using Spin, a multi-threaded operating system for the Deep-Space 1 space craft, and of previous work in applying existing model checkers and theorem provers to real applications.",
"title": ""
},
{
"docid": "e03af37529ad80ba2b1833a7affb4c34",
"text": "Among the different mechanisms of bacterial resistance to antimicrobial agents that have been studied, biofilm formation is one of the most widespread. This mechanism is frequently the cause of failure in the treatment of prosthetic device infections, and several attempts have been made to develop molecules and protocols that are able to inhibit biofilm-embedded bacteria. We present data suggesting the possibility that proteolytic enzymes could significantly enhance the activities of antibiotics against biofilms. Antibiotic susceptibility tests on both planktonic and sessile cultures, studies on the dynamics of colonization of 10 biofilm-forming isolates, and then bioluminescence and scanning electron microscopy under seven different experimental conditions showed that serratiopeptidase greatly enhances the activity of ofloxacin on sessile cultures and can inhibit biofilm formation.",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "3ba586c49e662c29f373eb08ad9eb1cb",
"text": "The first pathologic alterations of the retina are seen in the vessel network. These modifications affect very differently arteries and veins, and the appearance and entity of the modification differ as the retinopathy becomes milder or more severe. In order to develop an automatic procedure for the diagnosis and grading of retinopathy, it is necessary to be able to discriminate arteries from veins. The problem is complicated by the similarity in the descriptive features of these two structures and by the contrast and luminosity variability of the retina. We developed a new algorithm for classifying the vessels, which exploits the peculiarities of retinal images. By applying a divide et imperaapproach that partitioned a concentric zone around the optic disc into quadrants, we were able to perform a more robust local classification analysis. The results obtained by the proposed technique were compared with those provided by a manual classification on a validation set of 443 vessels and reached an overall classification error of 12 %, which reduces to 7 % if only the diagnostically important retinal vessels are considered.",
"title": ""
},
{
"docid": "a0f8af71421d484cbebb550a0bf59a6d",
"text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.",
"title": ""
},
{
"docid": "785b42fe7765d415dcfef09a6142aa6f",
"text": "In this paper a first approach for digital media forensics is presented to determine the used microphones and the environments of recorded digital audio samples by using known audio steganalysis features. Our first evaluation is based on a limited exemplary test set of 10 different audio reference signals recorded as mono audio data by four microphones in 10 different rooms with 44.1 kHz sampling rate and 16 bit quantisation. Note that, of course, a generalisation of the results cannot be achieved. Motivated by the syntactical and semantical analysis of information and in particular by known audio steganalysis approaches, a first set of specific features are selected for classification to evaluate, whether this first feature set can support correct classifications. The idea was mainly driven by the existing steganalysis features and the question of applicability within a first and limited test set. In the tests presented in this paper, an inter-device analysis with different device characteristics is performed while intra-device evaluations (identical microphone models of the same manufacturer) are not considered. For classification the data mining tool WEKA with K-means as a clustering and Naive Bayes as a classification technique are applied with the goal to evaluate their classification in regard to the classification accuracy on known audio steganalysis features. Our results show, that for our test set, the used classification techniques and selected steganalysis features, microphones can be better classified than environments. These first tests show promising results but of course are based on a limited test and training set as well a specific test set generation. Therefore additional and enhanced features with different test set generation strategies are necessary to generalise the findings.",
"title": ""
},
{
"docid": "f715f471118b169502941797d17ceac6",
"text": "Software is a knowledge intensive product, which can only evolve if there is effective and efficient information exchange between developers. Complying to coding conventions improves information exchange by improving the readability of source code. However, without some form of enforcement, compliance to coding conventions is limited. We look at the problem of information exchange in code and propose gamification as a way to motivate developers to invest in compliance. Our concept consists of a technical prototype and its integration into a Scrum environment. By means of two experiments with agile software teams and subsequent surveys, we show that gamification can effectively improve adherence to coding conventions.",
"title": ""
},
{
"docid": "adfe1398a35e63b0bfbf2fd55e7a9d81",
"text": "Neutrosophic numbers easily allow modeling uncertainties of prices universe, thus justifying the growing interest for theoretical and practical aspects of arithmetic generated by some special numbers in our work. At the beginning of this paper, we reconsider the importance in applied research of instrumental discernment, viewed as the main support of the final measurement validity. Theoretically, the need for discernment is revealed by decision logic, and more recently by the new neutrosophic logic and by constructing neutrosophic-type index numbers, exemplified in the context and applied to the world of prices, and, from a practical standpoint, by the possibility to use index numbers in characterization of some cyclical phenomena and economic processes, e.g. inflation rate. The neutrosophic index numbers or neutrosophic indexes are the key topic of this article. The next step is an interrogative and applicative one, drawing the coordinates of an optimized discernment centered on neutrosophic-type index numbers. The inevitable conclusions are optimistic in relation to the common future of the index method and neutrosophic logic, with statistical and economic meaning and utility.",
"title": ""
},
{
"docid": "ce490bbf1146c7832c35ce49a9dd45f2",
"text": "Smart Farming makes a tremendous contribution for food sustainability for 21st century. Using wireless sensor network in farming from; independent power source distribution, monitoring valves and switches operation, and remote area control will efficiently produce excellent quality farm products in all season. In order to control farm power distribution and irrigation system, this paper proposes a communication methodology of the wireless sensor network for collecting environment data and sending control command to turn on/off irrigation system and manipulate power distribution. The simulation results shows that the proposed system developed is accurate robust and reliable.",
"title": ""
},
{
"docid": "018a8b222d1fa5d41a64f3f77fbb860a",
"text": "The classical music traditions of the Indian subcontinent, Hindustani and Carnatic, offer an excellent ground on which to test the limitations of current music information research approaches. At the same time, studies based on these music traditions can shed light on how to solve new and complex music modeling problems. Both traditions have very distinct characteristics, specially compared with western ones: they have developed unique instruments, musical forms, performance practices, social uses and context. In this article, we focus on the Carnatic music tradition of south India, especially on its melodic characteristics. We overview the theoretical aspects that are relevant for music information research and discuss the scarce computational approaches developed so far. We put emphasis on the limitations of the current methodologies and we present open issues that have not yet been addressed and that we believe are important to be worked on.",
"title": ""
},
{
"docid": "b0c2d9130a48fc0df8f428460b949741",
"text": "A micro-strip patch antenna for a passive radio frequency identification (RFID) tag which can operate in the ultra high frequency (UHF) range from 865 MHz to 867 MHz is presented in this paper. The proposed antenna is designed and suitable for tagging the metallic boxes in the UK and Europe warehouse environment. The design is supplemented with the simulation results. In addition, the effect of the antenna substrate thickness and the ground plane on the performance of the proposed antenna is also investigated. The study shows that there is little affect by the antenna substrate thickness on the performance.",
"title": ""
}
] |
scidocsrr
|
8081174ad50c0a60b392d9c94cc36ba7
|
Improved Fusion of Visual and Language Representations by Dense Symmetric Co-attention for Visual Question Answering
|
[
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
}
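The passage above describes stacked attention: a question vector repeatedly queries image-region features, refining itself layer by layer. Below is a minimal sketch of one such attention step in PyTorch; the layer sizes, names, and two-layer stacking are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of one stacked-attention step (SAN-style), assuming image
# features v_i of shape (batch, regions, d) and a question vector v_q of
# shape (batch, d). All sizes and names are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLayer(nn.Module):
    def __init__(self, d: int, k: int):
        super().__init__()
        self.w_i = nn.Linear(d, k, bias=False)   # projects image regions
        self.w_q = nn.Linear(d, k)               # projects the question/query
        self.w_p = nn.Linear(k, 1)               # scores each region

    def forward(self, v_i, v_q):
        # h: (batch, regions, k); the query is broadcast over all regions
        h = torch.tanh(self.w_i(v_i) + self.w_q(v_q).unsqueeze(1))
        p = F.softmax(self.w_p(h).squeeze(-1), dim=1)       # attention weights
        v_att = (p.unsqueeze(-1) * v_i).sum(dim=1)           # attended image vector
        return v_att + v_q    # refined query for the next attention layer

# Stacking two layers queries the image twice, refining the query each time.
if __name__ == "__main__":
    v_i = torch.randn(8, 196, 512)   # e.g. 14x14 regions with 512-d features
    v_q = torch.randn(8, 512)
    layers = nn.ModuleList([AttentionLayer(512, 256) for _ in range(2)])
    u = v_q
    for layer in layers:
        u = layer(v_i, u)
    print(u.shape)   # torch.Size([8, 512]); this vector feeds an answer classifier
```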
] |
[
{
"docid": "e6be28ac4a4c74ca2f8967b6a661b9cf",
"text": "This paper describes the design and simulation of a MEMS-based oscillator using a synchronous amplitude limiter. The proposed solution does not require external control signals to keep the resonator drive amplitude within the desired range. In a MEMS oscillator the oscillation amplitude needs to be limited to avoid over-driving the resonator which could cause unwanted nonlinear behavior [1] or component failure. The interface electronics has been implemented and simulated in 0.35μm HV CMOS process. The resonator was fabricated using a custom rapid-prototyping process involving Focused Ion Beam masking and Cryogenic Deep Reactive Ion Etching.",
"title": ""
},
{
"docid": "58ab999df6099ae98e72a89ec2e97e9d",
"text": "We present an extensive flow-level traffic analysis of the network worm Blaster.A and of the e-mail worm Sobig.F. Based on packet-level measurements with these worms in a testbed we defined flow-level filters. We then extracted the flows that carried malicious worm traffic from AS559 (SWITCH) border router backbone traffic that we had captured in the DDoSVax project. We discuss characteristics and anomalies detected during the outbreak phases, and present an in-depth analysis of partially and completely successful Blaster infections. Detailed flow-level traffic plots of the outbreaks are given. We found a short network test of a Blaster pre-release, significant changes of various traffic parameters, backscatter effects due to non-existent hosts, ineffectiveness of certain temporary port blocking countermeasures, and a surprisingly low frequency of successful worm code transmissions due to Blaster‘s multi-stage nature. Finally, we detected many TCP packet retransmissions due to Sobig.F‘s far too greedy spreading algorithm.",
"title": ""
},
{
"docid": "c8a2ba8f47266d0a63281a5abb5fa47f",
"text": "Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions.",
"title": ""
},
{
"docid": "9f362249c508abe7f0146158d9370395",
"text": "A shadow appears on an area when the light from a source cannot reach the area due to obstruction by an object. The shadows are sometimes helpful for providing useful information about objects. However, they cause problems in computer vision applications, such as segmentation, object detection and object counting. Thus shadow detection and removal is a pre-processing task in many computer vision applications. This paper proposes a simple method to detect and remove shadows from a single RGB image. A shadow detection method is selected on the basis of the mean value of RGB image in A and B planes of LAB equivalent of the image. The shadow removal is done by multiplying the shadow region by a constant. Shadow edge correction is done to reduce the errors due to diffusion in the shadow boundary.",
"title": ""
},
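The abstract above outlines a LAB-based detect-then-brighten pipeline. The sketch below illustrates that kind of pipeline with OpenCV and NumPy; the specific threshold rule, the scaling constant, and the blur-based edge correction are assumptions rather than the paper's exact method.

```python
# Minimal sketch of a LAB-based shadow detection/removal pipeline of the kind
# the abstract describes. The threshold rule, scaling constant and feathered
# edge correction are assumptions, not the paper's exact algorithm.
import cv2
import numpy as np

def detect_and_remove_shadow(bgr: np.ndarray, scale: float = 1.6) -> np.ndarray:
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, A, B = cv2.split(lab)

    # Choose the detection rule from the mean values of the A and B planes.
    if A.mean() + B.mean() <= 256:
        mask = (L <= L.mean() - L.std() / 3).astype(np.uint8)
    else:
        mask = ((L <= L.mean()) & (B <= B.mean())).astype(np.uint8)

    # Remove shadows by brightening (multiplying) the masked region.
    out = bgr.astype(np.float32)
    out[mask == 1] *= scale
    out = np.clip(out, 0, 255).astype(np.uint8)

    # Crude edge correction: feather the mask boundary to hide the shadow edge.
    soft = cv2.GaussianBlur(mask.astype(np.float32), (15, 15), 0)[..., None]
    return (soft * out + (1 - soft) * bgr).astype(np.uint8)

# Usage: result = detect_and_remove_shadow(cv2.imread("scene.jpg"))
```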
{
"docid": "905027f065ca2efac792e4ec37e8e07b",
"text": "This case, written on the basis of published sources, concerns the decision facing management of Starbucks Canada about how to implement mobile payments. While Starbucks has currently been using a mobile app to accept payments through their proprietary Starbucks card, rival Tim Hortons has recently introduced a more advanced mobile payments solution and the company now has to consider its next moves. The case reviews various aspects of mobile payments technology and platforms that must be understood to make a decision about the best direction for Starbucks Canada.",
"title": ""
},
{
"docid": "a2e161724489b6210bf29c0c4f721534",
"text": "OBJECTIVE\nTo review the results and complications of the surgical treatment of craniosynostosis in 283 consecutive patients treated between 1999 and 2007.\n\n\nPATIENTS AND METHODS\nOur series consisted of 330 procedures performed in 283 patients diagnosed with scaphocephaly (n=155), trigonocephaly (n=50), anterior plagiocephaly (n=28), occipital plagiocephaly (n=1), non-syndromic multi-suture synostosis (n=20), and with diverse craniofacial syndromes (n=32; 11 Crouzon, 11 Apert, 7 Pfeiffer, 2 Saethre-Chotzen, and 2 clover-leaf skull). We used the classification of Whitaker et al. to evaluate the surgical results. Complications of each technique and time of patients' hospitalization were also recorded. The surgeries were classified in 12 different types according to the techniques used. Type I comprised endoscopic assisted osteotomies for sagittal synostosis (42 cases). Type II included sagittal suturectomy and expanding osteotomies (46 cases). Type III encompassed procedures similar to type II but that included frontal dismantling or frontal osteotomies in scaphocephaly (59 cases). Type IV referred to complete cranial vault remodelling (holocranial dismantling) in scaphocephaly (13 cases). Type V belonged to fronto-orbital remodelling without fronto-orbital bandeau in trigonocephaly (50 cases). Type VI included fronto-orbital remodelling without fronto-orbital bandeau in plagiocephaly (14 cases). In Type VII cases of plagiocephaly with frontoorbital remodelling and fronto-orbital bandeau were comprised (14 cases). Type VIII consisted of occipital advancement in posterior plagiocephaly (1 case). Type IX included standard bilateral fronto-orbital advancement with expanding osteotomies (30 cases). Type X was used in multi-suture craniosynostosis (15 cases) and consisted of holocranial dismantling (complete cranial vault remodelling). Type XI included occipital and suboccipital craniectomies in multiple suture craniosynostosis (10 cases) and Type XII instances of fronto-orbital distraction (26 cases).\n\n\nRESULTS\nThe mortality rate of the series was 2 out of 283 cases (0.7%). These 2 patients died one year after surgery. All complications were resolved without permanent deficit. Mean age at surgery was 6.75 months. According to Whitaker et al's classification, 191 patients were classified into Category I (67.49%), 51 into Category II (18.02%), 30 into Category III (10.6%) and 14 into Category IV (4.90%). Regarding to craniofacial conformation, 85.5 % of patients were considered as a good result and 15.5% of patients as a poor result. Of the patients with poor results, 6.36% were craniofacial syndromes, 2.12% had anterior plagiocephaly and 1.76% belonged to non-syndromic craniosynostosis. The most frequent complication was postoperative hyperthermia of undetermined origin (13.43% of the cases), followed by infection (7.5%), subcutaneous haematoma (5.3%), dural tears (5%), and CSF leakage (2.5%). The number of complications was higher in the group of re-operated patients (12.8% of all). In this subset of reoperations, infection accounted for 62.5%, dural tears for 93% and CSF leaks for 75% of the total. In regard to the surgical procedures, endoscopic assisted osteotomies presented the lowest rate of complications, followed by standard fronto-orbital advancement in multiple synostosis, trigonocephaly and plagiocephaly. 
The highest number of complications occurred in complete cranial vault remodelling (holocranial dismantling) in scaphocephaly and multiple synostoses and after the use of internal osteogenic distractors. Of note, are two cases of iatrogenic basal encephalocele that occurred after combined fronto-facial distraction.\n\n\nCONCLUSIONS\nThe best results were obtained in patients with isolated craniosynostosis and the worst in cases with syndromic and multi-suture craniosynostosis. The rate and severity of complications were related to the type of surgical procedure and was higher among patients undergoing re-operations. The mean time of hospitalization was also modified by these factors. Finally, we report our considerations for the management of craniosynostosis taking into account each specific technique and the age at surgery, complication rates and the results of the whole series.",
"title": ""
},
{
"docid": "80c745ee8535d9d53819ced4ad8f996d",
"text": "Wireless Sensor Networks (WSN) are vulnerable to various sensor faults and faulty measurements. This vulnerability hinders efficient and timely response in various WSN applications, such as healthcare. For example, faulty measurements can create false alarms which may require unnecessary intervention from healthcare personnel. Therefore, an approach to differentiate between real medical conditions and false alarms will improve remote patient monitoring systems and quality of healthcare service afforded by WSN. In this paper, a novel approach is proposed to detect sensor anomaly by analyzing collected physiological data from medical sensors. The objective of this method is to effectively distinguish false alarms from true alarms. It predicts a sensor value from historic values and compares it with the actual sensed value for a particular instance. The difference is compared against a threshold value, which is dynamically adjusted, to ascertain whether the sensor value is anomalous. The proposed approach has been applied to real healthcare datasets and compared with existing approaches. Experimental results demonstrate the effectiveness of the proposed system, providing high Detection Rate (DR) and low False Positive Rate (FPR).",
"title": ""
},
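The passage above predicts each sensor reading from its history and flags readings whose error exceeds a dynamically adjusted threshold. Below is a minimal Python sketch of that idea; the moving-average predictor and the mean-plus-k-standard-deviations threshold update are assumptions, not the paper's exact model.

```python
# Minimal sketch of prediction-based sensor anomaly flagging: predict the
# next reading from recent history, then compare the prediction error to a
# dynamically adjusted threshold. Predictor and threshold rule are assumed.
from collections import deque
import numpy as np

class SensorAnomalyDetector:
    def __init__(self, window: int = 10, k: float = 3.0):
        self.history = deque(maxlen=window)   # recent readings
        self.errors = deque(maxlen=200)       # recent prediction errors
        self.k = k

    def update(self, value: float) -> bool:
        """Return True if the new reading looks anomalous (a likely sensor fault)."""
        anomalous = False
        if len(self.history) == self.history.maxlen:
            predicted = float(np.mean(self.history))          # simple predictor
            error = abs(value - predicted)
            if len(self.errors) >= 5:
                threshold = np.mean(self.errors) + self.k * np.std(self.errors)
                anomalous = error > threshold                 # deviates from the trend
            self.errors.append(error)
        self.history.append(value)
        return anomalous

# Usage: flag readings whose deviation from the short-term trend is abnormal.
detector = SensorAnomalyDetector()
normal = [72, 73, 72, 74, 73, 72, 73, 74, 73, 72, 73, 72, 74, 73, 72, 73]
for reading in normal + [120]:
    if detector.update(reading):
        print("anomalous reading:", reading)
```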
{
"docid": "4d987e2c0f3f49609f70149460201889",
"text": "Estimating count and density maps from crowd images has a wide range of applications such as video surveillance, traffic monitoring, public safety and urban planning. In addition, techniques developed for crowd counting can be applied to related tasks in other fields of study such as cell microscopy, vehicle counting and environmental survey. The task of crowd counting and density map estimation is riddled with many challenges such as occlusions, non-uniform density, intra-scene and inter-scene variations in scale and perspective. Nevertheless, over the last few years, crowd count analysis has evolved from earlier methods that are often limited to small variations in crowd density and scales to the current state-of-the-art methods that have developed the ability to perform successfully on a wide range of scenarios. The success of crowd counting methods in the recent years can be largely attributed to deep learning and publications of challenging datasets. In this paper, we provide a comprehensive survey of recent Convolutional Neural Network (CNN) based approaches that have demonstrated significant improvements over earlier methods that rely largely on hand-crafted representations. First, we briefly review the pioneering methods that use hand-crafted representations and then we delve in detail into the deep learning-based approaches and recently published datasets. Furthermore, we discuss the merits and drawbacks of existing CNN-based approaches and identify promising avenues of research in this rapidly evolving field. c © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "59d3a3ec644d8554cbb2a5ac75a329f8",
"text": "Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph, while being generally faster, BCP achieved statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90 % of features can be achieved with a small loss of accuracy.",
"title": ""
},
{
"docid": "910c42c4737d38db592f7249c2e0d6d2",
"text": "This document presents the Enterprise Ontology a collection of terms and de nitions relevant to business enterprises It was developed as part of the Enterprise Project a collaborative e ort to provide a framework for enterprise modelling The Enterprise Ontology will serve as a basis for this framework which includes methods and a computer toolset for enterprise modelling We give an overview of the Enterprise Project elaborate on the intended use of the Ontology and discuss the process we went through to build it The scope of the Enterprise Ontology is limited to those core concepts required for the project however it is expected that it will appeal to a wider audience It should not be considered static during the course of the project the Enterprise Ontology will be further re ned and extended",
"title": ""
},
{
"docid": "09c19ae7eea50f269ee767ac6e67827b",
"text": "In the last years Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python and machine learning packages like scikit-learn or packages for data analysis like Pandas are building on top of it. In this paper we present Wyrm ( https://github.com/bbci/wyrm ), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm’s software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.",
"title": ""
},
{
"docid": "55d584440f6925f12dd3a28917b10c85",
"text": "Bitcoin and other similar digital currencies on blockchains are not ideal means for payment, because their prices tend to go up in the long term (thus people are incentivized to hoard those currencies), and to fluctuate widely in the short term (thus people would want to avoid risks of losing values). The reason why those blockchain currencies based on proof of work are unstable may be found in their designs that the supplies of currencies do not respond to their positive and negative demand shocks, as the authors have formulated in our past work. Continuing from our past work, this paper proposes minimal changes to the design of blockchain currencies so that their market prices are automatically stabilized, absorbing both positive and negative demand shocks of the currencies by autonomously controlling their supplies. Those changes are: 1) limiting re-adjustment of proof-of-work targets, 2) making mining rewards variable according to the observed over-threshold changes of block intervals, and 3) enforcing negative interests to remove old coins in circulation. We have made basic design checks of these measures through simple simulations. In addition to stabilization of prices, the proposed measures may have effects of making those currencies preferred means for payment by disincentivizing hoarding, and improving sustainability of the currency systems by making rewards to miners perpetual.",
"title": ""
},
{
"docid": "288f8a2dab0c32f85c313f5a145e47a5",
"text": "Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games, when applied to the state-of-the-art A3C algorithm. 1 Motivation The central problem of reinforcement learning is value function approximation: how to accurately estimate the total future reward from a given state. Recent successes have used deep neural networks to approximate the value function, resulting in state-of-the-art performance in a variety of challenging domains [9]. Neural networks are most effective when the desired target function is smooth. However, value functions are, by their very nature, discontinuous functions with sharp variations over time. In this paper we introduce a representation of value that matches the natural temporal structure of value functions. A value function represents the expected sum of future discounted rewards. If non-zero rewards occur infrequently but reliably, then an accurate prediction of the cumulative discounted reward rises as such rewarding moments approach and drops immediately after. This is depicted schematically with the dashed black line in Figure 1. The true value function is quite smooth, except immediately after receiving a reward when there is a sharp drop. This is a pervasive scenario because many domains associate positive or negative reinforcements to salient events (like picking up an object, hitting a wall, or reaching a goal position). The problem is that the agent’s observations tend to be smooth in time, so learning an accurate value estimate near those sharp drops puts strain on the function approximator – especially when employing differentiable function approximators such as neural networks that naturally make smooth maps from observations to outputs. To address this problem, we incorporate the temporal structure of cumulative discounted rewards into the value function itself. The main idea is that, by default, the value function can respect the reward sequence. If no reward is observed, then the next value smoothly matches the previous value, but 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. Figure 1: After the same amount of training, our proposed method (red) produces much more accurate estimates of the true value function (dashed black), compared to the baseline (blue). The main plot shows discounted future returns as a function of the step in a sequence of states; the inset plot shows the RMSE when training on this data, as a function of network updates. See section 4 for details. becomes a little larger due to the discount. If a reward is observed, it should be subtracted out from the previous value: in other words a reward that was expected has now been consumed. 
The natural value approximator (NVA) combines the previous value with the observed rewards and discounts, which makes this sequence of values easy to represent by a smooth function approximator such as a neural network. Natural value approximators may also be helpful in partially observed environments. Consider a situation in which an agent stands on a hill top. The goal is to predict, at each step, how many steps it will take until the agent has crossed a valley to another hill top in the distance. There is fog in the valley, which means that if the agent’s state is a single observation from the valley it will not be able to accurately predict how many steps remain. In contrast, the value estimate from the initial hill top may be much better, because the observation is richer. This case is depicted schematically in Figure 2. Natural value approximators may be effective in these situations, since they represent the current value in terms of previous value estimates. 2 Problem definition We consider the typical scenario studied in reinforcement learning, in which an agent interacts with an environment at discrete time intervals: at each time step t the agent selects an action as a function of the current state, which results in a transition to the next state and a reward. The goal of the agent is to maximize the discounted sum of rewards collected in the long run from a set of initial states [12]. The interaction between the agent and the environment is modelled as a Markov Decision Process (MDP). An MDP is a tuple (S,A, R, γ, P ) where S is a state space, A is an action space, R : S×A×S → D(R) is a reward function that defines a distribution over the reals for each combination of state, action, and subsequent state, P : S × A → D(S) defines a distribution over subsequent states for each state and action, and γt ∈ [0, 1] is a scalar, possibly time-dependent, discount factor. One common goal is to make accurate predictions under a behaviour policy π : S → D(A) of the value vπ(s) ≡ E [R1 + γ1R2 + γ1γ2R3 + . . . | S0 = s] . (1) The expectation is over the random variables At ∼ π(St), St+1 ∼ P (St, At), and Rt+1 ∼ R(St, At, St+1), ∀t ∈ N. For instance, the agent can repeatedly use these predictions to improve its policy. The values satisfy the recursive Bellman equation [2] vπ(s) = E [Rt+1 + γt+1vπ(St+1) | St = s] . We consider the common setting where the MDP is not known, and so the predictions must be learned from samples. The predictions made by an approximate value function v(s;θ), where θ are parameters that are learned. The approximation of the true value function can be formed by temporal 2 difference (TD) learning [10], where the estimate at time t is updated towards Z t ≡ Rt+1 + γt+1v(St+1;θ) or Z t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)v(St+n;θ) ,(2) where Z t is the n-step bootstrap target, and the TD-error is δ n t ≡ Z t − v(St;θ). 3 Proposed solution: Natural value approximators The conventional approach to value function approximation produces a value estimate from features associated with the current state. In states where the value approximation is poor, it can be better to rely more on a combination of the observed sequence of rewards and older but more reliable value estimates that are projected forward in time. Combining these estimates can potentially be more accurate than using one alone. These ideas lead to an algorithm that produces three estimates of the value at time t. The first estimate, Vt ≡ v(St;θ), is a conventional value function estimate at time t. 
The second estimate, Gpt ≡ Gβt−1 −Rt γt if γt > 0 and t > 0 , (3) is a projected value estimate computed from the previous value estimate, the observed reward, and the observed discount for time t. The third estimate, Gβt ≡ βtG p t + (1− βt)Vt = (1− βt)Vt + βt Gβt−1 −Rt γt , (4) is a convex combination of the first two estimates1 formed by a time-dependent blending coefficient βt. This coefficient is a learned function of state β(·;θ) : S → [0, 1], over the same parameters θ, and we denote βt ≡ β(St;θ). We call Gβt the natural value estimate at time t and we call the overall approach natural value approximators (NVA). Ideally, the natural value estimate will become more accurate than either of its constituents from training. The value is learned by minimizing the sum of two losses. The first loss captures the difference between the conventional value estimate Vt and the target Zt, weighted by how much it is used in the natural value estimate, JV ≡ E [ [[1− βt]]([[Zt]]− Vt) ] , (5) where we introduce the stop-gradient identity function [[x]] = x that is defined to have a zero gradient everywhere, that is, gradients are not back-propagated through this function. The second loss captures the difference between the natural value estimate and the target, but it provides gradients only through the coefficient βt, Jβ ≡ E [ ([[Zt]]− (βt [[Gpt ]] + (1− βt)[[Vt]])) ] . (6) These two losses are summed into a joint loss, J = JV + cβJβ , (7) where cβ is a scalar trade-off parameter. When conventional stochastic gradient descent is applied to minimize this loss, the parameters of Vt are adapted with the first loss and parameters of βt are adapted with the second loss. When bootstrapping on future values, the most accurate value estimate is best, so using Gβt instead of Vt leads to refined prediction targets Z t ≡ Rt+1 + γt+1G β t+1 or Z β,n t ≡ n ∑ i=1 (Πi−1 k=1γt+k)Rt+i + (Π n k=1γt+k)G β t+n . (8) 4 Illustrative Examples We now provide some examples of situations where natural value approximations are useful. In both examples, the value function is difficult to estimate well uniformly in all states we might care about, and the accuracy can be improved by using the natural value estimate Gβt instead of the direct value estimate Vt. Note the mixed recursion in the definition, G depends on G , and vice-versa. 3 Sparse rewards Figure 1 shows an example of value function approximation. To separate concerns, this is a supervised learning setup (regression) with the true value targets provided (dashed black line). Each point 0 ≤ t ≤ 100 on the horizontal axis corresponds to one state St in a single sequence. The shape of the target values stems from a handful of reward events, and discounting with γ = 0.9. We mimic observations that smoothly vary across time by 4 equally spaced radial basis functions, so St ∈ R. The approximators v(s) and β(s) are two small neural networks with one hidden layer of 32 ReLU units each, and a single linear or sigmoid output unit, respectively. The input",
"title": ""
},
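The passage above defines the natural value estimate explicitly: a projected estimate G^p_t = (G^beta_{t-1} - R_t) / gamma_t blended with the direct estimate V_t via a learned coefficient beta_t. The NumPy sketch below computes that recursion along a toy trajectory; the trajectory values stand in for learned network outputs and observed rewards/discounts.

```python
# Minimal NumPy sketch of the natural value estimate from the passage:
#   G^p_t    = (G^beta_{t-1} - R_t) / gamma_t          (projected estimate)
#   G^beta_t = beta_t * G^p_t + (1 - beta_t) * V_t     (blended estimate)
# V, beta, R and gamma below are illustrative stand-ins for the learned
# network outputs and the observed rewards/discounts.
import numpy as np

def natural_value_estimates(V, beta, R, gamma):
    """Blend direct value estimates with reward-corrected projections."""
    G = np.zeros_like(V)
    for t in range(len(V)):
        if t == 0 or gamma[t] == 0:
            G[t] = V[t]                      # no usable projection at episode start
        else:
            g_proj = (G[t - 1] - R[t]) / gamma[t]
            G[t] = beta[t] * g_proj + (1.0 - beta[t]) * V[t]
    return G

# Toy trajectory: a single reward arrives at t = 3.
R     = np.array([0.0, 0.0, 0.0, 1.0, 0.0])
gamma = np.full(5, 0.9)
V     = np.array([0.70, 0.80, 0.95, 0.20, 0.05])   # noisy direct estimates
beta  = np.full(5, 0.8)                             # mostly trust the projection
print(natural_value_estimates(V, beta, R, gamma))
```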
{
"docid": "c7f944e3c31fbb45dcd83252b43f73ff",
"text": "The moderation of content in many social media systems, such as Twitter and Facebook, motivated the emergence of a new social network system that promotes free speech, named Gab. Soon after that, Gab has been removed from Google Play Store for violating the company's hate speech policy and it has been rejected by Apple for similar reasons. In this paper we characterize Gab, aiming at understanding who are the users who joined it and what kind of content they share in this system. Our findings show that Gab is a very politically oriented system that hosts banned users from other social networks, some of them due to possible cases of hate speech and association with extremism. We provide the first measurement of news dissemination inside a right-leaning echo chamber, investigating a social media where readers are rarely exposed to content that cuts across ideological lines, but rather are fed with content that reinforces their current political or social views.",
"title": ""
},
{
"docid": "14d77d118aad5ee75b82331dc3db8afd",
"text": "Graphical passwords are an alternative to alphanumeric passwords in which users click on images to authenticate themselves rather than type alphanumeric strings. We have developed one such system, called PassPoints, and evaluated it with human users. The results of the evaluation were promising with respect to rmemorability of the graphical password. In this study we expand our human factors testing by studying two issues: the effect of tolerance, or margin of error, in clicking on the password points and the effect of the image used in the password system. In our tolerance study, results show that accurate memory for the password is strongly reduced when using a small tolerance (10 x 10 pixels) around the user's password points. This may occur because users fail to encode the password points in memory in the precise manner that is necessary to remember the password over a lapse of time. In our image study we compared user performance on four everyday images. The results indicate that there were few significant differences in performance of the images. This preliminary result suggests that many images may support memorability in graphical password systems.",
"title": ""
},
{
"docid": "1d5cd4756e424f3d282545f029c1e9bb",
"text": "Anomaly detection systems deployed for monitoring in oil and gas industries are mostly WSN based systems or SCADA systems which all suffer from noteworthy limitations. WSN based systems are not homogenous or incompatible systems. They lack coordinated communication and transparency among regions and processes. On the other hand, SCADA systems are expensive, inflexible, not scalable, and provide data with long delay. In this paper, a novel IoT based architecture is proposed for Oil and gas industries to make data collection from connected objects as simple, secure, robust, reliable and quick. Moreover, it is suggested that how this architecture can be applied to any of the three categories of operations, upstream, midstream and downstream. This can be achieved by deploying a set of IoT based smart objects (devices) and cloud based technologies in order to reduce complex configurations and device programming. Our proposed IoT architecture supports the functional and business requirements of upstream, midstream and downstream oil and gas value chain of geologists, drilling contractors, operators, and other oil field services. Using our proposed IoT architecture, inefficiencies and problems can be picked and sorted out sooner ultimately saving time and money and increasing business productivity.",
"title": ""
},
{
"docid": "7e5415fd007bfe74a469c6b6dbfb2419",
"text": "In this thesis, I explore a reinforcement learning technique for improving bounding box localizations of objects in images. The model takes as input a bounding box already known to overlap an object and aims to improve the fit of the box through a series of transformations that shift the location of the box by translation, or change its size or aspect ratio. Over the course of these actions, the model adapts to new information extracted from the image. This active localization approach contrasts with existing bounding-box regression methods, which extract information from the image only once. I implement, train, and test this reinforcement learning model using data taken from the Portland State Dog-Walking image set [12]. The model balances exploration with exploitation in training using an -greedy policy. I find that the performance of the model is sensitive to the -greedy configuration used during training, performing best when the epsilon parameter is set to very low values over the course of training. With = 0.01, I find the algorithm can improve bounding boxes in about 78% of test cases for the ‘dog’ object category, and 76% for the ‘human’ category.",
"title": ""
},
{
"docid": "5f344817b225363f5309208909619306",
"text": "Semantic specialization is a process of finetuning pre-trained distributional word vectors using external lexical knowledge (e.g., WordNet) to accentuate a particular semantic relation in the specialized vector space. While post-processing specialization methods are applicable to arbitrary distributional vectors, they are limited to updating only the vectors of words occurring in external lexicons (i.e., seen words), leaving the vectors of all other words unchanged. We propose a novel approach to specializing the full distributional vocabulary. Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space. We exploit words seen in the resources as training examples for learning a global specialization function. This function is learned by combining a standard L2-distance loss with a adversarial loss: the adversarial component produces more realistic output vectors. We show the effectiveness and robustness of the proposed method across three languages and on three tasks: word similarity, dialog state tracking, and lexical simplification. We report consistent improvements over distributional word vectors and vectors specialized by other state-of-the-art specialization frameworks. Finally, we also propose a cross-lingual transfer method for zero-shot specialization which successfully specializes a full target distributional space without any lexical knowledge in the target language and without any bilingual data.",
"title": ""
},
{
"docid": "282ce10aca27085060ebe833d47f157a",
"text": "Although numerous context-aware applications have been developed and there have been technological advances for acquiring contextual information, it is still difficult to develop and prototype interesting context-aware applications. This is largely due to the lack of programming support available to both programmers and end-users. This lack of support closes off the context-aware application design space to a larger group of users. We present iCAP, a system that allows end-users to visually design a wide variety of context-aware applications, including those based on if-then rules, temporal and spatial relationships and environment personalization. iCAP allows users to quickly prototype and test their applications without writing any code. We describe the study we conducted to understand end-users’ mental models of context-aware applications, how this impacted the design of our system and several applications that demonstrate iCAP’s richness and ease of use. We also describe a user study performed with 20 end-users, who were able to use iCAP to specify every application that they envisioned, illustrating iCAP’s expressiveness and usability.",
"title": ""
},
{
"docid": "046148901452aefdc5a14357ed89cbd3",
"text": "Of late, there has been an avalanche of cross-layer design proposals for wireless networks. A number of researchers have looked at specific aspects of network performance and, approaching cross-layer design via their interpretation of what it implies, have presented several cross-layer design proposals. These proposals involve different layers of the protocol stack, and address both cellular and ad hoc networks. There has also been work relating to the implementation of cross-layer interactions. It is high time that these various individual efforts be put into perspective and a more holistic view be taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of cross-layer design, and by taking stock of the ongoing work. We suggest a definition for cross-layer design, discuss the basic types of cross-layer design with examples drawn from the literature, and categorize the initial proposals on how cross-layer interactions may be implemented. We then highlight some open challenges and new opportunities for cross-layer design. Designers presenting cross-layer design proposals can start addressing these as they move ahead.",
"title": ""
}
] |
scidocsrr
|
8fec4a3d35fc037040a63604163fa116
|
A Machine Learning Based System for Semi-Automatically Redacting Documents
|
[
{
"docid": "60ff841b0b13442c2afd5dd73178145a",
"text": "Detecting inferences in documents is critical for ensuring privacy when sharing information. In this paper, we propose a refined and practical model of inference detection using a reference corpus. Our model is inspired by association rule mining: inferences are based on word co-occurrences. Using the model and taking the Web as the reference corpus, we can find inferences and measure their strength through web-mining algorithms that leverage search engines such as Google or Yahoo!.\n Our model also includes the important case of private corpora, to model inference detection in enterprise settings in which there is a large private document repository. We find inferences in private corpora by using analogues of our Web-mining algorithms, relying on an index for the corpus rather than a Web search engine.\n We present results from two experiments. The first experiment demonstrates the performance of our techniques in identifying all the keywords that allow for inference of a particular topic (e.g. \"HIV\") with confidence above a certain threshold. The second experiment uses the public Enron e-mail dataset. We postulate a sensitive topic and use the Enron corpus and the Web together to find inferences for the topic.\n These experiments demonstrate that our techniques are practical, and that our model of inference based on word co-occurrence is well-suited to efficient inference detection.",
"title": ""
}
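The passage above scores potential inferences by how strongly a keyword co-occurs with a sensitive topic in a reference corpus or corpus index. The sketch below illustrates that idea with document counts from a small local index; the association-rule-style confidence and PMI formulas are generic assumptions, not the paper's exact measures.

```python
# Minimal sketch of association-rule-style inference strength between a
# keyword and a sensitive topic, using document counts from a local corpus
# index (the passage's private-corpus case). Confidence/PMI formulas are the
# standard association-mining ones, assumed here rather than taken verbatim.
import math
from typing import Iterable, List, Set

def build_index(docs: Iterable[str]) -> List[Set[str]]:
    return [set(doc.lower().split()) for doc in docs]

def doc_count(index: List[Set[str]], *terms: str) -> int:
    return sum(1 for d in index if all(t in d for t in terms))

def inference_strength(index, keyword: str, topic: str):
    n = len(index)
    k = doc_count(index, keyword)
    t = doc_count(index, topic)
    kt = doc_count(index, keyword, topic)
    confidence = kt / k if k else 0.0                       # P(topic | keyword)
    pmi = math.log((kt * n) / (k * t)) if kt and k and t else float("-inf")
    return confidence, pmi

docs = [
    "patient started antiretroviral therapy last month",
    "antiretroviral therapy is the standard hiv treatment",
    "hiv testing and antiretroviral drugs were discussed",
    "the clinic offers flu shots and routine checkups",
]
index = build_index(docs)
print(inference_strength(index, "antiretroviral", "hiv"))
# A high confidence/PMI suggests 'antiretroviral' lets a reader infer 'hiv',
# so the keyword itself may also need redaction.
```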
] |
[
{
"docid": "cafa33bb8996d393063e2744f12045b1",
"text": "Latent Semantic Analysis is used as a technique for measuring the coherence of texts. By comparing the vectors for two adjoining segments of text in a highdimensional semantic space, the method provides a characterization of the degree of semantic relatedness between the segments. We illustrate the approach for predicting coherence through re-analyzing sets of texts from two studies that manipulated the coherence of texts and assessed readers' comprehension. The results indicate that the method is able to predict the effect of text coherence on comprehension and is more effective than simple term-term overlap measures. In this manner, LSA can be applied as an automated method that produces coherence predictions similar to propositional modeling. We describe additional studies investigating the application of LSA to analyzing discourse structure and examine the potential of LSA as a psychological model of coherence effects in text comprehension. Measuring Coherence 3 The Measurement of Textual Coherence with Latent Semantic Analysis. In order to comprehend a text, a reader must create a well connected representation of the information in it. This connected representation is based on linking related pieces of textual information that occur throughout the text. The linking of information is a process of determining and maintaining coherence. Because coherence is a central issue to text comprehension, a large number of studies have investigated the process readers use to maintain coherence and to model the readers' representation of the textual information as well as of their previous knowledge (e.g., Lorch & O'Brien, 1995) There are many aspects of a discourse that contribute to coherence, including, coreference, causal relationships, connectives, and signals. For example, Kintsch and van Dijk (Kintsch, 1988; Kintsch & van Dijk, 1978) have emphasized the effect of coreference in coherence through propositional modeling of texts. While coreference captures one aspect of coherence, it is highly correlated with other coherence factors such as causal relationships found in the text (Fletcher, Chrysler, van den Broek, Deaton, & Bloom, 1995; Trabasso, Secco & van den Broek, 1984). Although a propositional model of a text can predict readers' comprehension, a problem with the approach is that in-depth propositional analysis is time consuming and requires a considerable amount of training. Semi-automatic methods of propositional coding (e.g., Turner, 1987) still require a large amount of effort. This degree of effort limits the size of the text that can be analyzed. Thus, most texts analyzed and used in reading comprehension experiments have been small, typically from 50 to 500 words, and almost all are under 1000 words. Automated methods such as readability measures (e.g., Flesch, 1948; Klare, 1963) provide another characterization of the text, however, they do not correlate well with comprehension measures (Britton & Gulgoz, 1991; Kintsch & Vipond, 1979). Thus, while the coherence of a text can be measured, it can often involve considerable effort. In this study, we use Latent Semantic Analysis (LSA) to determine the coherence of texts. A more complete description of the method and approach to using LSA may be found in Deerwester, Dumais, Furnas, Landauer and Harshman, (1990), Landauer and Dumais, (1997), as well as in the preceding article by Landauer, Foltz and Laham (this issue). 
LSA provides a fully automatic method for comparing units of textual information to each other in order to determine their semantic relatedness. These units of text are compared to each other using a derived measure of their similarity of meaning. This measure is based on a Measuring Coherence 4 powerful mathematical analysis of direct and indirect relations among words and passages in a large training corpus. Semantic relatedness so measured, should correspond to a measure of coherence since it captures the extent to which two text units are discussing semantically related information. Unlike methods which rely on counting literal word overlap between units of text, LSA's comparisons are based on a derived semantic relatedness measure which reflects semantic similarity among synonyms, antonyms, hyponyms, compounds, and other words that tend to be used in similar contexts. In this way, it can reflect coherence due to automatic inferences made by readers as well as to literal surface coreference. In addition, since LSA is automatic, there are no constraints on the size of the text analyzed. This permits analyses of much larger texts to examine aspects of their discourse structure. In order for LSA to be considered an appropriate approach for modeling text coherence, we first establish how well LSA captures elements of coherence that are similar to modeling methods such as propositional models. A re-analysis of two studies that examined the role of coherence in readers' comprehension is described. This re-analysis of the texts produces automatic predictions of the coherence of texts which are then compared to measures of the readers' comprehension. We next describe the application of the method to investigating other features of the discourse structure of texts. Finally, we illustrate how the approach applies both as a tool for text researchers and as a theoretical model of text coherence. General approach for using LSA to measure coherence The primary method for using LSA to make coherence predictions is to compare some unit of text to an adjoining unit of text in order to determine the degree to which the two are semantically related. These units could be sentences, paragraphs or even individual words or whole books. This analysis can then be performed for all pairs of adjoining text units in order to characterize the overall coherence of the text. Coherence predictions have typically been performed at a propositional level, in which a set of propositions all contained within working memory are compared or connected to each other (e.g., Kintsch, 1988, In press). For LSA coherence analyses, using sentences as the basic unit of text appears to be an appropriate corresponding level that can be easily parsed by automated methods. Sentences serve as a good level in that they represent a small set of textual information (e.g., typically 3-7 propositions) and thus would be approximately consistent with the amount of information that is held in short term memory. Measuring Coherence 5 As discussed in the preceding article by Landauer, et al. (this issue), the power of computing semantic relatedness with LSA comes from analyzing a large number of text examples. Thus, for computing the coherence of a target text, it may first be necessary to have another set of texts that contain a large proportion of the terms used in the target text and that have occurrences in many contexts. One approach is to use a large number of encyclopedia articles on similar topics as the target text. 
A singular value decomposition (SVD) is then performed on the term by article matrix, thereby generating a high dimensional semantic space which contains most of the terms used in the target text. Individual terms, as well as larger text units such as sentences, can be represented as vectors in this space. Each text unit is represented as the weighted average of vectors of the terms it contains. Typically the weighting is by the log entropy transform of each term (see Landauer, et al., this issue). This weighting helps account for both the term's importance in the particular unit as well as the degree to which the term carries information in the domain of discourse in general. The semantic relatedness of two text units can then be compared by determining the cosine between the vectors for the two units. Thus, to find the coherence between the first and second sentence of a text, the cosine between the vectors for the two sentences would be determined. For instance, two sentences that use exactly the same terms with the same frequencies will have a cosine of 1, while two sentences that use no terms that are semantically related, will tend to have cosines near 0 or below. At intermediate levels, sentences containing terms of related meaning, even if none are the same terms or roots will have more moderate cosines. (It is even possible, although in practice very rare, that two sentences with no words of obvious similarity will have similar overall meanings as indicated by similar LSA vectors in the high dimensional semantic space.) Coherence and text comprehension This paper illustrates a complementary approach to propositional modeling for determining coherence, using LSA, and comparing the predicted coherence to measures of the readers' comprehension. For these analyses, the texts and comprehension measures are taken from two previous studies by Britton and Gulgoz (1988), and, McNamara, et al. (1996). In the first study, the text coherence was manipulated primarily by varying the amount of sentence to sentence repetition of particular important content words through analyzing propositional overlap. Simulating its results with LSA demonstrates the degree to which coherence is carried, or at least reflected, in the Measuring Coherence 6 continuity of lexical semantics, and shows that LSA correctly captures these effects. However, for these texts, a simpler literal word overlap measure, absent any explicit propositional or LSA analysis, also predicts comprehension very well. The second set of texts, those from McNamara et al. (1996), manipulates coherence in much subtler ways; often by substituting words and phrases of related meaning but containing different lexical items to provide the conceptual bridges between one sentence and the next. These materials provide a much more rigorous and interesting test of the LSA technique by requiring it to detect underlying meaning similarities in the absence of literal word repetition. The success of this simulation, and its superiority to d",
"title": ""
},
{
"docid": "11557714ac3bbd9fc9618a590722212e",
"text": "In Taobao, the largest e-commerce platform in China, billions of items are provided and typically displayed with their images.For better user experience and business effectiveness, Click Through Rate (CTR) prediction in online advertising system exploits abundant user historical behaviors to identify whether a user is interested in a candidate ad. Enhancing behavior representations with user behavior images will help understand user's visual preference and improve the accuracy of CTR prediction greatly. So we propose to model user preference jointly with user behavior ID features and behavior images. However, training with user behavior images brings tens to hundreds of images in one sample, giving rise to a great challenge in both communication and computation. To handle these challenges, we propose a novel and efficient distributed machine learning paradigm called Advanced Model Server (AMS). With the well-known Parameter Server (PS) framework, each server node handles a separate part of parameters and updates them independently. AMS goes beyond this and is designed to be capable of learning a unified image descriptor model shared by all server nodes which embeds large images into low dimensional high level features before transmitting images to worker nodes. AMS thus dramatically reduces the communication load and enables the arduous joint training process. Based on AMS, the methods of effectively combining the images and ID features are carefully studied, and then we propose a Deep Image CTR Model. Our approach is shown to achieve significant improvements in both online and offline evaluations, and has been deployed in Taobao display advertising system serving the main traffic.",
"title": ""
},
{
"docid": "bcdf411d631f822e15a0b78396dc55e7",
"text": "Exercise-induced ST-segment elevation was correlated with myocardial perfusion abnormalities and coronary artery obstruction in 35 patients. Ten patients (group 1) developed exercise ST elevation in leads without Q waves on the resting ECG. The site of ST elevation corresponded to both a reversible perfusion defect and a severely obstructed coronary artery. Associated ST-segment depression in other leads occurred in seven patients, but only one had a second perfusion defect at the site of ST depression. In three of the 10 patients, abnormal left ventricular wall motion at the site of exercise-induced ST elevation was demonstrated by ventriculography. Twenty-five patients (group 2) developed exercise ST elevation in leads with Q waves on the resting ECG. The site ofST elevation corresponded to severe coronary artery stenosis and a thallium perfusion defect that persisted on the 4-hour scan (constant in 12 patients, decreased in 13). Associated ST depression in other leads occurred in 11 patients and eight (73%) had a second perfusion defect at the site of ST depression. In all 25 patients with previous transmural infarction, abnormal left ventricular wall motion at the site of the Q waves was shown by ventriculography. In patients without previous myocardial infarction, the site of exercise-induced ST-segment elevation indicates the site of severe transient myocardial ischemia, and associated ST depression is usually reciprocal. In patients with Q waves on the resting ECG, exercise ST elevation way be due to peri-infarctional ischemia, abnormal ventricular wall motion or both. Exercise ST-segment depression may be due to a second area of myocardial ischemia rather than being reciprocal to ST elevation.",
"title": ""
},
{
"docid": "d31646394ff4e6aa66bbb3c61651592e",
"text": "The computer vision strategies used to recognize a fruit rely on four basic features which characterize the object: intensity, color, shape and texture. This paper proposes an efficient fusion of color and texture features for fruit recognition. The recognition is done by the minimum distance classifier based upon the statistical and co-occurrence features derived from the Wavelet transformed subbands. Experimental results on a database of about 2635 fruits from 15 different classes confirm the effectiveness of the proposed approach.",
"title": ""
},
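The abstract above combines color features with texture statistics computed on wavelet subbands and classifies with a minimum distance (nearest class mean) rule. Below is a minimal sketch of that pipeline; only subband means and standard deviations are used here (the paper's co-occurrence features would be concatenated the same way), and the Haar wavelet and feature choices are assumptions.

```python
# Minimal sketch of wavelet-based colour+texture features with a
# minimum-distance (nearest class mean) classifier, in the spirit of the
# abstract. Feature choices and the Haar wavelet are assumptions.
import numpy as np
import pywt

def features(rgb: np.ndarray, level: int = 2) -> np.ndarray:
    gray = rgb.mean(axis=2)
    coeffs = pywt.wavedec2(gray, "haar", level=level)
    subbands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    texture = [s for band in subbands for s in (band.mean(), band.std())]
    color = [rgb[..., c].mean() for c in range(3)] + [rgb[..., c].std() for c in range(3)]
    return np.array(texture + color)

class MinimumDistanceClassifier:
    def fit(self, X, y):
        y = np.asarray(y)
        self.classes_ = sorted(set(y))
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, X):
        return [min(self.classes_, key=lambda c: np.linalg.norm(x - self.means_[c]))
                for x in X]

# Usage sketch: X = np.stack([features(img) for img in images]); y = labels
# clf = MinimumDistanceClassifier().fit(X_train, y_train); clf.predict(X_test)
```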
{
"docid": "032db9c2dba42ca376e87b28ecb812fa",
"text": "This paper tries to put various ways in which Natural Language Processing (NLP) and Software Engineering (SE) can be seen as inter-disciplinary research areas. We survey the current literature, with the aim of assessing use of Software Engineering and Natural Language Processing tools in the researches undertaken. An assessment of how various phases of SDLC can employ NLP techniques is presented. The paper also provides the justification of the use of text for automating or combining both these areas. A short research direction while undertaking multidisciplinary research is also provided.",
"title": ""
},
{
"docid": "a208464e315fd86b626bafa14a27b7f6",
"text": "Adaptive autonomy enables agents operating in an environment to change, or adapt, their autonomy levels by relying on tasks executed by others. Moreover, tasks could be delegated between agents, and as a result decision-making concerning them could also be delegated. In this work, adaptive autonomy is modeled through the willingness of agents to cooperate in order to complete abstract tasks, the latter with varying levels of dependencies between them. Furthermore, it is sustained that adaptive autonomy should be considered at an agent’s architectural level. Thus the aim of this paper is two-fold. Firstly, the initial concept of an agent architecture is proposed and discussed from an agent interaction perspective. Secondly, the relations between static values of willingness to help, dependencies between tasks and overall usefulness of the agents’ population are analysed. The results show that a unselfish population will complete more tasks than a selfish one for low dependency degrees. However, as the latter increases more tasks are dropped, and consequently the utility of the population degrades. Utility is measured by the number of tasks that the population completes during run-time. Finally, it is shown that agents are able to finish more tasks by dynamically changing their willingness to cooperate.",
"title": ""
},
{
"docid": "6627a1d89adf1389959983d04c8c26dd",
"text": "Recent models of procrastination due to self-control problems assume that a procrastinator considers just one option and is unaware of her self-control problems. We develop a model where a person chooses from a menu of options and is partially aware of her self-control problems. This menu model replicates earlier results and generates new ones. A person might forego completing an attractive option because she plans to complete a more attractive but never-to-be-completed option. Hence, providing a non-procrastinator additional options can induce procrastination, and a person may procrastinate worse pursuing important goals than unimportant ones.",
"title": ""
},
{
"docid": "7d78e87112f3a29f228bcf5a5f64b5d9",
"text": "Register transfer level (RTL) synthesis model which simplified the design of clocked circuits allowed design automation boost and VLSI progress for more than a decade. Shrinking technology and progressive increase in clock frequency are bringing clock to its physical limits. Asynchronous circuits, which are believed to replace globally clocked designs in the future, remain out of the competition due to the design complexity of some automated approaches and poor results of other techniques. Successful asynchronous designs are known but they are primarily custom. This work sketches an automated approach for automatically re-implementing conventional RTL designs as fine-grain pipelined asynchronous quasi-delay-insensitive (QDI) circuits and presents a framework for automated synthesis of such implementations from high-level behavior specifications. Experimental results are presented using our new dynamic asynchronous library.",
"title": ""
},
{
"docid": "a3f6781adeca64763156ac41dff32c82",
"text": "A multilayer bandpass filter (BPF) with harmonic suppression using meander line inductor and interdigital capacitor (MLI-IDC) resonant structure is presented in this letter. The BPF is fabricated with three unit cells and its measured passband center frequency is 2.56 GHz with a bandwidth of 0.38 GHz and an insertion loss of 1.5 dB. The harmonics are suppressed up to 11 GHz. A diplexer using the proposed BPF is also presented. The proposed diplexer consists of 4.32 mm sized unit cells to couple 2.5 GHz signal into port 2, and 3.65 mm sized unit cells to couple 3.7 GHz signal into port 3. The notch circuit is placed on the output lines of the diplexer to improve isolation. The proposed diplexer has demonstrated insertion loss of 1.35 dB with 0.45 GHz bandwidth in port 2 and 1.73 dB insertion loss with 0.44 GHz bandwidth in port 3. The isolation is better than 18 dB in the first passband with 38 dB maximum isolation at 2.5 GHz. The isolation in the second passband is better than 26 dB with 45 dB maximum isolation at 3.7 GHz.",
"title": ""
},
{
"docid": "72a6a7fe366def9f97ece6d1ddc46a2e",
"text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as subjective method to build the training model and validation, this method is based on not only the quality of received video but also the original video but all subjective methods are too expensive, don't take place in real time and takes much time for example our method takes three hours to determine the average MOS (Mean Opinion Score). That's why we proposed novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous researches in this field help us to use four algorithms: Decision Tree (DT), Neural Network, K nearest neighbors KNN and Random Forest RF thanks to their efficiency. We have used two metrics recommended by VQEG group to assess the best algorithm: Pearson correlation coefficient r and Root-Mean-Square-Error RMSE. The last part of the paper describes environment based on: Weka to analyze ML algorithms, MSU tool to calculate SSIM and VQM and Mininet for the SDN simulation.",
"title": ""
},
{
"docid": "e83ae69dea6d34e169fc34c64d33ee93",
"text": "Topic models have the potential to improve search and browsing by extracting useful semantic themes from web pages and other text documents. When learned topics are coherent and interpretable, they can be valuable for faceted browsing, results set diversity analysis, and document retrieval. However, when dealing with small collections or noisy text (e.g. web search result snippets or blog posts), learned topics can be less coherent, less interpretable, and less useful. To overcome this, we propose two methods to regularize the learning of topic models. Our regularizers work by creating a structured prior over words that reflect broad patterns in the external data. Using thirteen datasets we show that both regularizers improve topic coherence and interpretability while learning a faithful representation of the collection of interest. Overall, this work makes topic models more useful across a broader range of text data.",
"title": ""
},
{
"docid": "9c5d3f89d5207b42d7e2c8803b29994c",
"text": "With the advent of data mining, machine learning has come of age and is now a critical technology in many businesses. However, machine learning evolved in a different research context to that in which it now finds itself employed. A particularly important problem in the data mining world is working effectively with large data sets. However, most machine learning research has been conducted in the context of learning from very small data sets. To date most approaches to scaling up machine learning to large data sets have attempted to modify existing algorithms to deal with large data sets in a more computationally efficient and effective manner. But is this necessarily the best method? This paper explores the possibility of designing algorithms specifically for large data sets. Specifically, the paper looks at how increasing data set size affects bias and variance error decompositions for classification algorithms. Preliminary results of experiments to determine these effects are presented, showing that, as hypothesised variance can be expected to decrease as training set size increases. No clear effect of training set size on bias was observed. These results have profound implications for data mining from large data sets, indicating that developing effective learning algorithms for large data sets is not simply a matter of finding computationally efficient variants of existing learning algorithms.",
"title": ""
},
{
"docid": "f1699e1e87ef2e95357c834384f77931",
"text": "Catastrophic forgetting is a problem of neural networks that loses the information of the first task after training the second task. Here, we propose a method, i.e. incremental moment matching (IMM), to resolve this problem. IMM incrementally matches the moment of the posterior distribution of the neural network which is trained on the first and the second task, respectively. To make the search space of posterior parameter smooth, the IMM procedure is complemented by various transfer learning techniques including weight transfer, L2-norm of the old and the new parameter, and a variant of dropout with the old parameter. We analyze our approach on a variety of datasets including the MNIST, CIFAR-10, Caltech-UCSDBirds, and Lifelog datasets. The experimental results show that IMM achieves state-of-the-art performance by balancing the information between an old and a new network.",
"title": ""
},
{
"docid": "dc693ab2e8991630f62caf0f62eb0dc6",
"text": "The paper presents the power amplifier design. The introduction of a practical harmonic balance capability at the device measurement stage brings a number of advantages and challenges. Breaking down this traditional barrier means that the test-bench engineer needs to become more aware of the design process and requirements. The inverse is also true, as the measurement specifications for a harmonically tuned amplifier are a bit more complex than just the measurement of load-pull contours. We hope that the new level of integration between both will also result in better exchanges between both sides and go beyond showing either very accurate, highly tuned device models, or using the device model as the traditional scapegoat for unsuccessful PA designs. A nonlinear model and its quality can now be diagnosed through direct comparison of simulated and measured wave forms. The quality of a PA design can be verified by placing the device within the measurement system, practical harmonic balance emulator into the same impedance state in which it will operate in the actual realized design.",
"title": ""
},
{
"docid": "bff8ad5f962f501b299a0f69a0a820fd",
"text": "Many methods for object recognition, segmentation, etc., rely on tessellation of an image into “superpixels”. A superpixel is an image patch which is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm, however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired. We formulate the superpixel partitioning problem in an energy minimization framework, and optimize with graph cuts. Our energy function explicitly encourages regular superpixels. We explore variations of the basic energy, which allow a trade-off between a less regular tessellation but more accurate boundaries or better efficiency. Our advantage over previous work is computational efficiency, principled optimization, and applicability to 3D “supervoxel” segmentation. We achieve high boundary recall on 2D images and spatial coherence on video. We also show that compact superpixels improve accuracy on a simple application of salient object segmentation.",
"title": ""
},
{
"docid": "785b42fe7765d415dcfef09a6142aa6f",
"text": "In this paper a first approach for digital media forensics is presented to determine the used microphones and the environments of recorded digital audio samples by using known audio steganalysis features. Our first evaluation is based on a limited exemplary test set of 10 different audio reference signals recorded as mono audio data by four microphones in 10 different rooms with 44.1 kHz sampling rate and 16 bit quantisation. Note that, of course, a generalisation of the results cannot be achieved. Motivated by the syntactical and semantical analysis of information and in particular by known audio steganalysis approaches, a first set of specific features are selected for classification to evaluate, whether this first feature set can support correct classifications. The idea was mainly driven by the existing steganalysis features and the question of applicability within a first and limited test set. In the tests presented in this paper, an inter-device analysis with different device characteristics is performed while intra-device evaluations (identical microphone models of the same manufacturer) are not considered. For classification the data mining tool WEKA with K-means as a clustering and Naive Bayes as a classification technique are applied with the goal to evaluate their classification in regard to the classification accuracy on known audio steganalysis features. Our results show, that for our test set, the used classification techniques and selected steganalysis features, microphones can be better classified than environments. These first tests show promising results but of course are based on a limited test and training set as well a specific test set generation. Therefore additional and enhanced features with different test set generation strategies are necessary to generalise the findings.",
"title": ""
},
{
"docid": "3817e2af004e089915bcdb030622606f",
"text": "The paper describes a practical model for routing and tracking with mobile vehicle in a large area outdoor environment based on the Global Positioning System (GPS) and Global System for Mobile Communication (GSM). The supporting devices, GPS module-eMD3620 of AT&S company and GSM modem-GM862 of Telit company, are controlled by a 32bits microcontroller LM3S2965 implemented a new version ARM Cortex M3 core. The system is equipped the Compass sensor-YAS529 of Yamaha company and Accelerator sensor- KXSC72050 of Koinix company to determine moving direction of a vehicle. The device will collect positions of the vehicle via GPS receiver and then sends the data of positions to supervised center by the SMS (Short Message Services) or GPRS (General Package Radio Service) service. The supervised center is composed of a development kit that supports GSM techniques-WMP100 of the Wavecom company. After processing data, the position of the mobile vehicle will be displayed on Google Map.",
"title": ""
},
{
"docid": "f12cbeb6a202ea8911a67abe3ffa6ccc",
"text": "In order to enhance the study of the kinematics of any robot arm, parameter design is directed according to certain necessities for the robot, and its forward and inverse kinematics are discussed. The DH convention Method is used to form the kinematical equation of the resultant structure. In addition, the Robotics equations are modeled in MATLAB to create a 3D visual simulation of the robot arm to show the result of the trajectory planning algorithms. The simulation has detected the movement of each joint of the robot arm, and tested the parameters, thus accomplishing the predetermined goal which is drawing a sine wave on a writing board.",
"title": ""
},
{
"docid": "1ad1690ff359462acb320edb42ac821e",
"text": "Green marketing subsumes greening products as well as greening firms. In addition to manipulating the 4Ps (product, price, place and promotion) of the traditional marketing mix, it requires a careful understanding of public policy processes. This paper focuses primarily on promoting products by employing claims about their environmental attributes or about firms that manufacture and/or sell them. Secondarily, it focuses on product and pricing issues. Drawing on multiple literatures, it examines issues such as what needs to be greened (products, systems or processes), why consumers purchase/do not purchase green products and how firms should think about information disclosure strategies on environmental claims. Copyright 2002 John Wiley & Sons, Ltd and ERP Environment.",
"title": ""
}
] |
scidocsrr
|
083c185e2cb0c777fd25956e47b97b1c
|
Online decision making in crowdsourcing markets: theoretical challenges
|
[
{
"docid": "526e6384b38b9254f0e755a13b3ab193",
"text": "In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of $n$ trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions.\n In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the \"Lipschitz MAB problem\". We present a complete solution for the multi-armed problem in this setting. That is, for every metric space (L,X) we define an isometry invariant Max Min COV(X) which bounds from below the performance of Lipschitz MAB algorithms for $X$, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.",
"title": ""
}
] |
[
{
"docid": "1efe9405027ad67ccba8b18c3a28c6f0",
"text": "To encourage strong passwords, system administrators employ password-composition policies, such as a traditional policy requiring that passwords have at least 8 characters from 4 character classes and pass a dictionary check. Recent research has suggested, however, that policies requiring longer passwords with fewer additional requirements can be more usable and in some cases more secure than this traditional policy. To explore long passwords in more detail, we conducted an online experiment with 8,143 participants. Using a cracking algorithm modified for longer passwords, we evaluate eight policies across a variety of metrics for strength and usability. Among the longer policies, we discover new evidence for a security/usability tradeoff, with none being strictly better than another on both dimensions. However, several policies are both more usable and more secure that the traditional policy we tested. Our analyses additionally reveal common patterns and strings found in cracked passwords. We discuss how system administrators can use these results to improve password-composition policies.",
"title": ""
},
{
"docid": "13a9329bdd46ba243003090bf219a20a",
"text": "Visual art represents a powerful resource for mental and physical well-being. However, little is known about the underlying effects at a neural level. A critical question is whether visual art production and cognitive art evaluation may have different effects on the functional interplay of the brain's default mode network (DMN). We used fMRI to investigate the DMN of a non-clinical sample of 28 post-retirement adults (63.71 years ±3.52 SD) before (T0) and after (T1) weekly participation in two different 10-week-long art interventions. Participants were randomly assigned to groups stratified by gender and age. In the visual art production group 14 participants actively produced art in an art class. In the cognitive art evaluation group 14 participants cognitively evaluated artwork at a museum. The DMN of both groups was identified by using a seed voxel correlation analysis (SCA) in the posterior cingulated cortex (PCC/preCUN). An analysis of covariance (ANCOVA) was employed to relate fMRI data to psychological resilience which was measured with the brief German counterpart of the Resilience Scale (RS-11). We observed that the visual art production group showed greater spatial improvement in functional connectivity of PCC/preCUN to the frontal and parietal cortices from T0 to T1 than the cognitive art evaluation group. Moreover, the functional connectivity in the visual art production group was related to psychological resilience (i.e., stress resistance) at T1. Our findings are the first to demonstrate the neural effects of visual art production on psychological resilience in adulthood.",
"title": ""
},
{
"docid": "589c347dd860c238e1ee60bf81c08b1f",
"text": "OBJECTIVE\nEven though much progress has been made in defining primitive hematologic cell phenotypes by using flow cytometry and clonogenic methods, the direct method for study of marrow repopulating cells still remains to be elusive. Long Term Culture-Initiating Cells (LTC-IC) are known as the most primitive human hematopoietic cells detectable by in vitro functional assays.\n\n\nMETHODS\nIn this study, LTC-IC with limiting dilution assay was used to evaluate repopulating potential of cord blood stem cells.\n\n\nRESULTS\nCD34 selections from cord blood were completed succesfully with magnetic beads (73,64%±9,12). The average incidence of week 5 LTC-IC was 1: 1966 CD34+ cells (range 1261-2906).\n\n\nCONCLUSION\nWe found that number of LTC-IC obtained from CD34+ cord blood cells were relatively low in numbers when compared to previously reported bone marrow CD34+ cells. This may be due to the lack of some transcription and growth factors along with some cytokines and chemokines released by accessory cells which are necessary for proliferation of cord blood progenitor/stem cells and it presents an area of interest for further studies.",
"title": ""
},
{
"docid": "5d80c293595fc4fc9fd52218a3a639fa",
"text": "Recent works on image retrieval have proposed to index images by compact representations encoding powerful local descriptors, such as the closely related VLAD and Fisher vector. By combining such a representation with a suitable coding technique, it is possible to encode an image in a few dozen bytes while achieving excellent retrieval results. This paper revisits some assumptions proposed in this context regarding the handling of \"visual burstiness\", and shows that ad-hoc choices are implicitly done which are not desirable. Focusing on VLAD without loss of generality, we propose to modify several steps of the original design. Albeit simple, these modifications significantly improve VLAD and make it compare favorably against the state of the art.",
"title": ""
},
{
"docid": "a47d9d5ddcd605755eb60d5499ad7f7a",
"text": "This paper presents a 14MHz Class-E power amplifier to be used for wireless power transmission. The Class-E power amplifier was built to consider the VSWR and the frequency bandwidth. Tw o kinds of circuits were designed: the high and low quality factor amplifiers. The low quality factor amplifier is confirmed to have larger bandwidth than the high quality factor amplifier. It has also possessed less sensitive characteristics. Therefore, the low quality factor amplifier circuit was adopted and tested. The effect of gate driving input source is studied. The efficiency of the Class-E amplifier reaches 85.5% at 63W.",
"title": ""
},
{
"docid": "3c1c89aeeae6bde84e338c15c44b20ce",
"text": "Using statistical machine learning for making security decisions introduces new vulnerabilities in large scale systems. This paper shows how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless—even if the adversary’s access is limited to only 1% of the training messages. We further demonstrate a new class of focused attacks that successfully prevent victims from receiving specific email messages. Finally, we introduce two new types of defenses against these attacks.",
"title": ""
},
{
"docid": "e747b34292b95cd490b11ace7e7fdfec",
"text": "The present study used simulator sickness questionnaire data from nine different studies to validate and explore the work of the most widely used simulator sickness index. The ability to predict participant dropouts as a result of simulator sickness symptoms was also evaluated. Overall, participants experiencing nausea and nausea-related symptoms were the most likely to fail to complete simulations. Further, simulation specific factors that increase the discrepancy between visual and vestibular perceptions are also related to higher participant study dropout rates. As a result, it is suggested that simulations minimize turns, curves, stops, et cetera, if possible, in order to minimize participant simulation sickness symptoms. The present study highlights several factors to attend to in order to minimize elevated participant simulation sickness.",
"title": ""
},
{
"docid": "f5ce928373042e01a48496b104da28f6",
"text": "This paper explores the most common methods of data collection used in qualitative research: interviews and focus groups. The paper examines each method in detail, focusing on how they work in practice, when their use is appropriate and what they can offer dentistry. Examples of empirical studies that have used interviews or focus groups are also provided.",
"title": ""
},
{
"docid": "a324180129b78d853c035c2477f54a30",
"text": "A book aiming to build a bridge between two fields that share the subject of research but do not share the same views necessarily puts itself in a difficult position: The authors have either to strike a fair balance at peril of dissatisfying both sides or nail their colors to the mast and cater mainly to one of two communities. For semantic processing of natural language with either NLP methods or Semantic Web approaches, the authors clearly favor the latter and propose a strictly ontology-driven interpretation of natural language. The main contribution of the book, driving semantic processing from the ground up by a formal domain-specific ontology, is elaborated in ten well-structured chapters spanning 143 pages of content.",
"title": ""
},
{
"docid": "c19658ecdae085902d936f615092fbe5",
"text": "Predicting student attrition is an intriguing yet challenging problem for any academic institution. Classimbalanced data is a common in the field of student retention, mainly because a lot of students register but fewer students drop out. Classification techniques for imbalanced dataset can yield deceivingly high prediction accuracy where the overall predictive accuracy is usually driven by the majority class at the expense of having very poor performance on the crucial minority class. In this study, we compared different data balancing techniques to improve the predictive accuracy in minority class while maintaining satisfactory overall classification performance. Specifically, we tested three balancing techniques—oversampling, under-sampling and synthetic minority over-sampling (SMOTE)—along with four popular classification methods—logistic regression, decision trees, neuron networks and support vector machines. We used a large and feature rich institutional student data (between the years 2005 and 2011) to assess the efficacy of both balancing techniques as well as prediction methods. The results indicated that the support vector machine combined with SMOTE data-balancing technique achieved the best classification performance with a 90.24% overall accuracy on the 10-fold holdout sample. All three data-balancing techniques improved the prediction accuracy for the minority class. Applying sensitivity analyses on developed models, we also identified the most important variables for accurate prediction of student attrition. Application of these models has the potential to accurately predict at-risk students and help reduce student dropout rates. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "df4923225affcd0ad02db3719409d5f2",
"text": "Emotions have a high impact in productivity, task quality, creativity, group rapport and job satisfaction. In this work we use lexical sentiment analysis to study emotions expressed in commit comments of different open source projects and analyze their relationship with different factors such as used programming language, time and day of the week in which the commit was made, team distribution and project approval. Our results show that projects developed in Java tend to have more negative commit comments, and that projects that have more distributed teams tend to have a higher positive polarity in their emotional content. Additionally, we found that commit comments written on Mondays tend to a more negative emotion. While our results need to be confirmed by a more representative sample they are an initial step into the study of emotions and related factors in open source projects.",
"title": ""
},
{
"docid": "49e0aa9d6fa579b4217bdd7f61d1d0eb",
"text": "Big data analytics is firmly recognized as a strategic priority for modern enterprises. At the heart of big data analytics lies the data curation process, consists of tasks that transform raw data (unstructured, semi-structured and structured data sources) into curated data, i.e. contextualized data and knowledge that is maintained and made available for use by end-users and applications. To achieve this, the data curation process may involve techniques and algorithms for extracting, classifying, linking, merging, enriching, sampling, and the summarization of data and knowledge. To facilitate the data curation process and enhance the productivity of researchers and developers, we identify and implement a set of basic data curation APIs and make them available as services to researchers and developers to assist them in transforming their raw data into curated data. The curation APIs enable developers to easily add features such as extracting keyword, part of speech, and named entities such as Persons, Locations, Organizations, Companies, Products, Diseases, Drugs, etc.; providing synonyms and stems for extracted information items leveraging lexical knowledge bases for the English language such as WordNet; linking extracted entities to external knowledge bases such as Google Knowledge Graph and Wikidata; discovering similarity among the extracted information items, such as calculating similarity between string and numbers; classifying, sorting and categorizing data into various types, forms or any other distinct class; and indexing structured and unstructured data into their data applications. These services can be accessed via a REST API, and the data is returned as a JSON file that can be integrated into data applications. The curation APIs are available as an open source project on GitHub.",
"title": ""
},
{
"docid": "082517b83d9a9cdce3caef62a579bf2e",
"text": "To enable autonomous driving, a semantic knowledge of the environment is unavoidable. We therefore introduce a multiclass classifier to determine the classes of an object relying solely on radar data. This is a challenging problem as objects of the same category have often a diverse appearance in radar data. As classification methods a random forest classifier and a deep convolutional neural network are evaluated. To get good results despite the limited training data available, we introduce a hybrid approach using an ensemble consisting of the two classifiers. Further we show that the accuracy can be improved significantly by allowing a lower detection rate.",
"title": ""
},
{
"docid": "0ac679740e0e3911af04be9464f76a7d",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "06c3f32f07418575c700e2f0925f4398",
"text": "The spacing of a fixed amount of study time across multiple sessions usually increases subsequent test performance*a finding known as the spacing effect. In the spacing experiment reported here, subjects completed multiple learning trials, and each included a study phase and a test. Once a subject achieved a perfect test, the remaining learning trials within that session comprised what is known as overlearning. The number of these overlearning trials was reduced when learning trials were spaced across multiple sessions rather than massed in a single session. In addition, the degree to which spacing reduced overlearning predicted the size of the spacing effect, which is consistent with the possibility that spacing increases subsequent recall by reducing the occurrence of overlearning. By this account, overlearning is an inefficient use of study time, and the efficacy of spacing depends at least partly on the degree to which it reduces the occurrence of overlearning.",
"title": ""
},
{
"docid": "8f53f02a1bae81e5c06828b6147d2934",
"text": "E-Government, as a vehicle to deliver enhanced services to citizens, is now extending its reach to the elderly population through provision of targeted services. In doing so, the ideals of ubiquitous e-Government may be better achieved. However, there is a lack of studies on e-Government adoption among senior citizens, especially considering that this age group is growing in size and may be averse to new IT applications. This study aims to address this gap by investigating an innovative e- Government service specifically tailored for senior citizens, called CPF e-Withdrawal. Technology adoption model (TAM) is employed as the theoretical foundation, in which perceived usefulness is recognized as the most significant predictor of adoption intention. This study attempts to identify the antecedents of perceived usefulness by drawing from the innovation diffusion literature as well as age-related studies. Our findings agree with TAM and indicate that internet safety perception and perceived ease of use are significant predictors of perceived usefulness.",
"title": ""
},
{
"docid": "c1389acb62cca5cb3cfdec34bd647835",
"text": "A Chinese resume information extraction system (CRIES) based on semi-structured text is designed and implemented to obtain formatted information by extracting text content of every field from resumes in different formats and update information automatically based on the web. Firstly, ideas to classify resumes, some constraints obtained by analyzing resume features and overall extraction strategy is introduced. Then two extraction algorithms for parsing resumes in different text formats are given. Consequently, the system was implemented by java programming. Finally, use the system to resolve the resume samples, and the statistical analysis and system optimization analysis are carried out according to the accuracy rate and recall rate of the extracted results.",
"title": ""
},
{
"docid": "60f2baba7922543e453a3956eb503c05",
"text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.",
"title": ""
},
{
"docid": "09f812cae6c8952d27ef86168906ece8",
"text": "Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study, instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.<<ETX>>",
"title": ""
}
] |
scidocsrr
|
166e495ba77bcb6ee0a134a4b87f2807
|
Real-Time Human Motion Capture with Multiple Depth Cameras
|
[
{
"docid": "bc3658f75aa9af27a16ded8def1ad522",
"text": "Tracking human pose in real-time is a difficult problem with many interesting applications. Existing solutions suffer from a variety of problems, especially when confronted with unusual human poses. In this paper, we derive an algorithm for tracking human pose in real-time from depth sequences based on MAP inference in a probabilistic temporal model. The key idea is to extend the iterative closest points (ICP) objective by modeling the constraint that the observed subject cannot enter free space, the area of space in front of the true range measurements. Our primary contribution is an extension to the articulated ICP algorithm that can efficiently enforce this constraint. Our experiments show that including this term improves tracking accuracy significantly. The resulting filter runs at 125 frames per second using a single desktop CPU core. We provide extensive experimental results on challenging real-world data, which show that the algorithm outperforms the previous state-of-the-art trackers both in computational efficiency and accuracy.",
"title": ""
}
] |
[
{
"docid": "31b5deab1e434962f0bf974834134d50",
"text": "The aim of this paper is to layout deep investment techniques in financial markets using deep learning models. Financial prediction problems usually involve huge variety of data-sets with complex data interactions which makes it difficult to design an economic model. Applying deep learning models to such problems can exploit potentially non-linear patterns in data. In this paper author introduces deep learning hierarchical decision models for prediction analysis and better decision making for financial domain problem set such as pricing securities, risk factor analysis and portfolio selection. The Section 3 includes architecture as well as detail on training a financial domain deep learning neural network. It further lays out different models such asLSTM, auto-encoding, smart indexing, credit risk analysis model for solving the complex data interactions. The experiments along with their results show how these models can be useful in deep investments for financial domain problems.",
"title": ""
},
{
"docid": "2181397b2f808737f191aa999022502b",
"text": "In recent years, the As-Rigid-As-Possible (ARAP) shape deformation and shape interpolation techniques gained popularity, and the ARAP energy was successfully used in other applications as well. We improve the ARAP animation technique in two aspects. First, we introduce a new ARAP-type energy, named SR-ARAP, which has a consistent discretization for surfaces (triangle meshes). The quality of our new surface deformation scheme competes with the quality of the volumetric ARAP deformation (for tetrahedral meshes). Second, we propose a new ARAP shape interpolation method that is superior to prior art also based on the ARAP energy. This method is compatible with our new SR-ARAP energy, as well as with the ARAP volume energy.",
"title": ""
},
{
"docid": "218ddb719c00ea390d08b2d128481333",
"text": "Teeth move through alveolar bone, whether through the normal process of tooth eruption or by strains generated by orthodontic appliances. Both eruption and orthodontics accomplish this feat through similar fundamental biological processes, osteoclastogenesis and osteogenesis, but there are differences that make their mechanisms unique. A better appreciation of the molecular and cellular events that regulate osteoclastogenesis and osteogenesis in eruption and orthodontics is not only central to our understanding of how these processes occur, but also is needed for ultimate development of the means to control them. Possible future studies in these areas are also discussed, with particular emphasis on translation of fundamental knowledge to improve dental treatments.",
"title": ""
},
{
"docid": "f26f254827efa3fe29301ef31eb8669f",
"text": "Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a \"visual Turing test\": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (\"just-in-time truthing\"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers-the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.",
"title": ""
},
{
"docid": "6ac231de51b69685fcb45d4ef2b32051",
"text": "This paper deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80-100-mm pipelines in an indoor pipeline environment. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to grip the pipe walls. Unique features of this robot are the caterpillar wheels, the analysis of the four-bar mechanism supporting the treads, a closed-form kinematic approach, and an intuitive user interface. In addition, a new motion planning approach is proposed, which uses springs to interconnect two robot modules and allows the modules to cooperatively navigate through difficult segments of the pipes. Furthermore, an analysis method of selecting optimal compliance to assure functionality and cooperation is suggested. Simulation and experimental results are used throughout the paper to highlight algorithms and approaches.",
"title": ""
},
{
"docid": "d848a684aeddd5447f17282fdd2efaf0",
"text": "..........................................................................................................iii ACKNOWLEDGMENTS.........................................................................................iv TABLE OF CONTENTS .........................................................................................vi LIST OF TABLES................................................................................................viii LIST OF FIGURES ................................................................................................ix",
"title": ""
},
{
"docid": "72f5a5112bb8d2bd57ae11bf9765787f",
"text": "Semantic segmentation requires large amounts of pixelwise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models’ ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples leads to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. Our code and videos can be found at https://nv-adlr.github. io/publication/2018-Segmentation.",
"title": ""
},
{
"docid": "cc61cf5de5445258a1dbb9a052821add",
"text": "In healthcare systems, there is huge medical data collected from many medical tests which conducted in many domains. Much research has been done to generate knowledge from medical data by using data mining techniques. However, there still needs to extract hidden information in the medical data, which can help in detecting diseases in the early stage or even before happening. In this study, we apply three data mining classifiers; Decision Tree, Rule Induction, and Naïve Bayes, on a test blood dataset which has been collected from Europe Gaza Hospital, Gaza Strip. The classifiers utilize the CBC characteristics to predict information about possible blood diseases in early stage, which may enhance the curing ability. Three experiments are conducted on the test blood dataset, which contains three types of blood diseases; Hematology Adult, Hematology Children and Tumor. The results show that Naïve Bayes classifier has the ability to predict the Tumor of blood disease better than the other two classifiers with accuracy of 56%, Rule induction classifier gives better result in predicting Hematology (Adult, Children) with accuracy of (57%–67%) respectively, while Decision Tree has the Lowest accuracy rate for detecting the three types of diseases in our dataset.",
"title": ""
},
{
"docid": "dd51cc2138760f1dcdce6e150cabda19",
"text": "Breast cancer is the most common cancer in women worldwide. The most common screening technology is mammography. To reduce the cost and workload of radiologists, we propose a computer aided detection approach for classifying and localizing calcifications and masses in mammogram images. To improve on conventional approaches, we apply deep convolutional neural networks (CNN) for automatic feature learning and classifier building. In computer-aided mammography, deep CNN classifiers cannot be trained directly on full mammogram images because of the loss of image details from resizing at input layers. Instead, our classifiers are trained on labelled image patches and then adapted to work on full mammogram images for localizing the abnormalities. State-of-the-art deep convolutional neural networks are compared on their performance of classifying the abnormalities. Experimental results indicate that VGGNet receives the best overall accuracy at 92.53% in classifications. For localizing abnormalities, ResNet is selected for computing class activation maps because it is ready to be deployed without structural change or further training. Our approach demonstrates that deep convolutional neural network classifiers have remarkable localization capabilities despite no supervision on the location of abnormalities is provided.",
"title": ""
},
{
"docid": "19256f0de34e0a0b65c41754230643a0",
"text": "As interest in cryptocurrency has increased, problems have arisen with Proof-of-Work (PoW) and Proof-of-Stake (PoS) methods, the most representative methods of acquiring cryptocurrency in a blockchain. The PoW method is uneconomical and the PoS method can be easily monopolized by a few people. To cope with this issue, this paper introduces a Proof-of-Probability (PoP) method. The PoP is a method where each node sorts the encrypted actual hash as well as a number of fake hash, and then the first node to decrypt actual hash creates block. In addition, a wait time is used when decrypting one hash and then decrypting the next hash for restricting the excessive computing power competition. In addition, the centralization by validaters with many stakes can be avoided in the proposed PoP method.",
"title": ""
},
{
"docid": "63cfadd9a71aaa1cbe1ead79f943f83c",
"text": "Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.",
"title": ""
},
{
"docid": "8e324cf4900431593d9ebc73e7809b23",
"text": "Even though there is a plethora of studies investigating the challenges of adopting ebanking services, a search through the literature indicates that prior studies have investigated either user adoption challenges or the bank implementation challenges. This study integrated both perspectives to provide a broader conceptual framework for investigating challenges banks face in marketing e-banking services in developing country such as Ghana. The results from the mixed method study indicated that institutional–based challenges as well as userbased challenges affect the marketing of e-banking products in Ghana. The strategic implications of the findings for marketing ebanking services are discussed to guide managers to implement e-banking services in Ghana.",
"title": ""
},
{
"docid": "5c29ee234c8f278879fd61b8fa256e7a",
"text": "In this work, we propose a new training method for finding minimum weight norm solutions in over-parameterized neural networks (NNs). This method seeks to improve training speed and generalization performance by framing NN training as a constrained optimization problem wherein the sum of the norm of the weights in each layer of the network is minimized, under the constraint of exactly fitting training data. It draws inspiration from support vector machines (SVMs), which are able to generalize well, despite often having an infinite number of free parameters in their primal form, and from recent theoretical generalization bounds on NNs which suggest that lower norm solutions generalize better. To solve this constrained optimization problem, our method employs Lagrange multipliers that act as integrators of error over training and identify ‘support vector’-like examples. The method can be implemented as a wrapper around gradient based methods and uses standard back-propagation of gradients from the NN for both regression and classification versions of the algorithm. We provide theoretical justifications for the effectiveness of this algorithm in comparison to early stopping and L2regularization using simple, analytically tractable settings. In particular, we show faster convergence to the max-margin hyperplane in a shallow network (compared to vanilla gradient descent); faster convergence to the minimum-norm solution in a linear chain (compared to L2-regularization); and initialization-independent generalization performance in a deep linear network. Finally, using the MNIST dataset, we demonstrate that this algorithm can boost test accuracy and identify difficult examples in real-world datasets.",
"title": ""
},
{
"docid": "f4490447bf8a43de95d61e1626d365ae",
"text": "The connective tissue of the skin is composed mostly of collagen and elastin. Collagen makes up 70-80% of the dry weight of the skin and gives the dermis its mechanical and structural integrity. Elastin is a minor component of the dermis, but it has an important function in providing the elasticity of the skin. During aging, the synthesis of collagen gradually declines, and the skin thus becomes thinner in protected skin, especially after the seventh decade. Several factors contribute to the aging of the skin. In several hereditary disorders collagen or elastin are deficient, leading to accelerated aging. In cutis laxa, for example, elastin fibers are deficient or completely lacking, leading to sagging of the skin. Solar irradiation causes skin to look prematurely aged. Especially ultraviolet radiation induces an accumulation of abnormal elastotic material. These changes are usually observed after 60 years of age, but excessive exposure to the sun may cause severe photoaging as early as the second decade of life. The different biochemical and mechanical parameters of the dermis can be studied by modern techniques. The applications of these techniques to study the aging of dermal connective tissue are described in detail.",
"title": ""
},
{
"docid": "9b70a12243bdd0aaece4268dd32935b1",
"text": "PURPOSE\nOvertraining is primarily related to sustained high load training, often coupled with other stressors. Studies in animal models have suggested that unremittingly heavy training (monotonous training) may increase the likelihood of developing overtraining syndrome. The purpose of this study was to extend our preliminary observations by relating the incidence of illnesses and minor injuries to various indices of training.\n\n\nMETHODS\nWe report observations of the relationship of banal illnesses (a frequently cited marker of overtraining syndrome) to training load and training monotony in experienced athletes (N = 25). Athletes recorded their training using a method that integrates the exercise session RPE and the duration of the training session. Illnesses were noted and correlated with indices of training load (rolling 6 wk average), monotony (daily mean/standard deviation), and strain (load x monotony).\n\n\nRESULTS\nIt was observed that a high percentage of illnesses could be accounted for when individual athletes exceeded individually identifiable training thresholds, mostly related to the strain of training.\n\n\nCONCLUSIONS\nThese suggest that simple methods of monitoring the characteristics of training may allow the athlete to achieve the goals of training while minimizing undesired training outcomes.",
"title": ""
},
{
"docid": "29fc090c5d1e325fd28e6bbcb690fb8d",
"text": "Many forensic computing practitioners work in a high workload and low resource environment. With the move by the discipline to seek ISO 17025 laboratory accreditation, practitioners are finding it difficult to meet the demands of validation and verification of their tools and still meet the demands of the accreditation framework. Many agencies are ill-equipped to reproduce tests conducted by organizations such as NIST since they cannot verify the results with their equipment and in many cases rely solely on an independent validation study of other peoples' equipment. This creates the issue of tools in reality never being tested. Studies have shown that independent validation and verification of complex forensic tools is expensive and time consuming, and many practitioners also use tools that were not originally designed for forensic purposes. This paper explores the issues of validation and verification in the accreditation environment and proposes a paradigm that will reduce the time and expense required to validate and verify forensic software tools",
"title": ""
},
{
"docid": "e0155b21837e87dd1c7bb01635d042e9",
"text": "The purpose of this paper is to provide the reader with an extensive technical analysis and review of the book, \"Multi agent Systems: A Modern Approach to Distributed Artificial Intelligence\" by Gerhard Weiss. Due to the complex nature of the topic of distributed artificial intelligence (DAT) and multi agent systems (MAS), this paper has been divided into two major segments: an overview of field and book analysis. The first section of the paper provides the reader with background information about the topic of DAT and MAS, which not only introduces the reader to the field but also assists the reader to comprehend the essential themes in such a complex field. On the other hand, the second portion of the paper provides the reader with a comprehensive review of the book from the viewpoint of a senior computer science student with an introductory knowledge of the field of artificial intelligence.",
"title": ""
},
{
"docid": "78276f95c0080200585b89221a94f5ed",
"text": "Skeletal muscle damaged by injury or by degenerative diseases such as muscular dystrophy is able to regenerate new muscle fibers. Regeneration mainly depends upon satellite cells, myogenic progenitors localized between the basal lamina and the muscle fiber membrane. However, other cell types outside the basal lamina, such as pericytes, also have myogenic potency. Here, we discuss the main properties of satellite cells and other myogenic progenitors as well as recent efforts to obtain myogenic cells from pluripotent stem cells for patient-tailored cell therapy. Clinical trials utilizing these cells to treat muscular dystrophies, heart failure, and stress urinary incontinence are also briefly outlined.",
"title": ""
},
{
"docid": "592b8bf954c7cd770444675e745a3ebd",
"text": "A compact patch antenna array with high isolation by using two decoupling structures including a row of fractal uniplanar compact electromagnetic bandgap (UC-EBG) structure and three cross slots is proposed. Simulated results show that significant improvement in interelement isolation of 13 dB is obtained by placing the proposed fractal UC-EBG structure between the two radiating patches. Moreover, three cross slots etched on the ground plane are introduced to further suppress the mutual coupling. The design is easy to be manufactured without the implementation of metal vias, and a more compact array with the edge-to-edge distance of 0.22 λ0 can be facilitated by a row of fractal UC-EBG, which can be well applied in the patch antenna array.",
"title": ""
},
{
"docid": "05cea038adce7f5ae2a09a7fd5e024a7",
"text": "The paper describes the use TMS320C5402 DSP for single channel active noise cancellation (ANC) in duct system. The canceller uses a feedback control topology and is designed to cancel narrowband periodic tones. The signal is processed with well-known filtered-X least mean square (filtered-X LMS) Algorithm in the digital signal processing. The paper describes the hardware and use chip support libraries for data streaming. The FXLMS algorithm is written in assembly language callable from C main program. The results obtained are compatible to the expected result in the literature available. The paper highlights the features of cancellation and analyzes its performance at different gain and frequency.",
"title": ""
}
] |
scidocsrr
|
f49184697769d5d85df021e50f7b376d
|
Crank-wheel: A brand new mobile base for field robots
|
[
{
"docid": "ad7e2df2bf191d38a308b00d8efca250",
"text": "We propose track-changeable quadruped walking robot, named “TITAN X”. TITAN X is a new leg-track hybrid mobile robot with a special leg driving system on each leg. A belt on each leg changes to a timing-belt in leg form and a track-belt in track form. TITAN X walks in leg form on rough terrain and makes tracked locomotion using track-belt on level or comparatively low-rough terrain. The characteristics of TITAN X are: 1) it has a hybrid function but is lightweight, 2) it has potential capabilities to demonstrate high-performance on highly-rough terrain. In this paper, details of leg design using a special belt are reported. Also form changing mechanisms are integrated into the system. We have constructed prototype of TITAN X to demonstrate basic performance. Experiments were conducted to verify the validity of the concept of track-changeable walking robot.",
"title": ""
}
] |
[
{
"docid": "80fc5a0c795deb1ec7a687c7f7b6c863",
"text": "Long non-coding RNAs (lncRNAs) have emerged as critical regulators of genes at epigenetic, transcriptional and post-transcriptional levels, yet what genes are regulated by a specific lncRNA remains to be characterized. To assess the effects of the lncRNA on gene expression, an increasing number of researchers profiled the genome-wide or individual gene expression level change after knocking down or overexpressing the lncRNA. Herein, we describe a curated database named LncRNA2Target, which stores lncRNA-to-target genes and is publicly accessible at http://www.lncrna2target.org. A gene was considered as a target of a lncRNA if it is differentially expressed after the lncRNA knockdown or overexpression. LncRNA2Target provides a web interface through which its users can search for the targets of a particular lncRNA or for the lncRNAs that target a particular gene. Both search types are performed either by browsing a provided catalog of lncRNA names or by inserting lncRNA/target gene IDs/names in a search box.",
"title": ""
},
{
"docid": "b6dcf2064ad7f06fd1672b1348d92737",
"text": "In this paper, we propose a two-step method to recognize multiple-food images by detecting candidate regions with several methods and classifying them with various kinds of features. In the first step, we detect several candidate regions by fusing outputs of several region detectors including Felzenszwalb's deformable part model (DPM) [1], a circle detector and the JSEG region segmentation. In the second step, we apply a feature-fusion-based food recognition method for bounding boxes of the candidate regions with various kinds of visual features including bag-of-features of SIFT and CSIFT with spatial pyramid (SP-BoF), histogram of oriented gradient (HoG), and Gabor texture features. In the experiments, we estimated ten food candidates for multiple-food images in the descending order of the confidence scores. As results, we have achieved the 55.8% classification rate, which improved the baseline result in case of using only DPM by 14.3 points, for a multiple-food image data set. This demonstrates that the proposed two-step method is effective for recognition of multiple-food images.",
"title": ""
},
{
"docid": "cd61c0b8c1b0f304fa318b22f0577c33",
"text": "Software Defined Networking (SDN) is a concept which provides the network operators and data centres to flexibly manage their networking equipment using software running on external servers. According to the SDN framework, the control and management of the networks, which is usually implemented in software, is decoupled from the data plane. On the other hand cloud computing materializes the vision of utility computing. Tenants can benefit from on-demand provisioning of networking, storage and compute resources according to a pay-per-use business model. In this work we present the networking issues in IaaS and networking and federation challenges that are currently addressed with existing technologies. We also present innovative software-define networking proposals, which are applied to some of the challenges and could be used in future deployments as efficient solutions. cloud computing networking and the potential contribution of software-defined networking along with some performance evaluation results are presented in this paper.",
"title": ""
},
{
"docid": "b3450073ad3d6f2271d6a56fccdc110a",
"text": "OBJECTIVE\nMindfulness-based therapies (MBTs) have been shown to be efficacious in treating internally focused psychological disorders (e.g., depression); however, it is still unclear whether MBTs provide improved functioning and symptom relief for individuals with externalizing disorders, including ADHD. To clarify the literature on the effectiveness of MBTs in treating ADHD and to guide future research, an effect-size analysis was conducted.\n\n\nMETHOD\nA systematic review of studies published in PsycINFO, PubMed, and Google Scholar was completed from the earliest available date until December 2014.\n\n\nRESULTS\nA total of 10 studies were included in the analysis of inattention and the overall effect size was d = -.66. A total of nine studies were included in the analysis of hyperactivity/impulsivity and the overall effect was calculated at d = -.53.\n\n\nCONCLUSION\nResults of this study highlight the possible benefits of MBTs in reducing symptoms of ADHD.",
"title": ""
},
{
"docid": "548d87ac6f8a023d9f65af371ad9314c",
"text": "Mindfiilness meditation is an increasingly popular intervention for the treatment of physical illnesses and psychological difficulties. Using intervention strategies with mechanisms familiar to cognitive behavioral therapists, the principles and practice of mindfijlness meditation offer promise for promoting many of the most basic elements of positive psychology. It is proposed that mindfulness meditation promotes positive adjustment by strengthening metacognitive skills and by changing schemas related to emotion, health, and illness. Additionally, the benefits of yoga as a mindfulness practice are explored. Even though much empirical work is needed to determine the parameters of mindfulness meditation's benefits, and the mechanisms by which it may achieve these benefits, theory and data thus far clearly suggest the promise of mindfulness as a link between positive psychology and cognitive behavioral therapies.",
"title": ""
},
{
"docid": "a1bbe6651835408e8f7f595068aaad85",
"text": "The process of translating comprises in its essence the whole secret of human understanding and social communication. This chapter introduces techniques for machine translation (MT), the use of MACHINE TRANSLATION MT computers to automate some or all of the process of translating from one language to another. Translation, in its full generality, is a difficult, fascinating, and intensely human endeavor, as rich as any other area of human creativity. Consider the following passage from the end of Chapter 45 of the 18th-century novel The Story of the Stone, also called Dream of the Red Chamber, by Cao Xue Qin (Cao, 1792), transcribed in the Mandarin dialect: Fig. 24.1 shows the English translation of this passage by David Hawkes, in sentences labeled E 1-E 4. For ease of reading, instead of giving the Chinese, we have shown the English glosses of each Chinese word IN SMALL CAPS. Words in blue are Chinese words not translated into English, or English words not in the Chinese. We have shown alignment lines between words that roughly correspond in the two languages. Consider some of the issues involved in this translation. First, the English and Chinese texts are very different structurally and lexically. The four English sentences (notice the periods in blue) correspond to one long Chinese sentence. The word order of the two texts is very different, as we can see by the many crossed alignment lines in Fig. 24.1. The English has many more words than the Chinese, as we can see by the large number of English words marked in blue. Many of these differences are caused by structural differences between the two languages. For example, because Chinese rarely marks verbal aspect or tense; the English translation has additional words like as, turned to, and had begun, and Hawkes had to decide to translate Chinese tou as penetrated, rather than say was penetrating or had penetrated. Chinese has less articles than English, explaining the large number of blue thes. Chinese also uses far fewer pronouns than English, so Hawkes had to insert she and her in many places into the",
"title": ""
},
{
"docid": "ad9f074e86a1eea6985f8e9ebf115078",
"text": "Podosomes are highly dynamic actin-rich adhesion structures in cells of myeloid lineage and some transformed cells. Unlike transformed mesenchymal cell types, podosomes are the sole adhesion structure in macrophage and thus mediate all contact with adhesion substrate, including movement through complex tissues for immune surveillance. The existence of podosomes in inflammatory macrophages and transformed cell types suggest an important role in tissue invasion. The proteome, assembly, and maintenance of podosomes are emerging, but remain incompletely defined. Previously, we reported a formin homology sequence and actin assembly activity in association with macrophage beta-3 integrin. In this study we demonstrate by quantitative reverse transcriptase polymerase chain reaction and Western blotting that the formin FRL1 is specifically upregulated during monocyte differentiation to macrophages. We show that the formin FRL1 localizes to the actin-rich cores of primary macrophage podosomes. FRL1 co-precipitates with beta-3 integrin and both fixed and live cell fluorescence microscopy show that endogenous and overexpressed FRL1 selectively localize to macrophage podosomes. Targeted disruption of FRL1 by siRNA results in reduced cell adhesion and disruption of podosome dynamics. Our data suggest that FRL1 is responsible for modifying actin at the macrophage podosome and may be involved in actin cytoskeleton dynamics during adhesion and migration within tissues.",
"title": ""
},
{
"docid": "7c2960e9fd059e57b5a0172e1d458250",
"text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.",
"title": ""
},
{
"docid": "104c9ef558234250d56ef941f09d6a7c",
"text": "The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus",
"title": ""
},
{
"docid": "709c06739d20fe0a5ba079b21e5ad86d",
"text": "Bug triaging refers to the process of assigning a bug to the most appropriate developer to fix. It becomes more and more difficult and complicated as the size of software and the number of developers increase. In this paper, we propose a new framework for bug triaging, which maps the words in the bug reports (i.e., the term space) to their corresponding topics (i.e., the topic space). We propose a specialized topic modeling algorithm named <italic> multi-feature topic model (MTM)</italic> which extends Latent Dirichlet Allocation (LDA) for bug triaging. <italic>MTM </italic> considers product and component information of bug reports to map the term space to the topic space. Finally, we propose an incremental learning method named <italic>TopicMiner</italic> which considers the topic distribution of a new bug report to assign an appropriate fixer based on the affinity of the fixer to the topics. We pair <italic> TopicMiner</italic> with MTM (<italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math> <alternatives><inline-graphic xlink:href=\"xia-ieq1-2576454.gif\"/></alternatives></inline-formula></italic>). We have evaluated our solution on 5 large bug report datasets including GCC, OpenOffice, Mozilla, Netbeans, and Eclipse containing a total of 227,278 bug reports. We show that <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\"> $^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq2-2576454.gif\"/></alternatives></inline-formula> </italic> can achieve top-1 and top-5 prediction accuracies of 0.4831-0.6868, and 0.7686-0.9084, respectively. We also compare <italic>TopicMiner<inline-formula><tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives> <inline-graphic xlink:href=\"xia-ieq3-2576454.gif\"/></alternatives></inline-formula></italic> with Bugzie, LDA-KL, SVM-LDA, LDA-Activity, and Yang et al.'s approach. The results show that <italic>TopicMiner<inline-formula> <tex-math notation=\"LaTeX\">$^{MTM}$</tex-math><alternatives><inline-graphic xlink:href=\"xia-ieq4-2576454.gif\"/> </alternatives></inline-formula></italic> on average improves top-1 and top-5 prediction accuracies of Bugzie by 128.48 and 53.22 percent, LDA-KL by 262.91 and 105.97 percent, SVM-LDA by 205.89 and 110.48 percent, LDA-Activity by 377.60 and 176.32 percent, and Yang et al.'s approach by 59.88 and 13.70 percent, respectively.",
"title": ""
},
{
"docid": "4a1559bd8a401d3273c34ab20931611d",
"text": "Spiking Neural Networks (SNNs) are widely regarded as the third generation of artificial neural networks, and are expected to drive new classes of recognition, data analytics and computer vision applications. However, large-scale SNNs (e.g., of the scale of the human visual cortex) are highly compute and data intensive, requiring new approaches to improve their efficiency. Complementary to prior efforts that focus on parallel software and the design of specialized hardware, we propose AxSNN, the first effort to apply approximate computing to improve the computational efficiency of evaluating SNNs. In SNNs, the inputs and outputs of neurons are encoded as a time series of spikes. A spike at a neuron's output triggers updates to the potentials (internal states) of neurons to which it is connected. AxSNN determines spike-triggered neuron updates that can be skipped with little or no impact on output quality and selectively skips them to improve both compute and memory energy. Neurons that can be approximated are identified by utilizing various static and dynamic parameters such as the average spiking rates and current potentials of neurons, and the weights of synaptic connections. Such a neuron is placed into one of many approximation modes, wherein the neuron is sensitive only to a subset of its inputs and sends spikes only to a subset of its outputs. A controller periodically updates the approximation modes of neurons in the network to achieve energy savings with minimal loss in quality. We apply AxSNN to both hardware and software implementations of SNNs. For hardware evaluation, we designed SNNAP, a Spiking Neural Network Approximate Processor that embodies the proposed approximation strategy, and synthesized it to 45nm technology. The software implementation of AxSNN was evaluated on a 2.7 GHz Intel Xeon server with 128 GB memory. Across a suite of 6 image recognition benchmarks, AxSNN achieves 1.4–5.5x reduction in scalar operations for network evaluation, which translates to 1.2–3.62x and 1.26–3.9x improvement in hardware and software energies respectively, for no loss in application quality. Progressively higher energy savings are achieved with modest reductions in output quality.",
"title": ""
},
{
"docid": "34b3c5ee3ea466c23f5c7662f5ce5b33",
"text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.",
"title": ""
},
{
"docid": "4def0dc478dfb5ddb5a0ec59ec7433f5",
"text": "A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.",
"title": ""
},
{
"docid": "cd982555c4d9d0565e96fef0ed44d2c2",
"text": "The millennials use mobile phones on a daily basis to keep in touch with family and friends (Lenhart 2010). However, the role of mobile phones in education needs to be close examined as educators strive to incorporate mobile leaning devices in the classroom. Consequently, schools will not only need to evaluate their school curriculums but also recognize the power in the digital devices to engage, enable, and empower Gen-M and iGen learners. Therefore, the purpose of this article is to provide a rationale for the need for administrators to design guidelines for schools planning to adopt mobile phones in their curricula. Additionally, this article is intended to stimulate reflections on effective ways to adopt mobile phones in education to engage learners.",
"title": ""
},
{
"docid": "6facd37330e6d88c84bbbfe0119d7008",
"text": "Bug fixing is one of the most important activities in software development and maintenance. A software project often employs an issue tracking system such as Bugzilla to store and manage their bugs. In the issue tracking system, many bugs are invalid but take unnecessary efforts to identify them. In this paper, we mainly focus on bug fixing rate, i.e., The proportion of the fixed bugs in the reported closed bugs. In particular, we study the characteristics of bug fixing rate and investigate the impact of a reporter's different contribution behaviors to the bug fixing rate. We perform an empirical study on all reported bugs of two large open source software communities Eclipse and Mozilla. We find (1) the bug fixing rates of both projects are not high, (2) there exhibits a negative correlation between a reporter's bug fixing rate and the average time cost to close the bugs he/she reports, (3) the amount of bugs a reporter ever fixed has a strong positive impact on his/her bug fixing rate, (4) reporters' bug fixing rates have no big difference, whether their contribution behaviors concentrate on a few products or across many products, (5) reporters' bug fixing rates tend to increase as time goes on, i.e., Developers become more experienced at reporting bugs.",
"title": ""
},
{
"docid": "b174bbcb91d35184674532b6ab22dcdf",
"text": "Many studies have confirmed the benefit of gamification on learners’ motivation. However, gamification may also demotivate some learners, or learners may focus on the gamification elements instead of the learning content. Some researchers have recommended building learner models that can be used to adapt gamification elements based on learners’ personalities. Building such a model requires a strong understanding of the relationship between gamification and personality. Existing empirical work has focused on measuring knowledge gain and learner preference. These findings may not be reliable because the analyses are based on learners who complete the study and because they rely on self-report from learners. This preliminary study explores a different approach by allowing learners to drop out at any time and then uses the number of students left as a proxy for motivation and engagement. Survival analysis is used to analyse the data. The results confirm the benefits of gamification and provide some pointers to how this varies with personality.",
"title": ""
},
{
"docid": "e02050f14a7567bc6d4b439b8ed7fc48",
"text": "The accumulation mechanisms of technetium-99m methylene diphosphonate (99mTc-MDP) were investigated using hydroxyapatite powder and various phosphates. After reaction with99mTc-MDP, radioactivity was analyzed using a scintillation counter. The adsorption of99mTc-MDP onto hydroxyapatite occurred within 30 sec, and was not temperature dependent at 0–95°C. There was no change in the adsorption of99mTc-MDP onto hydroxyapatite in 5 or 50mM water-soluble organic compounds (glucose or urea). Anions had a greater effect on adsorption than cations. The only phosphate with adsorption equal to that of hydroxyapatite was calcium pyrophosphate. Adsorption onto calcium hydrogenphosphate was low at a pH of 6.0 in comparison with hydroxyapatite. These findings suggest that the adsorption of99mTc-MDP onto hydroxyapatite is influenced by the concentration of coexisting anions and by the chemical constitution of the phosphate components.",
"title": ""
},
{
"docid": "fcea8882b303897fd47cbece47271512",
"text": "Inference in the presence of outliers is an important field of research as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using an scheduler for the classification of outliers, our method is more efficient and has better convergence over the standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results.",
"title": ""
},
{
"docid": "3d83a89ffbb5e63e4db33eef4b7d32d2",
"text": "Autonomous mobile robots are being developed for numerous applications where long-term capabilities would be beneficial. However, most mobile robots have onboard power supplies in the form of batteries that last for a finite amount of time, in which case the robot becomes reliant on human intervention for extended usage. To achieve true long-term autonomy, the robot must be selfsustaining in its environment. We have developed a control architecture and an accompanying recharging mechanism which allows a robot to readily intervene its regular operation with autonomous recharging to stay alive. We demonstrate the efficacy of our system experimentally, by requiring the robot to serve as a sentry, monitoring our lab entrance for an extended period of time. The system is able to operate for long periods of time without operator intervention.",
"title": ""
},
{
"docid": "6392a6c384613f8ed9630c8676f0cad8",
"text": "References D. Bruckner, J. Rosen, and E. R. Sparks. deepviz: Visualizing convolutional neural networks for image classification. 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research,9(2579-2605):85, 2008. Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hods Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer vision–ECCV 2014, pages 818–833. Springer, 2014. Network visualization of ReVACNN",
"title": ""
}
] |
scidocsrr
|
5c8f5d47c5c9d9641b7e5b75e84fed7d
|
A Dataset for Multi-Target Stance Detection
|
[
{
"docid": "8b863cd49dfe5edc2d27a0e9e9db0429",
"text": "This paper presents an annotation scheme for adding entity and event target annotations to the MPQA corpus, a rich span-annotated opinion corpus. The new corpus promises to be a valuable new resource for developing systems for entity/event-level sentiment analysis. Such systems, in turn, would be valuable in NLP applications such as Automatic Question Answering. We introduce the idea of entity and event targets (eTargets), describe the annotation scheme, and present the results of an agreement study.",
"title": ""
}
] |
[
{
"docid": "da5339bb74d6af2bfa7c8f46b4f50bb3",
"text": "Conversational agents are exploding in popularity. However, much work remains in the area of non goal-oriented conversations, despite significant growth in research interest over recent years. To advance the state of the art in conversational AI, Amazon launched the Alexa Prize, a 2.5-million dollar university competition where sixteen selected university teams built conversational agents to deliver the best social conversational experience. Alexa Prize provided the academic community with the unique opportunity to perform research with a live system used by millions of users. The subjectivity associated with evaluating conversations is key element underlying the challenge of building non-goal oriented dialogue systems. In this paper, we propose a comprehensive evaluation strategy with multiple metrics designed to reduce subjectivity by selecting metrics which correlate well with human judgement. The proposed metrics provide granular analysis of the conversational agents, which is not captured in human ratings. We show that these metrics can be used as a reasonable proxy for human judgment. We provide a mechanism to unify the metrics for selecting the top performing agents, which has also been applied throughout the Alexa Prize competition. To our knowledge, to date it is the largest setting for evaluating agents with millions of conversations and hundreds of thousands of ratings from users. We believe that this work is a step towards an automatic evaluation process for conversational AIs.",
"title": ""
},
{
"docid": "f1d9ff305dfa7bd5e3a2dec3ee880e6e",
"text": "With the advent of LTE-Carrier Aggregation (CA) as a means of increasing data rates in mobile device communication, a paradigm shift in the design of acoustic filters has been arising. Whereas before CA the focus of filtering has been on supporting one telecommunication band at a time, now multiple bands will have to be supported simultaneously over the same antenna. As a consequence, using switched filter banks is not an option anymore, and the only viable solution is to connect multiple Tx, Rx, and/or TDD filters to the same antenna pin. In this multiplexing configuration, several additional challenges need to be addressed when designing the individual filter components. The traditional requirements (low insertion loss, good out-of-band attenuation, tight impedance locus, good IMD behavior, power handling, and - in case of FDD bands - the highest possible Tx-Rx isolation) are no longer sufficient. Now designers also need to minimize loading of multiple filters by all the other filter's out-of-band impedance, optimize a multitude of cross-isolation requirements, and worry about a plethora of nonlinear mixing products (IMDx). This presentation provides an overview of these new challenges and discusses strategies to address some of them. Case studies will be reviewed and the next challenges lurking behind the horizon will be revealed.",
"title": ""
},
{
"docid": "a50f168329c1b44ed881e99d66fe7c13",
"text": "Indian agriculture is diverse; ranging from impoverished farm villages to developed farms utilizing modern agricultural technologies. Facility agriculture area in China is expanding, and is leading the world. However, its ecosystem control technology and system is still immature, with low level of intelligence. Promoting application of modern information technology in agriculture will solve a series of problems facing by farmers. Lack of exact information and communication leadsto the loss in production. Our paper is designed to over come these problems. This regulator provides an intelligent monitoring platform framework and system structure for facility agriculture ecosystem based on IOT[3]. This will be a catalyst for the transition from traditional farming to modern farming. This also provides opportunity for creating new technology and service development in IOT (internet of things) farming application. The Internet Of Things makes everything connected. Over 50 years since independence, India has made immense progress towards food productivity. The Indian population has tripled, but food grain production more than quadrupled[1]: there has thus been a substantial increase in available food grain per ca-pita. Modern agriculture practices have a great promise for the economic development of a nation. So we have brought-in an innovative project for the welfare of farmers and also for the farms. There are no day or night restrictions. This is helpful at any time.",
"title": ""
},
{
"docid": "bd96b290d83f10db3d70e912aa4bd177",
"text": "In deployment of smart grid it is imperative to adopt advanced and smart technologies in SCADA of distribution system for smart monitoring, automation and control of a power system. The present paper focuses the status of present SCADA of small distribution systems and proposes the use of smart meter for variety of tasks to be performed in smart distribution systems. Sample smart operations for monitoring and control task using latest communication technologies are demonstrated with simulation and hardware results. The proposed scheme can be extended and implemented effectively to gratify variety of errands as mandatory in smart grid.",
"title": ""
},
{
"docid": "16932e01fdea801f28ec6c4194f70352",
"text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.",
"title": ""
},
{
"docid": "b2f1ec4d8ac0a8447831df4287271c35",
"text": "We present a new, robust and computationally efficient Hierarchical Bayesian model for effective topic correlation modeling. We model the prior distribution of topics by a Generalized Dirichlet distribution (GD) rather than a Dirichlet distribution as in Latent Dirichlet Allocation (LDA). We define this model as GD-LDA. This framework captures correlations between topics, as in the Correlated Topic Model (CTM) and Pachinko Allocation Model (PAM), and is faster to infer than CTM and PAM. GD-LDA is effective to avoid over-fitting as the number of topics is increased. As a tree model, it accommodates the most important set of topics in the upper part of the tree based on their probability mass. Thus, GD-LDA provides the ability to choose significant topics effectively. To discover topic relationships, we perform hyper-parameter estimation based on Monte Carlo EM Estimation. We provide results using Empirical Likelihood(EL) in 4 public datasets from TREC and NIPS. Then, we present the performance of GD-LDA in ad hoc information retrieval (IR) based on MAP, P@10, and Discounted Gain. We discuss an empirical comparison of the fitting time. We demonstrate significant improvement over CTM, LDA, and PAM for EL estimation. For all the IR measures, GD-LDA shows higher performance than LDA, the dominant topic model in IR. All these improvements with a small increase in fitting time than LDA, as opposed to CTM and PAM.",
"title": ""
},
{
"docid": "28d350a72a83318703c85b7c54f2d7c5",
"text": "Clustering is an unsupervised learning procedure and there is no a prior knowledge of data distribution. It organizes a set of objects/data into similar groups called clusters, and the objects within one cluster are highly similar and dissimilar with the objects in other clusters. The classic K-means algorithm (KM) is the most popular clustering algorithm for its easy implementation and fast working. But KM is very sensitive to initialization, the better centers we choose, the better results we get. Also, it is easily trapped in local optimal. The K-harmonic means algorithm (KHM) is less sensitive to the initialization than the KM algorithm. The Ant clustering algorithm (ACA) can avoid trapping in local optimal solution. In this paper, we will propose a new clustering algorithm using the Ant clustering algorithm with K-harmonic means clustering (ACAKHM). The experiment results on three well-known data sets like Iris and two other artificial data sets indicate the superiority of the ACAKHM algorithm. At last the performance of the ACAKHM algorithm is compared with the ACA and the KHM algorithm. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3116f979980535e447cd47ee5e931f47",
"text": "Based on two successfully and widely used control techniques in many industrial applications under normal (fault-free) operation conditions, the Gain-Scheduled Proportional-Integral-Derivative (GS-PID) control and Model Reference Adaptive Control (MRAC) strategies have been extended, implemented, and experimentally tested on a quadrotor helicopter Unmanned Aerial Vehicle (UAV) testbed available at Concordia University, for the purpose of investigation of these two typical and different control techniques as two useful Fault-Tolerant Control (FTC) approaches. Controllers are designed and implemented in order to track the desired trajectory of the helicopter in both normal and faulty scenarios of the flight. A Linear Quadratic Regulator (LQR) with integral action controller is also used to control the pitch and roll motion of the quadrotor helicopter. Square trajectory, together with specified autonomous and safe taking-off and landing path, is considered as the testing trajectory and the experimental flight testing results with both GS-PID and MRAC are presented and compared with tracking performance under partial loss of control power due to fault/damage in the propeller of the quadrotor UAV. The performance of both controllers showed to be good. Although GS-PID is easier for development and implementation, MRAC showed to be more robust to faults and noises, and is friendly to be applied to the quadrotor UAV.",
"title": ""
},
{
"docid": "dd867c3f55696bebea3d9049a3d43163",
"text": "This paper examines the task of detecting intensity of emotion from text. We create the first datasets of tweets annotated for anger, fear, joy, and sadness intensities. We use a technique called best–worst scaling (BWS) that improves annotation consistency and obtains reliable fine-grained scores. We show that emotion-word hashtags often impact emotion intensity, usually conveying a more intense emotion. Finally, we create a benchmark regression system and conduct experiments to determine: which features are useful for detecting emotion intensity; and, the extent to which two emotions are similar in terms of how they manifest in language.",
"title": ""
},
{
"docid": "a395993ce7fb6fa144b79364724cd7dc",
"text": "High cesarean birth rates are an issue of international public health concern.1 Worries over such increases have led the World Health Organization to advise that Cesarean Section (CS) rates should not be more than 15%,2 with some evidence that CS rates above 15% are not associated with additional reduction in maternal and neonatal mortality and morbidity.3 Analyzing CS rates in different countries, including primary vs. repeat CS and potential reasons of these, provide important insights into the solution for reducing the overall CS rate. Robson,4 proposed a new classification system, the Robson Ten-Group Classification System to allow critical analysis according to characteristics of pregnancy (Table 1). The characteristics used are: (i) single or multiple pregnancy (ii) nulliparous, multiparous, or multiparous with a previous CS (iii) cephalic, breech presentation or other malpresentation (iv) spontaneous or induced labor (v) term or preterm births.",
"title": ""
},
{
"docid": "96c68a64670b4f22915f3353f2659626",
"text": "Most existing works on dialog systems only consider conversation content while neglecting the personality of the user the bot is interacting with, which begets several unsolved issues. In this paper, we present a personalized end-to-end model in an attempt to leverage personalization in goal-oriented dialogs. We first introduce a PROFILE MODEL which encodes user profiles into distributed embeddings and refers to conversation history from other similar users. Then a PREFERENCE MODEL captures user preferences over knowledge base entities to handle the ambiguity in user requests. The two models are combined into the PERSONALIZED MEMN2N. Experiments show that the proposed model achieves qualitative performance improvements over state-of-the-art methods. As for human evaluation, it also outperforms other approaches in terms of task completion rate and user satisfaction.",
"title": ""
},
{
"docid": "374383490d88240b410a14a185ff082e",
"text": "A substantial part of the operating costs of public transport is attributable to drivers, whose efficient use therefore is important. The compilation of optimal work packages is difficult, being NP-hard. In practice, algorithmic advances and enhanced computing power have led to significant progress in achieving better schedules. However, differences in labor practices among modes of transport and operating companies make production of a truly general system with acceptable performance a difficult proposition. TRACS II has overcome these difficulties, being used with success by a substantial number of bus and train operators. Many theoretical aspects of the system have been published previously. This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice. We discuss the extent to which users have been involved in system development, leading to many practical successes, and we summarize some recent achievements.",
"title": ""
},
{
"docid": "204b902e344ac52ba5ed90e9f8d5cf54",
"text": "The reason for the rapid rise of autism in the United States that began in the 1990s is a mystery. Although individuals probably have a genetic predisposition to develop autism, researchers suspect that one or more environmental triggers are also needed. One of those triggers might be the battery of vaccinations that young children receive. Using regression analysis and controlling for family income and ethnicity, the relationship between the proportion of children who received the recommended vaccines by age 2 years and the prevalence of autism (AUT) or speech or language impairment (SLI) in each U.S. state from 2001 and 2007 was determined. A positive and statistically significant relationship was found: The higher the proportion of children receiving recommended vaccinations, the higher was the prevalence of AUT or SLI. A 1% increase in vaccination was associated with an additional 680 children having AUT or SLI. Neither parental behavior nor access to care affected the results, since vaccination proportions were not significantly related (statistically) to any other disability or to the number of pediatricians in a U.S. state. The results suggest that although mercury has been removed from many vaccines, other culprits may link vaccines to autism. Further study into the relationship between vaccines and autism is warranted.",
"title": ""
},
{
"docid": "78697b1a87b2bada5bf169c075cca18b",
"text": "Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomena, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of the user mobility into consideration, we proposes a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present a design of a hybrid computation offloading and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated user's quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities.",
"title": ""
},
{
"docid": "b9f7c3cbf856ff9a64d7286c883e2640",
"text": "Graph database models can be defined as those in which data structures for the schema and instances are modeled as graphs or generalizations of them, and data manipulation is expressed by graph-oriented operations and type constructors. These models took off in the eighties and early nineties alongside object-oriented models. Their influence gradually died out with the emergence of other database models, in particular geographical, spatial, semistructured, and XML. Recently, the need to manage information with graph-like nature has reestablished the relevance of this area. The main objective of this survey is to present the work that has been conducted in the area of graph database modeling, concentrating on data structures, query languages, and integrity constraints.",
"title": ""
},
{
"docid": "2f6c2a4e83bf86b29fcff77d7937eded",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.10.027 * Corresponding author. E-mail addresses: [email protected] (C (B. Diri). This paper provides a systematic review of previous software fault prediction studies with a specific focus on metrics, methods, and datasets. The review uses 74 software fault prediction papers in 11 journals and several conference proceedings. According to the review results, the usage percentage of public datasets increased significantly and the usage percentage of machine learning algorithms increased slightly since 2005. In addition, method-level metrics are still the most dominant metrics in fault prediction research area and machine learning algorithms are still the most popular methods for fault prediction. Researchers working on software fault prediction area should continue to use public datasets and machine learning algorithms to build better fault predictors. The usage percentage of class-level is beyond acceptable levels and they should be used much more than they are now in order to predict the faults earlier in design phase of software life cycle. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
644d770dcc78766644bd4edc32c278e8
|
Selection of Support Vector Machines based classifiers for credit risk domain
|
[
{
"docid": "3231eedb6c06d3ce428f3c20dac5c37d",
"text": "In this study, differential evolution algorithm (DE) is proposed to train a wavelet neural network (WNN). The resulting network is named as differential evolution trained wavelet neural network (DEWNN). The efficacy of DEWNN is tested on bankruptcy prediction datasets viz. US banks, Turkish banks and Spanish banks. Further, its efficacy is also tested on benchmark datasets such as Iris, Wine and Wisconsin Breast Cancer. Moreover, Garson’s algorithm for feature selection in multi layer perceptron is adapted in the case of DEWNN. The performance of DEWNN is compared with that of threshold accepting trained wavelet neural network (TAWNN) [Vinay Kumar, K., Ravi, V., Mahil Carr, & Raj Kiran, N. (2008). Software cost estimation using wavelet neural networks. Journal of Systems and Software] and the original wavelet neural network (WNN) in the case of all data sets without feature selection and also in the case of four data sets where feature selection was performed. The whole experimentation is conducted using 10-fold cross validation method. Results show that soft computing hybrids viz., DEWNN and TAWNN outperformed the original WNN in terms of accuracy and sensitivity across all problems. Furthermore, DEWNN outscored TAWNN in terms of accuracy and sensitivity across all problems except Turkish banks dataset. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "5f31e3405af91cd013c3193c7d3cdd8d",
"text": "In this paper, we review most major filtering approaches to texture feature extraction and perform a comparative study. Filtering approaches included are Laws masks, ring/wedge filters, dyadic Gabor filter banks, wavelet transforms, wavelet packets and wavelet frames, quadrature mirror filters, discrete cosine transform, eigenfilters, optimized Gabor filters, linear predictors, and optimized finite impulse response filters. The features are computed as the local energy of the filter responses. The effect of the filtering is highlighted, keeping the local energy function and the classification algorithm identical for most approaches. For reference, comparisons with two classical nonfiltering approaches, co-occurrence (statistical) and autoregressive (model based) features, are given. We present a ranking of the tested approaches based on extensive experiments.",
"title": ""
},
{
"docid": "a7db9f3f1bb5883f6a5a873dd661867b",
"text": "Psychologists and sociologists usually interpret happiness scores as cardinal and comparable across respondents, and thus run OLS regressions on happiness and changes in happiness. Economists usually assume only ordinality and have mainly used ordered latent response models, thereby not taking satisfactory account of fixed individual traits. We address this problem by developing a conditional estimator for the fixed-effect ordered logit model. We find that assuming ordinality or cardinality of happiness scores makes little difference, whilst allowing for fixed-effects does change results substantially. We call for more research into the determinants of the personality traits making up these fixed-effects.",
"title": ""
},
{
"docid": "ba3315636b720625e7b285b26d8d371a",
"text": "Sharing of physical infrastructure using virtualization presents an opportunity to improve the overall resource utilization. It is extremely important for a Software as a Service (SaaS) provider to understand the characteristics of the business application workload in order to size and place the virtual machine (VM) containing the application. A typical business application has a multi-tier architecture and the application workload is often predictable. Using the knowledge of the application architecture and statistical analysis of the workload, one can obtain an appropriate capacity and a good placement strategy for the corresponding VM. In this paper we propose a tool iCirrus-WoP that determines VM capacity and VM collocation possibilities for a given set of application workloads. We perform an empirical analysis of the approach on a set of business application workloads obtained from geographically distributed data centers. The iCirrus-WoP tool determines the fixed reserved capacity and a shared capacity of a VM which it can share with another collocated VM. Based on the workload variation, the tool determines if the VM should be statically allocated or needs a dynamic placement. To determine the collocation possibility, iCirrus-WoP performs a peak utilization analysis of the workloads. The empirical analysis reveals the possibility of collocating applications running in different time-zones. The VM capacity that the tool recommends, show a possibility of improving the overall utilization of the infrastructure by more than 70% if they are appropriately collocated.",
"title": ""
},
{
"docid": "5768212e1fa93a7321fa6c0deff10c88",
"text": "Human research biobanks have rapidly expanded in the past 20 years, in terms of both their complexity and utility. To date there exists no agreement upon classification schema for these biobanks. This is an important issue to address for several reasons: to ensure that the diversity of biobanks is appreciated, to assist researchers in understanding what type of biobank they need access to, and to help institutions/funding bodies appreciate the varying level of support required for different types of biobanks. To capture the degree of complexity, specialization, and diversity that exists among human research biobanks, we propose here a new classification schema achieved using a conceptual classification approach. This schema is based on 4 functional biobank \"elements\" (donor/participant, design, biospecimens, and brand), which we feel are most important to the major stakeholder groups (public/participants, members of the biobank community, health care professionals/researcher users, sponsors/funders, and oversight bodies), and multiple intrinsic features or \"subelements\" (eg, the element \"biospecimens\" could be further classified based on preservation method into fixed, frozen, fresh, live, and desiccated). We further propose that the subelements relating to design (scale, accrual, data format, and data content) and brand (user, leadership, and sponsor) should be specifically recognized by individual biobanks and included in their communications to the broad stakeholder audience.",
"title": ""
},
{
"docid": "56fa6f96657182ff527e42655bbd0863",
"text": "Nootropics or smart drugs are well-known compounds or supplements that enhance the cognitive performance. They work by increasing the mental function such as memory, creativity, motivation, and attention. Recent researches were focused on establishing a new potential nootropic derived from synthetic and natural products. The influence of nootropic in the brain has been studied widely. The nootropic affects the brain performances through number of mechanisms or pathways, for example, dopaminergic pathway. Previous researches have reported the influence of nootropics on treating memory disorders, such as Alzheimer's, Parkinson's, and Huntington's diseases. Those disorders are observed to impair the same pathways of the nootropics. Thus, recent established nootropics are designed sensitively and effectively towards the pathways. Natural nootropics such as Ginkgo biloba have been widely studied to support the beneficial effects of the compounds. Present review is concentrated on the main pathways, namely, dopaminergic and cholinergic system, and the involvement of amyloid precursor protein and secondary messenger in improving the cognitive performance.",
"title": ""
},
{
"docid": "9ecdd32c16f801b7390c0767364781c5",
"text": "The traditional assumption in artificial intelligence (AI) is that most expert knowledge ·is encoded in the form of rules. We consider the phenomenon of rea soning from memories of specific episodes, however, to be the foundation of an intelligent system, rather than an adj unct to some other reasoning method. This theory contrasts with much of the current work in similarity-based learning, which tacitly assumes that learning is equivalent to the automatic generation of rules, and differs from work on \"explanation-based\" and \"case-based\" reasoning in that it does not depend on having a strong domain model. With the development of new parallel architec tures, specifically the Connection Machine@ system, the operations necessary to implement this approach . to reasoning have become sufficiently fast to allow experimentation. This article describes MBRtalk, an experimental memory-based reasoning system that has been implemented on the Connection Machine, as well as the application of memory-based reason ing to other domains.",
"title": ""
},
{
"docid": "79b26ac97deb39c4de11a87604003f26",
"text": "This paper presents a novel wheel-track-Leg hybrid Locomotion Mechanism that has a compact structure. Compared to most robot wheels that have a rigid round rim, the transformable wheel with a flexible rim can switch to track mode for higher efficiency locomotion on swampy terrain or leg mode for better over-obstacle capability on rugged road. In detail, the wheel rim of this robot is cut into four end-to-end circles to make it capable of transforming between a round circle with a flat ring (just like “O” and “∞”) to change the contact type between transformable wheels with the ground. The transformation principle and constraint conditions between different locomotion modes are explained. The driving methods and locomotion strategies on various terrains of the robot are analyzed. Meanwhile, an initial experiment is conducted to verify the design.",
"title": ""
},
{
"docid": "d066c07fc64cf91f32be6ccf83761789",
"text": "This study tests the hypothesis that chewing gum leads to cognitive benefits through improved delivery of glucose to the brain, by comparing the cognitive performance effects of gum and glucose administered separately and together. Participants completed a battery of cognitive tests in a fully related 2 x 2 design, where one factor was Chewing Gum (gum vs. mint sweet) and the other factor was Glucose Co-administration (consuming a 25 g glucose drink vs. consuming water). For four tests (AVLT Immediate Recall, Digit Span, Spatial Span and Grammatical Transformation), beneficial effects of chewing and glucose were found, supporting the study hypothesis. However, on AVLT Delayed Recall, enhancement due to chewing gum was not paralleled by glucose enhancement, suggesting an alternative mechanism. The glucose delivery model is supported with respect to the cognitive domains: working memory, immediate episodic long-term memory and language-based attention and processing speed. However, some other mechanism is more likely to underlie the facilitatory effect of chewing gum on delayed episodic long-term memory.",
"title": ""
},
{
"docid": "3bff3136e5e2823d0cca2f864fe9e512",
"text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.",
"title": ""
},
{
"docid": "102e1718e03b3a4e96ee8c2212738792",
"text": "This paper introduces a new method for the rapid development of complex rule bases involving cue phrases for the purpose of classifying text segments. The method is based on Ripple-Down Rules, a knowledge acquisition method that proved very successful in practice for building medical expert systems and does not require a knowledge engineer. We implemented our system KAFTAN and demonstrate the applicability of our method to the task of classifying scientific citations. Building cue phrase rules in KAFTAN is easy and efficient. We demonstrate the effectiveness of our approach by presenting experimental results where our resulting classifier clearly outperforms previously built classifiers in the recent literature.",
"title": ""
},
{
"docid": "099371952baecb790cf0600ae3b26e41",
"text": "Digital watermarks have recently been proposed for authentication of both video data and still images and for integrity verification of visual multimedia. In such applications, the watermark has to depend on a secret key and on the original image. It is important that the dependence on the key be sensitive, while the dependence on the image be continuous (robust). Both requirements can be satisfied using special image digest functions that return the same bit-string for a whole class of images derived from an original image using common processing operations. It is further required that two completely different images produce completely different bit-strings. In this paper, we discuss methods how such robust hash functions can be built. We describe an algorithm and evaluate its performance. We also show how the hash bits As another application, the robust image digest can be used as a search index for an efficient image database",
"title": ""
},
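A minimal Python sketch, in the spirit of the robust image digest described in the passage above (docid 099371952baecb790cf0600ae3b26e41): each bit records whether a block's mean intensity exceeds the median block mean, so mild global processing tends to leave the bit-string unchanged. The grid size, thresholding rule, and toy image are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def robust_image_digest(gray: np.ndarray, grid: int = 8) -> np.ndarray:
    """Return a grid*grid bit digest that is stable under mild processing.

    Each bit encodes whether the mean intensity of a block exceeds the
    median of all block means, so small global changes (brightness shifts,
    light compression) tend to leave the digest unchanged.
    """
    h, w = gray.shape
    h, w = h - h % grid, w - w % grid              # crop so blocks tile exactly
    blocks = gray[:h, :w].reshape(grid, h // grid, grid, w // grid)
    means = blocks.mean(axis=(1, 3))               # grid x grid block means
    return (means > np.median(means)).astype(np.uint8).ravel()

def digest_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two digests; small distance = same class."""
    return int(np.count_nonzero(a != b))

# Example: a uniformly brightened copy keeps the same digest (distance 0).
img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
print(digest_distance(robust_image_digest(img), robust_image_digest(img + 5)))
```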
{
"docid": "bde1d85da7f1ac9c9c30b0fed448aac6",
"text": "We survey temporal description logics that are based on standard temporal logics such as LTL and CTL. In particular, we concentrate on the computational complexity of the satisfiability problem and algorithms for deciding it.",
"title": ""
},
{
"docid": "719ca13e95b9b4a1fc68772746e436d9",
"text": "The increased chance of deception in computer-mediated communication and the potential risk of taking action based on deceptive information calls for automatic detection of deception. To achieve the ultimate goal of automatic prediction of deception, we selected four common classification methods and empirically compared their performance in predicting deception. The deception and truth data were collected during two experimental studies. The results suggest that all of the four methods were promising for predicting deception with cues to deception. Among them, neural networks exhibited consistent performance and were robust across test settings. The comparisons also highlighted the importance of selecting important input variables and removing noise in an attempt to enhance the performance of classification methods. The selected cues offer both methodological and theoretical contributions to the body of deception and information systems research.",
"title": ""
},
{
"docid": "a3b680c8c9eb00b6cc66ec24aeadaa66",
"text": "With the application of Internet of Things and services to manufacturing, the fourth stage of industrialization, referred to as Industrie 4.0, is believed to be approaching. For Industrie 4.0 to come true, it is essential to implement the horizontal integration of inter-corporation value network, the end-to-end integration of engineering value chain, and the vertical integration of factory inside. In this paper, we focus on the vertical integration to implement flexible and reconfigurable smart factory. We first propose a brief framework that incorporates industrial wireless networks, cloud, and fixed or mobile terminals with smart artifacts such as machines, products, and conveyors.Then,we elaborate the operationalmechanism from the perspective of control engineering, that is, the smart artifacts form a self-organized systemwhich is assistedwith the feedback and coordination blocks that are implemented on the cloud and based on the big data analytics. In addition, we outline the main technical features and beneficial outcomes and present a detailed design scheme. We conclude that the smart factory of Industrie 4.0 is achievable by extensively applying the existing enabling technologies while actively coping with the technical challenges.",
"title": ""
},
{
"docid": "8956724f86026b377e6268ffa6ed26f8",
"text": "Excellent book is always being the best friend for spending little time in your office, night time, bus, and everywhere. It will be a good way to just look, open, and read the book while in that time. As known, experience and skill don't always come with the much money to acquire them. Reading this book with the PDF learning php mysql step by step guide to creating database driven web sites will let you know more things.",
"title": ""
},
{
"docid": "9f44d82b0f11037e593e719ae0c60a13",
"text": "The past 25 years have been a significant period with advances in the development of interior permanent magnet (IPM) machines. Line-start small IPM synchronous motors have expanded their presence in the domestic marketplace from few specialized niche markets in high efficiency machine tools, household appliances, small utility motors, and servo drives to mass-produced applications. A closer examination reveals that several different knowledge-based technological advancements and market forces as well as consumer demand for high efficiency requirements have combined, sometimes in fortuitous ways, to accelerate the development of the improved new small energy efficient motors. This paper provides a broad explanation of the various factors that lead to the current state of the art of the single-phase interior permanent motor drive technology. A unified analysis of single-phase IPM motor that permits the determination of the steady-state, dynamic, and transient performances is presented. The mathematical model is based on both d-q axis theory and finite-element analysis. It leads to more accurate numerical results and meets the engineering requirements more satisfactorily than any other methods. Finally, some concluding comments and remarks are provided for efficiency improvement, manufacturing, and future research trends of line-start energy efficient permanent magnet synchronous motors.",
"title": ""
},
{
"docid": "2068c62685eb927cc8344f2a2c8d9a2e",
"text": "The BioNLP Shared Task 2013 is the third edition of the BioNLP Shared Task series that is a community-wide effort to address fine-grained, structural information extraction from biomedical literature. The BioNLP Shared Task 2013 was held from January to April 2013. Six main tasks were proposed. 38 final submissions were received, from 22 teams. The results show advances in the state of the art and demonstrate that extraction methods can be successfully generalized in various aspects.",
"title": ""
},
{
"docid": "3532bb1766e9cbe158112a62bdbde52f",
"text": "A dual circularly polarized horn antenna, which employs a chiral metamaterial composed of two-layered periodic metallic arc structure, is presented. The whole antenna composite has functions of polarization transformation and band-pass filter. The designed antenna produces left-handed circularly polarized wave in the band from 12.4 GHz to 12.5 GHz, and realizes right-handed circularly polarized wave in the range of 14.2 GHz-14.4 GHz. Due to low loss characteristic of the chiral metamaterial, the measured gains are only reduced by about 0.6 dB at the above two operation frequencies, compared with single horn antenna. The axial ratios are 1.05 dB at 12.45 GHz and 0.95 dB at14.35 GHz.",
"title": ""
},
{
"docid": "dbfdb9251e8b9738eaebae3bcd708926",
"text": "Stable Haptic Interaction with Virtual Environments",
"title": ""
},
{
"docid": "77ce917536f59d5489d0d6f7000c7023",
"text": "In this supplementary document, we present additional results to complement the paper. First, we provide the detailed configurations and parameters of the generator and discriminator in the proposed Generative Adversarial Network. Second, we present the qualitative comparisons with the state-ofthe-art CNN-based optical flow methods. The complete results and source code are publicly available on http://vllab.ucmerced.edu/wlai24/semiFlowGAN.",
"title": ""
}
] |
scidocsrr
|
72bbcd9a55c4965c4725491eb467769b
|
Text Mining of News Articles for Stock Price Predictions
|
[
{
"docid": "78449a425b0951363480d2151840a216",
"text": "This paper examines the role of financial news articles on three different textual representations; Bag of Words, Noun Phrases, and Named Entities and their ability to predict discrete number stock prices twenty minutes after an article release. Using a Support Vector Machine (SVM) derivative, we show that our model had a statistically significant impact on predicting future stock prices compared to linear regression. We further demonstrate that using a Noun Phrase representation scheme performs better than the de facto standard of Bag of Words.",
"title": ""
}
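A hypothetical, minimal sketch of the kind of pipeline the passage above describes: article text turned into bag-of-words counts and fed to a support-vector regressor that predicts a price twenty minutes after release. The toy articles, prices, and model settings are invented for illustration; the study's SVM derivative and noun-phrase extractor are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline

# Toy stand-ins for (article text, stock price 20 minutes after release).
articles = [
    "company beats earnings estimates and raises guidance",
    "regulator opens probe into accounting practices",
    "merger talks confirmed with larger rival",
    "profit warning issued after weak quarterly sales",
]
prices_20min = [101.2, 97.8, 103.5, 96.1]

# Bag-of-words features; a noun-phrase scheme would swap in a custom
# analyzer/tokenizer here instead of plain unigram counts.
model = make_pipeline(CountVectorizer(), SVR(kernel="linear", C=1.0))
model.fit(articles, prices_20min)

print(model.predict(["earnings beat estimates, guidance raised"]))
```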
] |
[
{
"docid": "b1ef897890df4c719d85dd339f8dee70",
"text": "Repositories of health records are collections of events with varying number and sparsity of occurrences within and among patients. Although a large number of predictive models have been proposed in the last decade, they are not yet able to simultaneously capture cross-attribute and temporal dependencies associated with these repositories. Two major streams of predictive models can be found. On one hand, deterministic models rely on compact subsets of discriminative events to anticipate medical conditions. On the other hand, generative models offer a more complete and noise-tolerant view based on the likelihood of the testing arrangements of events to discriminate a particular outcome. However, despite the relevance of generative predictive models, they are not easily extensible to deal with complex grids of events. In this work, we rely on the Markov assumption to propose new predictive models able to deal with cross-attribute and temporal dependencies. Experimental results hold evidence for the utility and superior accuracy of generative models to anticipate health conditions, such as the need for surgeries. Additionally, we show that the proposed generative models are able to decode temporal patterns of interest (from the learned lattices) with acceptable completeness and precision levels, and with superior efficiency for voluminous repositories.",
"title": ""
},
{
"docid": "14863b1ca1d21c16319e40a34a0e3893",
"text": "Amyloid-beta peptide is central to the pathology of Alzheimer's disease, because it is neurotoxic--directly by inducing oxidant stress, and indirectly by activating microglia. A specific cell-surface acceptor site that could focus its effects on target cells has been postulated but not identified. Here we present evidence that the 'receptor for advanced glycation end products' (RAGE) is such a receptor, and that it mediates effects of the peptide on neurons and microglia. Increased expressing of RAGE in Alzheimer's disease brain indicates that it is relevant to the pathogenesis of neuronal dysfunction and death.",
"title": ""
},
{
"docid": "f91a9214409df84c4a53c92b2a14bbe3",
"text": "OBJECTIVE\nwe performed the first systematic review with meta-analyses of the existing studies that examined mindfulness-based Baduanjin exercise for its therapeutic effects for individuals with musculoskeletal pain or insomnia.\n\n\nMETHODS\nBoth English- (PubMed, Web of Science, Elsevier, and Google Scholar) and Chinese-language (CNKI and Wangfang) electronic databases were used to search relevant articles. We used a modified PEDro scale to evaluate risk of bias across studies selected. All eligible RCTS were considered for meta-analysis. The standardized mean difference was calculated for the pooled effects to determine the magnitude of the Baduanjin intervention effect. For the moderator analysis, we performed subgroup meta-analysis for categorical variables and meta-regression for continuous variables.\n\n\nRESULTS\nThe aggregated result has shown a significant benefit in favour of Baduanjin at alleviating musculoskeletal pain (SMD = -0.88, 95% CI -1.02 to -0.74, p < 0.001, I² = 10.29%) and improving overall sleep quality (SMD = -0.48, 95% CI -0.95 to -0.01, p = 004, I² = 84.42%).\n\n\nCONCLUSIONS\nMindfulness-based Baduanjin exercise may be effective for alleviating musculoskeletal pain and improving overall sleep quality in people with chronic illness. Large, well-designed RCTs are needed to confirm these findings.",
"title": ""
},
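The pooled SMD figures quoted in the passage above come from standard inverse-variance meta-analysis; the sketch below shows a DerSimonian-Laird random-effects pooling of study-level SMDs in Python. The study effects and variances are placeholders, not the review's extracted data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study-level SMDs with a DerSimonian-Laird random-effects model."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)         # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Toy study-level effects (SMD) and variances, not the review's data.
smd, ci, i2 = dersimonian_laird([-0.9, -0.7, -1.0, -0.8], [0.04, 0.06, 0.05, 0.03])
print(f"pooled SMD={smd:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), I2={i2:.1f}%")
```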
{
"docid": "467d48d121ee8b9f792dbfbc7e281cc1",
"text": "This paper focuses on improving face recognition performance with a new signature combining implicit facial features with explicit soft facial attributes. This signature has two components: the existing patch-based features and the soft facial attributes. A deep convolutional neural network adapted from state-of-the-art networks is used to learn the soft facial attributes. Then, a signature matcher is introduced that merges the contributions of both patch-based features and the facial attributes. In this matcher, the matching scores computed from patch-based features and the facial attributes are combined to obtain a final matching score. The matcher is also extended so that different weights are assigned to different facial attributes. The proposed signature and matcher have been evaluated with the UR2D system on the UHDB31 and IJB-A datasets. The experimental results indicate that the proposed signature achieve better performance than using only patch-based features. The Rank-1 accuracy is improved significantly by 4% and 0.37% on the two datasets when compared with the UR2D system.",
"title": ""
},
{
"docid": "499cdc46e2d6e35ab27d1878b70c2be1",
"text": "Image splicing is a simple process that crops and pastes regions from the same or separate sources. It is a fundamental step used in digital photomontage, which refers to a paste-up produced by sticking together images using digital tools such as Photoshop. Examples of photomontages can be seen in several infamous news reporting cases involving the use of faked images. Searching for technical solutions for image authentication, researchers have recently started development of new techniques aiming at blind passive detection of image splicing. However, like most other research communities dealing with data processing, we need an open data set with diverse content and realistic splicing conditions in order to expedite the progresses and facilitate collaborative studies. In this report, we describe with details a data set of 1845 image blocks with a fixed size of 128 pixels x 128 pixels. The image blocks are extracted from images in the CalPhotos collection [CalPhotos'00], with a small number of additional images captured by digital cameras. The data set include about the same number of authentic and spliced image blocks, which are further divided into different subcategories (smooth vs. textured, arbitrary object boundary vs. straight boundary).",
"title": ""
},
{
"docid": "f37fb443aaa8194ee9fa8ba496e6772a",
"text": "Current Light Field (LF) cameras offer fixed resolution in space, time and angle which is decided a-priori and is independent of the scene. These cameras either trade-off spatial resolution to capture single-shot LF or tradeoff temporal resolution by assuming a static scene to capture high spatial resolution LF. Thus, capturing high spatial resolution LF video for dynamic scenes remains an open and challenging problem. We present the concept, design and implementation of a LF video camera that allows capturing high resolution LF video. The spatial, angular and temporal resolution are not fixed a-priori and we exploit the scene-specific redundancy in space, time and angle. Our reconstruction is motion-aware and offers a continuum of resolution tradeoff with increasing motion in the scene. The key idea is (a) to design efficient multiplexing matrices that allow resolution tradeoffs, (b) use dictionary learning and sparse representations for robust reconstruction, and (c) perform local motion-aware adaptive reconstruction. We perform extensive analysis and characterize the performance of our motion-aware reconstruction algorithm. We show realistic simulations using a graphics simulator as well as real results using a LCoS based programmable camera. We demonstrate novel results such as high resolution digital refocusing for dynamic moving objects.",
"title": ""
},
{
"docid": "ca3c3dec83821747896d44261ba2f9ad",
"text": "Building discriminative representations for 3D data has been an important task in computer graphics and computer vision research. Convolutional Neural Networks (CNNs) have shown to operate on 2D images with great success for a variety of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible and promising next step. Unfortunately, the computational complexity of 3D CNNs grows cubically with respect to voxel resolution. Moreover, since most 3D geometry representations are boundary based, occupied regions do not increase proportionately with the size of the discretization, resulting in wasted computation. In this work, we represent 3D spaces as volumetric fields, and propose a novel design that employs field probing filters to efficiently extract features from them. Each field probing filter is a set of probing points — sensors that perceive the space. Our learning algorithm optimizes not only the weights associated with the probing points, but also their locations, which deforms the shape of the probing filters and adaptively distributes them in 3D space. The optimized probing points sense the 3D space “intelligently”, rather than operating blindly over the entire domain. We show that field probing is significantly more efficient than 3DCNNs, while providing state-of-the-art performance, on classification tasks for 3D object recognition benchmark datasets.",
"title": ""
},
{
"docid": "0867eb365ca19f664bd265a9adaa44e5",
"text": "We present VI-DSO, a novel approach for visual-inertial odometry, which jointly estimates camera poses and sparse scene geometry by minimizing photometric and IMU measurement errors in a combined energy functional. The visual part of the system performs a bundle-adjustment like optimization on a sparse set of points, but unlike key-point based systems it directly minimizes a photometric error. This makes it possible for the system to track not only corners, but any pixels with large enough intensity gradients. IMU information is accumulated between several frames using measurement preintegration, and is inserted into the optimization as an additional constraint between keyframes. We explicitly include scale and gravity direction into our model and jointly optimize them together with other variables such as poses. As the scale is often not immediately observable using IMU data this allows us to initialize our visual-inertial system with an arbitrary scale instead of having to delay the initialization until everything is observable. We perform partial marginalization of old variables so that updates can be computed in a reasonable time. In order to keep the system consistent we propose a novel strategy which we call “dynamic marginalization”. This technique allows us to use partial marginalization even in cases where the initial scale estimate is far from the optimum. We evaluate our method on the challenging EuRoC dataset, showing that VI-DSO outperforms the state of the art.",
"title": ""
},
{
"docid": "be3466a43f12f66b222ffdc60f71c6a0",
"text": "Clothing with conductive textiles for health care applications has in the last decade been of an upcoming research interest. An advantage with the technique is its suitability in distributed and home health care. The present study investigates the electrical properties of conductive yarns and textile electrodes in contact with human skin, thus representing a real ECG-registration situation. The yarn measurements showed a pure resistive characteristic proportional to the length. The electrodes made of pure stainless steel (electrode A) and 20% stainless steel/80% polyester (electrode B) showed acceptable stability of electrode potentials, the stability of A was better than that of B. The electrode made of silver plated copper (electrode C) was less stable. The electrode impedance was lower for electrodes A and B than that for electrode C. From an electrical properties point of view we recommend to use electrodes of type A to be used in intelligent textile medical applications.",
"title": ""
},
{
"docid": "bceaded3710f8d6501aa1118d191aaaa",
"text": "The human gut harbors a large and complex community of beneficial microbes that remain stable over long periods. This stability is considered critical for good health but is poorly understood. Here we develop a body of ecological theory to help us understand microbiome stability. Although cooperating networks of microbes can be efficient, we find that they are often unstable. Counterintuitively, this finding indicates that hosts can benefit from microbial competition when this competition dampens cooperative networks and increases stability. More generally, stability is promoted by limiting positive feedbacks and weakening ecological interactions. We have analyzed host mechanisms for maintaining stability—including immune suppression, spatial structuring, and feeding of community members—and support our key predictions with recent data.",
"title": ""
},
{
"docid": "dd270ffa800d633a7a354180eb3d426c",
"text": "I have taken an experimental approach to this question. Freely voluntary acts are pre ceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility. But the deeper question still remains: Are freely voluntary acts subject to macro deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites.",
"title": ""
},
{
"docid": "ceda2e7fb5881c6b2080f09c226d99ba",
"text": "Fraud detection has become an important issue to be explored. Fraud detection involves identifying fraud as quickly as possible once it has been perpetrated. Fraud is often a dynamic and challenging problem in Credit card lending business. Credit card fraud can be broadly classified into behavioral and application fraud, with behavioral fraud being the more prominent of the two. Supervised Modeling/Segmentation techniques are commonly used in fraud",
"title": ""
},
{
"docid": "335fbbf27b34e3937c2f6772b3227d51",
"text": "WordNet has facilitated important research in natural language processing but its usefulness is somewhat limited by its relatively small lexical coverage. The Paraphrase Database (PPDB) covers 650 times more words, but lacks the semantic structure of WordNet that would make it more directly useful for downstream tasks. We present a method for mapping words from PPDB to WordNet synsets with 89% accuracy. The mapping also lays important groundwork for incorporating WordNet’s relations into PPDB so as to increase its utility for semantic reasoning in applications.",
"title": ""
},
{
"docid": "c20393a25f4e53be6df2bd49abf6635f",
"text": "This paper overviews NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta analyses.",
"title": ""
},
{
"docid": "b8c683c194792a399f9c12fdf7e9f0cd",
"text": "The rise of Social Media services in the last years has created huge streams of information that can be very valuable in a variety of scenarios. What precisely these scenarios are and how the data streams can efficiently be analyzed for each scenario is still largely unclear at this point in time and has therefore created significant interest in industry and academia. In this paper, we describe a novel algorithm for geo-spatial event detection on Social Media streams. We monitor all posts on Twitter issued in a given geographic region and identify places that show a high amount of activity. In a second processing step, we analyze the resulting spatio-temporal clusters of posts with a Machine Learning component in order to detect whether they constitute real-world events or not. We show that this can be done with high precision and recall. The detected events are finally displayed to a user on a map, at the location where they happen and while they happen.",
"title": ""
},
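A minimal sketch of the two-step idea described in the passage above: spatio-temporal clustering of geo-tagged posts, followed by per-cluster features that a trained classifier would use to decide whether a cluster is a real-world event. DBSCAN, the coordinate/time scaling, and the synthetic burst of posts are assumptions chosen for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy geo-tagged posts: (latitude, longitude, minutes since midnight).
rng = np.random.default_rng(1)
background = np.column_stack([rng.uniform(52.30, 52.40, 200),
                              rng.uniform(4.80, 4.90, 200),
                              rng.uniform(0, 600, 200)])
burst = np.column_stack([rng.normal(52.35, 0.001, 40),      # localized burst
                         rng.normal(4.85, 0.001, 40),
                         rng.normal(300, 5, 40)])
posts = np.vstack([background, burst])

# Step 1: spatio-temporal clustering. Time is rescaled so that a few minutes
# count roughly as much as a small spatial offset; eps and min_samples are
# tuning choices, not values from the paper.
scaled = posts * np.array([1.0, 1.0, 0.0001])
labels = DBSCAN(eps=0.003, min_samples=10).fit_predict(scaled)

# Step 2: each dense cluster becomes a candidate event; a classifier trained
# on cluster descriptors (size, spatial spread, burstiness, text features)
# would then decide whether it is a real-world event.
for c in sorted(set(labels) - {-1}):
    pts = posts[labels == c]
    print(f"cluster {c}: {len(pts)} posts, "
          f"centre=({pts[:, 0].mean():.4f}, {pts[:, 1].mean():.4f})")
```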
{
"docid": "00dfecba30f7c6e3a1f9f98e53e58528",
"text": "In this study a novel electronic health information system that integrates the functions of medical recording, reporting and data utilization is presented. The goal of this application is to provide synchronized operation and auto-generated reports to improve the efficiency and accuracy for physicians working at regional clinics and health centers in China, where paper record is the dominant way for diagnosis and medicine prescription. The database design offers high efficiency for operations such as data mining on the medical data collected by the system during diagnosis. The result of data mining can be applied on inventory planning, diagnosis assistance, clinical research and disease control and prevention. Compared with electronic health and medical information system used in urban hospitals, the system presented here is light-weighted, with simpler database structure, self-explanatory webpage display, and tag-oriented navigations. These features makes the system more accessible and affordable for regional clinics and health centers such as university clinics and community hospitals, which have a much more lagging development with limited funding and resources than urban hospitals while they are playing an increasingly important role in the health care system in China.",
"title": ""
},
{
"docid": "d0f14357e0d675c99d4eaa1150b9c55e",
"text": "Purpose – The purpose of this research is to investigate if, and in that case, how and what the egovernment field can learn from user participation concepts and theories in general IS research. We aim to contribute with further understanding of the importance of citizen participation and involvement within the e-government research body of knowledge and when developing public eservices in practice. Design/Methodology/Approach – The analysis in the article is made from a comparative, qualitative case study of two e-government projects. Three analysis themes are induced from the literature review; practice of participation, incentives for participation, and organization of participation. These themes are guiding the comparative analysis of our data with a concurrent openness to interpretations from the field. Findings – The main results in this article are that the e-government field can get inspiration and learn from methods and approaches in traditional IS projects concerning user participation, but in egovernment we also need methods to handle the challenges that arise when designing public e-services for large, heterogeneous user groups. Citizen engagement cannot be seen as a separate challenge in egovernment, but rather as an integrated part of the process of organizing, managing, and performing egovernment projects. Our analysis themes of participation generated from literature; practice, incentives and organization can be used in order to highlight, analyze, and discuss main issues regarding the challenges of citizen participation within e-government. This is an important implication based on our study that contributes both to theory on and practice of e-government. Practical implications – Lessons to learn from this study concern that many e-government projects have a public e-service as one outcome and an internal e-administration system as another outcome. A dominating internal, agency perspective in such projects might imply that citizens as the user group of the e-service are only seen as passive receivers of the outcome – not as active participants in the development. By applying the analysis themes, proposed in this article, citizens as active participants can be thoroughly discussed when initiating (or evaluating) an e-government project. Originality/value – This article addresses challenges regarding citizen participation in e-government development projects. User participation is well-researched within the IS discipline, but the egovernment setting implies new challenges, that are not explored enough.",
"title": ""
},
{
"docid": "bedf1cc302c4ca05dc8371c29d396169",
"text": "We propose Mixcoin, a protocol to facilitate anonymous payments using the Bitcoin currency system. We build on the emergent phenomenon of currency mixes, adding an accountability mechanism to expose theft. Unlike other proposals to improve anonymity in Bitcoin, our scheme can be deployed immediately with no changes to Bitcoin itself. We demonstrate that incentives of mixes and clients can be aligned to ensure that rational mixes will not steal from clients. We contrast mixing for financial anonymity with better-studied communication mixes, demonstrating important and subtle new attacks.",
"title": ""
},
{
"docid": "749b380acf38c39ee3ae7a6576dd63af",
"text": "We present a new method for real-time physics-based simulation supporting many different types of hyperelastic materials. Previous methods such as Position Based or Projective Dynamics are fast, but support only limited selection of materials; even classical materials such as the Neo-Hookean elasticity are not supported. Recently, Xu et al. [2015] introduced new “splinebased materials” which can be easily controlled by artists to achieve desired animation effects. Simulation of these types of materials currently relies on Newton’s method, which is slow, even with only one iteration per timestep. In this paper, we show that Projective Dynamics can be interpreted as a quasi-Newton method. This insight enables very efficient simulation of a large class of hyperelastic materials, including the Neo-Hookean, splinebased materials, and others. The quasi-Newton interpretation also allows us to leverage ideas from numerical optimization. In particular, we show that our solver can be further accelerated using L-BFGS updates (Limitedmemory Broyden-Fletcher-Goldfarb-Shanno algorithm). Our final method is typically more than 10 times faster than one iteration of Newton’s method without compromising quality. In fact, our result is often more accurate than the result obtained with one iteration of Newton’s method. Our method is also easier to implement, implying reduced software development costs.",
"title": ""
},
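The L-BFGS update mentioned in the passage above is a generic numerical-optimization building block; below is a minimal sketch of its two-loop recursion applied to a toy quadratic energy standing in for a per-timestep simulation energy. The projective-dynamics energy, the paper's initial Hessian choice, and its line-search details are not reproduced here.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: approximate -H^{-1} grad from the most
    recent displacement/gradient-change pairs (s_k, y_k)."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if y_list:                                   # scaled-identity initial Hessian
        s, y = s_list[-1], y_list[-1]
        q *= s.dot(y) / y.dot(y)
    for a, rho, s, y in reversed(alphas):
        q += (a - rho * y.dot(q)) * s
    return -q

# Toy quadratic energy E(x) = 0.5 x^T A x - b^T x with gradient A x - b.
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 2.0, 3.0])
energy = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(3)
s_hist, y_hist = [], []
g = grad(x)
for _ in range(30):
    if np.linalg.norm(g) < 1e-10:
        break
    d = lbfgs_direction(g, s_hist[-5:], y_hist[-5:])
    step = 1.0                                   # crude backtracking line search
    while energy(x + step * d) > energy(x) and step > 1e-8:
        step *= 0.5
    x_new = x + step * d
    g_new = grad(x_new)
    s_hist.append(x_new - x)
    y_hist.append(g_new - g)
    x, g = x_new, g_new

print("solution:", x, " true:", np.linalg.solve(A, b))
```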
{
"docid": "e0eded1237c635af3c762f6bbe5d1b26",
"text": "Locating boundaries between coherent and/or repetitive segments of a time series is a challenging problem pervading many scientific domains. In this paper we propose an unsupervised method for boundary detection, combining three basic principles: novelty, homogeneity, and repetition. In particular, the method uses what we call structure features, a representation encapsulating both local and global properties of a time series. We demonstrate the usefulness of our approach in detecting music structure boundaries, a task that has received much attention in recent years and for which exist several benchmark datasets and publicly available annotations. We find our method to significantly outperform the best accuracies published so far. Importantly, our boundary approach is generic, thus being applicable to a wide range of time series beyond the music and audio domains.",
"title": ""
}
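The "novelty" principle mentioned in the passage above is classically implemented by sliding a checkerboard kernel along the diagonal of a self-similarity matrix; the sketch below shows that building block on synthetic features. It is not the authors' structure-feature method, and the kernel size and toy signal are illustrative choices.

```python
import numpy as np

def novelty_curve(features, kernel_size=16):
    """Checkerboard-kernel novelty over a self-similarity matrix (SSM).

    features: (n_frames, n_dims) array of per-frame descriptors.
    Returns an (n_frames,) curve whose peaks suggest segment boundaries.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    ssm = f @ f.T                                    # cosine self-similarity
    half = kernel_size // 2
    sign = np.sign(np.arange(-half, half) + 0.5)
    kernel = np.outer(sign, sign)                    # checkerboard kernel
    n = ssm.shape[0]
    curve = np.zeros(n)
    for i in range(half, n - half):
        window = ssm[i - half:i + half, i - half:i + half]
        curve[i] = np.sum(window * kernel)
    return curve

# Toy feature sequence with two homogeneous sections -> one expected boundary.
rng = np.random.default_rng(0)
part_a = rng.normal(0.0, 0.1, (60, 12)) + np.linspace(0, 1, 12)
part_b = rng.normal(0.0, 0.1, (60, 12)) + np.linspace(1, 0, 12)
feats = np.vstack([part_a, part_b])

nov = novelty_curve(feats)
print("strongest boundary near frame", int(np.argmax(nov)), "(true: 60)")
```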
] |
scidocsrr
|
e29a81ba63d8f3a44ee59bbb22c2d02a
|
Personalized Recommendations of Locally Interesting Venues to Tourists via Cross-Region Community Matching
|
[
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
}
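As a concrete illustration of the Markov chain Monte Carlo building blocks the passage above refers to, here is a minimal random-walk Metropolis sampler for a one-dimensional target density. The target, proposal width, and chain length are arbitrary illustrative choices, not tied to any particular application.

```python
import numpy as np

def metropolis(log_target, x0, steps=5000, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, scale), accept with
    probability min(1, target(x') / target(x))."""
    rng = np.random.default_rng(seed)
    samples = np.empty(steps)
    x, lp = x0, log_target(x0)
    for i in range(steps):
        cand = x + proposal_scale * rng.standard_normal()
        lp_cand = log_target(cand)
        if np.log(rng.uniform()) < lp_cand - lp:   # accept/reject step
            x, lp = cand, lp_cand
        samples[i] = x
    return samples

# Toy target: a standard normal density (true mean 0, variance 1).
log_target = lambda x: -0.5 * x * x
draws = metropolis(log_target, x0=3.0)
print("mean ~", draws[1000:].mean(), " variance ~", draws[1000:].var())
```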
] |
[
{
"docid": "f8a89a023629fa9bcb2c3566b6817b0c",
"text": "In this paper, we propose a robust on-the-fly estimator initialization algorithm to provide high-quality initial states for monocular visual-inertial systems (VINS). Due to the non-linearity of VINS, a poor initialization can severely impact the performance of either filtering-based or graph-based methods. Our approach starts with a vision-only structure from motion (SfM) to build the up-to-scale structure of camera poses and feature positions. By loosely aligning this structure with pre-integrated IMU measurements, our approach recovers the metric scale, velocity, gravity vector, and gyroscope bias, which are treated as initial values to bootstrap the nonlinear tightly-coupled optimization framework. We highlight that our approach can perform on-the-fly initialization in various scenarios without using any prior information about system states and movement. The performance of the proposed approach is verified through the public UAV dataset and real-time onboard experiment. We make our implementation open source, which is the initialization part integrated in the VINS-Mono1.",
"title": ""
},
{
"docid": "0257589dc59f1ddd4ec19a2450e3156f",
"text": "Drawing upon the literatures on beliefs about magical contagion and property transmission, we examined people's belief in a novel mechanism of human-to-human contagion, emotional residue. This is the lay belief that people's emotions leave traces in the physical environment, which can later influence others or be sensed by others. Studies 1-4 demonstrated that Indians are more likely than Americans to endorse a lay theory of emotions as substances that move in and out of the body, and to claim that they can sense emotional residue. However, when the belief in emotional residue is measured implicitly, both Indians and American believe to a similar extent that emotional residue influences the moods and behaviors of those who come into contact with it (Studies 5-7). Both Indians and Americans also believe that closer relationships and a larger number of people yield more detectable residue (Study 8). Finally, Study 9 demonstrated that beliefs about emotional residue can influence people's behaviors. Together, these finding suggest that emotional residue is likely to be an intuitive concept, one that people in different cultures acquire even without explicit instruction.",
"title": ""
},
{
"docid": "f6bf901eeb4af1e455381d0d01e2fd99",
"text": "Due to sharp depth transition, big holes may be found in the novel view that is synthesized by depth-image-based rendering (DIBR). A hole-filling method based on disparity map is proposed. One important aspect of the method is that the disparity map of destination image is used for hole-filling, instead of the depth image of reference image. Firstly, the big hole detection based on disparity map is conducted, and the start point and the end point of the hole are recorded. Then foreground pixels and background pixels are distinguished for hole-dilating according to disparity map, so that areas with matching errors can be determined and eliminated. In addition, parallaxes of pixels in the area with holes and matching errors are changed to new values. Finally, holes are filled with background pixels from reference image according to these new parallaxes. Experimental results show that the quality of the new view after hole-filling is quite well; and geometric distortions are avoided in destination image, in contrast to the virtual view generated by depth-smoothing methods and image inpainting methods. Moreover, this method is easy for hardware implementation.",
"title": ""
},
{
"docid": "7487f889eae6a32fc1afab23e54de9b8",
"text": "Although many researchers have investigated the use of different powertrain topologies, component sizes, and control strategies in fuel-cell vehicles, a detailed parametric study of the vehicle types must be conducted before a fair comparison of fuel-cell vehicle types can be performed. This paper compares the near-optimal configurations for three topologies of vehicles: fuel-cell-battery, fuel-cell-ultracapacitor, and fuel-cell-battery-ultracapacitor. The objective function includes performance, fuel economy, and powertrain cost. The vehicle models, including detailed dc/dc converter models, are programmed in Matlab/Simulink for the customized parametric study. A controller variable for each vehicle type is varied in the optimization.",
"title": ""
},
{
"docid": "fe4969041bf14a86b4e4a973f6fe362b",
"text": "We argue the case for abstract document structure as a separate descriptive level in the analysis and generation of written texts. The purpose of this representation is to mediate between the message of a text (i.e., its discourse structure) and its physical presentation (i.e., its organization into graphical constituents like sections, paragraphs, sentences, bulleted lists, figures, and footnotes). Abstract document structure can be seen as an extension of Nunberg's text-grammar it is also closely related to logical markup in languages like HTML and LaTEX. We show that by using this intermediate representation, several subtasks in language generation and language understanding can be defined more cleanly.",
"title": ""
},
{
"docid": "4c65b5cbd49eaaa88610d0b38d297c53",
"text": "Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically reverse engineer user interfaces and generate code from a single input image with over 77% of accuracy for three different platforms (i.e. iOS, Android and web-based technologies).",
"title": ""
},
{
"docid": "fc8c233aefc733b1187ca8aa4c3f5cf1",
"text": "A material model for the fracturing behavior for braided composites is developed and implemented in a material subroutine for use in the commercial explicit finite element code ABAQUS. The subroutine is based on the microplane model in which the constitutive behavior is defined not in terms of stress and strain tensors and their invariants but in terms of stress and strain vectors in the material mesostructure called the “microplanes.” This is a semi-multiscale model, which captures the interactions between inelastic phenomena such as cracking, splitting, and frictional slipping occurring on planes of various orientations though not the interactions at a distance. To avoid spurious mesh sensitivity due to softening, the crack band model is adopted. Its band width, related to the material characteristic length, serves as the localization limiter. It is shown that the model can realistically predict the orthotropic elastic constants and the strength limits. More importantly, the present model can also fit the tests of size effect on the strength of notched specimens and the post-peak behavior, which have been conducted for this purpose. When used in the ABAQUS software, the model gives a realistic picture of the axial crushing of a braided tube by a divergent plug. !DOI: 10.1115/1.4003102\"",
"title": ""
},
{
"docid": "3b601daf3064ac34ac8826cdacdc252f",
"text": "Smart contracts in Ethereum are executed by the Ethereum Virtual Machine (EVM). We defined EVM in Lem, a language that can be compiled for a few interactive theorem provers. We tested our definition against a standard test suite for Ethereum implementations. Using our definition, we proved some safety properties of Ethereum smart contracts in an interactive theorem prover Isabelle/HOL. To our knowledge, ours is the first formal EVM definition for smart contract verification that implements all instructions. Our definition can serve as a basis for further analysis and generation of Ethereum smart contracts.",
"title": ""
},
{
"docid": "6bc710fd6d11ff590a25ba44757f1da4",
"text": "Convolutional neural nets (CNNs) have demonstrated remarkable performance in recent history. Such approaches tend to work in a \"unidirectional\" bottom-up feed-forward fashion. However, practical experience and biological evidence tells us that feedback plays a crucial role, particularly for detailed spatial understanding tasks. This work explores \"bidirectional\" architectures that also reason with top-down feedback: neural units are influenced by both lower and higher-level units. We do so by treating units as rectified latent variables in a quadratic energy function, which can be seen as a hierarchical Rectified Gaussian model (RGs) [39]. We show that RGs can be optimized with a quadratic program (QP), that can in turn be optimized with a recurrent neural network (with rectified linear units). This allows RGs to be trained with GPU-optimized gradient descent. From a theoretical perspective, RGs help establish a connection between CNNs and hierarchical probabilistic models. From a practical perspective, RGs are well suited for detailed spatial tasks that can benefit from top-down reasoning. We illustrate them on the challenging task of keypoint localization under occlusions, where local bottom-up evidence may be misleading. We demonstrate state-of-the-art results on challenging benchmarks.",
"title": ""
},
{
"docid": "74686e9acab0a4d41c87cadd7da01889",
"text": "Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the community of biomedical engineering due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the count of a codeword appeared in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.",
"title": ""
},
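A minimal sketch of the bag-of-words time-series representation described in the passage above, assuming a one-dimensional signal: local segments are extracted with a sliding window, quantized against a k-means codebook, and counted into a histogram. Window length, codebook size, and the toy signals are illustrative parameters, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_segments(signal, win=32, hop=8):
    """Slide a window over a 1-D signal and z-normalize each local segment."""
    segs = np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, hop)])
    return (segs - segs.mean(axis=1, keepdims=True)) / (segs.std(axis=1, keepdims=True) + 1e-9)

def bow_histogram(signal, codebook):
    """Represent a signal as a normalized histogram of codeword counts."""
    words = codebook.predict(extract_segments(signal))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Toy "recordings": two classes with different dominant rhythms.
rng = np.random.default_rng(0)
t = np.arange(1000)
slow = [np.sin(2 * np.pi * t / 100) + 0.2 * rng.standard_normal(1000) for _ in range(5)]
fast = [np.sin(2 * np.pi * t / 25) + 0.2 * rng.standard_normal(1000) for _ in range(5)]

# Learn the codebook ("vocabulary") from segments of all training signals.
train_segments = np.vstack([extract_segments(s) for s in slow + fast])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(train_segments)

# Each recording becomes a fixed-length histogram usable by any classifier.
histograms = np.array([bow_histogram(s, codebook) for s in slow + fast])
print(histograms.shape)   # (10, 16)
```

The fixed-length histograms discard the temporal order of segments, mirroring the passage's point, but they can then be fed to any standard classifier.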
{
"docid": "f3a8fa7b4c6ac7a6218a0b8aa5a8f4b2",
"text": "Give us 5 minutes and we will show you the best book to read today. This is it, the uncertainty quantification theory implementation and applications that will be your best choice for better reading book. Your five times will not spend wasted by reading this website. You can take the book as a source to make better concept. Referring the books that can be situated with your needs is sometime difficult. But here, this is so easy. You can find the best thing of book that you can read.",
"title": ""
},
{
"docid": "86e4dc2e01d2415640c17a703ccafdd6",
"text": "Descriptive analysis of the magnitude and situation of road safety in general and road accidents in particular is important, but understanding of data quality, factors related with dangerous situations and various interesting patterns in data is of even greater importance. Under the umbrella of information architecture research for road safety in developing countries, the objective of this machine learning experimental research is to explore data quality issues, analyze trends and predict the role of road users on possible injury risks. The research employed TreeNet, Classification and Adaptive Regression Trees (CART), Random Forest (RF) and hybrid ensemble approach. To identify relevant patterns and illustrate the performance of the techniques for the road safety domain, road accident data collected from Addis Ababa Traffic Office is subject to several analyses. Empirical results illustrate that data quality is a major problem that needs architectural guideline and the prototype models could classify accidents with promising accuracy. In addition, an ensemble technique proves to be better in terms of predictive accuracy in the domain under study.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "309e020e38f4a9286cef5aaba33a78a5",
"text": "Brain-machine interface (BMI) systems convert neural signals from motor regions of the brain into control signals to guide prosthetic devices. The ultimate goal of BMIs is to improve the quality of life for people with paralysis by providing direct neural control of prosthetic arms or computer cursors. While considerable research over the past 15 years has led to compelling BMI demonstrations, there remain several challenges to achieving clinically viable BMI systems. In this review, we focus on the challenge of increasing BMI performance and robustness. We review and highlight key aspects of intracortical BMI decoder design, which is central to the conversion of neural signals into prosthetic control signals, and discuss emerging opportunities to improve intracortical BMI decoders. This is one of the primary research opportunities where information systems engineering can directly impact the future success of BMIs.",
"title": ""
},
{
"docid": "a56e4d881081f9d88c9ca2f40f595c01",
"text": "We describe a framework for building abstraction hierarchies whereby an agent alternates skill- and representation-construction phases to construct a sequence of increasingly abstract Markov decision processes. Our formulation builds on recent results showing that the appropriate abstract representation of a problem is specified by the agent's skills. We describe how such a hierarchy can be used for fast planning, and illustrate the construction of an appropriate hierarchy for the Taxi domain.",
"title": ""
},
{
"docid": "16e9a7b1384cde7c0bd3f8407c0d15b5",
"text": "The e-commerce field has developed to the point that more and more hotel companies provide online booking services to travelers as an integral part of their business model. Increasing numbers of hotel companies now provide such services as an integral part of their business model and their guests' experiences with their hotel. Some third-party services allow customers to add comments on each hotel at the affiliated website. The current search tool features at hotel websites are based on fixed properties, allowing companies to take advantage of the huge number of available customer reviews to provide relevant information to consumers considering new services. The present research focuses on the possibility of linking customer reviews with search tools for online hotel booking and dividing the customers into categories based on their travel aims. This shall be accomplished by: 1) extracting customer reviews using opinion mining and finding hotel features that are frequently mentioned in the reviews, and 2) then analyzing those features to achieve the goal of enhancing booking processes by adding new characteristics, based on customer preferences. This research should improve online hotel booking by building a customized tool that utilizes available customer reviews at the Agoda website and matches them with users' preferences based on survey results.",
"title": ""
},
{
"docid": "e3292a9df5acbae20bcc8f8fb3d21e91",
"text": "Nuclear pore complexes (NPCs) form aqueous conduits in the nuclear envelope and gate the diffusion of large proteins between the cytoplasm and nucleoplasm. NPC proteins (nucleoporins) that contain phenylalanine-glycine motifs in filamentous, natively unfolded domains (FG domains) line the diffusion conduit of the NPC, but their role in the size-selective barrier is unclear. We show that deletion of individual FG domains in yeast relaxes the NPC permeability barrier. At the molecular level, the FG domains of five nucleoporins anchored at the NPC center form a cohesive meshwork of filaments through hydrophobic interactions, which involve phenylalanines in FG motifs and are dispersed by aliphatic alcohols. In contrast, the FG domains of four peripherally anchored nucleoporins are generally noncohesive. The results support a two-gate model of NPC architecture featuring a central diffusion gate formed by a meshwork of cohesive FG nucleoporin filaments and a peripheral gate formed by repulsive FG nucleoporin filaments.",
"title": ""
},
{
"docid": "9c1267f42c32f853db912a08eddb8972",
"text": "IBM's Physical Analytics Integrated Data Repository and Services (PAIRS) is a geospatial Big Data service. PAIRS contains a massive amount of curated geospatial (or more precisely spatio-temporal) data from a large number of public and private data resources, and also supports user contributed data layers. PAIRS offers an easy-to-use platform for both rapid assembly and retrieval of geospatial datasets or performing complex analytics, lowering time-to-discovery significantly by reducing the data curation and management burden. In this paper, we review recent progress with PAIRS and showcase a few exemplary analytical applications which the authors are able to build with relative ease leveraging this technology.",
"title": ""
},
{
"docid": "b9fe96df144a7c10ce7496f6c35def92",
"text": "Because of the serious effects of pollution on water supply much closer attention has been paid to water quality than to other aspects of river integrity. However, channel form and water flow are relevant components of river health, and recent evidences show that their impairment threatens the services derived from them. In this article, we review the literature on the effects of common hydromorphological impacts (channel modification and flow modification) on the functioning of river ecosystems. There are evidences that even light hydromorphological impacts can have deep effects on ecosystem functioning, and that different functional variables differ in their responses. Three criteria (relevance, scale and sensitivity) in the selection of functional variables are suggested as a guide for the river scientists and managers to assess the ecological impacts of hydromorphological modifications.",
"title": ""
},
{
"docid": "9cbe52f8a135310d5da850c51b0a7d08",
"text": "Training robots for operation in the real world is a complex, time consuming and potentially expensive task. Despite significant success of reinforcement learning in games and simulations, research in real robot applications has not been able to match similar progress. While sample complexity can be reduced by training policies in simulation, such policies can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach – supplemental to fine tuning on the real robot – to further benefit from parallel access to a simulator during training and reduce sample requirements on the real robot. The developed approach harnesses auxiliary rewards to guide the exploration for the real world agent based on the proficiency of the agent in simulation and vice versa. In this context, we demonstrate empirically that the reciprocal alignment for both agents provides further benefit as the agent in simulation can optimize its behaviour for states commonly visited by the real-world agent.",
"title": ""
}
] |
scidocsrr
|
eb3386cad47e66397158ddc703f37c99
|
Trust assessment and decision-making in dynamic multi-agent systems
|
[
{
"docid": "c1c3b9393dd375b241f69f3f3cbf5acd",
"text": "The purpose of trust and reputation systems is to strengthen the quality of markets and communities by providing an incentive for good behaviour and quality services, and by sanctioning bad behaviour and low quality services. However, trust and reputation systems will only be able to produce this effect when they are sufficiently robust against strategic manipulation or direct attacks. Currently, robustness analysis of TRSs is mostly done through simple simulated scenarios implemented by the TRS designers themselves, and this can not be considered as reliable evidence for how these systems would perform in a realistic environment. In order to set robustness requirements it is important to know how important robustness really is in a particular community or market. This paper discusses research challenges for trust and reputation systems, and proposes a research agenda for developing sound and reliable robustness principles and mechanisms for trust and reputation systems.",
"title": ""
}
] |
[
{
"docid": "b37db75dcd62cc56977d1a28a81be33e",
"text": "In this article we report on a new digital interactive self-report method for the measurement of human affect. The AffectButton (Broekens & Brinkman, 2009) is a button that enables users to provide affective feedback in terms of values on the well-known three affective dimensions of Pleasure (Valence), Arousal and Dominance. The AffectButton is an interface component that functions and looks like a medium-sized button. The button presents one dynamically changing iconic facial expression that changes based on the coordinates of the user’s pointer in the button. To give affective feedback the user selects the most appropriate expression by clicking the button, effectively enabling 1-click affective self-report on 3 affective dimensions. Here we analyze 5 previously published studies, and 3 novel large-scale studies (n=325, n=202, n=128). Our results show the reliability, validity, and usability of the button for acquiring three types of affective feedback in various domains. The tested domains are holiday preferences, real-time music annotation, emotion words, and textual situation descriptions (ANET). The types of affective feedback tested are preferences, affect attribution to the previously mentioned stimuli, and self-reported mood. All of the subjects tested were Dutch and aged between 15 and 56 years. We end this article with a discussion of the limitations of the AffectButton and of its relevance to areas including recommender systems, preference elicitation, social computing, online surveys, coaching and tutoring, experimental psychology and psychometrics, content annotation, and game consoles.",
"title": ""
},
{
"docid": "cc3b36d8026396a7a931f07ef9d3bcfb",
"text": "Planning an itinerary before traveling to a city is one of the most important travel preparation activities. In this paper, we propose a novel framework called TripPlanner, leveraging a combination of location-based social network (i.e., LBSN) and taxi GPS digital footprints to achieve personalized, interactive, and traffic-aware trip planning. First, we construct a dynamic point-of-interest network model by extracting relevant information from crowdsourced LBSN and taxi GPS traces. Then, we propose a two-phase approach for personalized trip planning. In the route search phase, TripPlanner works interactively with users to generate candidate routes with specified venues. In the route augmentation phase, TripPlanner applies heuristic algorithms to add user's preferred venues iteratively to the candidate routes, with the objective of maximizing the route score while satisfying both the venue visiting time and total travel time constraints. To validate the efficiency and effectiveness of the proposed approach, extensive empirical studies were performed on two real-world data sets from the city of San Francisco, which contain more than 391 900 passenger delivery trips generated by 536 taxis in a month and 110 214 check-ins left by 15 680 Foursquare users in six months.",
"title": ""
},
{
"docid": "d75958dac28d9d8d8c0e6a6269c204ec",
"text": "To ensure more effectiveness in the learning process in educational institutions, categorization of students is a very interesting method to enhance student's learning capabilities by identifying the factors that affect their performance and use their categories to design targeted inventions for improving their quality. Many research works have been conducted on student performances, to improve their grades and to stop them from dropping out from school by using a data driven approach [1] [2]. In this paper, we have proposed a new model to categorize students into 3 categories to determine their learning capabilities and to help them to improve their studying techniques. We have chosen the state of the art of machine learning approach to classify student's nature of study by selecting prominent features of their activity in their academic field. We have chosen a data driven approach where key factors that determines the base of student and classify them into high, medium and low ranks. This process generates a system where we can clearly identify the crucial factors for which they are categorized. Manual construction of student labels is a difficult approach. Therefore, we have come up with a student categorization model on the basis of selected features which are determined by the preprocessing of Dataset and implementation of Random Forest Importance; Chi2 algorithm; and Artificial Neural Network algorithm. For the research we have used Python's Machine Learning libraries: Scikit-Learn [3]. For Deep Learning paradigm we have used Tensor-Flow, Keras. For data processing Pandas library and Matplotlib and Pyplot has been used for graph visualization purpose.",
"title": ""
},
{
"docid": "8cd505a913baad02c11883ecb4b3b54f",
"text": "During the last decade, studies have shown the benefits of using clinical guidelines in the practice of medicine. Although the importance of these guidelines is widely recognized, health care organizations typically pay more attention to guideline development than to guideline implementation for routine use in daily care. However, studies have shown that clinicians are often not familiar with written guidelines and do not apply them appropriately during the actual care process. Implementing guidelines in computer-based decision support systems promises to improve the acceptance and application of guidelines in daily practice because the actions and observations of health care workers are monitored and advice is generated whenever a guideline is not followed. Such implementations are increasingly applied in diverse areas such as policy development, utilization management, education, clinical trials, and workflow facilitation. Many parties are developing computer-based guidelines as well as decision support systems that incorporate these guidelines. This paper reviews generic approaches for developing and implementing computer-based guidelines that facilitate decision support. It addresses guideline representation, acquisition, verification and execution aspects. The paper describes five approaches (the Arden Syntax, GuideLine Interchange Format (GLIF), PROforma, Asbru and EON), after the approaches are compared and discussed.",
"title": ""
},
{
"docid": "3427740a87691629bd6cf97792089f62",
"text": "Maintainers face the daunting task of wading through a collection of both new and old revisions, trying to ferret out revisions which warrant personal inspection. One can rank revisions by size/lines of code (LOC), but often, due to the distribution of the size of changes, revisions will be of similar size. If we can't rank revisions by LOC perhaps we can rank by Halstead's and McCabe's complexity metrics? However, these metrics are problematic when applied to code fragments (revisions) written in multiple languages: special parsers are required which may not support the language or dialect used; analysis tools may not understand code fragments. We propose using the statistical moments of indentation as a lightweight, language independent, revision/diff friendly metric which actually proxies classical complexity metrics. We have extensively evaluated our approach against the entire CVS histories of the 278 of the most popular and most active SourceForge projects. We found that our results are linearly correlated and rank-correlated with traditional measures of complexity, suggesting that measuring indentation is a cheap and accurate proxy for code complexity of revisions. Thus ranking revisions by the standard deviation and summation of indentation will be very similar to ranking revisions by complexity.",
"title": ""
},
{
"docid": "25b0d0bcc17f2f2660d845a5e3d307b4",
"text": "It is a clear fact that nowadays knowledge is growing faster and with the spreading of information and communication technology, the dream of network learning has become a reality, at least technically, and now a vast amount of spontaneous knowledge exchange is possible. Younger and older learners need to generate new ideas and new products that are to be innovative. In this context, this study aims to explore the nature of Connectivism (Siemens, 2004) using available literature as a traditional qualitative method. The second issue is the advantages and disadvantages of Connectivism as it is concieved by the educationalists. For this, a focus group discussion was used to obtain data. The data obtained formed the following categories: Shortness of traditional theories, the tools of Connectivism, digital literacy, flexible learning time and ecenomc competetion, learning to learn, media psychology, need for expertise, dependence on electricity and available sources. Since half of what is known today was not known 10 years ago, there should be more researches about the use, benefits and drawbacks of Connectivism in the cotext of formal and informal lerning.",
"title": ""
},
{
"docid": "73bc78cb91ae0f5ef3261845f1e0aa92",
"text": "There has been a lot of recent interest in designing neural network models to estimate a distribution from a set of examples. We introduce a simple modification for autoencoder neural networks that yields powerful generative models. Our method masks the autoencoder’s parameters to respect autoregressive constraints: each input is reconstructed only from previous inputs in a given ordering. Constrained this way, the autoencoder outputs can be interpreted as a set of conditional probabilities, and their product, the full joint probability. We can also train a single network that can decompose the joint probability in multiple different orderings. Our simple framework can be applied to multiple architectures, including deep ones. Vectorized implementations, such as on GPUs, are simple and fast. Experiments demonstrate that this approach is competitive with stateof-the-art tractable distribution estimators. At test time, the method is significantly faster and scales better than other autoregressive estimators.",
"title": ""
},
{
"docid": "02c904c320db3a6e0fc9310f077f5d08",
"text": "Rejuvenative procedures of the face are increasing in numbers, and a plethora of different therapeutic options are available today. Every procedure should aim for the patient's safety first and then for natural and long-lasting results. The face is one of the most complex regions in the human body and research continuously reveals new insights into the complex interplay of the different participating structures. Bone, ligaments, muscles, fat, and skin are the key players in the layered arrangement of the face.Aging occurs in all involved facial structures but the onset and the speed of age-related changes differ between each specific structure, between each individual, and between different ethnic groups. Therefore, knowledge of age-related anatomy is crucial for a physician's work when trying to restore a youthful face.This review focuses on the current understanding of the anatomy of the human face and tries to elucidate the morphological changes during aging of bone, ligaments, muscles, and fat, and their role in rejuvenative procedures.",
"title": ""
},
{
"docid": "abaf3d722acb6a641a481cb5324bc765",
"text": "Numerous studies have demonstrated a strong connection between the experience of stigma and the well-being of the stigmatized. But in the area of mental illness there has been controversy surrounding the magnitude and duration of the effects of labeling and stigma. One of the arguments that has been used to downplay the importance of these factors is the substantial body of evidence suggesting that labeling leads to positive effects through mental health treatment. However, as Rosenfield (1997) points out, labeling can simultaneously induce both positive consequences through treatment and negative consequences through stigma. In this study we test whether stigma has enduring effects on well-being by interviewing 84 men with dual diagnoses of mental disorder and substance abuse at two points in time--at entry into treatment, when they were addicted to drugs and had many psychiatric symptoms and then again after a year of treatment, when they were far less symptomatic and largely drug- and alcohol-free. We found a relatively strong and enduring effect of stigma on well-being. This finding indicates that stigma continues to complicate the lives of the stigmatized even as treatment improves their symptoms and functioning. It follows that if health professionals want to maximize the well-being of the people they treat, they must address stigma as a separate and important factor in its own right.",
"title": ""
},
{
"docid": "c4c686a3838088d890dd3dee1fdc19da",
"text": "Agile programming involves continually evolving requirements along with a possible change in their business value and an uncertainty in their time of development. This leads to the difficulty in adapting the release plans according to the response of the environment at each iteration step. This paper shows how a machine learning approach can support the release planning process in an agile environment. The objective is to adapt the release plans according to the results of the previous iterations in the present environment . Reinforcement learning technique has been used to learn the release planning process in an environment of various constraints and multiple objectives. The technique has been applied to a case study to show the utility of the method. The simulation results show that the reinforcement technique can be easily integrated into the release planning process. The teams can learn from the previous iterations and incorporate the learning into the release plans",
"title": ""
},
{
"docid": "5f6670c7e05b2e96175ba51a5259e7a2",
"text": "The development of the Measure of Job Satisfaction (MJS) for use in a longitudinal study of the morale of community nurses in four trusts is described. The review of previous studies focuses on the use of principal component analysis or factor analysis in the development of measures. The MJS was developed from a bank of items culled from the literature and from discussions with key informants. It was mailed to a one in three sample of 723 members of the community nursing forums of the Royal College of Nursing. A 72% response rate was obtained from those eligible for inclusion. Principal component analysis with varimax rotation led to the identification of five dimensions of job satisfaction; Personal Satisfaction, Satisfaction with Workload, Satisfaction with Professional Support, Satisfaction with Pay and Prospects and Satisfaction with Training. These factors form the basis of five subscales of satisfaction which summate to give an Overall Job Satisfaction score. Internal consistency, test-retest reliability, concurrent and discriminatory validity were assessed and were found to be satisfactory. The factor structure was replicated using data obtained from the first three of the community trusts involved in the main study. The limitations of the study and issues which require further exploration are identified and discussed.",
"title": ""
},
{
"docid": "ff002c483d22b4d961bbd2f1a18231fd",
"text": "Dogs can be grouped into two distinct types of breed based on the predisposition to chondrodystrophy, namely, non-chondrodystrophic (NCD) and chondrodystrophic (CD). In addition to a different process of endochondral ossification, NCD and CD breeds have different characteristics of intravertebral disc (IVD) degeneration and IVD degenerative diseases. The anatomy, physiology, histopathology, and biochemical and biomechanical characteristics of the healthy and degenerated IVD are discussed in the first part of this two-part review. This second part describes the similarities and differences in the histopathological and biochemical characteristics of IVD degeneration in CD and NCD canine breeds and discusses relevant aetiological factors of IVD degeneration.",
"title": ""
},
{
"docid": "85f0f820bfb8ed24a51d604f89ebd7d0",
"text": "“Input uncertainty” refers to the (often unmeasured) effect of not knowing the true, correct distributions of the basic stochastic processes that drive the simulation. These include, for instance, interarrival-time and service-time distributions in queueing models; bed-occupancy distributions in health care models; distributions for the values of underlying assets in financial models; and time-to-failure and time-to-repair distributions in reliability models. When the input distributions are obtained by fitting to observed real-world data, then it is possible to quantify the impact of input uncertainty on the output results. In this tutorial we carefully define input uncertainty, describe various proposals for measuring it, contrast input uncertainty with input sensitivity, and provide and illustrate a practical approach for quantifying overall input uncertainty and the relative contribution of each input model to overall input uncertainty.",
"title": ""
},
{
"docid": "3e80fb154cb594dc15f5318b774cf0c3",
"text": "Progressive multifocal leukoencephalopathy (PML) is a rare, subacute, demyelinating disease of the central nervous system caused by JC virus. Studies of PML from HIV Clade C prevalent countries are scarce. We sought to study the clinical, neuroimaging, and pathological features of PML in HIV Clade C patients from India. This is a prospective cum retrospective study, conducted in a tertiary care Neurological referral center in India from Jan 2001 to May 2012. Diagnosis was considered “definite” (confirmed by histopathology or JCV PCR in CSF) or “probable” (confirmed by MRI brain). Fifty-five patients of PML were diagnosed between January 2001 and May 2012. Complete data was available in 38 patients [mean age 39 ± 8.9 years; duration of illness—82.1 ± 74.7 days). PML was prevalent in 2.8 % of the HIV cohort seen in our Institute. Hemiparesis was the commonest symptom (44.7 %), followed by ataxia (36.8 %). Definitive diagnosis was possible in 20 cases. Eighteen remained “probable” wherein MRI revealed multifocal, symmetric lesions, hypointense on T1, and hyperintense on T2/FLAIR. Stereotactic biopsy (n = 11) revealed demyelination, enlarged oligodendrocytes with intranuclear inclusions and astrocytosis. Immunohistochemistry revelaed the presence of JC viral antigen within oligodendroglial nuclei and astrocytic cytoplasm. No differences in clinical, radiological, or pathological features were evident from PML associated with HIV Clade B. Clinical suspicion of PML was entertained in only half of the patients. Hence, a high index of suspicion is essential for diagnosis. There are no significant differences between clinical, radiological, and pathological picture of PML between Indian and Western countries.",
"title": ""
},
{
"docid": "43f2dcf2f2260ff140e20380d265105b",
"text": "As ontologies are the backbone of the Semantic Web, they attract much attention from researchers and engineers in many domains. This results in an increasing number of ontologies and semantic web applications. The number and complexity of such ontologies makes it hard for developers of ontologies and tools to decide which ontologies to use and reuse. To simplify the problem, a modularization algorithm can be used to partition ontologies into sets of modules. In order to evaluate the quality of modularization, we propose a new evaluation metric that quantifies the goodness of ontology modularization. In particular, we investigate the ontology module homogeneity, which assesses module cohesion, and the ontology module heterogeneity, which appraises module coupling. The experimental results demonstrate that the proposed metric is effective.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "5edc470fab2cea0666efcdbba176a01c",
"text": "We exploit the versatile framework of Riemannian optimization on quotient manifolds to develop R3MC, a nonlinear conjugate-gradient method for low-rank matrix completion. The underlying search space of fixed-rank matrices is endowed with a novel Riemannian metric that is tailored to the least-squares cost. Numerical comparisons suggest that R3MC robustly outperforms state-of-the-art algorithms across different problem instances, especially those that combine scarcely sampled and ill-conditioned data.",
"title": ""
},
{
"docid": "5bb390a0c9e95e0691ac4ba07b5eeb9d",
"text": "Clearing the clouds away from the true potential and obstacles posed by this computing capability.",
"title": ""
},
{
"docid": "ed05e897103e361ace9435d2f5e7756e",
"text": "Clinical disease caused by Empedobacter brevis (E. brevis) is very rare. We report the first case of E. brevis bacteremia in a patient with HIV and review the current literature. A 69-year-old man with human immunodeficiency virus (HIV) and CD4 count of 319 presented with chief complaints of black tarry stools, nausea and vomiting for 2 days. Physical exam was significant for abdominal pain on palpation with no rebound or guarding. His total leukocyte count was 32,000 cells/μL with 82% neutrophils and 9% bands. Emergent colonoscopy and endoscopic esophagogastroduodenoscopy showed esophageal candidiasis, a nonbleeding gastric ulcer, and diverticulosis. Blood cultures drawn on days 1, 2, and 3 of hospitalization grew E. brevis. Patient improved with intravenous antibiotics. This case is unusual, raising the possibility of gastrointestinal colonization as a source of the patient's bacteremia. In conclusion, E. brevis is an emerging pathogen that can cause serious health care associated infections.",
"title": ""
},
{
"docid": "5c0d74be236f8836017dc2c1f6de16df",
"text": "Person re-identification is the problem of recognizing people across images or videos from non-overlapping views. Although there has been much progress in person re-identification for the last decade, it still remains a challenging task because of severe appearance changes of a person due to diverse camera viewpoints and person poses. In this paper, we propose a novel framework for person reidentification by analyzing camera viewpoints and person poses, so-called Pose-aware Multi-shot Matching (PaMM), which robustly estimates target poses and efficiently conducts multi-shot matching based on the target pose information. Experimental results using public person reidentification datasets show that the proposed methods are promising for person re-identification under diverse viewpoints and pose variances.",
"title": ""
}
] |
scidocsrr
|
3d102ae30a6beef20da5dd313ef38772
|
Bayesian combination of sparse and non-sparse priors in image super resolution
|
[
{
"docid": "784dc5ac8e639e3ba4103b4b8653b1ff",
"text": "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L/sub 1/ norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.",
"title": ""
}
] |
[
{
"docid": "42d5712d781140edbc6a35703d786e15",
"text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance",
"title": ""
},
{
"docid": "97353be7c54dd2ded69815bf93545793",
"text": "In recent years, with the rapid development of deep learning, it has achieved great success in the field of image recognition. In this paper, we applied the convolution neural network (CNN) on supermarket commodity identification, contributing to the study of supermarket commodity identification. Different from the QR code identification of supermarket commodity, our work applied the CNN using the collected images of commodity as input. This method has the characteristics of fast and non-contact. In this paper, we mainly did the following works: 1. Collected a small dataset of supermarket goods. 2. Built Different convolutional neural network frameworks in caffe and trained the dataset using the built networks. 3. Improved train methods by finetuning the trained model.",
"title": ""
},
{
"docid": "d51408ad40bdc9a3a846aaf7da907cef",
"text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.",
"title": ""
},
{
"docid": "bb3295be91f0365d0d101e08ca4f5f5f",
"text": "Autonomous driving with high velocity is a research hotspot which challenges the scientists and engineers all over the world. This paper proposes a scheme of indoor autonomous car based on ROS which combines the method of Deep Learning using Convolutional Neural Network (CNN) with statistical approach using liDAR images and achieves a robust obstacle avoidance rate in cruise mode. In addition, the design and implementation of autonomous car are also presented in detail which involves the design of Software Framework, Hector Simultaneously Localization and Mapping (Hector SLAM) by Teleoperation, Autonomous Exploration, Path Plan, Pose Estimation, Command Processing, and Data Recording (Co- collection). what’s more, the schemes of outdoor autonomous car, communication, and security are also discussed. Finally, all functional modules are integrated in nVidia Jetson TX1.",
"title": ""
},
{
"docid": "31da3c17b757b4b0f0c10b4d1f7c906f",
"text": "Our planet is experiencing simultaneous changes in global population, urbanization, and climate. These changes, along with the rapid growth of climate data and increasing popularity of data mining techniques may lead to the conclusion that the time is ripe for data mining to spur major innovations in climate science. However, climate data bring forth unique challenges that are unfamiliar to the traditional data mining literature, and unless they are addressed, data mining will not have the same powerful impact that it has had on fields such as biology or e-commerce. In this chapter, we refer to spatio-temporal data mining (STDM) as a collection of methods that mine the data’s spatio-temporal context to increase an algorithm’s accuracy, scalability, or interpretability (relative to non-space-time aware algorithms). We highlight some of the singular characteristics and challenges STDM faces within climate data and their applications, and provide the reader with an overview of the advances in STDM and related climate applications. We also demonstrate some of the concepts introduced in the chapter’s earlier sections with a real-world STDM pattern mining application to identify mesoscale ocean eddies from satellite data. The case-study provides the reader with concrete examples of challenges faced when mining climate data and how effectively analyzing the data’s spatio-temporal context may improve existing methods’ accuracy, interpretability, and scalability. We end the chapter with a discussion of notable opportunities for STDM research within climate. James H. Faghmous Department of Computer Science and Engineering, The University of Minnesota – Twin Cities e-mail: [email protected] Vipin Kumar Department of Computer Science and Engineering, The University of Minnesota – Twin Cities e-mail: [email protected]",
"title": ""
},
{
"docid": "467b4537bdc6a466909d819e67d0ebc1",
"text": "We have created an immersive application for statistical graphics and have investigated what benefits it offers over more traditional data analysis tools. This paper presents a description of both the traditional data analysis tools and our virtual environment, and results of an experiment designed to determine if an immersive environment based on the XGobi desktop system provides advantages over XGobi for analysis of high-dimensional statistical data. The experiment included two aspects of each environment: three structure detection (visualization) tasks and one ease of interaction task. The subjects were given these tasks in both the C2 virtual environment and a workstation running XGobi. The experiment results showed an improvement in participants’ ability to perform structure detection tasks in the C2 to their performance in the desktop environment. However, participants were more comfortable with the interaction tools in the desktop",
"title": ""
},
{
"docid": "809b5194b8f842a6e3f7e5b8748fefc3",
"text": "Failure modes and mechanisms of AlGaN/GaN high-electron-mobility transistors are reviewed. Data from three de-accelerated tests are presented, which demonstrate a close correlation between failure modes and bias point. Maximum degradation was found in \"semi-on\" conditions, close to the maximum of hot-electron generation which was detected with the aid of electroluminescence (EL) measurements. This suggests a contribution of hot-electron effects to device degradation, at least at moderate drain bias (VDS<30 V). A procedure for the characterization of hot carrier phenomena based on EL microscopy and spectroscopy is described. At high drain bias (VDS>30-50 V), new failure mechanisms are triggered, which induce an increase of gate leakage current. The latter is possibly related with the inverse piezoelectric effect leading to defect generation due to strain relaxation, and/or to localized permanent breakdown of the AlGaN barrier layer. Results are compared with literature data throughout the text.",
"title": ""
},
{
"docid": "005308068bc62c2672f03e4b252c32ba",
"text": "Although bilinguals rarely make random errors of language when they speak, research on spoken production provides compelling evidence to suggest that both languages are active when only one language is spoken (e.g., [Poulisse, N. (1999). Slips of the tongue: Speech errors in first and second language production. Amsterdam/Philadelphia: John Benjamins]). Moreover, the parallel activation of the two languages appears to characterize the planning of speech for highly proficient bilinguals as well as second language learners. In this paper, we first review the evidence for cross-language activity during single word production and then consider the two major alternative models of how the intended language is eventually selected. According to language-specific selection models, both languages may be active but bilinguals develop the ability to selectively attend to candidates in the intended language. The alternative model, that candidates from both languages compete for selection, requires that cross-language activity be modulated to allow selection to occur. On the latter view, the selection mechanism may require that candidates in the nontarget language be inhibited. We consider the evidence for such an inhibitory mechanism in a series of recent behavioral and neuroimaging studies.",
"title": ""
},
{
"docid": "51f4b288d0c902e083a0eede6f342ba2",
"text": "Transactional memory (TM) is a promising synchronization mechanism for the next generation of multicore processors. Best-effort Hardware Transactional Memory (HTM) designs, such as Sun's prototype Rock processor and AMD's proposed Advanced Synchronization Facility (ASF), can efficiently execute many transactions, but abort in some cases due to various limitations. Hybrid TM systems can use a compatible software TM (STM) in such cases.\n We introduce a family of hybrid TMs built using the recent NOrec STM algorithm that, unlike existing hybrid approaches, provide both low overhead on hardware transactions and concurrent execution of hardware and software transactions. We evaluate implementations for Rock and ASF, exploring how the differing HTM designs affect optimization choices. Our investigation yields valuable input for designers of future best-effort HTMs.",
"title": ""
},
{
"docid": "d7bb22eefbff0a472d3e394c61788be2",
"text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "809b40cd0089410592d7b7f77f04c8e4",
"text": "This paper presents a new method for segmentation and interpretation of 3D point clouds from mobile LIDAR data. The main contribution of this work is the automatic detection and classification of artifacts located at the ground level. The detection is based on Top-Hat of hole filling algorithm of range images. Then, several features are extracted from the detected connected components (CCs). Afterward, a stepwise forward variable selection by using Wilk's Lambda criterion is performed. Finally, CCs are classified in four categories (lampposts, pedestrians, cars, the others) by using a SVM machine learning method.",
"title": ""
},
{
"docid": "008b5ae7c256a52853fcdbd413931829",
"text": "We present applications of rough set methods for feature selection in pattern recognition. We emphasize the role of the basic constructs of rough set approach in feature selection, namely reducts and their approximations, including dynamic reducts. In the overview of methods for feature selection we discuss feature selection criteria, including the rough set based methods. Our algorithm for feature selection is based on an application of a rough set method to the result of principal components analysis (PCA) used for feature projection and reduction. Finally, the paper presents numerical results of face and mammogram recognition experiments using neural network, with feature selection based on proposed PCA and rough set methods. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "8f65f1971405e0c225e3625bb682a2d4",
"text": "We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; Learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet (Chang et al. Shapenet: an information-rich 3d model repository, 2015. arXiv:1512.03012) and ModelNet (Wu et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2015) as well as on real robotics data from KITTI (Geiger et al., in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2012) and Kinect (Yang et al., 3d object dense reconstruction from a single depth view, 2018. arXiv:1802.00411), we demonstrate that the proposed amortized maximum likelihood approach is able to compete with the fully supervised baseline of Dai et al. (in: Proceedings of IEEE conference on computer vision and pattern recognition (CVPR), 2017) and outperforms the data-driven approach of Engelmann et al. (in: Proceedings of the German conference on pattern recognition (GCPR), 2016), while requiring less supervision and being significantly faster.",
"title": ""
},
{
"docid": "48427804f2e704ab6ea15251c624cdf2",
"text": "In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.",
"title": ""
},
{
"docid": "6a7bfed246b83517655cb79a951b1f48",
"text": "Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval.",
"title": ""
},
{
"docid": "5809c27155986612b0e4a9ef48b3b930",
"text": "Using the same technologies for both work and private life is an intensifying phenomenon. Mostly driven by the availability of consumer IT in the marketplace, individuals—more often than not—are tempted to use privately-owned IT rather than enterprise IT in order to get their job done. However, this dual-use of technologies comes at a price. It intensifies the blurring of the boundaries between work and private life—a development in stark contrast to the widely spread desire of employees to segment more clearly between their two lives. If employees cannot follow their segmentation preference, it is proposed that this misfit will result in work-to-life conflict (WtLC). This paper investigates the relationship between organizational encouragement for dual use and WtLC. Via a quantitative survey, we find a significant relationship between the two concepts. In line with boundary theory, the effect is stronger for people that strive for work-life segmentation.",
"title": ""
},
{
"docid": "d7f4b2b524a5b7b78263881b2ec7a797",
"text": "Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information co tained in large, formal knowledge bases (KBs, e.g., Freebas e) to answer questions, but it is also fundamentally limiting— these semantic parsers can only assign meaning to language that falls within the KB’s manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executab le representations of language, (2) can successfully leverag e the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task.",
"title": ""
},
{
"docid": "3fb3715c0c80d2e871b5d7eed4ed5f9a",
"text": "23 24 25 26 27 28 29 30 31 Article history: Available online xxxx",
"title": ""
},
{
"docid": "ce429bbed5895731c9a3a9b77e3f488b",
"text": "[Purpose] This study assessed the relationships between the ankle dorsiflexion range of motion and foot and ankle strength. [Subjects and Methods] Twenty-nine healthy (young adults) volunteers participated in this study. Each participant completed tests for ankle dorsiflexion range of motion, hallux flexor strength, and ankle plantar and dorsiflexor strength. [Results] The results showed (1) a moderate correlation between ankle dorsiflexor strength and dorsiflexion range of motion and (2) a moderate correlation between ankle dorsiflexor strength and first toe flexor muscle strength. Ankle dorsiflexor strength is the main contributor ankle dorsiflexion range of motion to and first toe flexor muscle strength. [Conclusion] Ankle dorsiflexion range of motion can play an important role in determining ankle dorsiflexor strength in young adults.",
"title": ""
},
{
"docid": "916c7a159dd22d0a0c0d3f00159ad790",
"text": "The concept of scalability was introduced to the IEEE 802.16 WirelessMAN Orthogonal Frequency Division Multiplexing Access (OFDMA) mode by the 802.16 Task Group e (TGe). A scalable physical layer enables standard-based solutions to deliver optimum performance in channel bandwidths ranging from 1.25 MHz to 20 MHz with fixed subcarrier spacing for both fixed and portable/mobile usage models, while keeping the product cost low. The architecture is based on a scalable subchannelization structure with variable Fast Fourier Transform (FFT) sizes according to the channel bandwidth. In addition to variable FFT sizes, the specification supports other features such as Advanced Modulation and Coding (AMC) subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency uplink subchannel structures, Multiple-Input-MultipleOutput (MIMO) diversity, and coverage enhancing safety channels, as well as other OFDMA default features such as different subcarrier allocations and diversity schemes. The purpose of this paper is to provide a brief tutorial on the IEEE 802.16 WirelessMAN OFDMA with an emphasis on scalable OFDMA. INTRODUCTION The IEEE 802.16 WirelessMAN standard [1] provides specifications for an air interface for fixed, portable, and mobile broadband wireless access systems. The standard includes requirements for high data rate Line of Sight (LOS) operation in the 10-66 GHz range for fixed wireless networks as well as requirements for Non Line of Sight (NLOS) fixed, portable, and mobile systems operating in sub 11 GHz licensed and licensed-exempt bands. Because of its superior performance in multipath fading wireless channels, Orthogonal Frequency Division Multiplexing (OFDM) signaling is recommended in OFDM and WirelessMAN OFDMA Physical (PHY) layer modes of the 802.16 standard for operation in sub 11 GHz NLOS applications. OFDM technology has been recommended in other wireless standards such as Digital Video Broadcasting (DVB) [2] and Wireless Local Area Networking (WLAN) [3]-[4], and it has been successfully implemented in the compliant solutions. Amendments for PHY and Medium Access Control (MAC) layers for mobile operation are being developed (working drafts [5] are being debated at the time of publication of this paper) by TGe of the 802.16 Working Group. The task group’s responsibility is to develop enhancement specifications to the standard to support Subscriber Stations (SS) moving at vehicular speeds and thereby specify a system for combined fixed and mobile broadband wireless access. Functions to support optional PHY layer structures, mobile-specific MAC enhancements, higher-layer handoff between Base Stations (BS) or sectors, and security features are among those specified. Operation in mobile mode is limited to licensed bands suitable for mobility between 2 and 6 GHz. Unlike many other OFDM-based systems such as WLAN, the 802.16 standard supports variable bandwidth sizes between 1.25 and 20 MHz for NLOS operations. This feature, along with the requirement for support of combined fixed and mobile usage models, makes the need for a scalable design of OFDM signaling inevitable. More specifically, neither one of the two OFDM-based modes of the 802.16 standard, WirelessMAN OFDM and OFDMA (without scalability option), can deliver the kind of performance required for operation in vehicular mobility multipath fading environments for all bandwidths in the specified range, without scalability enhancements that guarantee fixed subcarrier spacing for OFDM signals. 
The concept of scalable OFDMA is introduced to the IEEE 802.16 WirelessMAN OFDMA mode by the 802.16 TGe and has been the subject of many contributions to the standards committee [6]-[9]. Other features such as AMC subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency Uplink (UL) subchannel structures, Multiple-Input-Multiple-Output (MIMO) diversity, enhanced Advanced Antenna Systems (AAS), and coverage enhancing safety channels were introduced [10]-[14] simultaneously to enhance coverage and capacity of mobile systems while providing the tools to trade off mobility with capacity. The rest of the paper is organized as follows. In the next section we cover multicarrier system requirements, drivers of scalability, and design tradeoffs. We follow that with a discussion in the following six sections of the OFDMA frame structure, subcarrier allocation modes, Downlink (DL) and UL MAP messaging, diversity options, ranging in OFDMA, and channel coding options. Note that although the IEEE P802.16-REVd was ratified shortly before the submission of this paper, the IEEE P802.16e was still in draft stage at the time of submission, and the contents of this paper therefore are based on proposed contributions to the working group. MULTICARRIER DESIGN REQUIREMENTS AND TRADEOFFS A typical early step in the design of an Orthogonal Frequency Division Multiplexing (OFDM)-based system is a study of subcarrier design and the size of the Fast Fourier Transform (FFT) where optimal operational point balancing protection against multipath, Doppler shift, and design cost/complexity is determined. For this, we use Wide-Sense Stationary Uncorrelated Scattering (WSSUS), a widely used method to model time varying fading wireless channels both in time and frequency domains using stochastic processes. Two main elements of the WSSUS model are briefly discussed here: Doppler spread and coherence time of channel; and multipath delay spread and coherence bandwidth. A maximum speed of 125 km/hr is used here in the analysis for support of mobility. With the exception of high-speed trains, this provides a good coverage of vehicular speed in the US, Europe, and Asia. The maximum Doppler shift [15] corresponding to the operation at 3.5 GHz (selected as a middle point in the 2-6 GHz frequency range) is given by Equation (1): f_m = ν/λ = (35 m/s)/(0.086 m) ≈ 408 Hz (Equation 1). The worst-case Doppler shift value for 125 km/hr (35 m/s) would be ~700 Hz for operation at the 6 GHz upper limit specified by the standard. Using a 10 kHz subcarrier spacing, the Inter Channel Interference (ICI) power corresponding to the Doppler shift calculated in Equation (1) can be shown [16] to be limited to ~-27 dB. The coherence time of the channel, a measure of time variation in the channel, corresponding to the Doppler shift specified above, is calculated in Equation (2) [15].",
"title": ""
}
] |
scidocsrr
|
48ea7044367f4c6de37ac9ddf0e655f6
|
Investigation and improvement of multi-layer perception neural networks for credit scoring
|
[
{
"docid": "4eda5bc4f8fa55ae55c69f4233858fc7",
"text": "In this paper, we set out to compare several techniques that can be used in the analysis of imbalanced credit scoring data sets. In a credit scoring context, imbalanced data sets frequently occur as the number of defaulting loans in a portfolio is usually much lower than the number of observations that do not default. As well as using traditional classification techniques such as logistic regression, neural networks and decision trees, this paper will also explore the suitability of gradient boosting, least square support vector machines and random forests for loan default prediction. Five real-world credit scoring data sets are used to build classifiers and test their performance. In our experiments, we progressively increase class imbalance in each of these data sets by randomly undersampling the minority class of defaulters, so as to identify to what extent the predictive power of the respective techniques is adversely affected. The performance criterion chosen to measure this effect is the area under the receiver operating characteristic curve (AUC); Friedman’s statistic and Nemenyi post hoc tests are used to test for significance of AUC differences between techniques. The results from this empirical study indicate that the random forest and gradient boosting classifiers perform very well in a credit scoring context and are able to cope comparatively well with pronounced class imbalances in these data sets. We also found that, when faced with a large class imbalance, the C4.5 decision tree algorithm, quadratic discriminant analysis and k-nearest neighbours perform significantly worse than the best performing classifiers. 2011 Elsevier Ltd.",
"title": ""
}
] |
[
{
"docid": "1c94a04fdeb39ba00357e4dcc87d3862",
"text": "Automatic segmentation of speech is an important problem that is useful in speech recognition, synthesis and coding. We explore in this paper, the robust parameter set, weighting function and distance measure for reliable segmentation of noisy speech. It is found that the MFCC parameters, successful in speech recognition. holds the best promise for robust segmentation also. We also explored a variety of symmetric and asymmetric weighting lifters. from which it is found that a symmetric lifter of the form 1 + A sin1/2(πn/L), 0 ≤ n ≤ L − 1, for MFCC dimension L, is most effective. With regard to distance measure, the direct L2 norm is found adequate.",
"title": ""
},
{
"docid": "a73b9ce3d0808177c9f0739b67a1a3f3",
"text": "Multiword expressions (MWEs) are lexical items that can be decomposed into multiple component words, but have properties that are unpredictable with respect to their component words. In this paper we propose the first deep learning models for token-level identification of MWEs. Specifically, we consider a layered feedforward network, a recurrent neural network, and convolutional neural networks. In experimental results we show that convolutional neural networks are able to outperform the previous state-of-the-art for MWE identification, with a convolutional neural network with three hidden layers giving the best performance.",
"title": ""
},
{
"docid": "43baeb87f1798d52399ba8c78ffa7fef",
"text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-",
"title": ""
},
{
"docid": "28dfe540e7bf24c66be0a2563fb9a145",
"text": "Taxonomies are often used to look up the concepts they contain in text documents (for instance, to classify a document). The more comprehensive the taxonomy, the higher recall the application has that uses the taxonomy. In this paper, we explore automatic taxonomy augmentation with paraphrases. We compare two state-of-the-art paraphrase models based on Moses, a statistical Machine Translation system, and a sequence-to-sequence neural network, trained on a paraphrase datasets with respect to their abilities to add novel nodes to an existing taxonomy from the risk domain. We conduct component-based and task-based evaluations. Our results show that paraphrasing is a viable method to enrich a taxonomy with more terms, and that Moses consistently outperforms the sequence-to-sequence neural model. To the best of our knowledge, this is the first approach to augment taxonomies with paraphrases.",
"title": ""
},
{
"docid": "14c981a63e34157bb163d4586502a059",
"text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.",
"title": ""
},
{
"docid": "448be7422a2c4fe5ba4858311a52a51a",
"text": "Every organization is associated with huge amount of information which is more valuable. Data is important and so it should be consistent, accurate and correct. Today many approaches are used to protect the data as well as networks from attackers (attacks like SQLIA, Brute-force attack). One way to make data more secure is using Intrusion Detection System (IDS). Many researches are done in this intrusion detection field but it mainly concentrated on networks and operating system. This approach is for database so that it will prevent the data loss, maintain consistency and accuracy. Database security research is concerned about the protection of database from unauthorized access. The unauthorized access may be in the form of execution of malicious transaction and this may lead to break the integrity of the system. Banking is one of the sectors which are suffering from million dollars losses only because of this unauthorized activities and malicious transactions. So, it is today's demand to detect malicious transactions and also to provide some recommendation. In this paper, we provided the detection system for the real-world problem of intrusion detection in the banking system and we are going to give some preventive measures to avoid or reduce future attacks. In order to detect malicious transactions, we used data mining algorithm for framing a data dependency miner for our banking database IDS. Our approach extracts the read-write dependency rules and then these rules are used to check whether the coming transaction is malicious or not. Our",
"title": ""
},
{
"docid": "94f364c7b1f4254db525c3c6108a9e4c",
"text": "A planar radar sensor for automotive application is presented. The design comprises a fully integrated transceiver multi-chip module (MCM) and an electronically steerable microstrip patch array. The antenna feed network is based on a modified Rotman-lens. An extended angular coverage together with an adapted resolution allows for the integration of automatic cruise control (ACC), precrash sensing and cut-in detection within a single 77 GHz frontend. For ease of manufacturing the interconnects between antenna and MCM rely on a mixed wire bond and flip-chip approach. The concept is validated by laboratory radar measurements.",
"title": ""
},
{
"docid": "19f8ae070aa161ca1399b21b6a9c4678",
"text": "Wireless Sensor Network (WSN) is a large scale network with from dozens to thousands tiny devices. Using fields of WSNs (military, health, smart home e.g.) has a large-scale and its usage areas increasing day by day. Secure issue of WSNs is an important research area and applications of WSN have some big security deficiencies. Intrusion Detection System is a second-line of the security mechanism for networks, and it is very important to integrity, confidentiality and availability. Intrusion Detection in WSNs is somewhat different from wired and non-energy constraint wireless network because WSN has some constraints influencing cyber security approaches and attack types. This paper is a survey describing attack types of WSNs intrusion detection approaches being against to this attack types.",
"title": ""
},
{
"docid": "792907ad8871e63f6b39d344452ca66a",
"text": "This paper presents the design of a hardware-efficient, low-power image processing system for next-generation wireless endoscopy. The presented system is composed of a custom CMOS image sensor, a dedicated image compressor, a forward error correction (FEC) encoder protecting radio transmitted data against random and burst errors, a radio data transmitter, and a controller supervising all operations of the system. The most significant part of the system is the image compressor. It is based on an integer version of a discrete cosine transform and a novel, low complexity yet efficient, entropy encoder making use of an adaptive Golomb-Rice algorithm instead of Huffman tables. The novel hardware-efficient architecture designed for the presented system enables on-the-fly compression of the acquired image. Instant compression, together with elimination of the necessity of retransmitting erroneously received data by their prior FEC encoding, significantly reduces the size of the required memory in comparison to previous systems. The presented system was prototyped in a single, low-power, 65-nm field programmable gate arrays (FPGA) chip. Its power consumption is low and comparable to other application-specific-integrated-circuits-based systems, despite FPGA-based implementation.",
"title": ""
},
{
"docid": "49dd1fd4640a160ba41fed048b2c804b",
"text": "This paper proposes a novel method to predict increases in YouTube viewcount driven from the Twitter social network. Specifically, we aim to predict two types of viewcount increases: a sudden increase in viewcount (named as Jump), and the viewcount shortly after the upload of a new video (named as Early). Experiments on hundreds of thousands of videos and millions of tweets show that Twitter-derived features alone can predict whether a video will be in the top 5% for Early popularity with 0.7 Precision@100. Furthermore, our results reveal that while individual influence is indeed important for predicting how Twitter drives YouTube views, it is a diversity of interest from the most active to the least active Twitter users mentioning a video (measured by the variation in their total activity) that is most informative for both Jump and Early prediction. In summary, by going beyond features that quantify individual influence and additionally leveraging collective features of activity variation, we are able to obtain an effective cross-network predictor of Twitter-driven YouTube views.",
"title": ""
},
{
"docid": "751b853f780fc8047ff73ce646b68cd6",
"text": "This paper builds on previous research in the light field area of image-based rendering. We present a new reconstruction filter that significantly reduces the “ghosting” artifacts seen in undersampled light fields, while preserving important high-fidelity features such as sharp object boundaries and view-dependent reflectance. By improving the rendering quality achievable from undersampled light fields, our method allows acceptable images to be generated from smaller image sets. We present both frequency and spatial domain justifications for our techniques. We also present a practical framework for implementing the reconstruction filter in multiple rendering passes. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation ― Viewing algorithms; I.3.6 [Computer Graphics]: Methodologies and Techniques ― Graphics data structures and data types; I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture ― Sampling",
"title": ""
},
{
"docid": "8cfce71cc96c98063b29ec0603f5d18c",
"text": "Time-series of count data are generated in many different contexts, such as web access logging, freeway traffic monitoring, and security logs associated with buildings. Since this data measures the aggregated behavior of individual human beings, it typically exhibits a periodicity in time on a number of scales (daily, weekly,etc.) that reflects the rhythms of the underlying human activity and makes the data appear non-homogeneous. At the same time, the data is often corrupted by a number of bursty periods of unusual behavior such as building events, traffic accidents, and so forth. The data mining problem of finding and extracting these anomalous events is made difficult by both of these elements. In this paper we describe a framework for unsupervised learning in this context, based on a time-varying Poisson process model that can also account for anomalous events. We show how the parameters of this model can be learned from count time series using statistical estimation techniques. We demonstrate the utility of this model on two datasets for which we have partial ground truth in the form of known events, one from freeway traffic data and another from building access data, and show that the model performs significantly better than a non-probabilistic, threshold-based technique. We also describe how the model can be used to investigate different degrees of periodicity in the data, including systematic day-of-week and time-of-day effects, and make inferences about the detected events (e.g., popularity or level of attendance). Our experimental results indicate that the proposed time-varying Poisson model provides a robust and accurate framework for adaptively and autonomously learning how to separate unusual bursty events from traces of normal human activity.",
"title": ""
},
{
"docid": "e64d177c2898aee78fbe0f06ef61c373",
"text": "For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system.We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, it is easy to mount, and simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k nearest neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "25d60ca2cbbb49cf025de9c97923ec3e",
"text": "We studied the thermophoretic motion of wrinkles formed in substrate-supported graphene sheets by nonequilibrium molecular dynamics simulations. We found that a single wrinkle moves along applied temperature gradient with a constant acceleration that is linearly proportional to temperature deviation between the heating and cooling sides of the graphene sheet. Like a solitary wave, the atoms of the single wrinkle drift upwards and downwards, which prompts the wrinkle to move forwards. The driving force for such thermophoretic movement can be mainly attributed to a lower free energy of the wrinkle back root when it is transformed from the front root. We establish a motion equation to describe the soliton-like thermophoresis of a single graphene wrinkle based on the Korteweg-de Vries equation. Similar motions are also observed for wrinkles formed in a Cu-supported graphene sheet. These findings provide an energy conversion mechanism by using graphene wrinkle thermophoresis.",
"title": ""
},
{
"docid": "7a718827578d63ff9b7187be7e486051",
"text": "In this paper, we propose an adaptive specification-based intrusion detection system (IDS) for detecting malicious unmanned air vehicles (UAVs) in an airborne system in which continuity of operation is of the utmost importance. An IDS audits UAVs in a distributed system to determine if the UAVs are functioning normally or are operating under malicious attacks. We investigate the impact of reckless, random, and opportunistic attacker behaviors (modes which many historical cyber attacks have used) on the effectiveness of our behavior rule-based UAV IDS (BRUIDS) which bases its audit on behavior rules to quickly assess the survivability of the UAV facing malicious attacks. Through a comparative analysis with the multiagent system/ant-colony clustering model, we demonstrate a high detection accuracy of BRUIDS for compliant performance. By adjusting the detection strength, BRUIDS can effectively trade higher false positives for lower false negatives to cope with more sophisticated random and opportunistic attackers to support ultrasafe and secure UAV applications.",
"title": ""
},
{
"docid": "252526e9d50cab28d702f695c12acc27",
"text": "This paper describes several optimization techniques used to create an adequate route network graph for autonomous cars as a map reference for driving on German autobahn or similar highway tracks. We have taken the Route Network Definition File Format (RNDF) specified by DARPA and identified multiple flaws of the RNDF for creating digital maps for autonomous vehicles. Thus, we introduce various enhancements to it to form a digital map graph called RND-FGraph, which is well suited to map almost any urban transportation infrastructure. We will also outline and show results of fast optimizations to reduce the graph size. The RNDFGraph has been used for path-planning and trajectory evaluation by the behavior module of our two autonomous cars “Spirit of Berlin” and “MadeInGermany”. We have especially tuned the graph to map structured high speed environments such as autobahns where we have tested autonomously hundreds of kilometers under real traffic conditions.",
"title": ""
},
{
"docid": "8c38fa79c02e9b9aabd107f5b02d2587",
"text": "Graph computation approaches such as GraphChi and TurboGraph recently demonstrated that a single PC can perform efficient computation on billion-node graphs. To achieve high speed and scalability, they often need sophisticated data structures and memory management strategies. We propose a minimalist approach that forgoes such requirements, by leveraging the fundamental memory mapping (MMap) capability found on operating systems. We contribute: (1) a new insight that MMap is a viable technique for creating fast and scalable graph algorithms that surpasses some of the best techniques; (2) the design and implementation of popular graph algorithms for billion-scale graphs with little code, thanks to memory mapping; (3) extensive experiments on real graphs, including the 6.6 billion edge Yahoo Web graph, and show that this new approach is significantly faster or comparable to the highly-optimized methods (e.g., 9.5X faster than GraphChi for computing PageRank on 1.47B edge Twitter graph). We believe our work provides a new direction in the design and development of scalable algorithms. Our packaged code is available at http://poloclub.gatech.edu/mmap/.",
"title": ""
},
{
"docid": "83f1e80a8d4b54184531798559a028d5",
"text": "Fast-response and high-sensitivity deep-ultraviolet (DUV) photodetectors with detection wavelength shorter than 320 nm are in high demand due to their potential applications in diverse fields. However, the fabrication processes of DUV detectors based on traditional semiconductor thin films are complicated and costly. Here we report a high-performance DUV photodetector based on graphene quantum dots (GQDs) fabricated via a facile solution process. The devices are capable of detecting DUV light with wavelength as short as 254 nm. With the aid of an asymmetric electrode structure, the device performance could be significantly improved. An on/off ratio of ∼6000 under 254 nm illumination at a relatively weak light intensity of 42 μW cm(-2) is achieved. The devices also exhibit excellent stability and reproducibility with a fast response speed. Given the solution-processing capability of the devices and extraordinary properties of GQDs, the use of GQDs will open up unique opportunities for future high-performance, low-cost DUV photodetectors.",
"title": ""
},
{
"docid": "77335856af8b62ae2e1fcd10654ed9a1",
"text": "Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples.",
"title": ""
},
{
"docid": "ae0d63126ff55961533dc817554bcb82",
"text": "This paper presents a novel bipedal robot concept and prototype that takes inspiration from humanoids but features fundamental differences that drastically improve its agility and stability while reducing its complexity and cost. This Non-Anthropomorphic Bipedal Robotic System (NABiRoS) modifies the traditional bipedal form by aligning the legs in the sagittal plane and adding a compliance to the feet. The platform is comparable in height to a human, but weighs much less because of its lightweight architecture and novel leg configuration. The inclusion of the compliant element showed immense improvements in the stability and robustness of walking gaits on the prototype, allowing the robot to remain stable during locomotion without any inertial feedback control. NABiRoS was able to achieve walking speeds of up to 0.75km/h (0.21m/s) using a simple pre-processed ZMP based gait and a positioning accuracy of +/- 0.04m with a preprocessed quasi-static algorithm.",
"title": ""
}
] |
scidocsrr
|
1fc7a4b9b53a1e3ed0e1d37a737b6836
|
Two Decades of Recommender Systems at Amazon.com
|
[
{
"docid": "9c43ce72f77582848fd7603b9c5a9319",
"text": "This article discusses the various algorithms that make up the Netflix recommender system, and describes its business purpose. We also describe the role of search and related algorithms, which for us turns into a recommendations problem as well. We explain the motivations behind and review the approach that we use to improve the recommendation algorithms, combining A/B testing focused on improving member retention and medium term engagement, as well as offline experimentation using historical member engagement data. We discuss some of the issues in designing and interpreting A/B tests. Finally, we describe some current areas of focused innovation, which include making our recommender system global and language aware.",
"title": ""
},
{
"docid": "4984f9e1995cd69aac609374778d45c0",
"text": "We discuss the video recommendation system in use at YouTube, the world's most popular online video community. The system recommends personalized sets of videos to users based on their activity on the site. We discuss some of the unique challenges that the system faces and how we address them. In addition, we provide details on the experimentation and evaluation framework used to test and tune new algorithms. We also present some of the findings from these experiments.",
"title": ""
}
] |
[
{
"docid": "f05f700538724d3ee838914286df4d1b",
"text": "In recent years, deep neural networks have yielded state-of-the-art performance on several tasks. Although some recent works have focused on combining deep learning with recommendation, we highlight three issues of existing works. First, most works perform deep content feature learning and resort to matrix factorization, which cannot effectively model the highly complex user-item interaction function. Second, due to the difficulty on training deep neural networks, existing models utilize a shallow architecture, and thus limit the expressiveness potential of deep learning. Third, neural network models are easy to overfit on the implicit setting, because negative interactions are not taken into account. To tackle these issues, we present a novel recommender framework called Deep Collaborative Autoencoder (DCAE) for both explicit feedback and implicit feedback, which can effectively capture the relationship between interactions via its non-linear expressiveness. To optimize the deep architecture of DCAE, we develop a three-stage pretraining mechanism that combines supervised and unsupervised feature learning. Moreover, we propose a popularity-based error reweighting module and a sparsity-aware data-augmentation strategy for DCAE to prevent overfitting on the implicit setting. Extensive experiments on three real-world datasets demonstrate that DCAE can significantly advance the state-of-the-art.",
"title": ""
},
{
"docid": "4621f0bd002f8bd061dd0b224f27977c",
"text": "Organisations increasingly perceive their employees as a great asset that needs to be cared for; however, at the same time, they view employees as one of the biggest potential threats to their cyber security. Employees are widely acknowledged to be responsible for security breaches in organisations, and it is important that these are given as much attention as are technical issues. A significant number of researchers have argued that non-compliance with information security policy is one of the major challenges facing organisations. This is primarily considered to be a human problem rather than a technical issue. Thus, it is not surprising that employees are one of the major underlying causes of breaches in information security. In this paper, academic literature and reports of information security institutes relating to policy compliance are reviewed. The objective is to provide an overview of the key challenges surrounding the successful implementation of information security policies. A further aim is to investigate the factors that may have an influence upon employees' behaviour in relation to information security policy. As a result, challenges to information security policy have been classified into four main groups: security policy promotion; noncompliance with security policy; security policy management and updating; and shadow security. Furthermore, the factors influencing behaviour have been divided into organisational and human factors. Ultimately, this paper concludes that continuously subjecting users to targeted awareness raising and dynamically monitoring their adherence to information security policy should increase the compliance level.",
"title": ""
},
{
"docid": "7c1af982b6ac6aa6df4549bd16c1964c",
"text": "This paper deals with the problem of estimating the position of emitters using only direction of arrival information. We propose an improvement of newly developed algorithm for position finding of a stationary emitter called sensitivity analysis. The proposed method uses Taylor series expansion iteratively to enhance the estimation of the emitter location and reduce position finding error. Simulation results show that our proposed method makes a great improvement on accuracy of position finding with respect to sensitivity analysis method.",
"title": ""
},
{
"docid": "9737e400108f6327be17d23db07b2e75",
"text": "While recent deep monocular depth estimation approaches based on supervised regression have achieved remarkable performance, costly ground truth annotations are required during training. To cope with this issue, in this paper we present a novel unsupervised deep learning approach for predicting depth maps and show that the depth estimation task can be effectively tackled within an adversarial learning framework. Specifically, we propose a deep generative network that learns to predict the correspondence field (i.e. the disparity map) between two image views in a calibrated stereo camera setting. The proposed architecture consists of two generative sub-networks jointly trained with adversarial learning for reconstructing the disparity map and organized in a cycle such as to provide mutual constraints and supervision to each other. Extensive experiments on the publicly available datasets KITTI and Cityscapes demonstrate the effectiveness of the proposed model and competitive results with state of the art methods. The code is available at https://github.com/andrea-pilzer/unsup-stereo-depthGAN",
"title": ""
},
{
"docid": "73c0360dfcf421d71a258b5b6959572e",
"text": "Text representation plays a crucial role in classical text mining, where the primary focus was on static text. Nevertheless, well-studied static text representations including TFIDF are not optimized for non-stationary streams of information such as news, discussion board messages, and blogs. We therefore introduce a new temporal representation for text streams based on bursty features. Our bursty text representation differs significantly from traditional schemes in that it 1) dynamically represents documents over time, 2) amplifies a feature in proportional to its burstiness at any point in time, and 3) is topic independent. Our bursty text representation model was evaluated against a classical bagof-words text representation on the task of clustering TDT3 topical text streams. It was shown to consistently yield more cohesive clusters in terms of cluster purity and cluster/class entropies. This new temporal bursty text representation can be extended to most text mining tasks involving a temporal dimension, such as modeling of online blog pages.",
"title": ""
},
{
"docid": "31580731891882a8415df9bc38755bd9",
"text": "This paper tests the influential hypothesis, typically attributed to Friedman (1953), that irrational traders will be driven out of financial markets by trading losses. The paper‟s main finding is that overconfident currency dealers are not driven out of the market. Traders with extensive experience are neither more nor less overconfident than their inexperienced colleagues. We first provide evidence that currency dealers are indeed overconfident, which is notable since they get daily trading practice and face intense financial incentives to accuracy. [",
"title": ""
},
{
"docid": "0034d1b96b8a3255344f59c5b8663e59",
"text": "This paper discusses atrium building typology by analysing the architectural aspects of existing atrium buildings. One hundred sixty commercial office buildings in Klang Valley were identified as the chosen building type for the initial selection process. Thirteen out of 160 office buildings surveyed were further analyzed based on the following architectural aspects: i) atrium spaces that include the description of atrium type, form and shape, physical dimensions, number of floors and height; ii) skylight design and roof fenestration system; and iii) atrium usage/activity and indoor environmental conditions. The atrium designs in these tropical office buildings are briefly described. The results show the most common atrium form is the enclosed central rectangular atrium with 4-storey average height. This study could lead to further research on design considerations for innovative applications to improve daylight performance in tropical office building atrium.",
"title": ""
},
{
"docid": "a8ca6ef7b99cca60f5011b91d09e1b06",
"text": "When virtual teams need to establish trust at a distance, it is advantageous for them to use rich media to communicate. We studied the emergence of trust in a social dilemma game in four different communication situations: face-to-face, video, audio, and text chat. All three of the richer conditions were significant improvements over text chat. Video and audio conferencing groups were nearly as good as face-to-face, but both did show some evidence of what we term delayed trust (slower progress toward full cooperation) and fragile trust (vulnerability to opportunistic behavior)",
"title": ""
},
{
"docid": "03a39c98401fc22f1a376b9df66988dc",
"text": "A highly efficient wireless power transfer (WPT) system is required in many applications to replace the conventional wired system. The high temperature superconducting (HTS) wires are examined in a WPT system to increase the power-transfer efficiency (PTE) as compared with the conventional copper/Litz conductor. The HTS conductors are naturally can produce higher amount of magnetic field with high induced voltage to the receiving coil. Moreover, the WPT systems are prone to misalignment, which can cause sudden variation in the induced voltage and lead to rapid damage of the resonant capacitors connected in the circuit. Hence, the protection or elimination of resonant capacitor is required to increase the longevity of WPT system, but both the adoptions will operate the system in nonresonance mode. The absence of resonance phenomena in the WPT system will drastically reduce the PTE and correspondingly the future commercialization. This paper proposes an open bifilar spiral coils based self-resonant WPT method without using resonant capacitors at both the sides. The mathematical modeling and circuit simulation of the proposed system is performed by designing the transmitter coil using HTS wire and the receiver with copper coil. The three-dimensional modeling and finite element simulation of the proposed system is performed to analyze the current density at different coupling distances between the coil. Furthermore, the experimental results show the PTE of 49.8% under critical coupling with the resonant frequency of 25 kHz.",
"title": ""
},
{
"docid": "92a112d7b6f668ece433e62a7fe4054c",
"text": "A new technique for stabilizing nonholonomic systems to trajectories is presented. It is well known (see [2]) that such systems cannot be stabilized to a point using smooth static-state feedback. In this note, we suggest the use of control laws for stabilizing a system about a trajectory, instead of a point. Given a nonlinear system and a desired (nominal) feasible trajectory, the note gives an explicit control law which will locally exponentially stabilize the system to the desired trajectory. The theory is applied to several examples, including a car-like robot.",
"title": ""
},
{
"docid": "5536cc03e26fc3911f1019d2369c1cec",
"text": "Monaural source separation is important for many real world applications. It is challenging because, with only a single channel of information available, without any constraints, an infinite number of solutions are possible. In this paper, we explore joint optimization of masking functions and deep recurrent neural networks for monaural source separation tasks, including speech separation, singing voice separation, and speech denoising. The joint optimization of the deep recurrent neural networks with an extra masking layer enforces a reconstruction constraint. Moreover, we explore a discriminative criterion for training neural networks to further enhance the separation performance. We evaluate the proposed system on the TSP, MIR-1K, and TIMIT datasets for speech separation, singing voice separation, and speech denoising tasks, respectively. Our approaches achieve 2.30-4.98 dB SDR gain compared to NMF models in the speech separation task, 2.30-2.48 dB GNSDR gain and 4.32-5.42 dB GSIR gain compared to existing models in the singing voice separation task, and outperform NMF and DNN baselines in the speech denoising task.",
"title": ""
},
{
"docid": "97b4de3dc73e0a6d7e17f94dff75d7ac",
"text": "Evolution in cloud services and infrastructure has been constantly reshaping the way we conduct business and provide services in our day to day lives. Tools and technologies created to improve such cloud services can also be used to impair them. By using generic tools like nmap, hping and wget, one can estimate the placement of virtual machines in a cloud infrastructure with a high likelihood. Moreover, such knowledge and tools can also be used by adversaries to further launch various kinds of attacks. In this paper we focus on one such specific kind of attack, namely a denial of service (DoS), where an attacker congests a bottleneck network channel shared among virtual machines (VMs) coresident on the same physical node in the cloud infrastructure. We evaluate the behavior of this shared network channel using Click modular router on DETER testbed. We illustrate that game theoretic concepts can be used to model this attack as a two-player game and recommend strategies for defending against such attacks.",
"title": ""
},
{
"docid": "e1efeca0d73be6b09f5cf80437809bdb",
"text": "Deep convolutional neural networks have been shown to be vulnerable to arbitrary geometric transformations. However, there is no systematic method to measure the invariance properties of deep networks to such transformations. We propose ManiFool as a simple yet scalable algorithm to measure the invariance of deep networks. In particular, our algorithm measures the robustness of deep networks to geometric transformations in a worst-case regime as they can be problematic for sensitive applications. Our extensive experimental results show that ManiFool can be used to measure the invariance of fairly complex networks on high dimensional datasets and these values can be used for analyzing the reasons for it. Furthermore, we build on ManiFool to propose a new adversarial training scheme and we show its effectiveness on improving the invariance properties of deep neural networks.1",
"title": ""
},
{
"docid": "b009c2b4cc62f7cc430deb671de4a192",
"text": "Electric vehicles are gaining importance and help to reduce dependency on oil, increase energy efficiency of transportation, reduce carbon emissions and noise, and avoid tail pipe emissions. Because of short driving distances, high mileages, and intermediate waiting times, fossil-fuelled taxi vehicles are ideal candidates for being replaced by battery electric vehicles (BEVs). Moreover, taxis as BEVs would increase visibility of electric mobility and therefore encourage others to purchase an electric vehicle. Prior to replacing conventional taxis with BEVs, a suitable charging infrastructure has to be established. This infrastructure, which is a prerequisite for the use of BEVs in practice, consists of a sufficiently dense network of charging stations taking into account the lower driving ranges of BEVs. In this case study we propose a decision support system for placing charging stations to satisfy the charging demand of electric taxi vehicles. Operational taxi data from about 800 vehicles is used to identify and estimate the charging demand for electric taxis based on frequent origins and destinations of trips. Next, a variant of the maximal covering location problem is formulated and solved, aiming at satisfying as much charging demand as possible with a limited number of charging stations. Already existing fast charging locations are considered in the optimization problem. In this work, we focus on finding regions in which charging stations should be placed, rather than exact locations. The exact location within an area is identified in a post-optimization phase (e.g., by authorities), where environmental conditions are considered, e.g., the capacity of the power network, availability of space, and legal issues. Our approach is implemented in the city of Vienna, Austria, in the course of an applied research project conducted in 2014. Local authorities, power network operators, representatives of taxi driver guilds as well as a radio taxi provider participated in the project and identified exact locations for charging stations based on our decision support system. ∗Corresponding author Email addresses: [email protected] (Johannes Asamer), [email protected] (Martin Reinthaler), [email protected] (Mario Ruthmair), [email protected] (Markus Straub), [email protected] (Jakob Puchinger) Preprint submitted to Elsevier November 6, 2015",
"title": ""
},
{
"docid": "f4963c41832024b8cd7d3480204275fa",
"text": "Almost surreptitiously, crowdsourcing has entered software engineering practice. In-house development, contracting, and outsourcing still dominate, but many development projects use crowdsourcing-for example, to squash bugs, test software, or gather alternative UI designs. Although the overall impact has been mundane so far, crowdsourcing could lead to fundamental, disruptive changes in how software is developed. Various crowdsourcing models have been applied to software development. Such changes offer exciting opportunities, but several challenges must be met for crowdsourcing software development to reach its potential.",
"title": ""
},
{
"docid": "a81e4507632505b64f4839a1a23fa440",
"text": "Unity am e Deelopm nt w ith C# Alan Thorn In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D` Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.",
"title": ""
},
{
"docid": "683bf5f20e2102903f569195c806d78c",
"text": "A recent survey among developers revealed that half plan to use HTML5 for mobile apps in the future. An earlier survey showed that access to native device APIs is the biggest shortcoming of HTML5 compared to native apps. Several different approaches exist to overcome this limitation, among them cross-compilation and packaging the HTML5 as a native app. In this paper we propose a novel approach by using a device-local service that runs on the smartphone and that acts as a gateway to the native layer for HTML5-based apps running inside the standard browser. WebSockets are used for bi-directional communication between the web apps and the device-local service. The service approach is a generalization of the packaging solution. In this paper we describe our approach and compare it with other popular ways to grant web apps access to the native API layer of the operating system.",
"title": ""
},
{
"docid": "b22b2d75b5c4a3934aafceb73a3b911a",
"text": "At present, beyond the fact that dogs can be easier socialized with humans than wolves, we know little about the motivational and cognitive effects of domestication. Despite this, it has been suggested that during domestication dogs have become socially more tolerant and attentive than wolves. These two characteristics are crucial for cooperation, and it has been argued that these changes allowed dogs to successfully live and work with humans. However, these domestication hypotheses have been put forward mainly based on dog-wolf differences reported in regard to their interactions with humans. Thus, it is possible that these differences reflect only an improved capability of dogs to accept humans as social partners instead of an increase of their general tolerance, attentiveness and cooperativeness. At the Wolf Science Center, in order to detangle these two explanations, we raise and keep dogs and wolves similarly socializing them with conspecifics and humans and then test them in interactions not just with humans but also conspecifics. When investigating attentiveness toward human and conspecific partners using different paradigms, we found that the wolves were at least as attentive as the dogs to their social partners and their actions. Based on these findings and the social ecology of wolves, we propose the Canine Cooperation Hypothesis suggesting that wolves are characterized with high social attentiveness and tolerance and are highly cooperative. This is in contrast with the implications of most domestication hypotheses about wolves. We argue, however, that these characteristics of wolves likely provided a good basis for the evolution of dog-human cooperation.",
"title": ""
},
{
"docid": "6f266de0973bfe142e6a5b820cf5a2c2",
"text": "The use of computer technology in medical sciences is spreading with technology. The use of computers especially for imaging has become a third eye for physicians. In orthopedic surgeons, after simple roentgenograms for fracture detection, the use of computerized tomography and magnetic resonance has provided great convenience in the detection of fracture, typing, and therefore the appropriate treatment of the patient. The advancing technology has increased the quality of the images in the x-rayograms, reduced artifacts and enabled digital measurements. In this study, image processing and learning techniques were used to diagnose long bone fractures. The proposed artificial neural network has 89% success rate.",
"title": ""
},
{
"docid": "a0d1b5c1745fb676163c36644041bafa",
"text": "ive 2.8 3.1 3.3 5.0% Our System 3.6 4.8 4.2 18.0% Human Abstract (reference) 4.2 4.8 4.5 65.5% Sample Summaries • Movie: The Neverending Story • Human: A magical journey about the power of a young boy’s imagination to save a dying fantasy land, The Neverending Story remains a much-loved kids adventure. • LexRank: It pokes along at times and lapses occasionally into dark moments of preachy philosophy, but this is still a charming, amusing and harmless film for kids. • Opinosis: The Neverending Story is a silly fantasy movie that often shows its age . • Our System: The Neverending Story is an entertaining children’s adventure, with heart and imagination to spare.",
"title": ""
}
] |
scidocsrr
|
b4e089bd50a7bcfb8232b12dbe063d09
|
Want a Good Answer? Ask a Good Question First!
|
[
{
"docid": "c1ca7ef76472258c6359111dd4d014d5",
"text": "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read/answer their questions. The user experience for question answering suffers from this arrangement. In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. Experimental results show that the proposals are very promising.",
"title": ""
}
] |
[
{
"docid": "4599529680781f9d3d19f766e51a7734",
"text": "Existing support vector regression (SVR) based image superresolution (SR) methods always utilize single layer SVR model to reconstruct source image, which are incapable of restoring the details and reduce the reconstruction quality. In this paper, we present a novel image SR approach, where a multi-layer SVR model is adopted to describe the relationship between the low resolution (LR) image patches and the corresponding high resolution (HR) ones. Besides, considering the diverse content in the image, we introduce pixel-wise classification to divide pixels into different classes, such as horizontal edges, vertical edges and smooth areas, which is more conductive to highlight the local characteristics of the image. Moreover, the input elements to each SVR model are weighted respectively according to their corresponding output pixel's space positions in the HR image. Experimental results show that, compared with several other learning-based SR algorithms, our method gains high-quality performance.",
"title": ""
},
{
"docid": "b5fea029d64084089de8e17ae9debffc",
"text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.",
"title": ""
},
{
"docid": "5db65bd010fc8e28164262c0da469448",
"text": "Forklift robots are frequently applied in automated logistics systems to optimize the transportation tasks and, consequently, to reduce costs. Nowadays, in a scenario of extremely fast technological development and constant search for costs minimization, the automation of logistic process is essential to improve the productivity and reduce costs. In order to decrease costs of logistics and distribution of goods, it is quite common to find in developed countries mechatronic systems performing several tasks in harbor, warehouses, storages and products distribution center. Therefore, research in this topic is considered strategic to ensure a greater insertion of the individual countries in the international trade scenario. In this application, the vehicle routing decision is one of the main issues to be solved. It is important to emphasize that its productivity is highly dependent on the adopted routing scheme. Consequently, it is essential to use efficient routes schemes. This paper proposes an algorithm that produces optimal routes for AGVs (Automated Guided Vehicles) working inside warehouse as forklift robots. The algorithm was conceived to deal with different real situations, such as the need of conflict-free paths and the presence of obstacles. In the routing algorithm each AGV executes the task starting in an initial position and orientation and moving to a pre-established position and orientation, generating a minimum path. This path is a continuous sequence of positions and orientations of the AGVs. The algorithm is based on Dijkstra's shortest-path method and was implemented in C++. Computer simulation tests are used to validate the algorithm efficiency in different working conditions.",
"title": ""
},
{
"docid": "cfc0caeb9c00b375d930cde8f5eed66e",
"text": "Usability is an important and determinant factor in human-computer systems acceptance. Usability issues are still identified late in the software development process, during testing and deployment. One of the reasons these issues arise late in the process is that current requirements engineering practice does not incorporate usability perspectives effectively into software requirements specifications. The main strength of usability-focused software requirements is the clear visibility of usability aspects for both developers and testers. The explicit expression of these aspects of human-computer systems can be built for optimal usability and also evaluated effectively to uncover usability issues. This paper presents a design science-oriented research design to test the proposition that incorporating user modelling and usability modelling in software requirements specifications improves design. The proposal and the research design are expected to make a contribution to knowledge by theory testing and to practice with effective techniques to produce usable human computer systems.",
"title": ""
},
{
"docid": "c2453816adf52157fca295274a4d8627",
"text": "Air quality monitoring is extremely important as air pollution has a direct impact on human health. Low-cost gas sensors are used to effectively perceive the environment by mounting them on top of mobile vehicles, for example, using a public transport network. Thus, these sensors are part of a mobile network and perform from time to time measurements in each others vicinity. In this paper, we study three calibration algorithms that exploit co-located sensor measurements to enhance sensor calibration and consequently the quality of the pollution measurements on-the-fly. Forward calibration, based on a traditional approach widely used in the literature, is used as performance benchmark for two novel algorithms: backward and instant calibration. We validate all three algorithms with real ozone pollution measurements carried out in an urban setting by comparing gas sensor output to high-quality measurements from analytical instruments. We find that both backward and instant calibration reduce the average measurement error by a factor of two compared to forward calibration. Furthermore, we unveil the arising difficulties if sensor calibration is not based on reliable reference measurements but on sensor readings of low-cost gas sensors which is inevitable in a mobile scenario with only a few reliable sensors. We propose a solution and evaluate its effect on the measurement accuracy in experiments and simulation.",
"title": ""
},
{
"docid": "432e7ae2e76d76dbb42d92cd9103e3d2",
"text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.",
"title": ""
},
{
"docid": "5409b6586b89bd3f0b21e7984383e1e1",
"text": "The dream of creating artificial devices that reach or outperform human intelligence is many centuries old. In this talk I present an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational questions. The necessary and sufficient ingredients are Bayesian probability theory; algorithmic information theory; universal Turing machines; the agent framework; sequential decision theory; and reinforcement learning, which are all important subjects in their own right. I also present some recent approximations, implementations, and applications of this modern top-down approach to AI. Marcus Hutter 3 Universal Artificial Intelligence Overview Goal: Construct a single universal agent that learns to act optimally in any environment. State of the art: Formal (mathematical, non-comp.) definition of such an agent. Accomplishment: Well-defines AI. Formalizes rational intelligence. Formal “solution” of the AI problem in the sense of ... =⇒ Reduces the conceptional AI problem to a (pure) computational problem. Evidence: Mathematical optimality proofs and some experimental results. Marcus Hutter 4 Universal Artificial Intelligence",
"title": ""
},
{
"docid": "d156813b45cb419d86280ee2947b6cde",
"text": "Within the realm of service robotics, researchers have placed a great amount of effort into learning motions and manipulations for task execution by robots. The task of robot learning is very broad, as it involves many tasks such as object detection, action recognition, motion planning, localization, knowledge representation and retrieval, and the intertwining of computer vision and machine learning techniques. In this paper, we focus on how knowledge can be gathered, represented, and reproduced to solve problems as done by researchers in the past decades. We discuss the problems which have existed in robot learning and the solutions, technologies or developments (if any) which have contributed to solving them. Specifically, we look at three broad categories involved in task representation and retrieval for robotics: 1) activity recognition from demonstrations, 2) scene understanding and interpretation, and 3) task representation in robotics datasets and networks. Within each section, we discuss major breakthroughs and how their methods address present issues in robot learning and manipulation.",
"title": ""
},
{
"docid": "d6f15e49f3ecdbe3e2949520c3e0c643",
"text": "In this paper we explore the connection between clustering categorical data and entropy: clusters of similar poi lower entropy than those of dissimilar ones. We use this connection to design an incremental heuristic algorithm, COOLCAT, which is capable of efficiently clustering large data sets of records with categorical attributes, and data streams. In contrast with other categorical clustering algorithms published in the past, COOLCAT's clustering results are very stable for different sample sizes and parameter settings. Also, the criteria for clustering is a very intuitive one, since it is deeply rooted on the well-known notion of entropy. Most importantly, COOLCAT is well equipped to deal with clustering of data streams(continuously arriving streams of data point) since it is an incremental algorithm capable of clustering new points without having to look at every point that has been clustered so far. We demonstrate the efficiency and scalability of COOLCAT by a series of experiments on real and synthetic data sets.",
"title": ""
},
{
"docid": "9ff22294cf279d757a84ae00d4e29473",
"text": "We usually endow the investigated objects with pairwise relationships, which can be illustrated as graphs. In many real-world problems, however, relationships among the objects of our interest are more complex than pairwise. Naively squeezing the complex relationships into pairwise ones will inevitably lead to loss of information which can be expected valuable for our learning tasks however. Therefore we consider using hypergraphs instead to completely represent complex relationships among the objects of our interest, and thus the problem of learning with hypergraphs arises. Our main contribution in this paper is to generalize the powerful methodology of spectral clustering which originally operates on undirected graphs to hypergraphs, and further develop algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach. Our experiments on a number of benchmarks showed the advantages of hypergraphs over usual graphs.",
"title": ""
},
{
"docid": "78283b148e6340ef9c49e503f9f39a2e",
"text": "Blur in facial images significantly impedes the efficiency of recognition approaches. However, most existing blind deconvolution methods cannot generate satisfactory results due to their dependence on strong edges, which are sufficient in natural images but not in facial images. In this paper, we represent point spread functions (PSFs) by the linear combination of a set of pre-defined orthogonal PSFs, and similarly, an estimated intrinsic (EI) sharp face image is represented by the linear combination of a set of pre-defined orthogonal face images. In doing so, PSF and EI estimation is simplified to discovering two sets of linear combination coefficients, which are simultaneously found by our proposed coupled learning algorithm. To make our method robust to different types of blurry face images, we generate several candidate PSFs and EIs for a test image, and then, a non-blind deconvolution method is adopted to generate more EIs by those candidate PSFs. Finally, we deploy a blind image quality assessment metric to automatically select the optimal EI. Thorough experiments on the facial recognition technology database, extended Yale face database B, CMU pose, illumination, and expression (PIE) database, and face recognition grand challenge database version 2.0 demonstrate that the proposed approach effectively restores intrinsic sharp face images and, consequently, improves the performance of face recognition.",
"title": ""
},
{
"docid": "097912a74fbc55ba7909b6e0622c0b42",
"text": "Many ubiquitous computing applications involve human activity recognition based on wearable sensors. Although this problem has been studied for a decade, there are a limited number of publicly available datasets to use as standard benchmarks to compare the performance of activity models and recognition algorithms. In this paper, we describe the freely available USC human activity dataset (USC-HAD), consisting of well-defined low-level daily activities intended as a benchmark for algorithm comparison particularly for healthcare scenarios. We briefly review some existing publicly available datasets and compare them with USC-HAD. We describe the wearable sensors used and details of dataset construction. We use high-precision well-calibrated sensing hardware such that the collected data is accurate, reliable, and easy to interpret. The goal is to make the dataset and research based on it repeatable and extendible by others.",
"title": ""
},
{
"docid": "a73968f28de7c80cf45f118a442cf09b",
"text": "Laypeople are frequently exposed to unfamiliar numbers published by journalists, social media users, and algorithms. These figures can be difficult for readers to comprehend, especially when they are extreme in magnitude or contain unfamiliar units. Prior work has shown that adding \"perspective sentences\" that employ ratios, ranks, and unit changes to such measurements can improve people's ability to understand unfamiliar numbers (e.g., \"695,000 square kilometers is about the size of Texas\"). However, there are many ways to provide context for a measurement. In this paper we systematically test what factors influence the quality of perspective sentences through randomized experiments involving over 1,000 participants. We develop a statistical model for generating perspectives and test it against several alternatives, finding beneficial effects of perspectives on comprehension that persist for six weeks. We conclude by discussing future work in deploying and testing perspectives at scale.",
"title": ""
},
{
"docid": "5eb03beba0ac2c94e6856d16e90799fc",
"text": "The explosive growth of malware variants poses a major threat to information security. Traditional anti-virus systems based on signatures fail to classify unknown malware into their corresponding families and to detect new kinds of malware programs. Therefore, we propose a machine learning based malware analysis system, which is composed of three modules: data processing, decision making, and new malware detection. The data processing module deals with gray-scale images, Opcode n-gram, and import functions, which are employed to extract the features of the malware. The decision-making module uses the features to classify the malware and to identify suspicious malware. Finally, the detection module uses the shared nearest neighbor (SNN) clustering algorithm to discover new malware families. Our approach is evaluated on more than 20 000 malware instances, which were collected by Kingsoft, ESET NOD32, and Anubis. The results show that our system can effectively classify the unknown malware with a best accuracy of 98.9%, and successfully detects 86.7% of the new malware.",
"title": ""
},
{
"docid": "a59e56199b81bb741470455c47668a03",
"text": "Cloud-based file synchronization services, such as Dropbox and OneDrive, are a worldwide resource for many millions of users. However, individual services often have tight resource limits, suffer from temporary outages or even shutdowns, and sometimes silently corrupt or leak user data. We design, implement, and evaluate MetaSync, a secure and reliable file synchronization service that uses multiple cloud synchronization services as untrusted storage providers. To make MetaSync work correctly, we devise a novel variant of Paxos that provides efficient and consistent updates on top of the unmodified APIs exported by existing services. Our system automatically redistributes files upon adding, removing, or resizing a provider. Our evaluation shows that MetaSync provides low update latency and high update throughput, close to the performance of commercial services, but is more reliable and available. MetaSync outperforms its underlying cloud services by 1.2-10× on three realistic workloads.",
"title": ""
},
{
"docid": "05f941acd4b2bd1188c7396d7edbd684",
"text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming",
"title": ""
},
{
"docid": "2d6ea84dcdae28291c5fdca01495d51f",
"text": "This paper presents how to generate questions from given passages using neural networks, where large scale QA pairs are automatically crawled and processed from Community-QA website, and used as training data. The contribution of the paper is 2-fold: First, two types of question generation approaches are proposed, one is a retrieval-based method using convolution neural network (CNN), the other is a generation-based method using recurrent neural network (RNN); Second, we show how to leverage the generated questions to improve existing question answering systems. We evaluate our question generation method for the answer sentence selection task on three benchmark datasets, including SQuAD, MS MARCO, and WikiQA. Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.",
"title": ""
},
{
"docid": "55f253cfb67ee0ba79b1439cc7e1764b",
"text": "Despite legislative attempts to curtail financial statement fraud, it continues unabated. This study makes a renewed attempt to aid in detecting this misconduct using linguistic analysis with data mining on narrative sections of annual reports/10-K form. Different from the features used in similar research, this paper extracts three distinct sets of features from a newly constructed corpus of narratives (408 annual reports/10-K, 6.5 million words) from fraud and non-fraud firms. Separately each of these three sets of features is put through a suite of classification algorithms, to determine classifier performance in this binary fraud/non-fraud discrimination task. From the results produced, there is a clear indication that the language deployed by management engaged in wilful falsification of firm performance is discernibly different from truth-tellers. For the first time, this new interdisciplinary research extracts features for readability at a much deeper level, attempts to draw out collocations using n-grams and measures tone using appropriate financial dictionaries. This linguistic analysis with machine learning-driven data mining approach to fraud detection could be used by auditors in assessing financial reporting of firms and early detection of possible misdemeanours.",
"title": ""
},
{
"docid": "c0a51f27931d8314b73a7de969bdfb08",
"text": "Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results that are of strategic importance to both decision makers and technology implementers.",
"title": ""
},
{
"docid": "4072b14516d9a7b74bec64535cdb64d8",
"text": "The idea of a unified citation index to the literature of science was first outlined by Eugene Garfield [1] in 1955 in the journal Science. Science Citation Index has since established itself as the gold standard for scientific information retrieval. It has also become the database of choice for citation analysts and evaluative bibliometricians worldwide. As scientific publication moves to the web, and novel approaches to scholarly communication and peer review establish themselves, new methods of citation and link analysis will emerge to capture often liminal expressions of peer esteem, influence and approbation. The web thus affords bibliometricians rich opportunities to apply and adapt their techniques to new contexts and content: the age of ‘bibliometric spectroscopy’ [2] is dawning.",
"title": ""
}
] |
scidocsrr
|
cff1edebfa8507fb43893164df149c59
|
A Comparison of Crossover and Mutation in Genetic Programming
|
[
{
"docid": "8e1a65dd8bf9d8a4b67c46a0067ca42d",
"text": "Reading Genetic Programming IE Automatic Discovery ofReusable Programs (GPII) in its entirety is not a task for the weak-willed because the book without appendices is about 650 pages. An entire previous book by the same author [1] is devoted to describing Genetic Programming (GP), while this book is a sequel extolling an extension called Automatically Defined Functions (ADFs). The author, John R. Koza, argues that ADFs can be used in conjunction with GP to improve its efficacy on large problems. \"An automatically defined function (ADF) is a function (i.e., subroutine, procedure, module) that is dynamically evolved during a run of genetic programming and which may be called by a calling program (e.g., a main program) that is simultaneously being evolved\" (p. 1). Dr. Koza recommends adding the ADF technique to the \"GP toolkit.\" The book presents evidence that it is possible to interpret GP with ADFs as performing either a top-down process of problem decomposition or a bottom-up process of representational change to exploit identified regularities. This is stated as Main Point 1. Main Point 2 states that ADFs work by exploiting inherent regularities, symmetries, patterns, modularities, and homogeneities within a problem, though perhaps in ways that are very different from the style of programmers. Main Points 3 to 7 are appropriately qualified statements to the effect that, with a variety of problems, ADFs pay off be-",
"title": ""
}
] |
[
{
"docid": "2710599258f440d27efe958ed2cfb576",
"text": "In this paper, we present an evaluation of learning algorithms of a novel rule evaluation support method for postprocessing of mined results with rule evaluation models based on objective indices. Post-processing of mined results is one of the key processes in a data mining process. However, it is difficult for human experts to completely evaluate several thousands of rules from a large dataset with noises. To reduce the costs in such rule evaluation task, we have developed the rule evaluation support method with rule evaluation models, which learn from objective indices for mined classification rules and evaluations by a human expert for each rule. To enhance adaptability of rule evaluation models, we introduced a constructive meta-learning system to choose proper learning algorithms. Then, we have done the case study on the meningitis data mining as an actual problem",
"title": ""
},
{
"docid": "0f452a5b005437d05a18822dc929828b",
"text": "In recent years, new studies concentrating on analyzing user personality and finding credible content in social media have become quite popular. Most such work augments features from textual content with features representing the user's social ties and the tie strength. Social ties are crucial in understanding the network the people are a part of. However, textual content is extremely useful in understanding topics discussed and the personality of the individual. We bring a new dimension to this type of analysis with methods to compute the type of ties individuals have and the strength of the ties in each dimension. We present a new genre of behavioral features that are able to capture the \"function\" of a specific relationship without the help of textual features. Our novel features are based on the statistical properties of communication patterns between individuals such as reciprocity, assortativity, attention and latency. We introduce a new methodology for determining how such features can be compared to textual features, and show, using Twitter data, that our features can be used to capture contextual information present in textual features very accurately. Conversely, we also demonstrate how textual features can be used to determine social attributes related to an individual.",
"title": ""
},
{
"docid": "4c102cb77b3992f6cb29a117994804eb",
"text": "These current studies explored the impact of individual differences in personality factors on interface interaction and learning performance behaviors in both an interactive visualization and a menu-driven web table in two studies. Participants were administered 3 psychometric measures designed to assess Locus of Control, Extraversion, and Neuroticism. Participants were then asked to complete multiple procedural learning tasks in each interface. Results demonstrated that all three measures predicted completion times. Additionally, results analyses demonstrated personality factors also predicted the number of insights participants reported while completing the tasks in each interface. We discuss how these findings advance our ongoing research in the Personal Equation of Interaction.",
"title": ""
},
{
"docid": "69ad93c7b6224321d69456c23a4185ce",
"text": "Modeling fashion compatibility is challenging due to its complexity and subjectivity. Existing work focuses on predicting compatibility between product images (e.g. an image containing a t-shirt and an image containing a pair of jeans). However, these approaches ignore real-world ‘scene’ images (e.g. selfies); such images are hard to deal with due to their complexity, clutter, variations in lighting and pose (etc.) but on the other hand could potentially provide key context (e.g. the user’s body type, or the season) for making more accurate recommendations. In this work, we propose a new task called ‘Complete the Look’, which seeks to recommend visually compatible products based on scene images. We design an approach to extract training data for this task, and propose a novel way to learn the scene-product compatibility from fashion or interior design images. Our approach measures compatibility both globally and locally via CNNs and attention mechanisms. Extensive experiments show that our method achieves significant performance gains over alternative systems. Human evaluation and qualitative analysis are also conducted to further understand model behavior. We hope this work could lead to useful applications which link large corpora of real-world scenes with shoppable products.",
"title": ""
},
{
"docid": "c819096800cc1d758cd3bcf4949f2690",
"text": "Recent years have witnessed the trend of leveraging cloud-based services for large scale content storage, processing, and distribution. Security and privacy are among top concerns for the public cloud environments. Towards these security challenges, we propose and implement, on OpenStack Swift, a new client-side deduplication scheme for securely storing and sharing outsourced data via the public cloud. The originality of our proposal is twofold. First, it ensures better confidentiality towards unauthorized users. That is, every client computes a per data key to encrypt the data that he intends to store in the cloud. As such, the data access is managed by the data owner. Second, by integrating access rights in metadata file, an authorized user can decipher an encrypted file only with his private key.",
"title": ""
},
{
"docid": "ecaa3186ed84d41a4f2f451168da3ad8",
"text": "This paper introduces a new architecture for human pose estimation using a multilayer convolutional network architecture and a modified learning technique that learns low-level features and a higher-level weak spatial model. Unconstrained human pose estimation is one of the hardest problems in computer vision, and our new architecture and learning schema shows improvement over the current stateof-the-art. The main contribution of this paper is showing, for the first time, that a specific variation of deep learning is able to meet the performance, and in many cases outperform, existing traditional architectures on this task. The paper also discusses several lessons learned while researching alternatives, most notably, that it is possible to learn strong low-level feature detectors on regions that might only cover a few pixels in the image. Higher-level spatial models improve somewhat the overall result, but to a much lesser extent than expected. Many researchers previously argued that the kinematic structure and top-down information are crucial for this domain, but with our purely bottom-up, and weak spatial model, we improve on other more complicated architectures that currently produce the best results. This echos what many other researchers, like those in the speech recognition, object recognition, and other domains have experienced [26]. Figure 1: The green cross is our new technique’s wrist locator, the red cross is the state-of-the-art CVPR13 MODEC detector [38] on the FLIC database.",
"title": ""
},
{
"docid": "7f4701d8c9f651c3a551a91d19fd28d9",
"text": "Road extraction from aerial images has been a hot research topic in the field of remote sensing image analysis. In this letter, a semantic segmentation neural network, which combines the strengths of residual learning and U-Net, is proposed for road area extraction. The network is built with residual units and has similar architecture to that of U-Net. The benefits of this model are twofold: first, residual units ease training of deep networks. Second, the rich skip connections within the network could facilitate information propagation, allowing us to design networks with fewer parameters, however, better performance. We test our network on a public road data set and compare it with U-Net and other two state-of-the-art deep-learning-based road extraction methods. The proposed approach outperforms all the comparing methods, which demonstrates its superiority over recently developed state of the arts.",
"title": ""
},
{
"docid": "470d2c319aaff0e9afcbd6deab56dca8",
"text": "BACKGROUND\nMotivation and job satisfaction have been identified as key factors for health worker retention and turnover in low- and middle-income countries. District health managers in decentralized health systems usually have a broadened 'decision space' that enables them to positively influence health worker motivation and job satisfaction, which in turn impacts on retention and performance at district-level. The study explored the effects of motivation and job satisfaction on turnover intention and how motivation and satisfaction can be improved by district health managers in order to increase retention of health workers.\n\n\nMETHODS\nWe conducted a cross-sectional survey in three districts of the Eastern Region in Ghana and interviewed 256 health workers from several staff categories (doctors, nursing professionals, allied health workers and pharmacists) on their intentions to leave their current health facilities as well as their perceptions on various aspects of motivation and job satisfaction. The effects of motivation and job satisfaction on turnover intention were explored through logistic regression analysis.\n\n\nRESULTS\nOverall, 69% of the respondents reported to have turnover intentions. Motivation (OR = 0.74, 95% CI: 0.60 to 0.92) and job satisfaction (OR = 0.74, 95% CI: 0.57 to 0.96) were significantly associated with turnover intention and higher levels of both reduced the risk of health workers having this intention. The dimensions of motivation and job satisfaction significantly associated with turnover intention included career development (OR = 0.56, 95% CI: 0.36 to 0.86), workload (OR = 0.58, 95% CI: 0.34 to 0.99), management (OR = 0.51. 95% CI: 0.30 to 0.84), organizational commitment (OR = 0.36, 95% CI: 0.19 to 0.66), and burnout (OR = 0.59, 95% CI: 0.39 to 0.91).\n\n\nCONCLUSIONS\nOur findings indicate that effective human resource management practices at district level influence health worker motivation and job satisfaction, thereby reducing the likelihood for turnover. Therefore, it is worth strengthening human resource management skills at district level and supporting district health managers to implement retention strategies.",
"title": ""
},
{
"docid": "5e2e5ba17b6f44f2032c6c542918e23c",
"text": "BACKGROUND\nSubfertility and poor nutrition are increasing problems in Western countries. Moreover, nutrition affects fertility in both women and men. In this study, we investigate the association between adherence to general dietary recommendations in couples undergoing IVF/ICSI treatment and the chance of ongoing pregnancy.\n\n\nMETHODS\nBetween October 2007 and October 2010, couples planning pregnancy visiting the outpatient clinic of the Department of Obstetrics and Gynaecology of the Erasmus Medical Centre in Rotterdam, the Netherlands were offered preconception counselling. Self-administered questionnaires on general characteristics and diet were completed and checked during the visit. Six questions, based on dietary recommendations of the Netherlands Nutrition Centre, covered the intake of six main food groups (fruits, vegetables, meat, fish, whole wheat products and fats). Using the questionnaire results, we calculated the Preconception Dietary Risk score (PDR), providing an estimate of nutritional habits. Dietary quality increases with an increasing PDR score. We define ongoing pregnancy as an intrauterine pregnancy with positive heart action confirmed by ultrasound. For this analysis we selected all couples (n=199) who underwent a first IVF/ICSI treatment within 6 months after preconception counselling. We applied adjusted logistic regression analysis on the outcomes of interest using SPSS.\n\n\nRESULTS\nAfter adjustment for age of the woman, smoking of the woman, PDR of the partner, BMI of the couple and treatment indication we show an association between the PDR of the woman and the chance of ongoing pregnancy after IVF/ICSI treatment (odds ratio 1.65, confidence interval: 1.08-2.52; P=0.02]. Thus, a one-point increase in the PDR score associates with a 65% increased chance of ongoing pregnancy.\n\n\nCONCLUSIONS\nOur results show that increasing adherence to Dutch dietary recommendations in women undergoing IVF/ICSI treatment increases the chance of ongoing pregnancy. These data warrant further confirmation in couples achieving a spontaneous pregnancy and in randomized controlled trials.",
"title": ""
},
{
"docid": "6b754a8f97e8150118afdb0212af3d1d",
"text": "Association Rule Mining is a data mining technique which is well suited for mining Marketbasket dataset. The research described in the current paper came out during the early days of data mining research and was also meant to demonstrate the feasibility of fast scalable data mining algorithms. Although a few algorithms for mining association rules existed at the time, the Apriori and Apriori TID algorithms greatly reduced the overhead costs associated with generating association rules.",
"title": ""
},
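To make the level-wise candidate-generation idea behind Apriori concrete, here is a minimal sketch in our own words; the function names and toy baskets are illustrative and this is not the paper's pseudocode.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset whose support (fraction of transactions that
    contain it) is at least min_support, via level-wise candidate generation."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    result = {fs: support(fs) for fs in frequent}
    k = 2
    while frequent:
        # join step: unite frequent (k-1)-itemsets into k-item candidates
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
        frequent = {c for c in candidates if support(c) >= min_support}
        result.update({fs: support(fs) for fs in frequent})
        k += 1
    return result

if __name__ == "__main__":
    baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
               {"bread", "eggs"}, {"milk", "eggs"}]
    for itemset, s in sorted(apriori(baskets, 0.5).items(), key=lambda kv: -kv[1]):
        print(sorted(itemset), round(s, 2))
```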
{
"docid": "48a476d5100f2783455fabb6aa566eba",
"text": "Phylogenies are usually dated by calibrating interior nodes against the fossil record. This relies on indirect methods that, in the worst case, misrepresent the fossil information. Here, we contrast such node dating with an approach that includes fossils along with the extant taxa in a Bayesian total-evidence analysis. As a test case, we focus on the early radiation of the Hymenoptera, mostly documented by poorly preserved impression fossils that are difficult to place phylogenetically. Specifically, we compare node dating using nine calibration points derived from the fossil record with total-evidence dating based on 343 morphological characters scored for 45 fossil (4--20 complete) and 68 extant taxa. In both cases we use molecular data from seven markers (∼5 kb) for the extant taxa. Because it is difficult to model speciation, extinction, sampling, and fossil preservation realistically, we develop a simple uniform prior for clock trees with fossils, and we use relaxed clock models to accommodate rate variation across the tree. Despite considerable uncertainty in the placement of most fossils, we find that they contribute significantly to the estimation of divergence times in the total-evidence analysis. In particular, the posterior distributions on divergence times are less sensitive to prior assumptions and tend to be more precise than in node dating. The total-evidence analysis also shows that four of the seven Hymenoptera calibration points used in node dating are likely to be based on erroneous or doubtful assumptions about the fossil placement. With respect to the early radiation of Hymenoptera, our results suggest that the crown group dates back to the Carboniferous, ∼309 Ma (95% interval: 291--347 Ma), and diversified into major extant lineages much earlier than previously thought, well before the Triassic. [Bayesian inference; fossil dating; morphological evolution; relaxed clock; statistical phylogenetics.].",
"title": ""
},
{
"docid": "94e7afa5407dff50d7f7313f1ebc8016",
"text": "The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments.",
"title": ""
},
{
"docid": "0ba1f5e5828dfffa5dcb54b5f311453a",
"text": "BACKGROUND\nThe potential benefits of earthworm (Pheretima aspergillum) for healing have received considerable attention recently. Osteoblast and osteoclast activities are very important in bone remodeling, which is crucial to repair bone injuries. This study investigated the effects of earthworm extract on bone cell activities.\n\n\nMETHODS\nOsteoblast-like MG-63 cells and RAW 264.7 macrophage cells were used for identifying the cellular effects of different concentrations of earthworm extract on osteoblasts and osteoclasts, respectively. The optimal concentration of earthworm extract was determined by mitochondrial colorimetric assay, alkaline phosphatase activity, matrix calcium deposition, Western blotting and tartrate-resistant acid phosphatase activity.\n\n\nRESULTS\nEarthworm extract had a dose-dependent effect on bone cell activities. The most effective concentration of earthworm extract was 3 mg/ml, significantly increasing osteoblast proliferation and differentiation, matrix calcium deposition and the expression levels of alkaline phosphatase, osteopontin and osteocalcin. Conversely, 3 mg/ml earthworm extract significantly reduced the tartrate-resistant acid phosphatase activity of osteoclasts without altering cell viability.\n\n\nCONCLUSIONS\nEarthworm extract has beneficial effects on bone cell cultures, indicating that earthworm extract is a potential agent for use in bone regeneration.",
"title": ""
},
{
"docid": "bfd03d07c6a97a3b0a0c974f65070629",
"text": "People of skin of colour comprise the majority of the world's population and Asian subjects comprise more than half of the total population of the earth. Even so, the literature on the characteristics of the subjects with skin of colour is limited. Several groups over the past decades have attempted to decipher the underlying differences in skin structure and function in different ethnic skin types. However, most of these studies have been of small scale and in some studies interindividual differences in skin quality overwhelm any racial differences. There has been a recent call for more studies to address genetic together with phenotypic differences among different racial groups and in this respect several large-scale studies have been conducted recently. The most obvious ethnic skin difference relates to skin colour which is dominated by the presence of melanin. The photoprotection derived from this polymer influences the rate of the skin aging changes between the different racial groups. However, all racial groups are eventually subjected to the photoaging process. Generally Caucasians have an earlier onset and greater skin wrinkling and sagging signs than other skin types and in general increased pigmentary problems are seen in skin of colour although one large study reported that East Asians living in the U.S.A. had the least pigment spots. Induction of a hyperpigmentary response is thought to be through signaling by the protease-activated receptor-2 which together with its activating protease is increased in the epidermis of subjects with skin of colour. Changes in skin biophysical properties with age demonstrate that the more darkly pigmented subjects retaining younger skin properties compared with the more lightly pigmented groups. However, despite having a more compact stratum corneum (SC) there are conflicting reports on barrier function in these subjects. Nevertheless, upon a chemical or mechanical challenge the SC barrier function is reported to be stronger in subjects with darker skin despite having the reported lowest ceramide levels. One has to remember that barrier function relates to the total architecture of the SC and not just its lipid levels. Asian skin is reported to possess a similar basal transepidermal water loss (TEWL) to Caucasian skin and similar ceramide levels but upon mechanical challenge it has the weakest barrier function. Differences in intercellular cohesion are obviously apparent. In contrast reduced SC natural moisturizing factor levels have been reported compared with Caucasian and African American skin. These differences will contribute to differences in desquamation but few data are available. One recent study has shown reduced epidermal Cathepsin L2 levels in darker skin types which if also occurs in the SC could contribute to the known skin ashing problems these subjects experience. In very general terms as the desquamatory enzymes are extruded with the lamellar granules subjects with lowered SC lipid levels are expected to have lowered desquamatory enzyme levels. Increased pores size, sebum secretion and skin surface microflora occur in Negroid subjects. Equally increased mast cell granule size occurs in these subjects. The frequency of skin sensitivity is quite similar across different racial groups but the stimuli for its induction shows subtle differences. Nevertheless, several studies indicate that Asian skin maybe more sensitive to exogenous chemicals probably due to a thinner SC and higher eccrine gland density. 
In conclusion, we know more of the biophysical and somatosensory characteristics of ethnic skin types but clearly, there is still more to learn and especially about the inherent underlying biological differences in ethnic skin types.",
"title": ""
},
{
"docid": "5dbd994583805d41fb34837ca52fc712",
"text": "This editorial is part of a For-Discussion-Section of Methods of Information in Medicine about the paper \"Evidence-based Health informatics: How do we know what we know?\", written by Elske Ammenwerth [1]. Health informatics uses and applications have crept up on health systems over half a century, starting as simple automation of large-scale calculations, but now manifesting in many cases as rule- and algorithm-based creation of composite clinical analyses and 'black box' computation of clinical aspects, as well as enablement of increasingly complex care delivery modes and consumer health access. In this process health informatics has very largely bypassed the rules of precaution, proof of effectiveness, and assessment of safety applicable to all other health sciences and clinical support systems. Evaluation of informatics applications, compilation and recognition of the importance of evidence, and normalisation of Evidence Based Health Informatics, are now long overdue on grounds of efficiency and safety. Ammenwerth has now produced a rigorous analysis of the current position on evidence, and evaluation as its lifeblood, which demands careful study then active promulgation. Decisions based on political aspirations, 'modernisation' hopes, and unsupported commercial claims must cease - poor decisions are wasteful and bad systems can kill. Evidence Based Health Informatics should be promoted, and expected by users, as rigorously as Cochrane promoted Effectiveness and Efficiency, and Sackett promoted Evidence Based Medicine - both of which also were introduced retrospectively to challenge the less robust and partially unsafe traditional 'wisdom' in vogue. Ammenwerth's analysis gives the necessary material to promote that mission.",
"title": ""
},
{
"docid": "191b8e99293f90907a7e923ba7102832",
"text": "Nanosecond-level clock synchronization can be an enabler of a new spectrum of timingand delay-critical applications in data centers. However, the popular clock synchronization algorithm, NTP, can only achieve millisecond-level accuracy. Current solutions for achieving a synchronization accuracy of 10s-100s of nanoseconds require specially designed hardware throughout the network for combatting random network delays and component noise or to exploit clock synchronization inherent in Ethernet standards for the PHY. In this paper, we present HUYGENS, a software clock synchronization system that uses a synchronization network and leverages three key ideas. First, coded probes identify and reject impure probe data—data captured by probes which suffer queuing delays, random jitter, and NIC timestamp noise. Next, HUYGENS processes the purified data with Support Vector Machines, a widely-used and powerful classifier, to accurately estimate one-way propagation times and achieve clock synchronization to within 100 nanoseconds. Finally, HUYGENS exploits a natural network effect—the idea that a group of pair-wise synchronized clocks must be transitively synchronized— to detect and correct synchronization errors even further. Through evaluation of two hardware testbeds, we quantify the imprecision of existing clock synchronization across server-pairs, and the effect of temperature on clock speeds. We find the discrepancy between clock frequencies is typically 5-10μs/sec, but it can be as much as 30μs/sec. We show that HUYGENS achieves synchronization to within a few 10s of nanoseconds under varying loads, with a negligible overhead upon link bandwidth due to probes. Because HUYGENS is implemented in software running on standard hardware, it can be readily deployed in current data centers.",
"title": ""
},
{
"docid": "dd0d89e7f223023bd1624e6e46017cb1",
"text": "We present an attention-based model that reasons on human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model towards viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention.",
"title": ""
},
{
"docid": "62818ee0abe188240a41368454945b89",
"text": "In this paper, we discuss the use of a robotic arm for testing phone software features, such as image rectification, on mobile devices. The problem statement is that we needed an accurate and a precise test automation system for testing and validating the computer vision algorithms used for image rectification in a mobile phone. Manual testing may be error-prone and tedious and thereby the need for a reliable test automation system is of utmost necessity to check the software quality of the image rectification algorithms. The solution to this problem was to design and develop a test automation system using a robotic arm to validate the image rectification algorithms. The robotic arm-based software test automation was deployed and has performed functional performance-based stability tests on multiple software products. The reason for using a robotic arm setup is because it provides us with the flexibility to run our test cases using different speeds, rotation angles, and tilting angles. In this paper, we describe how the robotic arm rotation works. We first measure the center coordinate of the test subject relative to the base of the robotic arm. Then, a 3-D model of the subject is created with those coordinates via simulation mode to represent the real distance ratio setup. Then, the tip of the robotic arm is moved to the proper distance facing the subject. The tests were executed with clear and blurry images containing text with and without image rectification enabled. The result shows the increase in accuracy of text recognition with image rectification algorithm enabled. This paper talks about the design and development of the test automation for the image rectification feature and how we have used a robotic arm for automating this use case.",
"title": ""
},
{
"docid": "21aa2df33199b6fbdc64abd1ea65341b",
"text": "AIM\nBefore an attempt is made to develop any population-specific behavioural change programme, it is important to know what the factors that influence behaviours are. The aim of this study was to identify what are the perceived determinants that attribute to young people's choices to both consume and misuse alcohol.\n\n\nMETHOD\nUsing a descriptive survey design, a web-based questionnaire based on the Theory of Triadic Influence was administered to students aged 18-29 years at one university in Northern Ireland.\n\n\nRESULTS\nOut of the total respondents ( n = 595), knowledge scores on alcohol consumption and the health risks associated with heavy episodic drinking were high (92.4%, n = 550). Over half (54.1%, n = 322) cited the Internet as their main source for alcohol-related information. The three most perceived influential factors of inclination to misuse alcohol were strains/conflict within the family home ( M = 2.98, standard deviation ( SD) = 0.18, 98.7%, n = 587), risk taking/curiosity behaviour ( M = 2.97, SD = 0.27, 97.3%, n = 579) and the desire not to be socially alienated ( M = 2.94, SD = 0.33, 96%, n = 571). Females were statistically significantly more likely to be influenced by desire not to be socially alienated than males ( p = .029). Religion and personal reasons were the most commonly cited reasons for not drinking.\n\n\nCONCLUSION\nFuture initiatives to reduce alcohol misuse and alcohol-related harms need to focus on changing social normative beliefs and attitudes around alcohol consumption and the family and environmental factors that influence the choice of young adult's alcohol drinking behaviour. Investment in multi-component interventions may be a useful approach.",
"title": ""
},
{
"docid": "9c800a53208bf1ded97e963ed4f80b28",
"text": "We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.",
"title": ""
}
] |
scidocsrr
|
b1feb44601375ed2127eb60d9afa50a6
|
Higher-order Web link analysis using multilinear algebra
|
[
{
"docid": "493748a07dbf457e191487fe7459ee7e",
"text": "60 Computer T he Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: Taken as a whole, the set of Web pages lacks a unifying structure and shows far more author-ing style and content variation than that seen in traditional text-document collections. This level of complexity makes an \" off-the-shelf \" database-management and information-retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. The largest such search engines exploit technology's ability to store and index much of the Web. Such engines can therefore build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained keywords and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or million relevant Web pages. Yet a user will be willing, typically , to look at only a few of these pages. How then, from this sea of pages, should a search engine select the correct ones—those of most value to the user? AUTHORITATIVE WEB PAGES First, to distill a large Web search topic to a size that makes sense to a human user, we need a means of identifying the topic's most definitive or authoritative Web pages. The notion of authority adds a crucial second dimension to the concept of relevance: We wish to locate not only a set of relevant pages, but also those relevant pages of the highest quality. Second, the Web consists not only of pages, but hyperlinks that connect one page to another. This hyperlink structure contains an enormous amount of latent human annotation that can help automatically infer notions of authority. Specifically, the creation of a hyperlink by the author of a Web page represents an implicit endorsement of the page being pointed to; by mining the collective judgment contained in the set of such endorsements, we can gain a richer understanding of the relevance and quality of the Web's contents. To address both these parameters, we began development of the Clever system 1-3 three years ago. Clever …",
"title": ""
}
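The hub/authority intuition in the passage above is usually formalized as an iterative mutual-reinforcement computation (HITS). The sketch below is our own compact illustration of that iteration, not Clever's actual implementation.

```python
import numpy as np

def hits(adjacency, iterations=50):
    """Compute hub and authority scores for a directed link graph.
    adjacency[i][j] = 1 means page i links to page j."""
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    hubs = np.ones(n)
    auths = np.ones(n)
    for _ in range(iterations):
        auths = A.T @ hubs        # a page is authoritative if good hubs point to it
        auths /= np.linalg.norm(auths)
        hubs = A @ auths          # a page is a good hub if it points to good authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, auths

if __name__ == "__main__":
    # small toy web: pages 0 and 1 both link to page 2; page 2 links to page 3
    A = [[0, 0, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1],
         [0, 0, 0, 0]]
    h, a = hits(A)
    print("hubs       ", np.round(h, 3))
    print("authorities", np.round(a, 3))
```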
] |
[
{
"docid": "d23649c81665bc76134c09b7d84382d0",
"text": "This paper demonstrates the advantages of using controlled mobility in wireless sensor networks (WSNs) for increasing their lifetime, i.e., the period of time the network is able to provide its intended functionalities. More specifically, for WSNs that comprise a large number of statically placed sensor nodes transmitting data to a collection point (the sink), we show that by controlling the sink movements we can obtain remarkable lifetime improvements. In order to determine sink movements, we first define a Mixed Integer Linear Programming (MILP) analytical model whose solution determines those sink routes that maximize network lifetime. Our contribution expands further by defining the first heuristics for controlled sink movements that are fully distributed and localized. Our Greedy Maximum Residual Energy (GMRE) heuristic moves the sink from its current location to a new site as if drawn toward the area where nodes have the highest residual energy. We also introduce a simple distributed mobility scheme (Random Movement or S. Basagni ( ) Department of Electrical and Computer Engineering, Northeastern University e-mail: [email protected] A. Carosi · C. Petrioli Dipartimento di Informatica, Università di Roma “La Sapienza” e-mail: [email protected] C. Petrioli e-mail: [email protected] E. Melachrinoudis · Z. M. Wang Department of Mechanical and Industrial Engineering, Northeastern University e-mail: [email protected] Z. M. Wang e-mail: [email protected] RM) according to which the sink moves uncontrolled and randomly throughout the network. The different mobility schemes are compared through extensive ns2-based simulations in networks with different nodes deployment, data routing protocols, and constraints on the sink movements. In all considered scenarios, we observe that moving the sink always increases network lifetime. In particular, our experiments show that controlling the mobility of the sink leads to remarkable improvements, which are as high as sixfold compared to having the sink statically (and optimally) placed, and as high as twofold compared to uncontrolled mobility.",
"title": ""
},
{
"docid": "9b72d423e13bdd125b3a8c30b40e6b49",
"text": "With the increasing popularity of the web, some new web technologies emerged and introduced dynamics to web applications, in comparison to HTML, as a static programming language. JavaScript is the language that provided a dynamic web site which actively communicates with users. JavaScript is used in today's web applications as a client script language and on the server side. The JavaScript language supports the Model View Controller (MVC) architecture that maintains a readable code and clearly separates parts of the program code. The topic of this research is to compare the popular JavaScript frameworks: AngularJS, Ember, Knockout, Backbone. All four frameworks are based on MVC or similar architecture. In this paper, the advantages and disadvantages of each framework, the impact on application speed, the ways of testing such JS applications and ways to improve code security are presented.",
"title": ""
},
{
"docid": "3f3d63200529d015fb3f09ca7b268a79",
"text": "In this letter, a novel high-gain tetrahedron origami antenna is introduced. The antenna comprises a triangular-shaped monopole, a reflector, and two parasitic strip directors on a paper substrate. The directors and the reflector are employed to increase the antenna gain. The step-by-step origami folding procedure is presented in detail. The proposed design of antenna is verified by both simulations and measurements with a fabricated prototype. The antenna exhibits a 10-dB impedance bandwidth of 66% (2–4 GHz) and a peak gain of 9.5 dBi at 2.6 GHz.",
"title": ""
},
{
"docid": "af7584c0067de64024d364e321af133b",
"text": "Recommendation systems have wide-spread applications in both academia and industry. Traditionally, performance of recommendation systems has been measured by their precision. By introducing novelty and diversity as key qualities in recommender systems, recently increasing attention has been focused on this topic. Precision and novelty of recommendation are not in the same direction, and practical systems should make a trade-off between these two quantities. Thus, it is an important feature of a recommender system to make it possible to adjust diversity and accuracy of the recommendations by tuning the model. In this paper, we introduce a probabilistic structure to resolve the diversity–accuracy dilemma in recommender systems. We propose a hybrid model with adjustable level of diversity and precision such that one can perform this by tuning a single parameter. The proposed recommendation model consists of two models: one for maximization of the accuracy and the other one for specification of the recommendation list to tastes of users. Our experiments on two real datasets show the functionality of the model in resolving accuracy–diversity dilemma and outperformance of the model over other classic models. The proposed method could be extensively applied to real commercial systems due to its low computational complexity and significant performance.",
"title": ""
},
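One simple way to read the "single tunable parameter" idea in the abstract above is as a convex combination of an accuracy-oriented score and a novelty-oriented score per item. The sketch below is a hypothetical illustration of that mechanism in our own terms, not the paper's probabilistic model.

```python
def blend_recommendations(accuracy_scores, novelty_scores, lam, top_n=5):
    """Rank items by (1 - lam) * accuracy + lam * novelty.
    lam = 0 recovers the pure accuracy ranking, lam = 1 the pure novelty ranking."""
    items = accuracy_scores.keys() & novelty_scores.keys()
    combined = {i: (1 - lam) * accuracy_scores[i] + lam * novelty_scores[i] for i in items}
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

if __name__ == "__main__":
    acc = {"A": 0.9, "B": 0.8, "C": 0.4, "D": 0.3}   # e.g. predicted-rating scores
    nov = {"A": 0.1, "B": 0.2, "C": 0.9, "D": 0.8}   # e.g. inverse-popularity scores
    for lam in (0.0, 0.5, 1.0):
        print(lam, blend_recommendations(acc, nov, lam, top_n=2))
```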
{
"docid": "b1cdd7440e956c668e547b04adb51e7f",
"text": "Database design for data warehouses is based on the notion of the snowflake schema and its important special case, the star schema. The snowflake schema represents a dimensional model which is composed of a central fact table and a set of constituent dimension tables which can be further broken up into subdimension tables. We formalise the concept of a snowflake schema in terms of an acyclic database schema whose join tree satisfies certain structural properties. We then define a normal form for snowflake schemas which captures its intuitive meaning with respect to a set of functional and inclusion dependencies. We show that snowflake schemas in this normal form are independent as well as separable when the relation schemas are pairwise incomparable. This implies that relations in the data warehouse can be updated independently of each other as long as referential integrity is maintained. In addition, we show that a data warehouse in snowflake normal form can be queried by joining the relation over the fact table with the relations over its dimension and subdimension tables. We also examine an information-theoretic interpretation of the snowflake schema and show that the redundancy of the primary key of the fact table is zero. r 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ba6a8f6ba04434ab7fccf0abfc7c784c",
"text": "In this paper I discuss the curious lock of contact between developmental psychologists studying the principles of early learning and those concentrating on loter learning in children, where predispositions to learn certain types of concepts are less reodlly discussed. Instead, there is tacit agreement thot learning and tronsfer mechanisms ore content-independent and age-dependent. I argue here that one cannot study leornlng and transfer In a vacuum ond that children's ablllty to learn is lntimotely dependent on what they ore required to learn and the context in which they must learn it. Specifically, I orgue that children learn and transfer readily, even in traditlonol laboratory settings, if they are requlred ta extend their knowledge about causal mechanisms that they already understond. This point Is illustrated In o series of studies with children from 1 to 3 years of age leorning about simple mechanisms of physical causality (pushing-pulling, wetting, cutting, etc.). In addition, I document children's difficulty learning about causally lmpassi-ble events, such OS pulling with strings thot da not appear to make contact with the object they are pulling. Even young children transfer an the bosis of deep structural principles rather than perceptual features when they have access to the requisite domain-specific knowledge. I argue that a search far causal ex-plonatlons is the basis of broad understanding, of wide patterns of generalization , and of flexible transfer ond creative Inferential projections-in sum, the essential elements of meanlngful learning. In this paper I will consider the effects of principles that guide early learning, such as those described by Gelman (this issue), on later learning in children. This is not an easy task, as psychologists who have studied constraints, This paper is based on a talk given in the symposium, Structural Constraints on Cognitive Development, Psychonomics, 1986. Preparation of the manuscript was supported by NICHD Grant HD 06864. I wish to thank Anne Slattery for her patience and sensitivity with the toddlers in the string and tool studies. I thank Rita Gaskill for her word processing skills and patient work on the many versions of this manuscript, Usha Goswami and Mary Jo Kane for collaborating on studies, and Stephanie Lyons-Olsen and Alison McClain for helping collect data. I would also like to thank Rachel Gelman for her helpful comments, and Jim Greeno, Annette Karmiloff-Smith, and Doug Medin for their thoughtful reviews of this manuscript. Portions of the discussion are adapted from Brown (1989).",
"title": ""
},
{
"docid": "89a9293fb0fcac7d55cfb44a8032ce71",
"text": "Traditional spectral clustering methods cannot naturally learn the number of communities in a network and often fail to detect smaller community structure in dense networks because they are based upon external community connectivity properties such as graph cuts. We propose an algorithm for detecting community structure in networks called the leader-follower algorithm which is based upon the natural internal structure expected of communities in social networks. The algorithm uses the notion of network centrality in a novel manner to differentiate leaders (nodes which connect different communities) from loyal followers (nodes which only have neighbors within a single community). Using this approach, it is able to naturally learn the communities from the network structure and does not require the number of communities as an input, in contrast to other common methods such as spectral clustering. We prove that it will detect all of the communities exactly for any network possessing communities with the natural internal structure expected in social networks. More importantly, we demonstrate the effectiveness of the leader-follower algorithm in the context of various real networks ranging from social networks such as Facebook to biological networks such as an fMRI based human brain network. We find that the leader-follower algorithm finds the relevant community structure in these networks without knowing the number of communities beforehand. Also, because the leader-follower algorithm detects communities using their internal structure, we find that it can resolve a finer community structure in dense networks than common spectral clustering methods based on external community structure.",
"title": ""
},
{
"docid": "1065c331b4a9ae5209ee3f35e5a2041b",
"text": "Recent acts of extreme violence involving teens and associated links to violent video games have led to an increased interest in video game violence. Research suggests that violent video games influence aggressive behavior, aggressive affect, aggressive cognition, and physiological arousal. Anderson and Bushman [Annu. Rev. Psychol. 53 (2002) 27.] have posited a General Aggression Model (GAM) to explain the mechanism behind the link between violent video games and aggressive behavior. However, the influence of violent video games as a function of developmental changes across adolescence has yet to be addressed. The purpose of this review is to integrate the GAM with developmental changes that occur across adolescence. D 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f693b26866ca8eb2a893dead7aa0fb21",
"text": "This paper deals with response signals processing in eddy current non-destructive testing. Non-sinusoidal excitation is utilized to drive eddy currents in a conductive specimen. The response signals due to a notch with variable depth are calculated by numerical means. The signals are processed in order to evaluate the depth of the notch. Wavelet transformation is used for this purpose. Obtained results are presented and discussed in this paper. Streszczenie. Praca dotyczy sygnałów wzbudzanych przy nieniszczącym testowaniu za pomocą prądów wirowych. Przy pomocy symulacji numerycznych wyznaczono sygnały odpowiedzi dla niesinusoidalnych sygnałów wzbudzających i defektów o różnej głębokości. Celem symulacji jest wyznaczenie zależności pozwalającej wyznaczyć głębokość defektu w zależności od odbieranego sygnału. W artykule omówiono wykorzystanie do tego celu transformaty falkowej. (Analiza falkowa impulsowych prądów wirowych)",
"title": ""
},
{
"docid": "39598533576bdd3fa94df5a6967b9b2d",
"text": "Genetic Algorithm (GA) and other Evolutionary Algorithms (EAs) have been successfully applied to solve constrained minimum spanning tree (MST) problems of the communication network design and also have been used extensively in a wide variety of communication network design problems. Choosing an appropriate representation of candidate solutions to the problem is the essential issue for applying GAs to solve real world network design problems, since the encoding and the interaction of the encoding with the crossover and mutation operators have strongly influence on the success of GAs. In this paper, we investigate a new encoding crossover and mutation operators on the performance of GAs to design of minimum spanning tree problem. Based on the performance analysis of these encoding methods in GAs, we improve predecessor-based encoding, in which initialization depends on an underlying random spanning-tree algorithm. The proposed crossover and mutation operators offer locality, heritability, and computational efficiency. We compare with the approach to others that encode candidate spanning trees via the Pr?fer number-based encoding, edge set-based encoding, and demonstrate better results on larger instances for the communication spanning tree design problems. key words: minimum spanning tree (MST), communication network design, genetic algorithm (GA), node-based encoding",
"title": ""
},
{
"docid": "efc79bbb1800951e7193047fdc951101",
"text": "Location-based services (LBS) have attracted a great deal of attention recently. Outdoor localization can be solved by the GPS technique, but how to accurately and efficiently localize pedestrians in indoor environments is still a challenging problem. Recent techniques based on WiFi or pedestrian dead reckoning (PDR) have several limiting problems, such as the variation of WiFi signals and the drift of PDR. An auxiliary tool for indoor localization is landmarks, which can be easily identified based on specific sensor patterns in the environment, and this will be exploited in our proposed approach. In this work, we propose a sensor fusion framework for combining WiFi, PDR and landmarks. Since the whole system is running on a smartphone, which is resource limited, we formulate the sensor fusion problem in a linear perspective, then a Kalman filter is applied instead of a particle filter, which is widely used in the literature. Furthermore, novel techniques to enhance the accuracy of individual approaches are adopted. In the experiments, an Android app is developed for real-time indoor localization and navigation. A comparison has been made between our proposed approach and individual approaches. The results show significant improvement using our proposed framework. Our proposed system can provide an average localization accuracy of 1 m.",
"title": ""
},
{
"docid": "b62b8862d26e5ce5bcbd2b434aff5d0e",
"text": "In this demo paper we present Docear's research paper recommender system. Docear is an academic literature suite to search, organize, and create research articles. The users' data (papers, references, annotations, etc.) is managed in mind maps and these mind maps are utilized for the recommendations. Using content-based filtering methods, Docear's recommender achieves click-through rates around 6%, in some scenarios even over 10%.",
"title": ""
},
{
"docid": "da989da66f8c2019adf49eae97fc2131",
"text": "Psychedelic drugs are making waves as modern trials support their therapeutic potential and various media continue to pique public interest. In this opinion piece, we draw attention to a long-recognised component of the psychedelic treatment model, namely ‘set’ and ‘setting’ – subsumed here under the umbrella term ‘context’. We highlight: (a) the pharmacological mechanisms of classic psychedelics (5-HT2A receptor agonism and associated plasticity) that we believe render their effects exceptionally sensitive to context, (b) a study design for testing assumptions regarding positive interactions between psychedelics and context, and (c) new findings from our group regarding contextual determinants of the quality of a psychedelic experience and how acute experience predicts subsequent long-term mental health outcomes. We hope that this article can: (a) inform on good practice in psychedelic research, (b) provide a roadmap for optimising treatment models, and (c) help tackle unhelpful stigma still surrounding these compounds, while developing an evidence base for long-held assumptions about the critical importance of context in relation to psychedelic use that can help minimise harms and maximise potential benefits.",
"title": ""
},
{
"docid": "0c8d6441b5756d94cd4c3a0376f94fdc",
"text": "Electronic word of mouth (eWOM) has been an important factor influencing consumer purchase decisions. Using the ABC model of attitude, this study proposes a model to explain how eWOM affects online discussion forums. Specifically, we propose that platform (Web site reputation and source credibility) and customer (obtaining buying-related information and social orientation through information) factors influence purchase intentions via perceived positive eWOM review credibility, as well as product and Web site attitudes in an online community context. A total of 353 online discussion forum users in an online community (Fashion Guide) in Taiwan were recruited, and structural equation modeling (SEM) was used to test the research hypotheses. The results indicate that Web site reputation, source credibility, obtaining buying-related information, and social orientation through information positively influence perceived positive eWOM review credibility. In turn, perceived positive eWOM review credibility directly influences purchase intentions and also indirectly influences purchase intentions via product and Web site attitudes. Finally, we discuss the theoretical and managerial implications of the findings.",
"title": ""
},
{
"docid": "5dcf33299ebbf8b1de1a8e162a7859c1",
"text": "Firstly, olfactory association learning was used to determine the modulating effect of 5-HT4 receptor involvement in learning and long-term memory. Secondly, the effects of systemic injections of a 5-HT4 partial agonist and an antagonist on long-term potentiation (LTP) and depotentiation in the dentate gyrus (DG) were tested in freely moving rats. The modulating role of the 5-HT4 receptors was studied by using a potent, 5-HT4 partial agonist RS 67333 [1-(4-amino-5-chloro-2-methoxyphenyl)-3-(1-n-butyl-4-piperidinyl)-1-propanone] and a selective 5-HT4 receptor antagonist RS 67532 [1-(4-amino-5-chloro-2-(3,5-dimethoxybenzyloxyphenyl)-5-(1-piperidinyl)-1-propanone]. Agonist or antagonist systemic chronic injections prior to five training sessions yielded a facilitatory effect on procedural memory during the first session only with the antagonist. Systemic injection of the antagonist only before the first training session improved procedural memory during the first session and associative memory during the second session. Similar injection with the 5-HT4 partial agonist had an opposite effect. The systemic injection of the 5-HT4 partial agonist prior to the induction of LTP in the dentate gyrus by high-frequency stimulation was followed by a population spike increase, while the systemic injection of the antagonist accelerated the depotentiation 48 h later. The behavioural and physiological results pointed out the involvement of 5-HT4 receptors in processing related to the long-term hippocampal-dependent memory system, and suggest that specific 5-HT4 agonists could be used to treat amnesic patients with a dysfunction in this particular system.",
"title": ""
},
{
"docid": "9d60842315ad481ac55755160a581d74",
"text": "This paper presents an efficient DNN design with stochastic computing. Observing that directly adopting stochastic computing to DNN has some challenges including random error fluctuation, range limitation, and overhead in accumulation, we address these problems by removing near-zero weights, applying weight-scaling, and integrating the activation function with the accumulator. The approach allows an easy implementation of early decision termination with a fixed hardware design by exploiting the progressive precision characteristics of stochastic computing, which was not easy with existing approaches. Experimental results show that our approach outperforms the conventional binary logic in terms of gate area, latency, and power consumption.",
"title": ""
},
{
"docid": "00759cb892009cb002c3e1de9cb1bf7c",
"text": "Vehicles are currently being developed and sold with increasing levels of connectivity and automation. As with all networked computing devices, increased connectivity often results in a heightened risk of a cyber security attack. Furthermore, increased automation exacerbates any risk by increasing the opportunities for the adversary to implement a successful attack. In this paper, a large volume of publicly accessible literature is reviewed and compartmentalized based on the vulnerabilities identified and mitigation techniques developed. This review highlighted that the majority of studies are reactive and vulnerabilities are often discovered by friendly adversaries (white-hat hackers). Many gaps in the knowledge base were identified. Priority should be given to address these knowledge gaps to minimize future cyber security risks in the connected and autonomous vehicle sector.",
"title": ""
},
{
"docid": "d4bd583808c9e105264c001cbcb6b4b0",
"text": "It is common for clinicians, researchers, and public policymakers to describe certain drugs or objects (e.g., games of chance) as “addictive,” tacitly implying that the cause of addiction resides in the properties of drugs or other objects. Conventional wisdom encourages this view by treating different excessive behaviors, such as alcohol dependence and pathological gambling, as distinct disorders. Evidence supporting a broader conceptualization of addiction is emerging. For example, neurobiological research suggests that addictive disorders might not be independent:2 each outwardly unique addiction disorder might be a distinctive expression of the same underlying addiction syndrome. Recent research pertaining to excessive eating, gambling, sexual behaviors, and shopping also suggests that the existing focus on addictive substances does not adequately capture the origin, nature, and processes of addiction. The current view of separate addictions is similar to the view espoused during the early days of AIDS diagnosis, when rare diseases were not",
"title": ""
},
{
"docid": "b01e3b03cd418b9748de7546ef7a9ca2",
"text": "We describe a lightweight protocol for oblivious evaluation of a pseudorandom function (OPRF) in the presence of semihonest adversaries. In an OPRF protocol a receiver has an input r; the sender gets output s and the receiver gets output F(s; r), where F is a pseudorandom function and s is a random seed. Our protocol uses a novel adaptation of 1-out-of-2 OT-extension protocols, and is particularly efficient when used to generate a large batch of OPRF instances. The cost to realize m OPRF instances is roughly the cost to realize 3:5m instances of standard 1-out-of-2 OTs (using state-of-the-art OT extension). We explore in detail our protocol's application to semihonest secure private set intersection (PSI). The fastest state-of- the-art PSI protocol (Pinkas et al., Usenix 2015) is based on efficient OT extension. We observe that our OPRF can be used to remove their PSI protocol's dependence on the bit-length of the parties' items. We implemented both PSI protocol variants and found ours to be 3.1{3.6 faster than Pinkas et al. for PSI of 128-bit strings and sufficiently large sets. Concretely, ours requires only 3.8 seconds to securely compute the intersection of 220-size sets, regardless of the bitlength of the items. For very large sets, our protocol is only 4:3 slower than the insecure naive hashing approach for PSI.",
"title": ""
},
{
"docid": "ca898f6e889632dc01576e36ca5b4b8b",
"text": "In recent years, deep learning has had a profound impact on machine learning and artificial intelligence. Here we investigate if quantum algorithms for deep learning lead to an advantage over existing classical deep learning algorithms. We develop two quantum machine learning algorithms that reduce the time required to train a deep Boltzmann machine and allow richer classes of models, namely multi–layer, fully connected networks, to be efficiently trained without the use of contrastive divergence or similar approximations. Our algorithms may be used to efficiently train either full or restricted Boltzmann machines. By using quantum state preparation methods, we avoid the use of contrastive divergence approximation and obtain improved maximization of the underlying objective function.",
"title": ""
}
] |
scidocsrr
|
e15a1e7a28acde00a2749a3a091c9574
|
Non-rigid range-scan alignment using thin-plate splines
|
[
{
"docid": "d529b4f1992f438bb3ce4373090f8540",
"text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.",
"title": ""
}
] |
[
{
"docid": "f8d0929721ba18b2412ca516ac356004",
"text": "Because of the fact that vehicle crash tests are complex and complicated experiments it is advisable to establish their mathematical models. This paper contains an overview of the kinematic and dynamic relationships of a vehicle in a collision. There is also presented basic mathematical model representing a collision together with its analysis. The main part of this paper is devoted to methods of establishing parameters of the vehicle crash model and to real crash data investigation i.e. – creation of a Kelvin model for a real experiment, its analysis and validation. After model’s parameters extraction a quick assessment of an occupant crash severity is done. Key-Words: Modeling, vehicle crash, Kelvin model, data processing.",
"title": ""
},
{
"docid": "c5c7e3f4c18f660281a5bc25077aa184",
"text": "Procrastination, Academic Success and the Effectiveness of a Remedial Program Procrastination produces harmful effects for human capital investments and studying activities. Using data from a large sample of Italian undergraduates, we measure procrastination with the actual behaviour of students, considering the delay in finalizing their university enrolment procedure. We firstly show that procrastination is a strong predictor of students’ educational achievements. This result holds true controlling for quite reliable measures of cognitive abilities, a number of background characteristics and indicators of students’ motivation. Secondly, we investigate, using a Regression Discontinuity Design, the effects of a remedial program in helping students with different propensity to procrastinate. We show that the policy especially advantages students who tend to procrastinate, suggesting that also policies not directly aimed at handling procrastination can help to solve self-control problems. JEL Classification: D03, I21, D91, J01, J24",
"title": ""
},
{
"docid": "8b1fa33cc90434abddf5458e05db0293",
"text": "The Stand-Alone Modula-2 System (SAM2S) is a portable, concurrent operating system and Modula-2 programming support environment. It is based on a highly modular kernel task running on single process-multiplexed microcomputers. SAM2S offers extensive network communication facilities. It provides the foundation for the locally resident portions of the MICROS distributed operating system for large netcomputers. SAM2S now supports a five-pass Modula-2 compiler, a task linker, link and load file decoders, a static symbolic debugger, a filer, and other utility tasks. SAM2S is currently running on each node of a network of DEC LSI-11/23 and Heurikon/Motorola 68000 workstations connected by an Ethernet. This paper reviews features of Modula-2 for operating system development and outlines the design of SAM2S with special emphasis on its modularity and communication flexibility. The two SAM2S implementations differ mainly in their peripheral drivers and in the large amount of memory available on the 68000 systems. Modula-2 has proved highly suitable for writing large, portable, concurrent and distributed operating systems.",
"title": ""
},
{
"docid": "211aaf2a8935c42a5491fbe3acabde04",
"text": "This paper focuses on an important research problem of Big Data classification in intrusion detection system. Deep Belief Networks is introduced to the field of intrusion detection, and an intrusion detection model based on Deep Belief Networks is proposed to apply in intrusion recognition domain. The deep hierarchical model is a deep neural network classifier of a combination of multilayer unsupervised learning networks, which is called as Restricted Boltzmann Machine, and a supervised learning network, which is called as Back-propagation network. The experimental results on KDD CUP 1999 dataset demonstrate that the performance of Deep Belief Networks model is better than that of SVM and ANN.",
"title": ""
},
{
"docid": "dec78cff9fa87a3b51fc32681ba39a08",
"text": "Alkaline saponification is often used to remove interfering chlorophylls and lipids during carotenoids analysis. However, saponification also hydrolyses esterified carotenoids and is known to induce artifacts. To avoid carotenoid artifact formation during saponification, Larsen and Christensen (2005) developed a gentler and simpler analytical clean-up procedure involving the use of a strong basic resin (Ambersep 900 OH). They hypothesised a saponification mechanism based on their Liquid Chromatography-Photodiode Array (LC-PDA) data. In the present study, we show with LC-PDA-accurate mass-Mass Spectrometry that the main chlorophyll removal mechanism is not based on saponification, apolar adsorption or anion exchange, but most probably an adsorption mechanism caused by H-bonds and dipole-dipole interactions. We showed experimentally that esterified carotenoids and glycerolipids were not removed, indicating a much more selective mechanism than initially hypothesised. This opens new research opportunities towards a much wider scope of applications (e.g. the refinement of oils rich in phytochemical content).",
"title": ""
},
{
"docid": "038058a935f5068f0479eed364808b2f",
"text": "Recently, an increasing body of evidence suggests that developmental abnormalities related to schizophrenia may occur as early as the neonatal stage. Impairments of brain gray matter and wiring problems of axonal fibers are commonly suspected to be responsible for the disconnection hypothesis in schizophrenia adults, but significantly less is known in neonates. In this study, we investigated 26 neonates who were at genetic risk for schizophrenia and 26 demographically matched healthy neonates using both morphological and white matter networks to examine possible brain connectivity abnormalities. The results showed that both populations exhibited small-world network topology. Morphological network analysis indicated that the brain structural associations of the high-risk neonates tended to have globally lower efficiency, longer connection distance, and less number of hub nodes and edges with relatively higher betweenness. Subgroup analysis showed that male neonates were significantly disease-affected, while the female neonates were not. White matter network analysis, however, showed that the fiber networks were globally unaffected, although several subcortical-cortical connections had significantly less number of fibers in high-risk neonates. This study provides new lines of evidence in support of the disconnection hypothesis, reinforcing the notion that the genetic risk of schizophrenia induces alterations in both gray matter structural associations and white matter connectivity.",
"title": ""
},
{
"docid": "00ee345b31f0acc9d3ee59eb2daab737",
"text": "This communication sets the problem of incremental parsing in the context of a complete incremental compiling system. It turns out that, according to the incrementally paradigm of the attribute evaluator and data-flow analyzer to be used, two definitions of optimal incrementality in a parser are possible. Algorithms for achieving both forms of optimality are given, both of them based on ordinary LALR(1) parse tables. Optimality and correctness proofs, which are merely outlined in this communication, are made intuitive thanks to the concept of a well-formed list of threaded trees, a natural extension of the concept of threaded tree found in earlier works on incremental parsing.",
"title": ""
},
{
"docid": "921b4ecaed69d7396285909bd53a3790",
"text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n2 unknowns to n ), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability, in contrast, current method produces diffeomorpic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.",
"title": ""
},
{
"docid": "0acc3e9d20fe4c6007d41996aad135a1",
"text": "Introduction. This review studies rationale and outcome of vulvovaginal aesthetic surgery. Aim. Discuss procedures designed to alter genital appearance and function; investigate sexual, philosophical, and ethical issues; examine outcomes. Methods. (i) Medline search of the existing literature utilizing terms labiaplasty, clitoral hood reduction, hymenoplasty (HP), vaginoplasty (VP), perineoplasty (PP), female genital surgery, sexual satisfaction/body image, and anterior/posterior colporrhaphy; (ii) references from bibliographies of papers found through the literature search and in the author’s reading of available literature. Main Outcome Measures. (i) Demographics and psychosexual dynamics of women requesting female genital plastic/cosmetic surgery; (ii) overall and sexual satisfaction of subjects undergoing these procedures. Results. The majority of studies regarding patient satisfaction and sexual function after vaginal aesthetic and functional plastic procedures report beneficial results, with overall patient satisfaction in the 90–95% range, sexual satisfaction over 80–85%. These data are supported by outcome data from nonelective vaginal support procedures. Complications appear minor and acceptable to patients. There are little data available regarding outcomes and satisfaction of HP, or function during the rigors of subsequent vaginal childbirth, although the literature contains no case reports of labiaplasty disruption during parturition. Conclusion. Women requesting labiaplasty and reduction of their clitoral hoods do so for both cosmetic and functional (chafing, interference with coitus, interference with athletic activities, etc.) reasons, while patients requesting VP and/or PP do so in order to increase friction and sexual satisfaction, occasionally for aesthetic reasons. Patients appear generally happy with outcomes. The majority of patients undergoing genital plastic surgery report overall satisfaction and subjective enhancement of sexual function and body image, but the literature is retrospective. Female genital plastic surgery procedures appear to fulfill the majority of patient’s desires for cosmetic and functional improvement, as well as enhancement of the sexual experience. Little information is available regarding HP outcomes. Goodman MP. Female genital cosmetic and plastic surgery: A review. J Sex Med **;**:**–**.",
"title": ""
},
{
"docid": "f29cee48c229ba57d58a07650633bec4",
"text": "In this work, we improve the performance of intra-sentential zero anaphora resolution in Japanese using a novel method of recognizing subject sharing relations. In Japanese, a large portion of intrasentential zero anaphora can be regarded as subject sharing relations between predicates, that is, the subject of some predicate is also the unrealized subject of other predicates. We develop an accurate recognizer of subject sharing relations for pairs of predicates in a single sentence, and then construct a subject shared predicate network, which is a set of predicates that are linked by the subject sharing relations recognized by our recognizer. We finally combine our zero anaphora resolution method exploiting the subject shared predicate network and a state-ofthe-art ILP-based zero anaphora resolution method. Our combined method achieved a significant improvement over the the ILPbased method alone on intra-sentential zero anaphora resolution in Japanese. To the best of our knowledge, this is the first work to explicitly use an independent subject sharing recognizer in zero anaphora resolution.",
"title": ""
},
{
"docid": "231a4c5c5ef010300422b3cab8105290",
"text": "There have been many claims that the Internet represents a new nearly “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products—books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9–16% lower than prices in conventional outlets, depending on whether taxes, shipping, and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments—presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers. (Search; Competition; Internet; Price Dispersion; Menu Costs; Pricing; Intermediaries)",
"title": ""
},
{
"docid": "d87730770e080ee926a4859e421d4309",
"text": "The term metastasis is widely used to describe the endpoint of the process by which tumour cells spread from the primary location to an anatomically distant site. Achieving successful dissemination is dependent not only on the molecular alterations of the cancer cells themselves, but also on the microenvironment through which they encounter. Here, we reviewed the molecular alterations of metastatic gastric cancer (GC) as it reflects a large proportion of GC patients currently seen in clinic. We hope that further exploration and understanding of the multistep metastatic cascade will yield novel therapeutic targets that will lead to better patient outcomes.",
"title": ""
},
{
"docid": "731d9faffc834156d5218a09fbb82e27",
"text": "With this paper we take a first step to understand the appropriation of social media by the police. For this purpose we analyzed the Twitter communication by the London Metropolitan Police (MET) and the Greater Manchester Police (GMP) during the riots in August 2011. The systematic comparison of tweets demonstrates that the two forces developed very different practices for using Twitter. While MET followed an instrumental approach in their communication, in which the police aimed to remain in a controlled position and keep a distance to the general public, GMP developed an expressive approach, in which the police actively decreased the distance to the citizens. In workshops and interviews, we asked the police officers about their perspectives, which confirmed the identified practices. Our study discusses benefits and risks of the two approaches and the potential impact of social media on the evolution of the role of police in society.",
"title": ""
},
{
"docid": "f87ca97864fe5326b08264b0b127d12c",
"text": "In recent years, the computer science education community has shown strong commitment to broadening participation in computing in K-12 classrooms. Educational research highlights the critical role of professional development in supporting teachers to attract and effectively teach underrepresented students in computing. In this paper we present the Exploring Computer Science (ECS) professional development model and the research on which it is based. We also present findings about the impact of ECS professional development on teachers' practice. As computing education initiatives become increasingly concerned with scaling up from a regional to a nationwide presence, it is important to consider how the essential components of effective professional development can drive this reform.",
"title": ""
},
{
"docid": "47992375dbd3c5d0960c114d5a4854b2",
"text": "A new method is developed to represent probabilistic relations on multiple random events. Where previously knowledge bases containing probabilistic rules were used for this purpose, here a probabilitydistributionover the relations is directly represented by a Bayesian network. By using a powerful way of specifying conditional probability distributions in these networks, the resulting formalism is more expressive than the previous ones. Particularly, it provides for constraints on equalities of events, and it allows to define complex, nested combination functions.",
"title": ""
},
{
"docid": "c0e4aa45a961aa69bc5c52e7cf7c889d",
"text": "CRM gains increasing importance due to intensive competition and saturated markets. With the purpose of retaining customers, academics as well as practitioners find it crucial to build a churn prediction model that is as accurate as possible. This study applies support vector machines in a newspaper subscription context in order to construct a churn model with a higher predictive performance. Moreover, a comparison is made between two parameter-selection techniques, needed to implement support vector machines. Both techniques are based on grid search and cross-validation. Afterwards, the predictive performance of both kinds of support vector machine models is benchmarked to logistic regression and random forests. Our study shows that support vector machines show good generalization performance when applied to noisy marketing data. Nevertheless, the parameter optimization procedure plays an important role in the predictive performance. We show that only when the optimal parameter selection procedure is applied, support vector machines outperform traditional logistic regression, whereas random forests outperform both kinds of support vector machines. As a substantive contribution, an overview of the most important churn drivers is given. Unlike ample research, monetary value and frequency do not play an important role in explaining churn in this subscription-services application. Even though most important churn predictors belong to the category of variables describing the subscription, the influence of several client/company-interaction variables can not be neglected.",
"title": ""
},
{
"docid": "184da4d4589a3a9dc1f339042e6bc674",
"text": "Ocular dominance plasticity has long served as a successful model for examining how cortical circuits are shaped by experience. In this paradigm, altered retinal activity caused by unilateral eye-lid closure leads to dramatic shifts in the binocular response properties of neurons in the visual cortex. Much of the recent progress in identifying the cellular and molecular mechanisms underlying ocular dominance plasticity has been achieved by using the mouse as a model system. In this species, monocular deprivation initiated in adulthood also causes robust ocular dominance shifts. Research on ocular dominance plasticity in the mouse is starting to provide insight into which factors mediate and influence cortical plasticity in juvenile and adult animals.",
"title": ""
},
{
"docid": "8a7ca3dabe17e3e3b9b94a4348d362ff",
"text": "Contrast enhancement of an image can efficiently performed by Histogram Equalization. However, this method tends to introduce unnecessary visual deterioration such as saturation effect. One of the solutions to overcome this weakness is by preserving the mean brightness of the input image inside the output image. This paper proposes a new histogram equalization method called Contrast Stretching Recursively Separated Histogram Equalization (CSRSHE), for brightness preservation and image contrast enhancement. This algorithm applies a two stage approach: 1) A new intensity is assigned to each pixel according to an adaptive transfer function that is designed on the basis of the global and local statistics of the input image. 2) Performing recursive mean separate histogram equalization based on a modified local contrast stretching manipulation. We show that compared to other existent methods, CSRSHE preserves the image brightness more accurately and produces images with better contrast enhancement.",
"title": ""
},
{
"docid": "556c9a28f9bbd81d53e093b139ce7866",
"text": "This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.",
"title": ""
},
{
"docid": "9cdcf6718ace17a768f286c74c0eb11c",
"text": "Trapa bispinosa Roxb. which belongs to the family Trapaceae is a small herb well known for its medicinal properties and is widely used worldwide. Trapa bispinosa or Trapa natans is an important plant of Indian Ayurvedic system of medicine which is used in the problems of stomach, genitourinary system, liver, kidney, and spleen. It is bitter, astringent, stomachic, diuretic, febrifuge, and antiseptic. The whole plant is used in gonorrhea, menorrhagia, and other genital affections. It is useful in diarrhea, dysentery, ophthalmopathy, ulcers, and wounds. These are used in the validated conditions in pitta, burning sensation, dipsia, dyspepsia, hemorrhage, hemoptysis, diarrhea, dysentery, strangely, intermittent fever, leprosy, fatigue, inflammation, urethrorrhea, fractures, erysipelas, lumbago, pharyngitis, bronchitis and general debility, and suppressing stomach and heart burning. Maybe it is due to photochemical content of Trapa bispinosa having high quantity of minerals, ions, namely, Ca, K, Na, Zn, and vitamins; saponins, phenols, alkaloids, H-donation, flavonoids are reported in the plants. Nutritional and biochemical analyses of fruits of Trapa bispinosa in 100 g showed 22.30 and 71.55% carbohydrate, protein contents were 4.40% and 10.80%, a percentage of moisture, fiber, ash, and fat contents were 70.35 and 7.30, 2.05 and 6.35, 2.30 and 8.50, and 0.65 and 1.85, mineral contents of the seeds were 32 mg and 102.85 mg calcium, 1.4 and 3.8 mg Iron, and 121 and 325 mg phosphorus in 100 g, and seeds of Trapa bispinosa produced 115.52 and 354.85 Kcal of energy, in fresh and dry fruits, respectively. Chemical analysis of the fruit and fresh nuts having considerable water content citric acid and fresh fruit which substantiates its importance as dietary food also reported low crude lipid, and major mineral present with confirming good amount of minerals as an iron and manganese potassium were contained in the fruit. Crude fiber, total protein content of the water chestnut kernel, Trapa bispinosa are reported. In this paper, the recent reports on nutritional, phytochemical, and pharmacological aspects of Trapa bispinosa Roxb, as a medicinal and nutritional food, are reviewed.",
"title": ""
}
] |
scidocsrr
|
723c164531ea26a6ea564332c27f1b8c
|
Adversarial Examples Are a Natural Consequence of Test Error in Noise
|
[
{
"docid": "8c0c2d5abd8b6e62f3184985e8e01d66",
"text": "Neural networks are known to be vulnerable to adversarial examples: inputs that are close to natural inputs but classified incorrectly. In order to better understand the space of adversarial examples, we survey ten recent proposals that are designed for detection and compare their efficacy. We show that all can be defeated by constructing new loss functions. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and the properties believed to be intrinsic to adversarial examples are in fact not. Finally, we propose several simple guidelines for evaluating future proposed defenses.",
"title": ""
},
{
"docid": "7774017a3468e3e390753ebbe98af4d0",
"text": "We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed in practice. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception.",
"title": ""
},
{
"docid": "4e1a8239889f95f159a086f4c2fb20c6",
"text": "Advances in machine learning have led to broad deployment of systems with impressive performance on important problems. Nonetheless, these systems can be induced to make errors on data that are surprisingly similar to examples the learned system handles correctly. The existence of these errors raises a variety of questions about out-of-sample generalization and whether bad actors might use such examples to abuse deployed systems. As a result of these security concerns, there has been a flurry of recent papers proposing algorithms to defend against such malicious perturbations of correctly handled examples. It is unclear how such misclassifications represent a different kind of security problem than other errors, or even other attacker-produced examples that have no specific relationship to an uncorrupted input. In this paper, we argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, we establish a taxonomy of motivations, constraints, and abilities for more plausible adversaries. Finally, we provide a series of recommendations outlining a path forward for future work to more clearly articulate the threat model and perform more meaningful evaluation.",
"title": ""
}
] |
[
{
"docid": "488d55fbf55e9a7eb6e1122ac262bc35",
"text": "Adult stem cells provide replacement and repair descendants for normal turnover or injured tissues. These cells have been isolated and expanded in culture, and their use for therapeutic strategies requires technologies not yet perfected. In the 1970s, the embryonic chick limb bud mesenchymal cell culture system provided data on the differentiation of cartilage, bone, and muscle. In the 1980s, we used this limb bud cell system as an assay for the purification of inductive factors in bone. In the 1990s, we used the expertise gained with embryonic mesenchymal progenitor cells in culture to develop the technology for isolating, expanding, and preserving the stem cell capacity of adult bone marrow-derived mesenchymal stem cells (MSCs). The 1990s brought us into the new field of tissue engineering, where we used MSCs with site-specific delivery vehicles to repair cartilage, bone, tendon, marrow stroma, muscle, and other connective tissues. In the beginning of the 21st century, we have made substantial advances: the most important is the development of a cell-coating technology, called painting, that allows us to introduce informational proteins to the outer surface of cells. These paints can serve as targeting addresses to specifically dock MSCs or other reparative cells to unique tissue addresses. The scientific and clinical challenge remains: to perfect cell-based tissue-engineering protocols to utilize the body's own rejuvenation capabilities by managing surgical implantations of scaffolds, bioactive factors, and reparative cells to regenerate damaged or diseased skeletal tissues.",
"title": ""
},
{
"docid": "6943382325a38692a5fbb1ceb7bdb2fc",
"text": "Widespread use of wireless devices presents new challenges for network operators, who need to provide service to ever larger numbers of mobile end users, while ensuring Quality-of-Service guarantees. In this paper we describe a new distributed routing algorithm that performs dynamic load-balancing for wireless access networks. The algorithm constructs a load-balanced backbone tree, which simplifies routing and avoids per-destination state for routing and per-flow state for QoS reservations. We evaluate the performance of the algorithm using several metrics including adaptation to mobility, degree of load-balance, bandwidth blocking rate, and convergence speed. We find that the algorithm achieves better network utilization by lowering bandwidth blocking rates than other methods.",
"title": ""
},
{
"docid": "45252c6ffe946bf0f9f1984f60ffada6",
"text": "Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In this work we reparameterize discrete variational auto-encoders using the Gumbel-Max perturbation model that represents the Gibbs distribution using the arg max of randomly perturbed encoder. We subsequently apply the direct loss minimization technique to propagate gradients through the reparameterized arg max. The resulting gradient is estimated by the difference of the encoder gradients that are evaluated in two arg max predictions.",
"title": ""
},
{
"docid": "db550980a6988bcd9a96486619d6478c",
"text": "Atmospheric turbulence induced fading is one of the main impairments affecting free-space optics (FSO) communications. In recent years, Gamma-Gamma fading has become the dominant fading model for FSO links because of its excellent agreement with measurement data for a wide range of turbulence conditions. However, in contrast to RF communications, the analysis techniques for FSO are not well developed and prior work has mostly resorted to simulations and numerical integration for performance evaluation in Gamma-Gamma fading. In this paper, we express the pairwise error probabilities of single-input single- output (SISO) and multiple-input multiple-output (MIMO) FSO systems with intensity modulation and direct detection (IM/DD) as generalized infinite power series with respect to the signal- to-noise ratio. For numerical evaluation these power series are truncated to a finite number of terms and an upper bound for the associated approximation error is provided. The resulting finite power series enables fast and accurate numerical evaluation of the bit error rate of IM/DD FSO with on-off keying and pulse position modulation in SISO and MIMO Gamma-Gamma fading channels. Furthermore, we extend the well-known RF concepts of diversity and combining gain to FSO and Gamma-Gamma fading. In particular, we provide simple closed-form expressions for the diversity gain and the combining gain of MIMO FSO with repetition coding across lasers at the transmitter and equal gain combining or maximal ratio combining at the receiver.",
"title": ""
},
{
"docid": "08dfd4bb173f7d70cff710590b988f08",
"text": "Gallium-67 citrate is currently considered as the tracer of first choice in the diagnostic workup of fever of unknown origin (FUO). Fluorine-18 2'-deoxy-2-fluoro-D-glucose (FDG) has been shown to accumulate in malignant tumours but also in inflammatory processes. The aim of this study was to prospectively evaluate FDG imaging with a double-head coincidence camera (DHCC) in patients with FUO in comparison with planar and single-photon emission tomography (SPET) 67Ga citrate scanning. Twenty FUO patients underwent FDG imaging with a DHCC which included transaxial and longitudinal whole-body tomography. In 18 of these subjects, 67Ga citrate whole-body and SPET imaging was performed. The 67Ga citrate and FDG images were interpreted by two investigators, both blinded to the results of other diagnostic modalities. Forty percent (8/20) of the patients had infection, 25% (5/20) had auto-immune diseases, 10% (2/20) had neoplasms and 15% (3/20) had other diseases. Fever remained unexplained in 10% (2/20) of the patients. Of the 20 patients studied, FDG imaging was positive and essentially contributed to the final diagnosis in 11 (55%). The sensitivity of transaxial FDG tomography in detecting the focus of fever was 84% and the specificity, 86%. Positive and negative predictive values were 92% and 75%, respectively. If the analysis was restricted to the 18 patients who were investigated both with 67Ga citrate and FDG, sensitivity was 81% and specificity, 86%. Positive and negative predictive values were 90% and 75%, respectively. The diagnostic accuracy of whole-body FDG tomography (again restricted to the aforementioned 18 patients) was lower (sensitivity, 36%; specificity, 86%; positive and negative predictive values, 80% and 46%, respectively). 67Ga citrate SPET yielded a sensitivity of 67% in detecting the focus of fever and a specificity of 78%. Positive and negative predictive values were 75% and 70%, respectively. A low sensitivity (45%), but combined with a high specificity (100%), was found in planar 67Ga imaging. Positive and negative predictive values were 100% and 54%, respectively. It is concluded that in the context of FUO, transaxial FDG tomography performed with a DHCC is superior to 67Ga citrate SPET. This seems to be the consequence of superior tracer kinetics of FDG compared with those of 67Ga citrate and of a better spatial resolution of a DHCC system compared with SPET imaging. In patients with FUO, FDG imaging with either dedicated PET or DHCC should be considered the procedure of choice.",
"title": ""
},
{
"docid": "5ac66257b2e43eb11ae906672acef904",
"text": "Noticing that different information sources often provide complementary coverage of word sense and meaning, we propose a simple and yet effective strategy for measuring lexical semantics. Our model consists of a committee of vector space models built on a text corpus, Web search results and thesauruses, and measures the semantic word relatedness using the averaged cosine similarity scores. Despite its simplicity, our system correlates with human judgements better or similarly compared to existing methods on several benchmark datasets, including WordSim353.",
"title": ""
},
{
"docid": "52462bd444f44910c18b419475a6c235",
"text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).",
"title": ""
},
{
"docid": "031a29780f2545a2bd21e9d85adf3791",
"text": "This paper presents a system for example-based character animation using shape interpolation (blend shapes) generated from tracked offline facial feature. First, the subject was recorded expressing various facial expressions (happy, sad, fear, anger, surprise). The recordings were then processed and marked on specific places (feature points) on the face where certain expressions would have the most significant change (the eyes and mouth area). The facial movements, or control parameters, of the human subject are used as movement parameter for the freeform deformation blend shape interpolation.",
"title": ""
},
{
"docid": "07b6fffa11253bcaf80f38518c74b1e9",
"text": "Targeted cancer therapies that use genetics are successful, but principles for selectively targeting tumor metabolism that is also dependent on the environment remain unknown. We now show that differences in rate-controlling enzymes during the Warburg effect (WE), the most prominent hallmark of cancer cell metabolism, can be used to predict a response to targeting glucose metabolism. We establish a natural product, koningic acid (KA), to be a selective inhibitor of GAPDH, an enzyme we characterize to have differential control properties over metabolism during the WE. With machine learning and integrated pharmacogenomics and metabolomics, we demonstrate that KA efficacy is not determined by the status of individual genes, but by the quantitative extent of the WE, leading to a therapeutic window in vivo. Thus, the basis of targeting the WE can be encoded by molecular principles that extend beyond the status of individual genes.",
"title": ""
},
{
"docid": "4cfd7fab35e081f2d6f81ec23c4d0d18",
"text": "In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.",
"title": ""
},
{
"docid": "7a9572c3c74f9305ac0d817b04e4399a",
"text": "Due to the limited length and freely constructed sentence structures, it is a difficult classification task for short text classification. In this paper, a short text classification framework based on Siamese CNNs and few-shot learning is proposed. The Siamese CNNs will learn the discriminative text encoding so as to help classifiers distinguish those obscure or informal sentence. The different sentence structures and different descriptions of a topic are viewed as ‘prototypes’, which will be learned by few-shot learning strategy to improve the classifier’s generalization. Our experimental results show that the proposed framework leads to better results in accuracies on twitter classifications and outperforms some popular traditional text classification methods and a few deep network approaches.",
"title": ""
},
{
"docid": "19a697a6c02d0519c3ed619763db5c73",
"text": "Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast eachnode can receive the complete information, or equivalently, what the information rate arriving at eachnode is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.",
"title": ""
},
{
"docid": "e3a4a8470fe3fdbd8f49386ee39de8d4",
"text": "This paper studies the problem of categorical data clustering, especially for transactional data characterized by high dimensionality and large volume. Starting from a heuristic method of increasing the height-to-width ratio of the cluster histogram, we develop a novel algorithm -- CLOPE, which is very fast and scalable, while being quite effective. We demonstrate the performance of our algorithm on two real world datasets, and compare CLOPE with the state-of-art algorithms.",
"title": ""
},
{
"docid": "753e0af8b59c8bfd13b63c3add904ffe",
"text": "Background: Surgery of face and parotid gland may cause injury to branches of the facial nerve, which results in paralysis of muscles of facial expression. Knowledge of branching patterns of the facial nerve and reliable landmarks of the surrounding structures are essential to avoid this complication. Objective: Determine the facial nerve branching patterns, the course of the marginal mandibular branch (MMB), and the extraparotid ramification in relation to the lateral palpebral line (LPL). Materials and methods: One hundred cadaveric half-heads were dissected for determining the facial nerve branching patterns according to the presence of anastomosis between branches. The course of the MMB was followed until it entered the depressor anguli oris in 49 specimens. The vertical distance from the mandibular angle to this branch was measured. The horizontal distance from the LPL to the otobasion superious (LPL-OBS) and the apex of the parotid gland (LPL-AP) were measured in 52 specimens. Results: The branching patterns of the facial nerve were categorized into six types. The least common (1%) was type I (absent of anastomosis), while type V, the complex pattern was the most common (29%). Symmetrical branching pattern occurred in 30% of cases. The MMB was coursing below the lower border of the mandible in 57% of cases. The mean vertical distance was 0.91±0.22 cm. The mean horizontal distances of LPL-OBS and LPLAP were 7.24±0.6 cm and 3.95±0.96 cm, respectively. The LPL-AP length was 54.5±11.4% of LPL-OBS. Conclusion: More complex branching pattern of the facial nerve was found in this population and symmetrical branching pattern occurred less of ten. The MMB coursed below the lower border of the angle of mandible with a mean vertical distance of one centimeter. The extraparotid ramification of the facial nerve was located in the area between the apex of the parotid gland and the LPL.",
"title": ""
},
{
"docid": "e5fa2011c64c3e1f7e9d97f545579d2b",
"text": "Remote health monitoring (RHM) can help save the cost burden of unhealthy lifestyles. Of increased popularity is the use of smartphones to collect data, measure physical activity, and provide coaching and feedback to users. One challenge with this method is to improve adherence to prescribed medical regimens. In this paper we present a new battery optimization method that increases the battery lifetime of smartphones which monitor physical activity. We designed a system, WANDA-CVD, to test our battery optimization method. The focus of this report describes our in-lab pilot study and a study aimed at reducing cardiovascular disease (CVD) in young women, the Women's Heart Health study. Conclusively, our battery optimization technique improved battery lifetime by 300%. This method also increased participant adherence to the remote health monitoring system in the Women's Heart Health study by 53%.",
"title": ""
},
{
"docid": "5d821a8605db10dc670cc5fcc3115d78",
"text": "Absfrucf In this paper a low-voltage two-stage O p Amp is presented. The O p Amp features rail-to-rail operation and has an @put stage with a constant transconductance (%) over the entire common-mode input range. The input stage consists of an nand a PMOS differential pair connected in parallel. The constant gm is accomplished by regulating the tail-currents with the aid of an MOS translinear (MTL) circuit. The resulting gn is constant within 5%",
"title": ""
},
{
"docid": "4b445e5198c74bd55acdeafb9d52fbb4",
"text": "It has been increasingly popular to build voice-over-IP (VoIP) applications based on peer-to-peer (P2P) networks in the Internet. However, many such VoIP applications free-ride the network bandwidth of Internet Service Providers (ISPs). Thus their success may come at a cost to ISPs, especially those on the edge of the Internet. In this paper, we study the VoIP quality of Skype, a popular P2P-based VoIP application. Specifically, using large-scale end-toend measurements, we first conduct a systematic analysis of Skype supernode network. We then investigate the impacts of the access capacity constraint and the AS policy constraint on the VoIP quality of Skype. We show that even when free-riding is no longer possible for only 20% of supernodes that are located in stub ISPs, the overall VoIP quality of Skype degrades significantly, and a large percentage of VoIP sessions will have unacceptable quality. This result clearly demonstrates the potential danger of building VoIP applications based on P2P networks without taking into account operational models of the Internet. We also study using time diversity in traffic patterns to reduce the impacts of the preceding",
"title": ""
},
{
"docid": "1ab4f605d67dabd3b2815a39b6123aa4",
"text": "This paper examines and provides the theoretical evidence of the feasibility of 60 GHz mmWave in wireless body area networks (WBANs), by analyzing its properties. It has been shown that 60 GHz based communication could better fit WBANs compared to traditional 2.4 GHz based communication because of its compact network coverage, miniaturized devices, superior frequency reuse, multi-gigabyte transmission rate and the therapeutic merits for human health. Since allowing coexistence among the WBANs can enhance the efficiency of the mmWave based WBANs, we formulated the coexistence problem as a non-cooperative distributed power control game. This paper proves the existence of Nash equilibrium (NE) and derives the best response move as a solution. The efficiency of the NE is also improved by modifying the utility function and introducing a pair of pricing factors. Our simulation results indicate that the proposed pricing policy significantly improves the efficiency in terms of Pareto optimality and social optimality.",
"title": ""
},
{
"docid": "2a1f1576ab73e190dce400dedf80df36",
"text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading motivation reconsidered the concept of competence is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.",
"title": ""
},
{
"docid": "1a5f56c7c7a9d44a762ba94297f3ca7a",
"text": "BACKGROUND\nFloods are the most common type of global natural disaster. Floods have a negative impact on mental health. Comprehensive evaluation and review of the literature are lacking.\n\n\nOBJECTIVE\nTo systematically map and review available scientific evidence on mental health impacts of floods caused by extended periods of heavy rain in river catchments.\n\n\nMETHODS\nWe performed a systematic mapping review of published scientific literature in five languages for mixed studies on floods and mental health. PUBMED and Web of Science were searched to identify all relevant articles from 1994 to May 2014 (no restrictions).\n\n\nRESULTS\nThe electronic search strategy identified 1331 potentially relevant papers. Finally, 83 papers met the inclusion criteria. Four broad areas are identified: i) the main mental health disorders-post-traumatic stress disorder, depression and anxiety; ii] the factors associated with mental health among those affected by floods; iii) the narratives associated with flooding, which focuses on the long-term impacts of flooding on mental health as a consequence of the secondary stressors; and iv) the management actions identified. The quantitative and qualitative studies have consistent findings. However, very few studies have used mixed methods to quantify the size of the mental health burden as well as exploration of in-depth narratives. Methodological limitations include control of potential confounders and short-term follow up.\n\n\nLIMITATIONS\nFloods following extreme events were excluded from our review.\n\n\nCONCLUSIONS\nAlthough the level of exposure to floods has been systematically associated with mental health problems, the paucity of longitudinal studies and lack of confounding controls precludes strong conclusions.\n\n\nIMPLICATIONS\nWe recommend that future research in this area include mixed-method studies that are purposefully designed, using more rigorous methods. Studies should also focus on vulnerable groups and include analyses of policy and practical responses.",
"title": ""
}
] |
scidocsrr
|
0695e55fafd7f12a1058c4e4a5a09da9
|
e-Services: Problems, Opportunities, and Digital Platforms
|
[
{
"docid": "667a2ea2b8ed7d2c709f04d8cd6617c6",
"text": "Knowledge centric activities of developing new products and services are becoming the primary source of sustainable competitive advantage in an era characterized by short product life cycles, dynamic markets and complex processes. We Ž . view new product development NPD as a knowledge-intensive activity. Based on a case study in the consumer electronics Ž . industry, we identify problems associated with knowledge management KM in the context of NPD by cross-functional collaborative teams. We map these problems to broad Information Technology enabled solutions and subsequently translate these into specific system characteristics and requirements. A prototype system that meets these requirements developed to capture and manage tacit and explicit process knowledge is further discussed. The functionalities of the system include functions for representing context with informal components, easy access to process knowledge, assumption surfacing, review of past knowledge, and management of dependencies. We demonstrate the validity our proposed solutions using scenarios drawn from our case study. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "378bbcc0416531f51ea05219361167f1",
"text": "This paper introduces a knowledge-driven approach to real-time, continuous activity recognition based on multisensor data streams in smart homes. The approach goes beyond the traditional data-centric methods for activity recognition in three ways. First, it makes extensive use of domain knowledge in the life cycle of activity recognition. Second, it uses ontologies for explicit context and activity modeling and representation. Third and finally, it exploits semantic reasoning and classification for activity inferencing, thus enabling both coarse-grained and fine-grained activity recognition. In this paper, we analyze the characteristics of smart homes and Activities of Daily Living (ADL) upon which we built both context and ADL ontologies. We present a generic system architecture for the proposed knowledge-driven approach and describe the underlying ontology-based recognition process. Special emphasis is placed on semantic subsumption reasoning algorithms for activity recognition. The proposed approach has been implemented in a function-rich software system, which was deployed in a smart home research laboratory. We evaluated the proposed approach and the developed system through extensive experiments involving a number of various ADL use scenarios. An average activity recognition rate of 94.44 percent was achieved and the average recognition runtime per recognition operation was measured as 2.5 seconds.",
"title": ""
},
{
"docid": "de106c668f8c7257deedbb6a0d51f5b7",
"text": "Human capital is a key success factor in any organisation. Dissatisfied and unhappy staff may not perform maximally, and this could affect an organisation’s products and services. This study examined the extent to which leadership style, correlated with job satisfaction intention in private university libraries South-West, Nigeria.Survey research design was adopted. The population consisted of all the 361 library staff. Findings revealed that the level of job satisfaction of library staff was low, and that the most practice leadership style is autocratic. It also revealed a significant relationship between leadership style and job satisfaction (r = 0.028, p < 0.05); The study concluded that leadership style contributed significantly to the low level of job satisfaction. It is recommended that library management should be more democratic, payment of allowances should be put in place. This would increase the job satisfaction of the employees in private university libraries. Background to the Study Organization’s competitive advantage, success, and sustainability in an ever increasing turbulent global market are mainly predicated on the job satisfaction and turnover intention of quality human capital. One major reason for a continued interest in the phenomenon of job satisfaction lies in embedded propensity for positive or negative effects on many forms of employees’ behavioural tendencies such as efficiency, productivity, employee relations, absenteeism and rate of turnover. Job satisfaction implies the way an individual feels about rewards, people, events and amount of mental gladness on the job; it can also be described as an emotional response to a job circumstance that may not be seen (Somvir, 2013). Job satisfaction therefore is a veritable ingredient in any work environment as it determines the behavioural patterns of the employees. It also relates to the degree to which workers’ needs and expectations are met in comparison to the prevailing national and global standard. Job satisfaction is conceptualized to mean the level of positive attitude that a librarian and other library staff displays when performing his/her duties in the university library, and the rate at which his/her basic needs are met by the employers. It is interesting to note that if librarians and other library staff are well catered for by the university authorities in the area of due recognition for a job well done, good leadership style for the administration of the university library coupled with a career development opportunities for librarians and other library staff to enhance development of their managerial skills, and conducive work environment as well as improved remunerations (good salaries and wages); their level of job satisfaction will be greatly improved from what is presently existing in most Nigerian universities. (Yaya, 2016). Unfortunately, it is observed that the level of job satisfaction among librarians and other library staff in most university libraries in Nigeria is probably very low compared to what is obtainable among other faculty members of the same educational sector. Job satisfaction as noted by Babalola and Nwalo (2013), enhance organisational success and reduce turnover intention of workers in any organizationespecially in library and information centres as a job satisfied worker is a happy and effective worker. Some factors that are in organisation may affect the job satisfaction of library staff. One of such factors is leadership style. 
Leadership style is an issue of concern that organizations should pay attention to; the leadership style prevalent in any establishment (including library and information centres) will influence the behaviour of employees in that organization. Leadership style plays a major role in determining the library staff job satisfaction. Thus, effective leadership is a key success factor in employees and organisation’s success or failure. It could be perceived as a process of working through people to achieve organisational goals and objectives. Leadership style can be described as the method or the style that a leader adopts in the management of resources in the organizations including human resources. Findings have shown that there are various leadership styles that can be adopted in the administration of organisations; (Khan, Khan, Qureshi, Ismail, Rauf, Latif, and Tahir 2015; Segun-Adeniran, 2015; Sharma &Jain 2013; Onuoha, 2013). Some of these styles are autocratic, democratic and laissez-fair. Other researchers also classified leadership styles as transactional, transformational and situational. In general, leaders at one point in time adopt a style of leadership in the day to day administration of their organizations; and the style of leadership that is prevalent in an organization/library and information centres will have influence on the organizational resources, functions and services or products. Leadership is a process or an act of inspiring people so as to get the best out of them and at the same time achieve expected results. The leadership style adopted by managers or leaders at one point or the other will influence the librarians’ job satisfaction and turnover intention. Various researchers such as (Kaladeh, (2013), Izidor and Iheriohanma (2015). pointed out that leadership style is crucial for staff job satisfaction and intention to stay, bearing in mind that lack of staff satisfaction can increase the gap in turnover intention rate and manpower deficiency in any organisation including library and information centres; and that leadership and supervision are important in employee retention, and that leadership behavior as perceived by employee, is an important factor of workers’ job satisfaction, dedication, retention and turnover intention. A suitable leadership style existing in any kind of organization could possibly foist and foster enduring organisational culture capable of inspiring employees buy-in for greater satisfaction and loyalty. Statement of the Problem Human capital is a key success factor in any organisation. Dissatisfied and unhappy employees in any organisation may not perform optimally and this may translate into poor productivity, high rate of staff turnover and threat to the organisation generally. Research has shown that the level of job satisfaction of library personnel in Nigerian university libraries is low (Babalola & Nwalo, 2013). This actually is a course for concern. Although some researchers such as Seed and Weseem, (2014); have been carried out on job satisfaction of staff in university libraries, from the researcher’s knowledge, these studies have not studied the variable of leadership style especially in private university libraries. The aim of this research is to find out the relationships among these variables; especially, the extent to which leadership styleinfluence the job satisfaction of library staff in private university libraries, South-West, Nigeria. 
Objective of the Study The general objective of the study is to investigate leadership style as a determinant of job satisfaction of library staff in private university libraries, South-West, Nigeria. The specific objectives are to: 1. find out the level of job satisfaction of library staff in private university libraries in south-west Nigeria; 2. ascertain the leadership styles prevalent in private university libraries in south-west Nigeria; 3. find out the relationship between leadership style and job satisfaction of library staff in private university libraries, south-west, Nigeria.",
"title": ""
},
{
"docid": "d2abcdcdb6650c30838507ec1521b263",
"text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.",
"title": ""
},
{
"docid": "a2df6d7e35323f02026b180270dcf205",
"text": "In an early study, a thermal model has been developed, using finite element simulations, to study the temperature field and response in the electron beam additive manufacturing (EBAM) process, with an ability to simulate single pass scanning only. In this study, an investigation was focused on the initial thermal conditions, redesigned to analyze a critical substrate thickness, above which the preheating temperature penetration will not be affected. Extended studies are also conducted on more complex process configurations, such as multi-layer raster scanning, which are close to actual operations, for more accurate representations of the transient thermal phenomenon.",
"title": ""
},
{
"docid": "d34e4224d30a367e0254ad4ba09425a7",
"text": "In this chapter, the intuitive link between balanced, healthy, and supportive psychosocial work environments and a variety of vitally important patient, nurse, and organizational outcomes is discussed with reference to a number of clearly defined and well-researched concepts. Among the essential concepts that ground the rest of the book is the notion of a bundle of factors that provide a context for nurses’ work and are known collectively as the practice environment. Landmark studies that focused specifically on nurses’ experiences of their work environments in exemplary hospitals examined so-called Magnet hospitals, leading to a framework that describes the practice environment and its linkage with professional wellbeing, occupational stress, and quality of practice and productivity. Many ideas and models have obvious connections to the notion of practice environment such as Job Demand– Control–Support model, worklife dimensions and burnout, concepts related to burnout such as compassion fatigue, and work engagement as a mirror image concept of burnout, as well as notions of empowerment and authentic leadership. These concepts have been chosen for discussion here based on critical masses of evidence pointing to their usefulness in healthcare management and specifically in the management of nursing services. Together all of these concepts and supporting research and scholarship speak to a common point: intentional leadership approaches, grounded in a comprehensive understanding of nurses’ psychosocial experiences of their work, are essential to nurses’ abilities to respond to complex patients’ needs in rapidly changing healthcare contexts and socioeconomic conditions.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "b4c5ddab0cb3e850273275843d1f264f",
"text": "The increase of malware that are exploiting the Internet daily has become a serious threat. The manual heuristic inspection of malware analysis is no longer considered effective and efficient compared against the high spreading rate of malware. Hence, automated behavior-based malware detection using machine learning techniques is considered a profound solution. The behavior of each malware on an emulated (sandbox) environment will be automatically analyzed and will generate behavior reports. These reports will be preprocessed into sparse vector models for further machine learning (classification). The classifiers used in this research are k-Nearest Neighbors (kNN), Naïve Bayes, J48 Decision Tree, Support Vector Machine (SVM), and Multilayer Perceptron Neural Network (MlP). Based on the analysis of the tests and experimental results of all the 5 classifiers, the overall best performance was achieved by J48 decision tree with a recall of 95.9%, a false positive rate of 2.4%, a precision of 97.3%, and an accuracy of 96.8%. In summary, it can be concluded that a proof-of-concept based on automatic behavior-based malware analysis and the use of machine learning techniques could detect malware quite effectively and efficiently.",
"title": ""
},
{
"docid": "c7162cc2e65c52d9575fe95e2c4f62f4",
"text": "The enactive approach to cognition is typically proposed as a viable alternative to traditional cognitive science. Enactive cognition displaces the explanatory focus from the internal representations of the agent to the direct sensorimotor interaction with its environment. In this paper, we investigate enactive learning through means of artificial agent simulations. We compare the performances of the enactive agent to an agent operating on classical reinforcement learning in foraging tasks within maze environments. The characteristics of the agents are analysed in terms of the accessibility of the environmental states, goals, and exploration/exploitation tradeoffs. We confirm that the enactive agent can successfully interact with its environment and learn to avoid unfavourable interactions using intrinsically defined goals. The performance of the enactive agent is shown to be limited by the number of affordable actions.",
"title": ""
},
{
"docid": "a5d56e4cd8273a1ce9e1a3c8b02a3cb4",
"text": "BACKGROUND\nTo date, only 1 controlled study has found a drug (haloperidol) to be efficacious in augmenting response in patients with obsessive-compulsive disorder (OCD) refractory to serotonin reuptake inhibitor (SRI) monotherapy; patients with comorbid chronic tic disorders showed a preferential response. This report describes the first controlled study of risperidone addition in patients with OCD refractory to treatment with SRI alone.\n\n\nMETHODS\nSeventy adult patients with a primary DSM-IV diagnosis of OCD received 12 weeks of treatment with an SRI. Thirty-six patients were refractory to the SRI and were randomized in a double-blind manner to 6 weeks of risperidone (n = 20) or placebo (n = 16) addition. Behavioral ratings, including the Yale-Brown Obsessive Compulsive Scale, were obtained at baseline and throughout the trial. Placebo-treated patients subsequently received an identical open-label trial of risperidone addition.\n\n\nRESULTS\nFor study completers, 9 (50%) of 18 risperidone-treated patients were responders (mean daily dose, 2.2 +/-0.7 mg/d) compared with 0 of 15 in the placebo addition group (P<. 005). Seven (50%) of 14 patients who received open-label risperidone addition responded. Risperidone addition was superior to placebo in reducing OCD (P<.001), depressive (P<.001), and anxiety (P =.003) symptoms. There was no difference in response between OCD patients with and without comorbid diagnoses of chronic tic disorder or schizotypal personalty disorder. Other than mild, transient sedation, risperidone was well tolerated.\n\n\nCONCLUSION\nThese results suggest that OCD patients with and without comorbid chronic tic disorders or schizotypal personality disorder may respond to the addition of low-dose risperidone to ongoing SRI therapy.",
"title": ""
},
{
"docid": "f7a2f86526209860d7ea89d3e7f2b576",
"text": "Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes CURATOR, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and EDISON, an NLP data structure library in Java that provides streamlined interactions with CURATOR and offers a range of useful supporting functionality.",
"title": ""
},
{
"docid": "2574576033f9cb0d3d65119d077cf9cf",
"text": "In this paper, we introduce a simple, yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks as well as ResNets. Our approach focuses upon the importance of a trainable pre-processing when using FC-ResNets and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of a FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that using this pipeline, we exhibit state-of-the-art performance on the challenging Electron Microscopy benchmark, when compared to other 2D methods. We improve segmentation results on CT images of liver lesions, when contrasting with standard FCN methods. Moreover, when applying our 2D pipeline on a challenging 3D MRI prostate segmentation challenge we reach results that are competitive even when compared to 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline by achieving accurate segmentations on a variety of image modalities and different anatomical regions.",
"title": ""
},
{
"docid": "76882dc402b82d9fffb0621bc6016259",
"text": "Representing discrete words in a continuous vector space turns out to be useful for natural language applications related to text understanding. Meanwhile, it poses extensive challenges, one of which is due to the polysemous nature of human language. A common solution (a.k.a word sense induction) is to separate each word into multiple senses and create a representation for each sense respectively. However, this approach is usually computationally expensive and prone to data sparsity, since each sense needs to be managed discriminatively. In this work, we propose a new framework for generating context-aware text representations without diving into the sense space. We model the concept space shared among senses, resulting in a framework that is efficient in both computation and storage. Specifically, the framework we propose is one that: i) projects both words and concepts into the same vector space; ii) obtains unambiguous word representations that not only preserve the uniqueness among words, but also reflect their context-appropriate meanings. We demonstrate the effectiveness of the framework in a number of tasks on text understanding, including word/phrase similarity measurements, paraphrase identification and question-answer relatedness classification.",
"title": ""
},
{
"docid": "957073d854607640cc3ca2255efe7315",
"text": "The mixed methods approach has emerged as a ‘‘third paradigm’’ for social research. It has developed a platform of ideas and practices that are credible and distinctive and that mark the approach out as a viable alternative to quantitative and qualitative paradigms. However, there are also a number of variations and inconsistencies within the mixed methods approach that should not be ignored. This article argues the need for a vision of research paradigm that accommodates such variations and inconsistencies. It is argued that the use of ‘‘communities of practice’’ as the basis for such a research paradigm is (a) consistent with the pragmatist underpinnings of the mixed methods approach, (b) accommodates a level of diversity, and (c) has good potential for understanding the methodological choices made by those conducting mixed methods research.",
"title": ""
},
{
"docid": "159e040b0e74ad1b6124907c28e53daf",
"text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. 
In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. 
This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to ",
"title": ""
},
{
"docid": "e4694f9cdbc8756398e5996b9cd78989",
"text": "In this paper, a 3D computer vision system for cognitive assessment and rehabilitation based on the Kinect device is presented. It is intended for individuals with body scheme dysfunctions and left-right confusion. The system processes depth information to overcome the shortcomings of a previously presented 2D vision system for the same application. It achieves left and right-hand tracking, and face and facial feature detection (eye, nose, and ears) detection. The system is easily implemented with a consumer-grade computer and an affordable Kinect device and is robust to drastic background and illumination changes. The system was tested and achieved a successful monitoring percentage of 96.28%. The automation of the human body parts motion monitoring, its analysis in relation to the psychomotor exercise indicated to the patient, and the storage of the result of the realization of a set of exercises free the rehabilitation experts of doing such demanding tasks. The vision-based system is potentially applicable to other tasks with minor changes.",
"title": ""
},
{
"docid": "a87ff679a2f3e71d9181a67b7542122c",
"text": "4",
"title": ""
},
{
"docid": "54b4726650b3afcddafb120ff99c9951",
"text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. The #HackHarassment initiative (an alliance of 1 tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecssors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researcherss) as a benchmark for future research.",
"title": ""
},
{
"docid": "549d486d6ff362bc016c6ce449e29dc9",
"text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated to excessive production of oxygen-derived free radicals and low-grade inflammation. Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.",
"title": ""
},
{
"docid": "e84dfdba40e25e3705a8aeee2f2e65f2",
"text": "Alopecia areata (AA) is a common form of autoimmune nonscarring hair loss of scalp and/or body. Atypical hair regrowth in AA is considered a rare phenomenon. It includes atypical pattern of hair growth (sudden graying, perinevoid alopecia, Renbok phenomenon, castling phenomenon, and concentric or targetoid regrowth) and atypical dark color hair regrowth. We report a case of AA that resulted in a concentric targetoid hair regrowth and discuss the possible related theories regarding the significance of this phenomenon.",
"title": ""
},
{
"docid": "6f845762227f11525173d6d0869f6499",
"text": "We argue that the estimation of mutual information between high dimensional continuous random variables can be achieved by gradient descent over neural networks. We present a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size, trainable through back-prop, and strongly consistent. We present a handful of applications on which MINE can be used to minimize or maximize mutual information. We apply MINE to improve adversarially trained generative models. We also use MINE to implement the Information Bottleneck, applying it to supervised classification; our results demonstrate substantial improvement in flexibility and performance in these settings.",
"title": ""
}
] |
scidocsrr
|
a9d60bf676ddffd0a1dc8fb4ea0144e3
|
Cortical oscillations and speech processing: emerging computational principles and operations
|
[
{
"docid": "dca57315342d58d96836fef9d7f52a71",
"text": "We examine the evidence that speech and musical sounds exploit different acoustic cues: speech is highly dependent on rapidly changing broadband sounds, whereas tonal patterns tend to be slower, although small and precise changes in frequency are important. We argue that the auditory cortices in the two hemispheres are relatively specialized, such that temporal resolution is better in left auditory cortical areas and spectral resolution is better in right auditory cortical areas. We propose that cortical asymmetries might have developed as a general solution to the need to optimize processing of the acoustic environment in both temporal and frequency domains.",
"title": ""
}
] |
[
{
"docid": "4dbcb1c6f2e855fa3e7d1a491b108689",
"text": "Guaranteed tuple processing has become critically important for many streaming applications. This paper describes how we enabled IBM Streams, an enterprise-grade stream processing system, to provide data processing guarantees. Our solution goes from language-level abstractions to a runtime protocol. As a result, with a couple of simple annotations at the source code level, IBM Streams developers can define consistent regions, allowing any subgraph of their streaming application to achieve guaranteed tuple processing. At runtime, a consistent region periodically executes a variation of the Chandy-Lamport snapshot algorithm to establish a consistent global state for that region. The coupling of consistent states with data replay enables guaranteed tuple processing.",
"title": ""
},
{
"docid": "94013936968a4864167ed4e764398deb",
"text": "A prime requirement for autonomous driving is a fast and reliable estimation of the motion state of dynamic objects in the ego-vehicle's surroundings. An instantaneous approach for extended objects based on two Doppler radar sensors has recently been proposed. In this paper, that approach is augmented by prior knowledge of the object's heading angle and rotation center. These properties can be determined reliably by state-of-the-art methods based on sensors such as LIDAR or cameras. The information fusion is performed utilizing an appropriate measurement model, which directly maps the motion state in the Doppler velocity space. This model integrates the geometric properties. It is used to estimate the object's motion state using a linear regression. Additionally, the model allows a straightforward calculation of the corresponding variances. The resulting method shows a promising accuracy increase of up to eight times greater than the original approach.",
"title": ""
},
{
"docid": "e2c6437d257559211d182b5707aca1a4",
"text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.",
"title": ""
},
{
"docid": "a1bff389a9a95926a052ded84c625a9e",
"text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.",
"title": ""
},
{
"docid": "79729b8f7532617015cbbdc15a876a5c",
"text": "We introduce recurrent neural networkbased Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of highorder sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.",
"title": ""
},
{
"docid": "dfde9a2febe48e273d12131082071635",
"text": "Instagram, an online photo-sharing platform, has gained increasing popularity. It allows users to take photos, apply digital filters and share them with friends instantaneously by using mobile devices.Instagram provides users with the functionality to associate their photos with points of interest, and it thus becomes feasible to study the association between points of interest and Instagram photos. However, no previous work studies the association. In this paper, we propose to study the problem of mapping Instagram photos to points of interest. To understand the problem, we analyze Instagram datasets, and report our findings, which also characterize the challenges of the problem. To address the challenges, we propose to model the mapping problem as a ranking problem, and develop a method to learn a ranking function by exploiting the textual, visual and user information of photos. To maximize the prediction effectiveness for textual and visual information, and incorporate the users' visiting preferences, we propose three subobjectives for learning the parameters of the proposed ranking function. Experimental results on two sets of Instagram data show that the proposed method substantially outperforms existing methods that are adapted to handle the problem.",
"title": ""
},
{
"docid": "9d99851970492cc4e8f6ac54967a5229",
"text": "BACKGROUND AND PURPOSE\nTranscranial Doppler (TCD) is used for diagnosis of vasospasm in patients with subarachnoid hemorrhage due to a ruptured aneurysm. Our aim was to evaluate both the accuracy of TCD compared with angiography and its usefulness as a screening method in this setting.\n\n\nMETHODS\nA search (MEDLINE, EMBASE, Cochrane Library, bibliographies, hand searching, any language, through January 31, 2001) was performed for studies comparing TCD with angiography. Data were critically appraised using a modified published 10-point score and were combined using a random-effects model.\n\n\nRESULTS\nTwenty-six reports compared TCD with angiography. Median validity score was 4.5 (range 1 to 8). Meta-analyses could be performed with data from 7 trials. For the middle cerebral artery (5 trials, 317 tests), sensitivity was 67% (95% CI 48% to 87%), specificity was 99% (98% to 100%), positive predictive value (PPV) was 97% (95% to 98%), and negative predictive value (NPV) was 78% (65% to 91%). For the anterior cerebral artery (3 trials, 171 tests), sensitivity was 42% (11% to 72%), specificity was 76% (53% to 100%), PPV was 56% (27% to 84%), and NPV was 69% (43% to 95%). Three of these 7 studies reported on the same patients, each on another artery, and for 4, data recycling could not be disproved. Other arteries were tested in only 1 trial each.\n\n\nCONCLUSIONS\nFor the middle cerebral artery, TCD is not likely to indicate a spasm when angiography does not show one (high specificity), and TCD may be used to identify patients with a spasm (high PPV). For all other situations and arteries, there is either lack of evidence of accuracy or of any usefulness of TCD. Most of these data are of low methodological quality, bias cannot not be ruled out, and data reporting is often uncritical.",
"title": ""
},
{
"docid": "7573ef144a5c1bc1d702f3a0e50fd89a",
"text": "This paper designs a new five-fingered robotic hand with a camera. Several morphological features of the human hand are integrated to improve the appearance of the hand. The drive system of this hand is under-actuated to eliminate the weight of the hand and to embed all the actuators inside the palm. Despite of this under-actuation, this hand can grasp objects in several different ways. In addition, the two different transmissions are adopted to drive the fingers according to their roles. These transmissions help not only to improve drive efficiency but also to secure the space of the embedded camera.",
"title": ""
},
{
"docid": "66878197b06f3fac98f867d5457acafe",
"text": "As a result of disparities in the educational system, numerous scholars and educators across disciplines currently support the STEAM (Science, Technology, Engineering, Art, and Mathematics) movement for arts integration. An educational approach to learning focusing on guiding student inquiry, dialogue, and critical thinking through interdisciplinary instruction, STEAM values proficiency, knowledge, and understanding. Despite extant literature urging for this integration, the trend has yet to significantly influence federal or state standards for K-12 education in the United States. This paper provides a brief and focused review of key theories and research from the fields of cognitive psychology and neuroscience outlining the benefits of arts integrative curricula in the classroom. Cognitive psychologists have found that the arts improve participant retention and recall through semantic elaboration, generation of information, enactment, oral production, effort after meaning, emotional arousal, and pictorial representation. Additionally, creativity is considered a higher-order cognitive skill and EEG results show novel brain patterns associated with creative thinking. Furthermore, cognitive neuroscientists have found that long-term artistic training can augment these patterns as well as lead to greater plasticity and neurogenesis in associated brain regions. Research suggests that artistic training increases retention and recall, generates new patterns of thinking, induces plasticity, and results in strengthened higher-order cognitive functions related to creativity. These benefits of arts integration, particularly as approached in the STEAM movement, are what develops students into adaptive experts that have the skills to then contribute to innovation in a variety of disciplines.",
"title": ""
},
{
"docid": "78b07bce8817c60dce98ad434d1fc3e0",
"text": "Boost converters are widely used as power-factorcorrected preregulators. In high-power applications, interleaved operation of two or more boost converters has been proposed to increase the output power and to reduce the output ripple. A major design criterion then is to ensure equal current sharing among the parallel converters. In this paper, a converter consisting of two interleaved and intercoupled boost converter cells is proposed and investigated. The boost converter cells have very good current sharing characteristics even in the presence of relatively large duty cycle mismatch. In addition, it can be designed to have small input current ripple and zero boost-rectifier reverse-recovery loss. The operating principle, steady-state analysis, and comparison with the conventional boost converter are presented. Simulation and experimental results are also given.",
"title": ""
},
{
"docid": "f17e088915b40617b29300b9c39e6d08",
"text": "Lossy image compression is generally formulated as a joint rate-distortion optimization problem to learn encoder, quantizer, and decoder. Due to the non-differentiable quantizer and discrete entropy estimation, it is very challenging to develop a convolutional network (CNN)-based image compression system. In this paper, motivated by that the local information content is spatially variant in an image, we suggest that: (i) the bit rate of the different parts of the image is adapted to local content, and (ii) the content-aware bit rate is allocated under the guidance of a content-weighted importance map. The sum of the importance map can thus serve as a continuous alternative of discrete entropy estimation to control compression rate. The binarizer is adopted to quantize the output of encoder and a proxy function is introduced for approximating binary operation in backward propagation to make it differentiable. The encoder, decoder, binarizer and importance map can be jointly optimized in an end-to-end manner. And a convolutional entropy encoder is further presented for lossless compression of importance map and binary codes. In low bit rate image compression, experiments show that our system significantly outperforms JPEG and JPEG 2000 by structural similarity (SSIM) index, and can produce the much better visual result with sharp edges, rich textures, and fewer artifacts.",
"title": ""
},
{
"docid": "4bc7687ba89699a537329f37dda4e74d",
"text": "At the same time as cities are growing, their share of older residents is increasing. To engage and assist cities to become more “age-friendly,” the World Health Organization (WHO) prepared the Global Age-Friendly Cities Guide and a companion “Checklist of Essential Features of Age-Friendly Cities”. In collaboration with partners in 35 cities from developed and developing countries, WHO determined the features of age-friendly cities in eight domains of urban life: outdoor spaces and buildings; transportation; housing; social participation; respect and social inclusion; civic participation and employment; communication and information; and community support and health services. In 33 cities, partners conducted 158 focus groups with persons aged 60 years and older from lower- and middle-income areas of a locally defined geographic area (n = 1,485). Additional focus groups were held in most sites with caregivers of older persons (n = 250 caregivers) and with service providers from the public, voluntary, and commercial sectors (n = 515). No systematic differences in focus group themes were noted between cities in developed and developing countries, although the positive, age-friendly features were more numerous in cities in developed countries. Physical accessibility, service proximity, security, affordability, and inclusiveness were important characteristics everywhere. Based on the recurring issues, a set of core features of an age-friendly city was identified. The Global Age-Friendly Cities Guide and companion “Checklist of Essential Features of Age-Friendly Cities” released by WHO serve as reference for other communities to assess their age readiness and plan change.",
"title": ""
},
{
"docid": "c7de7b159579b5c8668f2a072577322c",
"text": "This paper presents a method for effectively using unlabeled sequential data in the learning of hidden Markov models (HMMs). With the conventional approach, class labels for unlabeled data are assigned deterministically by HMMs learned from labeled data. Such labeling often becomes unreliable when the number of labeled data is small. We propose an extended Baum-Welch (EBW) algorithm in which the labeling is undertaken probabilistically and iteratively so that the labeled and unlabeled data likelihoods are improved. Unlike the conventional approach, the EBW algorithm guarantees convergence to a local maximum of the likelihood. Experimental results on gesture data and speech data show that when labeled training data are scarce, by using unlabeled data, the EBW algorithm improves the classification performance of HMMs more robustly than the conventional naive labeling (NL) approach. keywords Unlabeled data, sequential data, hidden Markov models, extended Baum-Welch algorithm.",
"title": ""
},
{
"docid": "41e10927206bebd484b1f137c89e31fe",
"text": "Cable-driven parallel robots (CDPR) are efficient manipulators able to carry heavy payloads across large workspaces. Therefore, the dynamic parameters such as the mobile platform mass and center of mass location may considerably vary. Without any adaption, the erroneous parametric estimate results in mismatch terms added to the closed-loop system, which may decrease the robot performances. In this paper, we introduce an adaptive dual-space motion control scheme for CDPR. The proposed method aims at increasing the robot tracking performances, while keeping all the cable tensed despite uncertainties and changes in the robot dynamic parameters. Reel-time experimental tests, performed on a large redundantly actuated CDPR prototype, validate the efficiency of the proposed control scheme. These results are compared to those obtained with a non-adaptive dual-space feedforward control scheme.",
"title": ""
},
{
"docid": "34257e8924d8f9deec3171589b0b86f2",
"text": "The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior.",
"title": ""
},
{
"docid": "d031e4e94a3303beece6c09160c08f67",
"text": "INTRODUCTION\nIrritable bowel syndrome (IBS) is characterized by recurrent abdominal pain, bloating, and changes in bowel habit.\n\n\nAIMS\nTo determine the clinical effectiveness of the antispasmodic agents available in Mexico for the treatment of IBS.\n\n\nMETHODS\nWe carried out a systematic review and meta-analysis of randomized controlled clinical trials on antispasmodic agents for IBS treatment. Clinical trials identified from January 1960 to May 2011 were searched for in MEDLINE, the Cochrane Library, and in the ClinicalTrials.gov registry. Treatment response was evaluated by global improvement of symptoms or abdominal pain, abdominal distention/bloating, and frequency of adverse events. The effect of antispasmodics vs placebo was expressed in OR and 95% CI.\n\n\nRESULTS\nTwenty-seven studies were identified, 23 of which fulfilled inclusion criteria. The studied agents were pinaverium bromide, mebeverine, otilonium, trimebutine, alverine, hyoscine, alverine/simethicone, pinaverium/simethicone, fenoverine, and dicyclomine. A total of 2585 patients were included in the meta-analysis. Global improvement was 1.55 (CI 95%: 1.33 to 1.83). Otilonium and the alverine/simethicone combination produced significant values in global improvement while the pinaverium/simethicone combination showed improvement in bloating. As for pain, 2394 patients were included with an OR of 1.52 (IC 95%: 1.28 a 1.80), favoring antispasmodics.\n\n\nCONCLUSIONS\nAntispasmodics were more effective than placebo in IBS, without any significant adverse events. The addition of simethicone improved the properties of the antispasmodic agents, as seen with the alverine/simethicone and pinaverium/simethicone combinations.",
"title": ""
},
{
"docid": "0964f14abc63d11b5dbbf538eb5f2443",
"text": "This paper proposes a novel double-stator axial-flux spoke-type permanent magnet vernier machine, which has a high torque density feature as well as a high-power factor at low speed for direct-drive systems. The operation principle and basic design procedure of the proposed machine are presented and discussed. The 3-D finite element method (3-D-FEM) is utilized to analyze its magnetic field and transient output performance. Furthermore, the analytical method and a simplified 2-D-FEM are also developed for the machine basic design and performance evaluation, which can effectively reduce the modeling and simulation time of the 3-D-FEM and achieve an adequate accuracy.",
"title": ""
},
{
"docid": "4bfac9df41641b88fb93f382202c6e85",
"text": "The objective was to evaluate the clinical efficacy of chemomechanical preparation of the root canals with sodium hypochlorite and interappointment medication with calcium hydroxide in the control of root canal infection and healing of periapical lesions. Fifty teeth diagnosed with chronic apical periodontitis were randomly allocated to one of three treatments: Single visit (SV group, n = 20), calcium hydroxide for one week (CH group n = 18), or leaving the canal empty but sealed for one week (EC group, n = 12). Microbiological samples were taken to monitor the infection during treatment. Periapical healing was controlled radiographically following the change in the periapical index at 52 wk and analyzed using one-way ANOVA. All cases showed microbiological growth in the beginning of the treatment. After mechanical preparation and irrigation with sodium hypochlorite in the first appointment, 20 to 33% of the cases showed growth. At the second appointment 33% of the cases in the CH group revealed bacteria, whereas the EC group showed remarkably more culture positive cases (67%). Sodium hypochlorite was effective also at the second appointment and only two teeth remained culture positive. Only minor differences in periapical healing were observed between the treatment groups. However, bacterial growth at the second appointment had a significant negative impact on healing of the periapical lesion (p < 0.01). The present study indicates good clinical efficacy of sodium hypochlorite irrigation in the control of root canal infection. Calcium hydroxide dressing between the appointments did not show the expected effect in disinfection the root canal system and treatment outcome, indicating the need to develop more efficient inter-appointment dressings.",
"title": ""
},
{
"docid": "e92e097189bd6135dd68b787bb4881aa",
"text": "Figure 1: (a) Our method with 6.7M triangles Rungholt scene. 55K shaded samples. Inset picture was taken through the lens of the Oculus Rift HMD. (b) Naı̈ve ray tracing. 1M shaded samples. Visual quality in our method is equivalent to the one produced by the naı̈ve method when seen through the HMD. (c) Our foveated sampling pattern and k-NN filtering method. Each cell corresponds to a sampling point. Real-time rendering over 60 fps is achieved with the OpenCLray tracer, running on four RadeonR9 290X GPUs.",
"title": ""
},
{
"docid": "ebb78503777a1a70fa32771094fe6a77",
"text": "In this paper we address the problem of unsupervised learning of discrete subword units. Our approach is based on Deep Autoencoders (AEs), whose encoding node values are thresholded to subsequently generate a symbolic, i.e., 1-of-K (with K = No. of subwords), representation of each speech frame. We experiment with two variants of the standard AE which we have named Binarized Autoencoder and Hidden-Markov-Model Encoder. The first forces the binary encoding nodes to have a Ushaped distribution (with peaks at 0 and 1) while minimizing the reconstruction error. The latter jointly learns the symbolic encoding representation (i.e., subwords) and the prior and transition distribution probabilities of the learned subwords. The ABX evaluation of the Zero Resource Challenge Track 1 shows that a deep AE with only 6 encoding nodes, which assigns to each frame a 1-of-K binary vector with K = 2, can outperform real-valued MFCC representations in the acrossspeaker setting. Binarized AEs can outperform standard AEs when using a larger number of encoding nodes, while HMM Encoders may allow more compact subword transcriptions without worsening the ABX performance.",
"title": ""
}
] |
scidocsrr
|
2542a7cc53a95db99d5801fea0128dbe
|
Plenoptic depth estimation from multiple aliased views
|
[
{
"docid": "21d9828d0851b4ded34e13f8552f3e24",
"text": "Light field cameras have been recently shown to be very effective in applications such as digital refocusing and 3D reconstruction. In a single snapshot these cameras provide a sample of the light field of a scene by trading off spatial resolution with angular resolution. Current methods produce images at a resolution that is much lower than that of traditional imaging devices. However, by explicitly modeling the image formation process and incorporating priors such as Lambertianity and texture statistics, these types of images can be reconstructed at a higher resolution. We formulate this method in a variational Bayesian framework and perform the reconstruction of both the surface of the scene and the (superresolved) light field. The method is demonstrated on both synthetic and real images captured with our light-field camera prototype.",
"title": ""
},
{
"docid": "751b853f780fc8047ff73ce646b68cd6",
"text": "This paper builds on previous research in the light field area of image-based rendering. We present a new reconstruction filter that significantly reduces the “ghosting” artifacts seen in undersampled light fields, while preserving important high-fidelity features such as sharp object boundaries and view-dependent reflectance. By improving the rendering quality achievable from undersampled light fields, our method allows acceptable images to be generated from smaller image sets. We present both frequency and spatial domain justifications for our techniques. We also present a practical framework for implementing the reconstruction filter in multiple rendering passes. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation ― Viewing algorithms; I.3.6 [Computer Graphics]: Methodologies and Techniques ― Graphics data structures and data types; I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture ― Sampling",
"title": ""
},
{
"docid": "8d24516bda25e60bf68362a88668f675",
"text": "Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to to the image created by the main camera lens and the outside object. As a result, only a single pixel in the final image can be rendered from it, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high resolution images that meet the expectations of modern photographers. Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views",
"title": ""
}
] |
[
{
"docid": "1390f0c41895ecabbb16c54684b88ca1",
"text": "Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist in safety critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Given the fact that stateof-the-art objection detection algorithms are harder to be fooled by the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly show both static and dynamic test results. We design an algorithm that produces physical adversarial inputs, which can fool the YOLO object detector and can also attack Faster-RCNN with relatively high success rate based on transferability. Furthermore, our algorithm can compress the size of the adversarial inputs to stickers that, when attached to the targeted object, result in the detector either mislabeling or not detecting the object a high percentage of the time. This note provides a small set of results. Our upcoming paper will contain a thorough evaluation on other object detectors, and will present the algorithm.",
"title": ""
},
{
"docid": "61a2b0e51b27f46124a8042d59c0f022",
"text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.",
"title": ""
},
{
"docid": "60e94f9a6731e1a148e05aa0f9a31683",
"text": "Bright light therapy for seasonal affective disorder (SAD) has been investigated and applied for over 20 years. Physicians and clinicians are increasingly confident that bright light therapy is a potent, specifically active, nonpharmaceutical treatment modality. Indeed, the domain of light treatment is moving beyond SAD, to nonseasonal depression (unipolar and bipolar), seasonal flare-ups of bulimia nervosa, circadian sleep phase disorders, and more. Light therapy is simple to deliver to outpatients and inpatients alike, although the optimum dosing of light and treatment time of day requires individual adjustment. The side-effect profile is favorable in comparison with medications, although the clinician must remain vigilant about emergent hypomania and autonomic hyperactivation, especially during the first few days of treatment. Importantly, light therapy provides a compatible adjunct to antidepressant medication, which can result in accelerated improvement and fewer residual symptoms.",
"title": ""
},
{
"docid": "447d46cb861541c0b6e542018a05b9d0",
"text": "Acupuncture is currently gaining popularity as an important modality of alternative and complementary medicine in the western world. Modern neuroimaging techniques such as functional magnetic resonance imaging, positron emission tomography, and magnetoencephalography open a window into the neurobiological foundations of acupuncture. In this review, we have summarized evidence derived from neuroimaging studies and tried to elucidate both neurophysiological correlates and key experimental factors involving acupuncture. Converging evidence focusing on acute effects of acupuncture has revealed significant modulatory activities at widespread cerebrocerebellar brain regions. Given the delayed effect of acupuncture, block-designed analysis may produce bias, and acupuncture shared a common feature that identified voxels that coded the temporal dimension for which multiple levels of their dynamic activities in concert cause the processing of acupuncture. Expectation in acupuncture treatment has a physiological effect on the brain network, which may be heterogeneous from acupuncture mechanism. \"Deqi\" response, bearing clinical relevance and association with distinct nerve fibers, has the specific neurophysiology foundation reflected by neural responses to acupuncture stimuli. The type of sham treatment chosen is dependent on the research question asked and the type of acupuncture treatment to be tested. Due to the complexities of the therapeutic mechanisms of acupuncture, using multiple controls is an optimal choice.",
"title": ""
},
{
"docid": "80a2d8b888af8b1cddc5c78f697474f2",
"text": "Following an approximate 12 month planning and definition period, implementation of the Next Generation Air Transportation System (NextGen) Communication, Navigation and Surveillance (CNS) Test Bed began in earnest during mid-year 2006. This Test Bed, focused on the evaluation of promising CNS technologies and systems, presently encompasses three airports in the Cleveland, Ohio region and a Test and Demonstration Center located at NASA Glenn Research Center (GRC).The Test Bed coverage will be extended before the end of 2007 to include a 200 mile radius around the Cleveland metropolitan area thus creating a wide area air-ground test and demonstration capability. The three included airports (Hopkins International, Burke Lakefront, and Lorain County Regional) are representative of various classes of airports across the country close to population centers that will be central to accommodating forecasted air travel growth over the next 20 years. Sensis, in partnership with the Cleveland Airport System, the Lorain County Regional Airport Authority, and the FAA, has installed an advanced version multilateration surveillance system and wireless communications infrastructure at each of these airports. Implemented as well is a prototype of a Remote Tower System (a.k.a., a Staffed Virtual Tower) that \"shadow controls\" operations at Burke Lakefront Airport from the Test and Demonstration Center some 13 miles away. The Test Bed is integrated as a system via a prototype of a regional information sharing and management system. Sensis Corporation, GRC, and their Test Bed partners are collaborating to define and test possible solutions for some of the most significant challenges to the success of NextGen. In addition to remote monitoring and control of airports, the issues under investigation by this team include validating the promised efficiencies of negotiated 4-D arrival and departure trajectories, assessing advanced integrated surveillance capabilities, helping solve the continuing safely problem of runway incursions, and improving the efficiency of airport and airline surface operations. GRC is planning to explore the potential for an IEEE 802.16e standard-based wireless airport surface communications network operating in the 5.1 GHz band, soon to be allocated for safety critical air-ground communications services. This paper summarizes the mid-2007 status of this Test Bed and shares plans for the next twelve months.",
"title": ""
},
{
"docid": "b842d759b124e1da0240f977d95a8b9a",
"text": "In this paper we argue for a broader view of ontology patterns and therefore present different use-cases where drawbacks of the current declarative pattern languages can be seen. We also discuss usecases where a declarative pattern approach can replace procedural-coded ontology patterns. With previous work on an ontology pattern language in mind we argue for a general pattern language.",
"title": ""
},
{
"docid": "2e389715d9beb1bc7c9ab06131abc67a",
"text": "Digital forensic science is very much still in its infancy, but is becoming increasingly invaluable to investigators. A popular area for research is seeking a standard methodology to make the digital forensic process accurate, robust, and efficient. The first digital forensic process model proposed contains four steps: Acquisition, Identification, Evaluation and Admission. Since then, numerous process models have been proposed to explain the steps of identifying, acquiring, analysing, storage, and reporting on the evidence obtained from various digital devices. In recent years, an increasing number of more sophisticated process models have been proposed. These models attempt to speed up the entire investigative process or solve various of problems commonly encountered in the forensic investigation. In the last decade, cloud computing has emerged as a disruptive technological concept, and most leading enterprises such as IBM, Amazon, Google, and Microsoft have set up their own cloud-based services. In the field of digital forensic investigation, moving to a cloud-based evidence processing model would be extremely beneficial and preliminary attempts have been made in its implementation. Moving towards a Digital Forensics as a Service model would not only expedite the investigative process, but can also result in significant cost savings – freeing up digital forensic experts and law enforcement personnel to progress their caseload. This paper aims to evaluate the applicability of existing digital forensic process models and analyse how each of these might apply to a cloudbased evidence processing paradigm.",
"title": ""
},
{
"docid": "228ede4e6914b6b0745de11dd6f980b2",
"text": "This paper describes two sequential methods for recovering the camera pose together with the 3D shape of highly deformable surfaces from a monocular video. The nonrigid 3D shape is modeled as a linear combination of mode shapes with time-varying weights that define the shape at each frame and are estimated on-the-fly. The low-rank constraint is combined with standard smoothness priors to optimize the model parameters over a sliding window of image frames. We propose to obtain a physics-based shape basis using the initial frames on the video to code the time-varying shape along the sequence, reducing the problem from trilinear to bilinear. To this end, the 3D shape is discretized by means of a soup of elastic triangular finite elements where we apply a force balance equation. This equation is solved using modal analysis via a simple eigenvalue problem to obtain a shape basis that encodes the modes of deformation. Even though this strategy can be applied in a wide variety of scenarios, when the observations are denser, the solution can become prohibitive in terms of computational load. We avoid this limitation by proposing two efficient coarse-to-fine approaches that allow us to easily deal with dense 3D surfaces. This results in a scalable solution that estimates a small number of parameters per frame and could potentially run in real time. We show results on both synthetic and real videos with ground truth 3D data, while robustly dealing with artifacts such as noise and missing data.",
"title": ""
},
{
"docid": "1d1291cdad5f4ae0453417caa465cc95",
"text": "Multipath TCP is a new transport protocol that enables systems to exploit available paths through multiple network interfaces. MPTCP is particularly useful for mobile devices, which frequently have multiple wireless interfaces. However, these devices have limited power capacity and thus judicious use of these interfaces is required. In this work, we develop a model for MPTCP energy consumption derived from experimental measurements using MPTCP on a mobile device with both cellular and WiFi interfaces. Using our MPTCP energy model, we identify the operating region where MPTCP can be more power efficient than either standard TCP or MPTCP. Based on our findings, we also design and implement an improved energy-efficient MPTCP that reduces power consumption by up to 8% in our experiments, while preserving the availability and robustness benefits of MPTCP.",
"title": ""
},
{
"docid": "1131bf1423e807f6e51979c0a3a9ca0d",
"text": "The increasing use of Variable Stiffness Actuators (VSAs) in robotic joints is helping robots to meet the demands of human-robot interaction, requiring high safety and adaptability. The key feature of a VSA is the ability to exploit internal elastic elements to obtain a variable output stiffness. These allow the joints to store mechanical energy supplied through interaction with the environment and make the system more robust, efficient, and safe. This paper discusses the design of leaf springs for a sub-class of VSAs that use variable lever arm ratios as means to change their output stiffness. Given the trade-off between compactness and the maximum energy storage capacity, the internal springs' dimensions and material choice are assessed through a theoretical analysis and practical experiments.",
"title": ""
},
{
"docid": "f348748d56ee099c5f30a2629c878f37",
"text": "Agency in interactive narrative is often narrowly understood as a user’s freedom to either perform virtually embodied actions or alter the mechanics of narration at will, followed by an implicit assumption of “the more agency the better.” This paper takes notice of a broader range of agency phenomena in interactive narrative and gaming that may be addressed by integrating accounts of agency from diverse fields such as sociology of science, digital media studies, philosophy, and cultural theory. The upshot is that narrative agency is contextually situated, distributed between the player and system, and mediated through user interpretation of system behavior and system affordances for user actions. In our new and developing model of agency play, multiple dimensions of agency can be tuned during story execution as a narratively situated mechanism to convey meaning. More importantly, we propose that this model of variable dimensions of agency can be used as an expressive theoretical tool for interactive narrative design. Finally, we present our current interactive narrative work under development as a case study for how the agency play model can be deployed expressively.",
"title": ""
},
{
"docid": "c2816721fa6ccb0d676f7fdce3b880d4",
"text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.",
"title": ""
},
{
"docid": "6e72c4401bfeedaffd92d5261face2c6",
"text": "OBJECTIVE\nTo examine the association between television advertising exposure and adults' consumption of fast foods.\n\n\nDESIGN\nCross-sectional telephone survey. Questions included measures of frequency of fast-food consumption at different meal times and average daily hours spent watching commercial television.\n\n\nSUBJECTS/SETTING\nSubjects comprised 1495 adults (41 % response rate) aged >or=18 years from Victoria, Australia.\n\n\nRESULTS\nTwenty-three per cent of respondents usually ate fast food for dinner at least once weekly, while 17 % consumed fast food for lunch on a weekly basis. The majority of respondents reported never eating fast food for breakfast (73 %) or snacks (65 %). Forty-one per cent of respondents estimated watching commercial television for <or=1 h/d (low viewers); 29 % watched for 2 h/d (moderate viewers); 30 % watched for >or=3 h/d (high viewers). After adjusting for demographic variables, high viewers were more likely to eat fast food for dinner at least once weekly compared with low viewers (OR = 1.45; 95 % CI 1.04, 2.03). Both moderate viewers (OR = 1.53; 95 % CI 1.01, 2.31) and high viewers (OR = 1.81; 95 % CI 1.20, 2.72) were more likely to eat fast food for snacks at least once weekly compared with low viewers. Commercial television viewing was not significantly related (P > 0.05) to fast-food consumption at breakfast or lunch.\n\n\nCONCLUSIONS\nThe results of the present study provide evidence to suggest that cumulative exposure to television food advertising is linked to adults' fast-food consumption. Additional research that systematically assesses adults' behavioural responses to fast-food advertisements is needed to gain a greater understanding of the mechanisms driving this association.",
"title": ""
},
{
"docid": "5dad2c804c4718b87ae6ee9d7cc5a054",
"text": "The masquerade attack, where an attacker takes on the identity of a legitimate user to maliciously utilize that user’s privileges, poses a serious threat to the security of information systems. Such attacks completely undermine traditional security mechanisms due to the trust imparted to user accounts once they have been authenticated. Many attempts have been made at detecting these attacks, yet achieving high levels of accuracy remains an open challenge. In this paper, we discuss the use of a specially tuned sequence alignment algorithm, typically used in bioinformatics, to detect instances of masquerading in sequences of computer audit data. By using the alignment algorithm to align sequences of monitored audit data with sequences known to have been produced by the user, the alignment algorithm can discover areas of similarity and derive a metric that indicates the presence or absence of masquerade attacks. Additionally, we present several scoring systems, methods for accommodating variations in user behavior, and heuristics for decreasing the computational requirements of the algorithm. Our technique is evaluated against the standard masquerade detection dataset provided by Schonlau et al. [14, 13], and the results show that the use of the sequence alignment technique provides, to our knowledge, the best results of all masquerade detection techniques to date.",
"title": ""
},
{
"docid": "acb569b267eae92a6e33b52725f28833",
"text": "A multi-objective design procedure is applied to the design of a close-coupled inductor for a three-phase interleaved 140kW DC-DC converter. For the multi-objective optimization, a genetic algorithm is used in combination with a detailed physical model of the inductive component. From the solution of the optimization, important conclusions about the advantages and disadvantages of using close-coupled inductors compared to separate inductors can be drawn.",
"title": ""
},
{
"docid": "9e95ce11f502478c11df990d3465360f",
"text": "This paper presents a ultra-wideband (UWB) micro-strip structure high-pass filter with multi-stubs. The proposed filter was designed using a combination of 4 short-circuited stubs and an open-circuited stub in the form of micro-strip lines. The short-circuited stubs are to realize a high-pass filter with a bad band rejection. In order to achieve a steep cutoff, a transmission zero can be added thus an open-circuited stub is used. The passband is 5-19 GHz. The insertion loss is greater than -2dB and the return loss is less than -10dB, while the suppression of the modified filter is better than 30 dB below 4.2GHz.",
"title": ""
},
{
"docid": "1681e90ec538f92f3f890c6b0264143f",
"text": "The scapholunate joint is one of the most involved in wrist injuries. Its stability depends on primary and secondary stabilisers forming together the scapholunate complex. This ligamentous complex is often evaluated by wrist arthroscopy. To avoid surgery as diagnostic procedure, optimization of MR imaging parameters as use of three-dimensional (3D) sequences with very thin slices and high spatial resolution, is needed to detect lesions of the intrinsic and extrinsic ligaments of the scapholunate complex. The paper reviews the literature on imaging of radial-sided carpal ligaments with advanced computed tomographic arthrography (CTA) and magnetic resonance arthrography (MRA) to evaluate the scapholunate complex. Anatomy and pathology of the ligamentous complex are described and illustrated with CTA, MRA and corresponding arthroscopy. Sprains, mid-substance tears, avulsions and fibrous infiltrations of carpal ligaments could be identified on CTA and MRA images using 3D fat-saturated PD and 3D DESS (dual echo with steady-state precession) sequences with 0.5-mm-thick slices. Imaging signs of scapholunate complex pathology include: discontinuity, nonvisualization, changes in signal intensity, contrast extravasation (MRA), contour irregularity and waviness and periligamentous infiltration by edema, granulation tissue or fibrosis. Based on this preliminary experience, we believe that 3 T MRA using 3D sequences with 0.5-mm-thick slices and multiplanar reconstructions is capable to evaluate the scapholunate complex and could help to reduce the number of diagnostic arthroscopies.",
"title": ""
},
{
"docid": "3a75c2db6b36aa00b48fa06aacf1ef74",
"text": "Enabling computer systems to recognize facial expressions and infer emotions from them in real time presents a challenging research topic. In this paper, we present a real time approach to emotion recognition through facial expression in live video. We employ an automatic facial feature tracker to perform face localization and feature extraction. The facial feature displacements in the video stream are used as input to a Support Vector Machine classifier. We evaluate our method in terms of recognition accuracy for a variety of interaction and classification scenarios. Our person-dependent and person-independent experiments demonstrate the effectiveness of a support vector machine and feature tracking approach to fully automatic, unobtrusive expression recognition in live video. We conclude by discussing the relevance of our work to affective and intelligent man-machine interfaces and exploring further improvements.",
"title": ""
},
{
"docid": "4f848f750cfe4543df43457235ff203a",
"text": "The U.S. National Security Agency (NSA) developed the Simon and Speck families of lightweight block ciphers as an aid for securing applications in very constrained environments where AES may not be suitable. This paper sum marizes the algorithms, their design rationale, along with current cryptanalysis and implemen tation results.",
"title": ""
},
{
"docid": "c60b80296d66f762b935c3c40d82a520",
"text": "Subjects The study sample was composed of 172 adults (U.S. Commissioned Corps and Air Force officers) recruited from dental clinics in military bases in Rockville, Maryland, San Antonio, Texas, and Biloxi, Mississippi. Patients were eligible for the study if they were to be treated with at least one dental composite restoration. Patients were excluded if they had received composite restorations or pit-and-fissure sealants within the last 3 months or wore removable dental appliances, such as orthodontic retainers or partial dentures. The mean age of participants was 43.9 years (standard deviation, SD, 1.1 years), and the sample was evenly distributed by gender (50.3% male, 49.7% female). Subjects were followed a maximum of 30 h after receiving the dental composite restoration, which was adequate to assess short-term changes in chemical concentrations of urine and saliva samples. The authors did not report the years of subject recruitment or data collection.",
"title": ""
}
] |
scidocsrr
|
a110a6928e66eea2160c5a452a40bd1f
|
Deep correspondence restricted Boltzmann machine for cross-modal retrieval
|
[
{
"docid": "6508fc8732fd22fde8c8ac180a2e19e3",
"text": "The problem of joint modeling the text and image components of multimedia documents is studied. The text component is represented as a sample from a hidden topic model, learned with latent Dirichlet allocation, and images are represented as bags of visual (SIFT) features. Two hypotheses are investigated: that 1) there is a benefit to explicitly modeling correlations between the two components, and 2) this modeling is more effective in feature spaces with higher levels of abstraction. Correlations between the two components are learned with canonical correlation analysis. Abstraction is achieved by representing text and images at a more general, semantic level. The two hypotheses are studied in the context of the task of cross-modal document retrieval. This includes retrieving the text that most closely matches a query image, or retrieving the images that most closely match a query text. It is shown that accounting for cross-modal correlations and semantic abstraction both improve retrieval accuracy. The cross-modal model is also shown to outperform state-of-the-art image retrieval systems on a unimodal retrieval task.",
"title": ""
},
{
"docid": "404fdd6f2d7f1bf69f2f010909969fa9",
"text": "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.",
"title": ""
}
] |
[
{
"docid": "74a38306c18b0a0ec6e02e5446ff7ed1",
"text": "In this work we scrutinize a low level computer vision task - non-maximum suppression (NMS) - which is a crucial preprocessing step in many computer vision applications. Especially in real time scenarios, efficient algorithms for such preprocessing algorithms, which operate on the full image resolution, are important. In the case of NMS, it seems that merely the straightforward implementation or slight improvements are known. We show that these are far from being optimal, and derive several algorithms ranging from easy-to-implement to highly-efficient",
"title": ""
},
{
"docid": "7380419cc9c5eac99e8d46e73df78285",
"text": "This paper discusses the classification of books purely based on cover image and title, without prior knowledge or context of author and origin. Several methods were implemented to assess the ability to distinguish books based on only these two characteristics. First we used a color-based distribution approach. Then we implemented transfer learning with convolutional neural networks on the cover image along with natural language processing on the title text. We found that image and text modalities yielded similar accuracy which indicate that we have reached a certain threshold in distinguishing between the genres that we have defined. This was confirmed by the accuracy being quite close to the human oracle accuracy.",
"title": ""
},
{
"docid": "07a1fca1b738cb550a7f384bd3e8de23",
"text": "American Library Association /ALA/ American Library Directory bibliographic record bibliography binding blanket order",
"title": ""
},
{
"docid": "ec26449b0d78b3f2b80404d340548d02",
"text": "A novel beam-forming phased array system using a substrate integrated waveguide (SIW) fed Yagi-Uda array antenna is presented. This phase array antenna employs an integrated waveguide structure lens as a beam forming network (BFN). A prototype phased array system is designed with 7 beam ports, 9 array ports, and 8 dummy ports. A 10 GHz SIW-fed Bow-tie linear array antenna is proposed with a nonplanar structure to scan over (-24°, +24°) with SIW lens.",
"title": ""
},
{
"docid": "dcc0237d174b6d41d4a4bcd4e00d172e",
"text": "Meander line antenna (MLA) is an electrically small antenna which poses several performance related issues such as narrow bandwidth, high VSWR, low gain and high cross polarization levels. This paper describe the design ,simulation and development of meander line microstrip antenna at wireless band, the antenna was modeled using microstrip lines and S parameter for the antenna was obtained. The properties of the antenna such as bandwidth, beamwidth, gain, directivity, return loss and polarization were obtained.",
"title": ""
},
{
"docid": "d118a5d9904a88ffd84a7f7c08970343",
"text": "We present FingOrbits, a wearable interaction technique using synchronized thumb movements. A thumb-mounted ring with an inertial measurement unit and a contact microphone are used to capture thumb movements when rubbing against the other fingers. Spectral information of the movements are extracted and fed into a classification backend that facilitates gesture discrimination. FingOrbits enables up to 12 different gestures through detecting three rates of movement against each of the four fingers. Through a user study with 10 participants (7 novices, 3 experts), we demonstrate that FingOrbits can distinguish up to 12 thumb gestures with an accuracy of 89% to 99% rendering the approach applicable for practical applications.",
"title": ""
},
{
"docid": "ef92f3f230a7eedee7555b5fc35f5558",
"text": "Smart home technologies offer potential benefits for assisting clinicians by automating health monitoring and well-being assessment. In this paper, we examine the actual benefits of smart home-based analysis by monitoring daily behavior in the home and predicting clinical scores of the residents. To accomplish this goal, we propose a clinical assessment using activity behavior (CAAB) approach to model a smart home resident's daily behavior and predict the corresponding clinical scores. CAAB uses statistical features that describe characteristics of a resident's daily activity performance to train machine learning algorithms that predict the clinical scores. We evaluate the performance of CAAB utilizing smart home sensor data collected from 18 smart homes over two years. We obtain a statistically significant correlation ( r=0.72) between CAAB-predicted and clinician-provided cognitive scores and a statistically significant correlation (r=0.45) between CAAB-predicted and clinician-provided mobility scores. These prediction results suggest that it is feasible to predict clinical scores using smart home sensor data and learning-based data analysis.",
"title": ""
},
{
"docid": "a262c272dac3b0ac86694fe738395b72",
"text": "This paper addresses how a recursive neural network model can automatically leave out useless information and emphasize important evidence, in other words, to perform “weight tuning” for higher-level representation acquisition. We propose two models, Weighted Neural Network (WNN) and Binary-Expectation Neural Network (BENN), which automatically control how much one specific unit contributes to the higher-level representation. The proposed model can be viewed as incorporating a more powerful compositional function for embedding acquisition in recursive neural networks. Experimental results demonstrate the significant improvement over standard neural models.",
"title": ""
},
{
"docid": "baefa50fee4f5ea6aaa314ae97342145",
"text": "MicroRNAs (miRNAs) are an extensive class of newly discovered endogenous small RNAs, which negatively regulate gene expression at the post-transcription levels. As the application of next-generation deep sequencing and advanced bioinformatics, the miRNA-related study has been expended to non-model plant species and the number of identified miRNAs has dramatically increased in the past years. miRNAs play a critical role in almost all biological and metabolic processes, and provide a unique strategy for plant improvement. Here, we first briefly review the discovery, history, and biogenesis of miRNAs, then focus more on the application of miRNAs on plant breeding and the future directions. Increased plant biomass through controlling plant development and phase change has been one achievement for miRNA-based biotechnology; plant tolerance to abiotic and biotic stress was also significantly enhanced by regulating the expression of an individual miRNA. Both endogenous and artificial miRNAs may serve as important tools for plant improvement.",
"title": ""
},
{
"docid": "59c24fb5b9ac9a74b3f89f74b332a27c",
"text": "This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations.",
"title": ""
},
{
"docid": "e3bae096e6c60b0dfc207bfd12e22d84",
"text": "This paper is the first in a sequence of papers which will prove the existence of a random polynomial time algorithm for the set of primes. The techniques used are from arithmetic algebraic geometry and to a lesser extent algebraic and analytic number theory. The result complements the well known result of Strassen and Soloway that there exists a random polynomial time algorithm for the set of composites.",
"title": ""
},
{
"docid": "869889e8be00663e994631b17061479b",
"text": "In this study we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes n-grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalization, achieving the best result of 80% accuracy for this 3-class classification task. Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text not always possible with surface n-grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed.",
"title": ""
},
{
"docid": "7b86ad7bbf53d92df5ec1088be6a82f9",
"text": "People typically underestimate their capacity to generate satisfaction with future outcomes. When people experience such self-generated satisfaction, they may mistakenly conclude that it was caused by an influential, insightful, and benevolent external agent. In three laboratory experiments, participants who were allowed to generate satisfaction with their outcomes were especially likely to conclude that an external agent had subliminally influenced their choice of partners (Study 1), had insight into their musical preferences (Study 2), and had benevolent intentions when giving them a stuffed animal (Study 3). These results suggest that belief in omniscient, omnipotent, and benevolent external agents, such as God, may derive in part from people's failure to recognize that they have generated their own satisfaction.",
"title": ""
},
{
"docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5",
"text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.",
"title": ""
},
{
"docid": "8439dbba880179895ab98a521b4c254f",
"text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI",
"title": ""
},
{
"docid": "27afd0280e81b731eb434ef174ffd9b2",
"text": "This paper presents a review of recently used direct torque and flux control (DTC) techniques for voltage inverter-fed induction and permanent-magnet synchronous motors. A variety of techniques, different in concept, are described as follows: switching-table-based hysteresis DTC, direct self control, constant-switching-frequency DTC with space-vector modulation (DTC-SVM). Also, trends in the DTC-SVM techniques based on neuro-fuzzy logic controllers are presented. Some oscillograms that illustrate properties of the presented techniques are shown.",
"title": ""
},
{
"docid": "c7bfb8dfcfc4c267b515e3c92afbbdd0",
"text": "Each month, many women experience an ovulatory cycle that regulates fertility. Although research has found that this cycle influences women's mating preferences, we proposed that it might also change women's political and religious views. Building on theory suggesting that political and religious orientation are linked to reproductive goals, we tested how fertility influenced women's politics, religiosity, and voting in the 2012 U.S. presidential election. In two studies with large and diverse samples, ovulation had drastically different effects on single women and women in committed relationships. Ovulation led single women to become more liberal, less religious, and more likely to vote for Barack Obama. In contrast, ovulation led women in committed relationships to become more conservative, more religious, and more likely to vote for Mitt Romney. In addition, ovulation-induced changes in political orientation mediated women's voting behavior. Overall, the ovulatory cycle not only influences women's politics but also appears to do so differently for single women than for women in relationships.",
"title": ""
},
{
"docid": "8b2f1dc92084548108dc349f5d8f7ff1",
"text": "Although the amygdala's role in processing facial expressions of fear has been well established, its role in the processing of other emotions is unclear. In particular, evidence for the amygdala's involvement in processing expressions of happiness and sadness remains controversial. To clarify this issue, we constructed a series of morphed stimuli whose emotional expression varied gradually from very faint to more pronounced. Five morphs each of sadness and happiness, as well as neutral faces, were shown to 27 subjects with unilateral amygdala damage and 5 with complete bilateral amygdala damage, whose data were compared to those from 12 braindamaged and 26 normal controls. Subjects were asked to rate the intensity and to label the stimuli. Subjects with unilateral amygdala damage performed very comparably to controls. By contrast, subjects with bilateral amygdala damage showed a specific impairment in rating sad faces, but performed normally in rating happy faces. Furthermore, subjects with right unilateral amygdala damage performed somewhat worse than subjects with left unilateral amygdala damage. The findings suggest that the amygdala's role in processing of emotional facial expressions encompasses multiple negatively valenced emotions, including fear and sadness.",
"title": ""
},
{
"docid": "560ff157bcedf4e59d4993229ef42d80",
"text": "Hash tables are important data structures that lie at the heart of important applications such as key-value stores and relational databases. Typically bucketized cuckoo hash tables (BCHTs) are used because they provide highthroughput lookups and load factors that exceed 95%. Unfortunately, this performance comes at the cost of reduced memory access efficiency. Positive lookups (key is in the table) and negative lookups (where it is not) on average access 1.5 and 2.0 buckets, respectively, which results in 50 to 100% more table-containing cache lines to be accessed than should be minimally necessary. To reduce these surplus accesses, this paper presents the Horton table, a revamped BCHT that reduces the expected cost of positive and negative lookups to fewer than 1.18 and 1.06 buckets, respectively, while still achieving load factors of 95%. The key innovation is remap entries, small in-bucket records that allow (1) more elements to be hashed using a single, primary hash function, (2) items that overflow buckets to be tracked and rehashed with one of many alternate functions while maintaining a worst-case lookup cost of 2 buckets, and (3) shortening the vast majority of negative searches to 1 bucket access. With these advancements, Horton tables outperform BCHTs by 17% to 89%.",
"title": ""
}
] |
scidocsrr
|
3eb59bf914b34d7dac4eeaf79b50b681
|
Approaches for Intrinsic and External Plagiarism Detection - Notebook for PAN at CLEF 2011
|
[
{
"docid": "a25b25773524731d6a1dbdc560bdda0e",
"text": "This paper overviews 18 plagiarism detectors that have been developed and evaluated within PAN’10. We start with a unified retrieval process that summarizes the best practices employed this year. Then, the detectors’ performances are evaluated in detail, highlighting several important aspects of plagiarism detection, such as obfuscation, intrinsic vs. external plagiarism, and plagiarism case length. Finally, all results are compared to those of last year’s competition. Martin Braschler and Donna Harman (Eds.): Notebook Papers of CLEF 2010 LABs and Workshops, 22-23 September, Padua, Italy. ISBN 978-88-904810-0-0. 2010.",
"title": ""
}
] |
[
{
"docid": "8854917dff531c706f0234c1e45a496d",
"text": "A new equivalent circuit model of an electrical size-reduced coupled line radio frequency Marchand balun is proposed and investigated in this paper. It consists of two parts of coupled lines with significantly reduced electrical length. Compared with the conventional Marchand balun, a short-circuit ending is applied instead of the open-circuit ending, and a capacitive feeding is introduced. The electrical length of the proposed balun is reduced to around 1/3 compared with that of the conventional Marchand balun. Detailed mathematical analysis for this design is included in this paper. Groups of circuit simulation results are shown to verify the conclusions. A sample balun is fabricated in microstrip line type on the Teflon substrate, with low dielectric constant of 2.54. It has a dimension of $0.189\\lambda _{g} \\times 0.066 \\lambda _{g}$ with amplitude imbalance of 0.1 dB and phase imbalance of 179.09° ± 0.14°. The simulation and experiment results are in good agreement.",
"title": ""
},
{
"docid": "2651e41af0ed03a1078197bcde20a7d3",
"text": "The use of automated blood pressure (BP) monitoring is growing as it does not require much expertise and can be performed by patients several times a day at home. Oscillometry is one of the most common measurement methods used in automated BP monitors. A review of the literature shows that a large variety of oscillometric algorithms have been developed for accurate estimation of BP but these algorithms are scattered in many different publications or patents. Moreover, considering that oscillometric devices dominate the home BP monitoring market, little effort has been made to survey the underlying algorithms that are used to estimate BP. In this review, a comprehensive survey of the existing oscillometric BP estimation algorithms is presented. The survey covers a broad spectrum of algorithms including the conventional maximum amplitude and derivative oscillometry as well as the recently proposed learning algorithms, model-based algorithms, and algorithms that are based on analysis of pulse morphology and pulse transit time. The aim is to classify the diverse underlying algorithms, describe each algorithm briefly, and discuss their advantages and disadvantages. This paper will also review the artifact removal techniques in oscillometry and the current standards for the automated BP monitors.",
"title": ""
},
{
"docid": "1eab21d97bb15cd18648e66383f8f572",
"text": "Indoor localization of smart hand-held devices is essential for location-based services of pervasive applications. The previous research mainly focuses on exploring wireless signal fingerprints for this purpose, and several shortcomings need to be addressed first before real-world usage, e.g., demanding a large number of access points or labor-intensive site survey. In this paper, through a systematic empirical study, we first gain in-depth understandings of Bluetooth characteristics, i.e., the impact of various factors, such as distance, orientation, and obstacles on the Bluetooth received signal strength indicator (RSSI). Then, by mining from historical data, a novel localization model is built to describe the relationship between the RSSI and the device location. On this basis, we present an energy-efficient indoor localization scheme that leverages user motions to iteratively shrink the search space to locate the target device. An Motion-assisted Device Tracking Algorithm has been prototyped and evaluated in several real-world scenarios. Extensive experiments show that our algorithm is efficient in terms of localization accuracy, searching time and energy consumption.",
"title": ""
},
{
"docid": "7f64b53a4188464301568eda283c99f0",
"text": "We survey the theory of perfectoid spaces and its applications. Mathematics Subject Classification (2010). Primary: 14G22, 11F80 Secondary: 14G20, 14C30, 14L05, 14G35, 11F03",
"title": ""
},
{
"docid": "c68729167831b81a2d694664a4cfa90b",
"text": "Micro aerial vehicles (MAV) pose a challenge in designing sensory systems and algorithms due to their size and weight constraints and limited computing power. We present an efficient 3D multi-resolution map that we use to aggregate measurements from a lightweight continuously rotating laser scanner. We estimate the robot's motion by means of visual odometry and scan registration, aligning consecutive 3D scans with an incrementally built map. By using local multi-resolution, we gain computational efficiency by having a high resolution in the near vicinity of the robot and a lower resolution with increasing distance from the robot, which correlates with the sensor's characteristics in relative distance accuracy and measurement density. Compared to uniform grids, local multi-resolution leads to the use of fewer grid cells without loosing information and consequently results in lower computational costs. We efficiently and accurately register new 3D scans with the map in order to estimate the motion of the MAV and update the map in-flight. In experiments, we demonstrate superior accuracy and efficiency of our registration approach compared to state-of-the-art methods such as GICP. Our approach builds an accurate 3D obstacle map and estimates the vehicle's trajectory in real-time.",
"title": ""
},
{
"docid": "dca1e884efc738a166a29e4142595b69",
"text": "Mammalian hair fibres can be structurally divided into three main components: a cuticle, cortex and sometimes a medulla. The cuticle consists of a thin layer of overlapping cells on the surface of the fibre, constituting around 10% of the total fibre weight. The cortex makes up the remaining 86–90% and is made up of axially aligned spindle-shaped cells of which three major types have been recognised in wool: ortho, meso and para. Cortical cells are packed full of macrofibril bundles, which are a composite of aligned intermediate filaments embedded in an amorphous matrix. The spacing and three-dimensional arrangement of the intermediate filaments vary with cell type. The medulla consists of a continuous or discontinuous column of horizontal spaces in the centre of the cortex that becomes more prevalent as the fibre diameter increases.",
"title": ""
},
{
"docid": "f6362a62b69999bdc3d9f681b68842fc",
"text": "Women with breast cancer, whether screen detected or symptomatic, have both mammography and ultrasound for initial imaging assessment. Unlike X-ray or magnetic resonance, which produce an image of the whole breast, ultrasound provides comparatively limited 2D or 3D views located around the lesions. Combining different modalities is an essential task for accurate diagnosis and simulating ultrasound images based on whole breast data could be a way toward correlating different information about the same lesion. Very few studies have dealt with such a simulation framework since the breast undergoes large scale deformation between the prone position of magnetic resonance imaging and the largely supine or lateral position of ultrasound. We present a framework for the realistic simulation of 3D ultrasound images based on prone magnetic resonance images from which a supine position is generated using a biomechanical model. The simulation parameters are derived from a real clinical infrastructure and from transducers that are used for routine scans, leading to highly realistic ultrasound images of any region of the breast.",
"title": ""
},
{
"docid": "b2a670d90d53825c53d8ce0082333db6",
"text": "Social media platforms facilitate the emergence of citizen communities that discuss real-world events. Their content reflects a variety of intent ranging from social good (e.g., volunteering to help) to commercial interest (e.g., criticizing product features). Hence, mining intent from social data can aid in filtering social media to support organizations, such as an emergency management unit for resource planning. However, effective intent mining is inherently challenging due to ambiguity in interpretation, and sparsity of relevant behaviors in social data. In this paper, we address the problem of multiclass classification of intent with a use-case of social data generated during crisis events. Our novel method exploits a hybrid feature representation created by combining top-down processing using knowledge-guided patterns with bottom-up processing using a bag-of-tokens model. We employ pattern-set creation from a variety of knowledge sources including psycholinguistics to tackle the ambiguity challenge, social behavior about conversations to enrich context, and contrast patterns to tackle the sparsity challenge. Our results show a significant absolute gain up to 7% in the F1 score relative to a baseline using bottom-up processing alone, within the popular multiclass frameworks of One-vs-One and One-vs-All. Intent mining can help design efficient cooperative information systems between citizens and organizations for serving organizational information needs.",
"title": ""
},
{
"docid": "310e525bc7a78da2987d8c6d6a0ff46b",
"text": "This tutorial provides an overview of the data mining process. The tutorial also provides a basic understanding of how to plan, evaluate and successfully refine a data mining project, particularly in terms of model building and model evaluation. Methodological considerations are discussed and illustrated. After explaining the nature of data mining and its importance in business, the tutorial describes the underlying machine learning and statistical techniques involved. It describes the CRISP-DM standard now being used in industry as the standard for a technology-neutral data mining process model. The paper concludes with a major illustration of the data mining process methodology and the unsolved problems that offer opportunities for research. The approach is both practical and conceptually sound in order to be useful to both academics and practitioners.",
"title": ""
},
{
"docid": "0f44ab1a2d93ce015778e9a41063ce7b",
"text": "Bullying is a serious problem in schools, and school authorities need effective solutions to resolve this problem. There is growing interest in the wholeschool approach to bullying. Whole-school programs have multiple components that operate simultaneously at different levels in the school community. This article synthesizes the existing evaluation research on whole-school programs to determine the overall effectiveness of this approach. The majority of programs evaluated to date have yielded nonsignificant outcomes on measures of self-reported victimization and bullying, and only a small number have yielded positive outcomes. On the whole, programs in which implementation was systematically monitored tended to be more effective than programs without any monitoring. show little empathy for their victims (Roberts & Morotti, 2000). Bullying may be a means of increasing one’s social status and access to valued resources, such as the attention of opposite-sex peers (Pellegrini, 2001). Victims tend to be socially isolated, lack social skills, and have more anxiety and lower self-esteem than students in general (Olweus, 1997). They also tend to have a higher than normal risk for depression and suicide (e.g., Sourander, Helstelae, Helenius, & Piha, 2000). A subgroup of victims reacts aggressively to abuse and has a distinct pattern of psychosocial maladjustment encompassing both the antisocial behavior of bullies and the social and emotional difficulties of victims (Glover, Gough, Johnson, & Cartwright, 2000). Bullying is a relatively stable and long-term problem for those involved, particularly children fitting the profile Bullying is a particularly vicious kind of aggressive behavior distinguished by repeated acts against weaker victims who cannot easily defend themselves (Farrington, 1993; Smith & Brain, 2000). Its consequences are severe, especially for those victimized over long periods of time. Bullying is a complex psychosocial problem influenced by a myriad of variables. The repetition and imbalance of power involved may be due to physical strength, numbers, or psychological factors. Both bullies and victims evidence poorer psychological adjustment than individuals not involved in bullying (Kumpulainen, Raesaenen, & Henttonen, 1999; Nansel et al., 2001). Children who bully tend to be involved in alcohol consumption and smoking, have poorer academic records than noninvolved students, display a strong need for dominance, and",
"title": ""
},
{
"docid": "c446ce16a62f832a167101293fe8b58d",
"text": "Unforeseen events such as node failures and resource contention can have a severe impact on the performance of data processing frameworks, such as Hadoop, especially in cloud environments where such incidents are common. SLA compliance in the presence of such events requires the ability to quickly and dynamically resize infrastructure resources. Unfortunately, the distributed and stateful nature of data processing frameworks makes it challenging to accurately scale the system at run-time. In this paper, we present the design and implementation of a model-driven autoscaling solution for Hadoop clusters. We first develop novel gray-box performance models for Hadoop workloads that specifically relate job execution times to resource allocation and workload parameters. We then employ these models to dynamically determine the resources required to successfully complete the Hadoop jobs as per the user-specified SLA under various scenarios including node failures and multi-job executions. Our experimental results on three different Hadoop cloud clusters and across different workloads demonstrate the efficacy of our models and highlight their autoscaling capabilities.",
"title": ""
},
{
"docid": "f0163ebc621a3e54588cd030796a606c",
"text": "Software Product Lines, in conjunction with modeldriven product derivation, are successful examples for extensive automation and reuse in software development. However, often each single product requires an individual, tailored user interface of its own to achieve the desired usability. Moreover, in some cases (e.g., online shops, games) it is even mandatory that each product has an individual, unique user interface of its own. Usually, this results in manual user interface design independent from the model-driven product derivation. Consequently, each product configuration has to be mapped manually to a corresponding user interface which can become a tedious and error-prone task for large and complex product lines. This paper addresses this problem by integrating concepts from SPL product derivation and Model-based User Interface Development. This facilitates both (1) a systematic and semi-automated creation of user interfaces during product derivation while (2) still supporting for individual, creative design.",
"title": ""
},
{
"docid": "b3196426a124a6fadc4e22741e9facf9",
"text": "Cloud computing is an expanding area in research and industry today, which involves virtualization, distributed computing, internet, software and web services. A cloud consists of several elements such as clients, data centers and distributed servers, internet and it includes fault tolerance, high availability, effectiveness, scalability, flexibility, reduced overhead for users, reduced cost of ownership, on demand services and etc. The services of cloud computing are becoming ubiquitous, and serve as the primary source of computing power for different applications like enterprises and personal computing applications. In this paper we introduced the novel load balancing algorithm using fuzzy logic in cloud computing, in which load balancing is a core and challenging issue in Cloud Computing. The processor speed and assigned load of Virtual Machine (VM) are used to balance the load in cloud computing through fuzzy logic.",
"title": ""
},
{
"docid": "f9fcaf54f908a11e165173c96334fb5e",
"text": "Axial flux-segmented rotor-switched reluctance motor (SSRM) topology could be a potential candidate for in-wheel electric vehicle application. This topology has the advantage of the increased active surface area for the torque production as compared to the radial flux SSRM for a given volume. To improve the performance of axial flux SSRM (AFSSRM), various stator slot/rotor segment combinations and winding polarities are studied. It is observed that the torque ripple is high for the designed three-phase, 12/8 pole AFSSRM. Therefore, the influence of the stator pole and rotor segment arc angles on the average torque and the torque ripple is studied. In addition, the adjacent rotor segments are displaced with respect to the stator, to reduce the torque dips in the phase commutation region. The proposed arrangement is analyzed using the quasi-3-D finite-element method-based simulation study and it is found that the torque ripple can be reduced by 38%. Furthermore, the low-frequency harmonic content in the torque output is analyzed and compared. The variation of the axial electromagnetic attractive force with displaced rotor segments is discussed. The effectiveness of the proposed technique is verified experimentally.",
"title": ""
},
{
"docid": "c450ac5c84d962bb7f2262cf48e1280a",
"text": "Animal-assisted therapies have become widespread with programs targeting a variety of pathologies and populations. Despite its popularity, it is unclear if this therapy is useful. The aim of this systematic review is to establish the efficacy of Animal assisted therapies in the management of dementia, depression and other conditions in adult population. A search was conducted in MEDLINE, EMBASE, CINAHL, LILACS, ScienceDirect, and Taylor and Francis, OpenGrey, GreyLiteratureReport, ProQuest, and DIALNET. No language or study type filters were applied. Conditions studied included depression, dementia, multiple sclerosis, PTSD, stroke, spinal cord injury, and schizophrenia. Only articles published after the year 2000 using therapies with significant animal involvement were included. 23 articles and dissertations met inclusion criteria. Overall quality was low. The degree of animal interaction significantly influenced outcomes. Results are generally favorable, but more thorough and standardized research should be done to strengthen the existing evidence.",
"title": ""
},
{
"docid": "9172d4ba2e86a7d4918ef64d7b837084",
"text": "Electromagnetic generators (EMGs) and triboelectric nanogenerators (TENGs) are the two most powerful approaches for harvesting ambient mechanical energy, but the effectiveness of each depends on the triggering frequency. Here, after systematically comparing the performances of EMGs and TENGs under low-frequency motion (<5 Hz), we demonstrated that the output performance of EMGs is proportional to the square of the frequency, while that of TENGs is approximately in proportion to the frequency. Therefore, the TENG has a much better performance than that of the EMG at low frequency (typically 0.1-3 Hz). Importantly, the extremely small output voltage of the EMG at low frequency makes it almost inapplicable to drive any electronic unit that requires a certain threshold voltage (∼0.2-4 V), so that most of the harvested energy is wasted. In contrast, a TENG has an output voltage that is usually high enough (>10-100 V) and independent of frequency so that most of the generated power can be effectively used to power the devices. Furthermore, a TENG also has advantages of light weight, low cost, and easy scale up through advanced structure designs. All these merits verify the possible killer application of a TENG for harvesting energy at low frequency from motions such as human motions for powering small electronics and possibly ocean waves for large-scale blue energy.",
"title": ""
},
{
"docid": "cf8bf65059568ca717289d8f23b25b38",
"text": "AIM\nThis paper aims to systematically review studies investigating the strength of association between FMS composite scores and subsequent risk of injury, taking into account both methodological quality and clinical and methodological diversity.\n\n\nDESIGN\nSystematic review with meta-analysis.\n\n\nDATA SOURCES\nA systematic search of electronic databases was conducted for the period between their inception and 3 March 2016 using PubMed, Medline, Google Scholar, Scopus, Academic Search Complete, AMED (Allied and Complementary Medicine Database), CINAHL (Cumulative Index to Nursing and Allied Health Literature), Health Source and SPORTDiscus.\n\n\nELIGIBILITY CRITERIA FOR SELECTING STUDIES\nInclusion criteria: (1) English language, (2) observational prospective cohort design, (3) original and peer-reviewed data, (4) composite FMS score, used to define exposure and non-exposure groups and (5) musculoskeletal injury, reported as the outcome.\n\n\nEXCLUSION CRITERIA\n(1) data reported in conference abstracts or non-peer-reviewed literature, including theses, and (2) studies employing cross-sectional or retrospective study designs.\n\n\nRESULTS\n24 studies were appraised using the Quality of Cohort Studies assessment tool. In male military personnel, there was 'strong' evidence that the strength of association between FMS composite score (cut-point ≤14/21) and subsequent injury was 'small' (pooled risk ratio=1.47, 95% CI 1.22 to 1.77, p<0.0001, I2=57%). There was 'moderate' evidence to recommend against the use of FMS composite score as an injury prediction test in football (soccer). For other populations (including American football, college athletes, basketball, ice hockey, running, police and firefighters), the evidence was 'limited' or 'conflicting'.\n\n\nCONCLUSION\nThe strength of association between FMS composite scores and subsequent injury does not support its use as an injury prediction tool.\n\n\nTRIAL REGISTRATION NUMBER\nPROSPERO registration number CRD42015025575.",
"title": ""
},
{
"docid": "5d6fcad6be8d4c80a1e50eae24a6d44d",
"text": "This study examined how 4 specific measures of home literacy practices (i.e., shared book reading frequency, maternal book reading strategies, child's enjoyment of reading, and maternal sensitivity) and a global measure of the quality and responsiveness of the home environment during the preschool years predicted children's language and emergent literacy skills between the ages of 3 and 5 years. Study participants were 72 African American children and their mothers or primary guardians primarily from low-income families whose home literacy environment and development have been followed since infancy. Annually, between 18 months and 5 years of age, the children's mothers were interviewed about the frequency they read to their child and how much their child enjoyed being read to, and the overall quality and responsiveness of the home environment were observed. Mothers also were observed reading to their child once a year at 2, 3, and 4 years of age, and maternal sensitivity and types of maternal book reading strategies were coded. Children's receptive and expressive language and vocabulary were assessed annually between 3 years of age and kindergarten entry, and emergent literacy skills were assessed at 4 years and kindergarten entry. The specific home literacy practices showed moderate to large correlations with each other, and only a few significant associations with the language and literacy outcomes, after controlling for maternal education, maternal reading skills, and the child's gender. The global measure of overall responsiveness and support of the home environment was the strongest predictor of children's language and early literacy skills and contributed over and above the specific literacy practice measures in predicting children's early language and literacy development.",
"title": ""
},
{
"docid": "ac1a7abbf9101e24ea49649a8eedd46a",
"text": "issues that involves very large numbers of heterogeneous agents in the hostile environment. The intention of the RoboCup Rescue project is to promote research and development in this socially significant domain at various levels, involving multiagent teamwork coordination, physical agents for search and rescue, information infrastructures, personal digital assistants, a standard simulator and decision-support systems, evaluation benchmarks for rescue strategies, and robotic systems that are all integrated into a comprehensive system in the future. For this effort, which was built on the success of the RoboCup Soccer project, we will provide forums of technical discussions and competitive evaluations for researchers and practitioners. Although the rescue domain is intuitively appealing as a large-scale multiagent and intelligent system domain, analysis has not yet revealed its domain characteristics. The first research evaluation meeting will be held at RoboCup-2001, in conjunction with the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), as part of the RoboCup Rescue Simulation League and RoboCup/AAAI Rescue Robot Competition. In this article, we present a detailed analysis of the task domain and elucidate characteristics necessary for multiagent and intelligent systems for this domain. Then, we present an overview of the RoboCup Rescue project.",
"title": ""
},
{
"docid": "310b8159894bc88b74a907c924277de6",
"text": "We present a set of clustering algorithms that identify cluster boundaries by searching for a hyperplanar gap in unlabeled data sets. It turns out that the Normalized Cuts algorithm of Shi and Malik [1], originally presented as a graph-theoretic algorithm, can be interpreted as such an algorithm. Viewing Normalized Cuts under this light reveals that it pays more attention to points away from the center of the data set than those near the center of the data set. As a result, it can sometimes split long clusters and display sensitivity to outliers. We derive a variant of Normalized Cuts that assigns uniform weight to all points, eliminating the sensitivity to outliers.",
"title": ""
}
] |
scidocsrr
|
a800e3befabd58b3b37fae84911aa7ac
|
Dependency Sensitive Convolutional Neural Networks for Modeling Sentences and Documents
|
[
{
"docid": "6af09f57f2fcced0117dca9051917a0d",
"text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.",
"title": ""
},
{
"docid": "b7597e1f8c8ae4b40f5d7d1fe1f76a38",
"text": "In this paper we present a Time-Delay Neural Network (TDNN) approach to phoneme recognition which is characterized by two important properties. 1) Using a 3 layer arrangement of simple computing units, a hierarchy can be constructed that allows for the formation of arbitrary nonlinear decision surfaces. The TDNN learns these decision surfaces automatically using error backpropagation 111. 2) The time-delay arrangement enables the network to discover acoustic-phonetic features and the temporal relationships between them independent of position in time and hence not blurred by temporal shifts",
"title": ""
}
] |
[
{
"docid": "bd2adf12f6d6bd0c50b7fa6fceb7f568",
"text": "The lack of a common benchmark for the evaluation of the gaze estimation task from RGB and RGB-D data is a serious limitation for distinguishing the advantages and disadvantages of the many proposed algorithms found in the literature. This paper intends to overcome this limitation by introducing a novel database along with a common framework for the training and evaluation of gaze estimation approaches. In particular, we have designed this database to enable the evaluation of the robustness of algorithms with respect to the main challenges associated to this task: i) Head pose variations; ii) Person variation; iii) Changes in ambient and sensing conditions and iv) Types of target: screen or 3D object.",
"title": ""
},
{
"docid": "c1943f443b0e7be72091250b34262a8f",
"text": "We survey recent approaches to noise reduction in distant supervision learning for relation extraction. We group them according to the principles they are based on: at-least-one constraints, topic-based models, or pattern correlations. Besides describing them, we illustrate the fundamental differences and attempt to give an outlook to potentially fruitful further research. In addition, we identify related work in sentiment analysis which could profit from approaches to noise reduction.",
"title": ""
},
{
"docid": "9a06a760f0ae201f867a63a4338525b8",
"text": "Recent rapid increase in the generation of clinical data and rapid development of computational science make us able to extract new insights from massive datasets in healthcare industry. Oncological clinical notes are creating rich databases for documenting patient’s history and they potentially contain lots of patterns that could help in better management of the disease. However, these patterns are locked within free text (unstructured) portions of clinical documents and consequence in limiting health professionals to extract useful information from them and to finally perform Query and Answering (Q&A) process in an accurate way. The Information Extraction (IE) process requires Natural Language Processing (NLP) techniques to assign semantics to these patterns. Therefore, in this paper, we analyze the design of annotators for specific lung cancer concepts that can be integrated over Apache Unstructured Information Management Architecture (UIMA) framework. In addition, we explain the details of generation and storage of annotation outcomes.",
"title": ""
},
{
"docid": "78cdc83f3ea306e5573bf859037cf043",
"text": "With the pervasive use of mobile devices, Location Based Social Networks(LBSNs) have emerged in past years. These LBSNs, allowing their users to share personal experiences and opinions on visited merchants, have very rich and useful information which enables a new breed of location-based services, namely, Merchant Recommendation. Existing techniques for merchant recommendation simply treat each merchant as an item and apply conventional recommendation algorithms, e.g., Collaborative Filtering, to recommend merchants to a target user. However, they do not differentiate the user's real preferences on various aspects, and thus can only achieve limited success. In this paper, we aim to address this problem by utilizing and analyzing user reviews to discover user preferences in different aspects. Following the intuition that a user rating represents a personalized rational choice, we propose a novel utility-based approach by combining collaborative and individual views to estimate user preference (i.e., rating). An optimization algorithm based on a Gaussian model is developed to train our merchant recommendation approach. Lastly we evaluate the proposed approach in terms of effectiveness, efficiency and cold-start using two real-world datasets. The experimental results show that our approach outperforms the state-of-the-art methods. Meanwhile, a real mobile application is implemented to demonstrate the practicability of our method.",
"title": ""
},
{
"docid": "e1845d22d647dd85b07873e414b58303",
"text": "Automatic defects detection in MR images is very important in many diagnostic and therapeutic applications. Because of high quantity data in MR images and blurred boundaries, tumour segmentation and classification is very hard. This work has introduced one automatic brain tumour detection method to increase the accuracy and yield and decrease the diagnosis time. The goal is classifying the tissues to two classes of normal and abnormal. MR images that have been used here are MR images from normal and abnormal brain tissues. Here, it is tried to give clear description from brain tissues using Multi-Layer Perceptron Network,energy, entropy, contrast and some other statistic features such as mean, median, variance and correlation. It is used from a feature selection method to reduce the feature space too. This method uses from neural network to do this classification. The purpose of this project is to classify the brain tissues to normal and abnormal classes automatically, that saves the radiologist time, increases accuracy and yield of diagnosis.",
"title": ""
},
{
"docid": "1f9bf4526e7e58494242ddce17f6c756",
"text": "Consider the following generalization of the classical job-shop scheduling problem in which a set of machines is associated with each operation of a job. The operation can be processed on any of the machines in this set. For each assignment μ of operations to machines letP(μ) be the corresponding job-shop problem andf(μ) be the minimum makespan ofP(μ). How to find an assignment which minimizesf(μ)? For problems with two jobs a polynomial algorithm is derived. Folgende Verallgemeinerung des klassischen Job-Shop Scheduling Problems wird untersucht. Jeder Operation eines Jobs sei eine Menge von Maschinen zugeordnet. Wählt man für jede Operation genau eine Maschine aus dieser Menge aus, so erhält man ein klassisches Job-Shop Problem, dessen minimale Gesamtbearbeitungszeitf(μ) von dieser Zuordnung μ abhängt. Gesucht ist eine Zuordnung μ, dief(μ) minimiert. Für zwei Jobs wird ein polynomialer Algorithmus entwickelt, der dieses Problem löst.",
"title": ""
},
{
"docid": "4f6b8ea6fb0884bbcf6d4a6a4f658e52",
"text": "Ballistocardiography (BCG) enables the recording of heartbeat, respiration, and body movement data from an unconscious human subject. In this paper, we propose a new heartbeat detection algorithm for calculating heart rate (HR) and heart rate variability (HRV) from the BCG signal. The proposed algorithm consists of a moving dispersion calculation method to effectively highlight the respective heartbeat locations and an adaptive heartbeat peak detection method that can set a heartbeat detection window by automatically predicting the next heartbeat location. To evaluate the proposed algorithm, we compared it with other reference algorithms using a filter, waveform analysis and envelope calculation of signal by setting the ECG lead I as the gold standard. The heartbeat detection in BCG should be able to measure sensitively in the regions for lower and higher HR. However, previous detection algorithms are optimized mainly in the region of HR range (60~90 bpm) without considering the HR range of lower (40~60 bpm) and higher (90~110 bpm) HR. Therefore, we proposed an improved method in wide HR range that 40~110 bpm. The proposed algorithm detected the heartbeat greater stability in varying and wider heartbeat intervals as comparing with other previous algorithms. Our proposed algorithm achieved a relative accuracy of 98.29% with a root mean square error (RMSE) of 1.83 bpm for HR, as well as coverage of 97.63% and relative accuracy of 94.36% for HRV. And we obtained the root mean square (RMS) value of 1.67 for separated ranges in HR.",
"title": ""
},
{
"docid": "b77bef86667caed885fee95c79dc2292",
"text": "In this work, we propose a novel method for vocabulary selection to automatically adapt automatic speech recognition systems to the diverse topics that occur in educational and scientific lectures. Utilizing materials that are available before the lecture begins, such as lecture slides, our proposed framework iteratively searches for related documents on the web and generates a lecture-specific vocabulary based on the resulting documents. In this paper, we propose a novel method for vocabulary selection where we first collect documents similar to an initial seed document and then rank the resulting vocabulary based on a score which is calculated using a combination of word features. This is a critical component for adaptation that has typically been overlooked in prior works. On the inter ACT German-English simultaneous lecture translation system our proposed approach significantly improved vocabulary coverage, reducing the out-of-vocabulary rate, on average by 57.0% and up to 84.9%, compared to a lecture-independent baseline. Furthermore, our approach reduced the word error rate, by 12.5% on average and up to 25.3%, compared to a lecture-independent baseline.",
"title": ""
},
{
"docid": "013ca7d513b658f2dac68644a915b43a",
"text": "Money laundering a suspicious fund transfer between accounts without names which affects and threatens the stability of countries economy. The growth of internet technology and loosely coupled nature of fund transfer gateways helps the malicious user’s to perform money laundering. There are many approaches has been discussed earlier for the detection of money laundering and most of them suffers with identifying the root of money laundering. We propose a time variant approach using behavioral patterns to identify money laundering. In this approach, the transaction logs are split into various time window and for each account specific to the fund transfer the time value is split into different time windows and we generate the behavioral pattern of the user. The behavioral patterns specifies the method of transfer between accounts and the range of amounts and the frequency of destination accounts and etc.. Based on generated behavioral pattern , the malicious transfers and accounts are identified to detect the malicious root account. The proposed approach helps to identify more suspicious accounts and their group accounts to perform money laundering identification. The proposed approach has produced efficient results with less time complexity.",
"title": ""
},
{
"docid": "a96f219a2a1baac2c0d964a5a7d9fb62",
"text": "Spam-reduction techniques have developed rapidly ov er the last few years, as spam volumes have increased. We believe that no one anti-spam soluti on is the “right” answer, and that the best approac h is a multifaceted one, combining various forms of filtering w ith infrastructure changes, financial changes, lega l recourse, and more, to provide a stronger barrier to spam tha n can be achieved with one solution alone. SpamGur u addresses the part of this multi-faceted approach t hat can be handled by technology on the recipient’s side, using plug-in tokenizers and parsers, plug-in classificat ion modules, and machine-learning techniques to ach ieve high hit rates and low false-positive rates.",
"title": ""
},
{
"docid": "1e9e3fce7ae4e980658997c2984f05cb",
"text": "BACKGROUND\nMotivation in learning behaviour and education is well-researched in general education, but less in medical education.\n\n\nAIM\nTo answer two research questions, 'How has the literature studied motivation as either an independent or dependent variable? How is motivation useful in predicting and understanding processes and outcomes in medical education?' in the light of the Self-determination Theory (SDT) of motivation.\n\n\nMETHODS\nA literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation and qualitative research studies which had well-designed methodology. Only studies related to medical students/school were included.\n\n\nRESULTS\nFindings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine and intention to continue medical study. Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum and teacher and peer support, all of which cannot be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are, autonomy, competence and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT.\n\n\nCONCLUSION\nMotivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence and relatedness. This review finds some evidence in support of the validity of SDT in medical education.",
"title": ""
},
{
"docid": "82f8bfc9bb01105ccab46005d3df18d7",
"text": "This paper presents a comparative study of different classification methodologies for the task of fine-art genre classification. 2-level comparative study is performed for this classification problem. 1st level reviews the performance of discriminative vs. generative models while 2nd level touches the features aspect of the paintings and compares semantic-level features vs low-level and intermediate level features present in the painting.",
"title": ""
},
{
"docid": "56bd18820903da1917ca5d194b520413",
"text": "The problem of identifying subtle time-space clustering of dis ease, as may be occurring in leukemia, is described and reviewed. Published approaches, generally associated with studies of leuke mia, not dependent on knowledge of the underlying population for their validity, are directed towards identifying clustering by establishing a relationship between the temporal and the spatial separations for the n(n —l)/2 possible pairs which can be formed from the n observed cases of disease. Here it is proposed that statistical power can be improved by applying a reciprocal trans form to these separations. While a permutational approach can give valid probability levels for any observed association, for reasons of practicability, it is suggested that the observed associa tion be tested relative to its permutational variance. Formulas and computational procedures for doing so are given. While the distance measures between points represent sym metric relationships subject to mathematical and geometric regu larities, the variance formula developed is appropriate for ar bitrary relationships. Simplified procedures are given for the ease of symmetric and skew-symmetric relationships. The general pro cedure is indicated as being potentially useful in other situations as, for example, the study of interpersonal relationships. Viewing the procedure as a regression approach, the possibility for extend ing it to nonlinear and mult ¡variatesituations is suggested. Other aspects of the problem and of the procedure developed are discussed.",
"title": ""
},
{
"docid": "a5cd7d46dc74d15344e2f3e9b79388a3",
"text": "A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and featurerich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.",
"title": ""
},
{
"docid": "afa7d0e5c19fea77e1bcb4fce39fbc93",
"text": "Highly Autonomous Driving (HAD) systems rely on deep neural networks for the visual perception of the driving environment. Such networks are train on large manually annotated databases. In this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perceptions systems used in autonomous driving. The proposed generative framework, coined Generative One-Shot Learning (GOL), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated as Pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. GOL has been evaluated on environment perception challenges encountered in autonomous vision.",
"title": ""
},
{
"docid": "dcbd016b70683fd7c7ee813732a31e78",
"text": "In this paper, we propose a new methodology to embed deep learning-based algorithms in both visual recognition and motion planning for general mobile robotic platforms. A framework for an asynchronous deep classification network is introduced to integrate heavy deep classification networks into a mobile robot with no loss of system bandwidth. Moreover, a gaming reinforcement learning-based motion planner, a novel and convenient embodiment of reinforcement learning, is introduced for simple implementation and high applicability. The proposed approaches are implemented and evaluated on a developed robot, TT2-bot. The evaluation was based on a mission devised for a qualitative evaluation of the general purposes and performances of a mobile robotic platform. The robot was required to recognize targets with a deep classifier and plan the path effectively using a deep motion planner. As a result, the robot verified that the proposed approaches successfully integrate deep learning technologies on the stand-alone mobile robot. The embedded neural networks for recognition and path planning were critical components for the robot.",
"title": ""
},
{
"docid": "51c0cdb22056a3dc3f2f9b95811ca1ca",
"text": "Technology plays the major role in healthcare not only for sensory devices but also in communication, recording and display device. It is very important to monitor various medical parameters and post operational days. Hence the latest trend in Healthcare communication method using IOT is adapted. Internet of things serves as a catalyst for the healthcare and plays prominent role in wide range of healthcare applications. In this project the PIC18F46K22 microcontroller is used as a gateway to communicate to the various sensors such as temperature sensor and pulse oximeter sensor. The microcontroller picks up the sensor data and sends it to the network through Wi-Fi and hence provides real time monitoring of the health care parameters for doctors. The data can be accessed anytime by the doctor. The controller is also connected with buzzer to alert the caretaker about variation in sensor output. But the major issue in remote patient monitoring system is that the data as to be securely transmitted to the destination end and provision is made to allow only authorized user to access the data. The security issue is been addressed by transmitting the data through the password protected Wi-Fi module ESP8266 which will be encrypted by standard AES128 and the users/doctor can access the data by logging to the html webpage. At the time of extremity situation alert message is sent to the doctor through GSM module connected to the controller. Hence quick provisional medication can be easily done by this system. This system is efficient with low power consumption capability, easy setup, high performance and time to time response.",
"title": ""
},
{
"docid": "759a44aa610befecc766e7c4cbe19734",
"text": "This survey introduces the current state of the art in image and video retargeting and describes important ideas and technologies that have influenced the recent work. Retargeting is the process of adapting an image or video from one screen resolution to another to fit different displays, for example, when watching a wide screen movie on a normal television screen or a mobile device. As there has been considerable work done in this field already, this survey provides an overview of the techniques. It is meant to be a starting point for new research in the field. We include explanations of basic terms and operators, as well as the basic workflow of the different methods.",
"title": ""
},
{
"docid": "8e168315079a639039e5450995cb2a46",
"text": "While multi-agent systems seem to provide a good basis for building complex software systems, this paper points out some of the drawbacks of classical “agent centered” multi-agent systems. To resolve these difficulties we claim that organization centered multi-agent system, or OCMAS for short, may be used. We propose a set of general principles from which true OCMAS may be designed. One of these principles is not to assume anything about the cognitive capabilities of agents. In order to show how OCMAS models may be designed, we propose a very concise and minimal OCMAS model called AGR, for Agent/Group/Role. We propose a set of notations and a methodological framework to help the designer to build MAS using AGR. We then show that it is possible to design multi-agent systems using only OCMAS models.",
"title": ""
}
] |
scidocsrr
|
b89133036c1aabeb6580775c1ae00de7
|
Chatbot with a Discourse Structure-Driven Dialogue Management
|
[
{
"docid": "0cb0d05320a9de415b51c99e4766bbeb",
"text": "We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intrasentential parsing and the other for multisentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the stateof-the-art, often by a wide margin.",
"title": ""
}
] |
[
{
"docid": "8e6d17b6d7919d76cebbcefcc854573e",
"text": "Vincent Larivière École de bibliothéconomie et des sciences de l’information, Université de Montréal, C.P. 6128, Succ. CentreVille, Montréal, QC H3C 3J7, Canada, and Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, CP 8888, Succ. Centre-Ville, Montréal, QC H3C 3P8, Canada. E-mail: [email protected]",
"title": ""
},
{
"docid": "dcbfaec8966e10b8b87311f17bf9a3c5",
"text": "The study presented here investigated the effects of emotional valence on the memory for words by assessing both memory performance and pupillary responses during a recognition memory task. Participants had to make speeded judgments on whether a word presented in the test phase of the experiment had already been presented (\"old\") or not (\"new\"). An emotion-induced recognition bias was observed: Words with emotional content not only produced a higher amount of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect characterized as an elevated pupillary response to hits as opposed to correct rejections. Interestingly, this pupil old/new effect was clearly diminished for emotional words. We therefore argue that the pupil old/new effect is not only able to mirror memory retrieval processes, but also reflects modulation by an emotion-induced recognition bias.",
"title": ""
},
{
"docid": "ce64c8f2769957a5b93e0947c1987db5",
"text": "Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce (1) feeder failure rankings, (2) cable, joint, terminator, and transformer rankings, (3) feeder Mean Time Between Failure (MTBF) estimates, and (4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy, sources that are historical (static), semi-real-time, or real-time, incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF), and includes an evaluation of results via cross-validation and blind test. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City's electrical grid.",
"title": ""
},
{
"docid": "6b59e286bdb09f64c8e3a7aa9ff15381",
"text": "Noting that skills and knowledge taught in schools have become abstracted from their uses in the world, this paper clarifies some of the implications for the nature of the knowledge that students acquire through a i;roposal for the retooling of apprenticeship methods for the teaching and learning of cognitive skills. The paper specifically proposes the development of a new cognitive apprenticeship to teach students the thinking and problem-solving skills involved in school subjects such as reading, writing, and mathematics. The first section of the paper, after discussing key shortcomings in current curricular and pedagogical practices, presents some of the structural features of traditional apprenticeship, detailing what would be required to adapt these characteristics to the teaching and learning of cognitive skills. The central section of the paper considers three recently developed pedagogical models that exemplify aspects of apprenticeship methods in teaching thinking and reasoning skills. The section notes that these methods--A. S. Palincsar and A. L. Brown's reciprocal reading teaching, M. Scardamalia and C. Bereiter's procedural facilitation of writing, and A. H. Schoenfeld's method for teaching mathematical problem solving--appear to develop successfully not only the cognitive, but also the metacognitive, skills required for true expertise. The final section organizes ideas on the purposes and characteristics of successful teaching into a general framework for the design of learning \"environments,\" including the content being taught, pedagogical methods employed, sequencing of learning activities, and the sociology of learning--emphasizing how cognitive apprenticeship goes beyond the techniques of traditional apprenticeship. Tables of data are included, and references are appended. (Author/NKA) CENTER FOR THE STUDY OF READING Technical Report No. 403 COGNITIVE APPRENTICESHIP: TEACHING THE CRAFT OF READING, WRITING, AND MATHEMATICS Allan Collins BBN Laboratories John Seely Brown Susan E. Newman Xerox Palo Alto Research Center",
"title": ""
},
{
"docid": "f7f90e224c71091cc3e6356ab1ec0ea5",
"text": "A new two-degrees-of-freedom (2-DOF) compliant parallel micromanipulator (CPM) utilizing flexure joints has been proposed for two-dimensional (2-D) nanomanipulation in this paper. The system is developed by a careful design and proper selection of electrical and mechanical components. Based upon the developed PRB model, both the position and velocity kinematic modelings have been performed in details, and the CPM's workspace area is determined analytically in view of the physical constraints imposed by pizeo-actuators and flexure hinges. Moreover, in order to achieve a maximum workspace subjected to the given dexterity indices, kinematic optimization of the design parameters has been carried out, which leads to a manipulator satisfying the requirement of this work. Simulation results reveal that the designed CPM can perform a high dexterous manipulation within its workspace.",
"title": ""
},
{
"docid": "dcd21065898c9dd108617a3db8dad6a1",
"text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.",
"title": ""
},
{
"docid": "1a0ce5b259b3c5ee3f72a48802b03503",
"text": "This article presents a longitudinal study with four children with autism, who were exposed to a humanoid robot over a period of several months. The longitudinal approach allowed the children time to explore the space of robot–human, as well as human–human interaction. Based on the video material documenting the interactions, a quantitative and qualitative analysis was conducted. The quantitative analysis showed an increase in duration of pre-defined behaviours towards the later trials. A qualitative analysis of the video data, observing the children’s activities in their interactional context, revealed further aspects of social interaction skills (imitation, turn-taking and role-switch) and communicative competence that the children showed. The results clearly demonstrate the need for, and benefits of, long-term studies in order to reveal the full potential of robots in the therapy and education of children with autism.",
"title": ""
},
{
"docid": "d3b2283ce3815576a084f98c34f37358",
"text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.",
"title": ""
},
{
"docid": "ec26449b0d78b3f2b80404d340548d02",
"text": "A novel beam-forming phased array system using a substrate integrated waveguide (SIW) fed Yagi-Uda array antenna is presented. This phase array antenna employs an integrated waveguide structure lens as a beam forming network (BFN). A prototype phased array system is designed with 7 beam ports, 9 array ports, and 8 dummy ports. A 10 GHz SIW-fed Bow-tie linear array antenna is proposed with a nonplanar structure to scan over (-24°, +24°) with SIW lens.",
"title": ""
},
{
"docid": "b23e34b3e2571379cafa7c34cdf532e7",
"text": "This article describes the change in partial discharge (PD) pattern of high voltage rotating machines and the change in the tan /spl delta/ as a function of the applied test voltage during the aging processes as caused by the application of different stresses on stator bars. It also compares the PD patterns associated with internal, slot, and end-winding discharges, which were produced in well-controlled laboratory conditions. In addition, the influence of different temperature conditions on the partial discharge activities are shown. The investigations in this work were performed on model stator bars under laboratory conditions, and the results might be different from those obtained for complete machines, as rotating machines are complex PD test objects, and for example, the detected PD signals in a complete machine significantly depend on the transmission path from the PD source to the measurement device.",
"title": ""
},
{
"docid": "9adfb1b69d1521d148db41618a449e7b",
"text": "This article presents a novel parallel spherical mechanism called Argos with three rotational degrees of freedom. Design aspects of the first prototype built of the Argos mechanism are discussed. The direct kinematic problem is solved, leading always to four nonsingular configurations of the end effector for a given set of joint angles. The inverse-kinematic problem yields two possible configurations for each of the three pantographs for a given orientation of the end effector. Potential applications of the Argos mechanism are robot wrists, orientable machine tool beds, joy sticks, surgical manipulators, and orientable units for optical components. Another pantograph based new structure named PantoScope having two rotational DoF is also briefly introduced. KEY WORDS—parallel robot, machine tool, 3 degree of freedom (DoF) wrist, pure orientation, direct kinematics, inverse kinematics, Pantograph based, Argos, PantoScope",
"title": ""
},
{
"docid": "5ddbaa58635d706215ae3d61fe13e46c",
"text": "Recent years have seen growing interest in the problem of sup er-resolution restoration of video sequences. Whereas in the traditional single image re storation problem only a single input image is available for processing, the task of reconst ructing super-resolution images from multiple undersampled and degraded images can take adv antage of the additional spatiotemporal data available in the image sequence. In particula r, camera and scene motion lead to frames in the source video sequence containing similar, b ut not identical information. The additional information available in these frames make poss ible reconstruction of visually superior frames at higher resolution than that of the original d ta. In this paper we review the current state of the art and identify promising directions f or future research. The authors are with the Laboratory for Image and Signal Analysis (LIS A), University of Notre Dame, Notre Dame, IN 46556. E-mail: [email protected] .",
"title": ""
},
{
"docid": "10d41334c88039e9d85ce6eb93cb9abf",
"text": "nonlinear functional analysis and its applications iii variational methods and optimization PDF remote sensing second edition models and methods for image processing PDF remote sensing third edition models and methods for image processing PDF guide to signals and patterns in image processing foundations methods and applications PDF introduction to image processing and analysis PDF principles of digital image processing advanced methods undergraduate topics in computer science PDF image processing analysis and machine vision PDF image acquisition and processing with labview image processing series PDF wavelet transform techniques for image resolution PDF sparse image and signal processing wavelets and related geometric multiscale analysis PDF nonstandard methods in stochastic analysis and mathematical physics dover books on mathematics PDF solution manual wavelet tour of signal processing PDF remote sensing image fusion signal and image processing of earth observations PDF image understanding using sparse representations synthesis lectures on image video and multimedia processing PDF",
"title": ""
},
{
"docid": "b6226454fb7f7b15156d6b4268016020",
"text": "A 2 × 2 planar slot antenna array is designed at 60 GHz. The slots are fed by a printed feeding network based on the ridge gap waveguide (RGW) technology. Such a feeding network requires the slots to be more than a full wavelength apart, which causes excitation of undesired grating lobes. In order to reduce the grating lobes level, a low-cost dielectric superstrate at a distance of half wavelength in the air above the slot antenna array is used. The presence of the superstrate acts as a planar lens that increases the slot array gain by 7 dB along the broadside direction as well as greatly reduces the grating lobes level. The single element and the array characteristics are provided.",
"title": ""
},
{
"docid": "6b1a1c36fa583391eb8b142368837bc3",
"text": "In this paper, we present design and simulation of a compact grid array microstrip patch antenna. In the design of antenna a RT/duroid 5880 substrate having relative permittivity, thickness and loss tangent of 2.2, 1.57 mm and 0.0009 respectively, has been used. The simulated antenna performance was obtained by Computer Simulation Technology Microwave Studio (CST MWS). The antenna performance was investigated by analyzing its return loss (S11), radiation pattern, voltage standing wave ratio (VSWR) parameters. The simulated S11 parameter has shown that antenna operates for Industrial, Scientific and Medical (ISM) band and Wireless Body Area Network (WBAN) applications at 2.45 GHZ ISM, 6.25 GHZ, 8.25 GHZ and 10.45 GHZ ultra-wideband (UWB) four resonance frequencies with bandwidth > 500MHz (S11 < −10dB). The antenna directivity increased towards higher frequencies. The VSWR of resonance frequency bands is also achieved succesfully less than 2. It has been observed that the simulation result values of the antenna are suitable for WBAN applications.",
"title": ""
},
{
"docid": "a70fce38ca9f0ce79d84f6154b0cb0d3",
"text": "Vehicular Ad Hoc Network (VANET) has been drawing interest among the researchers for the past couple of years. Though ad hoc network or mobile ad hoc network is very common in military environment, the real world practice of ad hoc network is still very low. On the other hand, cloud computing is supposed to be the next big thing because of its scalability, PaaS, IaaS, SaaS and other important characteristics. In this paper we have tried to propose a model of ad hoc cloud network architecture. We have specially focused on vehicular ad hoc network architecture or VANET which will enable us to create a “cloud on the run” model. The major parts of this proposed model are wireless devices mounted on vehicles which will act as a mobile multihop network and a public or private cloud created by the vehicles called vehicular cloud.",
"title": ""
},
{
"docid": "3427d27d6c5c444a90a184183f991208",
"text": "Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change. Application of this technology relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as \"Virtual Network Embedding (VNE)\" algorithms. This paper presents a survey of current research in the VNE area. Based upon a novel classification scheme for VNE algorithms a taxonomy of current approaches to the VNE problem is provided and opportunities for further research are discussed.",
"title": ""
},
{
"docid": "661f7bcccc22d1834d224b2b17d0c615",
"text": "Offline handwriting recognition in Indian regional scripts is an interesting area of research as almost 460 million people in India use regional scripts. The nine major Indian regional scripts are Bangla (for Bengali and Assamese languages), Gujarati, Kannada, Malayalam, Oriya, Gurumukhi (for Punjabi language), Tamil, Telugu, and Nastaliq (for Urdu language). A state-of-the-art survey about the techniques available in the area of offline handwriting recognition (OHR) in Indian regional scripts will be of a great aid to the researchers in the subcontinent and hence a sincere attempt is made in this article to discuss the advancements reported in this regard during the last few decades. The survey is organized into different sections. A brief introduction is given initially about automatic recognition of handwriting and official regional scripts in India. The nine regional scripts are then categorized into four subgroups based on their similarity and evolution information. The first group contains Bangla, Oriya, Gujarati and Gurumukhi scripts. The second group contains Kannada and Telugu scripts and the third group contains Tamil and Malayalam scripts. The fourth group contains only Nastaliq script (Perso-Arabic script for Urdu), which is not an Indo-Aryan script. Various feature extraction and classification techniques associated with the offline handwriting recognition of the regional scripts are discussed in this survey. As it is important to identify the script before the recognition step, a section is dedicated to handwritten script identification techniques. A benchmarking database is very important for any pattern recognition related research. The details of the datasets available in different Indian regional scripts are also mentioned in the article. A separate section is dedicated to the observations made, future scope, and existing difficulties related to handwriting recognition in Indian regional scripts. We hope that this survey will serve as a compendium not only for researchers in India, but also for policymakers and practitioners in India. It will also help to accomplish a target of bringing the researchers working on different Indian scripts together. Looking at the recent developments in OHR of Indian regional scripts, this article will provide a better platform for future research activities.",
"title": ""
},
{
"docid": "f5b9cde4b7848f803b3e742298c92824",
"text": "For many years, analysis of short chain fatty acids (volatile fatty acids, VFAs) has been routinely used in identification of anaerobic bacteria. In numerous scientific papers, the fatty acids between 9 and 20 carbons in length have also been used to characterize genera and species of bacteria, especially nonfermentative Gram negative organisms. With the advent of fused silica capillary columns (which allows recovery of hydroxy acids and resolution of many isomers), it has become practical to use gas chromatography of whole cell fatty acid methyl esters to identify a wide range of organisms.",
"title": ""
},
{
"docid": "5aaba72970d1d055768e981f7e8e3684",
"text": "A hash table is a fundamental data structure in computer science that can offer rapid storage and retrieval of data. A leading implementation for string keys is the cacheconscious array hash table. Although fast with strings, there is currently no information in the research literatur e on its performance with integer keys. More importantly, we do not know how efficient an integer-based array hash table is compared to other hash tables that are designed for integers, such as bucketized cuckoo hashing. In this paper, we explain how to efficiently implement an array hash table for integers. We then demonstrate, through careful experimental evaluations, which hash table, whether it be a bucketized cuckoo hash table, an array hash table, or alternative hash table schemes such as linear probing, offers the best performance—with respect to time and space— for maintaining a large dictionary of integers in-memory, on a current cache-oriented processor.",
"title": ""
}
] |
scidocsrr
|
f477bf836a7f84c17c9f15ef4919a1f4
|
A Novel MPPT Algorithm Based on Particle Swarm Optimization for Photovoltaic Systems
|
[
{
"docid": "e8758a9e2b139708ca472dd60397dc2e",
"text": "Multiple photovoltaic (PV) modules feeding a common load is the most common form of power distribution used in solar PV systems. In such systems, providing individual maximum power point tracking (MPPT) schemes for each of the PV modules increases the cost. Furthermore, its v-i characteristic exhibits multiple local maximum power points (MPPs) during partial shading, making it difficult to find the global MPP using conventional single-stage (CSS) tracking. To overcome this difficulty, the authors propose a novel MPPT algorithm by introducing a particle swarm optimization (PSO) technique. The proposed algorithm uses only one pair of sensors to control multiple PV arrays, thereby resulting in lower cost, higher overall efficiency, and simplicity with respect to its implementation. The validity of the proposed algorithm is demonstrated through experimental studies. In addition, a detailed performance comparison with conventional fixed voltage, hill climbing, and Fibonacci search MPPT schemes are presented. Algorithm robustness was verified for several complicated partial shading conditions, and in all cases this method took about 2 s to find the global MPP.",
"title": ""
}
] |
[
{
"docid": "d229aa5797b6195ea25d74723e7b62af",
"text": "Radio emitter recognition in dense multi-user environments is an important tool for optimizing spectrum utilization, identifying and minimizing interference, and enforcing spectrum policy. Radio data is readily available and easy to obtain from an antenna, but labeled and curated data is often scarce making supervised learning strategies difficult and time consuming in practice. We demonstrate that semi-supervised learning techniques can be used to scale learning beyond supervised datasets, allowing for discerning and recalling new radio signals by using sparse signal representations based on both unsupervised and supervised methods for nonlinear feature learning and clustering methods.",
"title": ""
},
{
"docid": "923eee773a2953468bfd5876e0393d4d",
"text": "Latent variable time-series models are among the most heavily used tools from machine learning and applied statistics. These models have the advantage of learning latent structure both from noisy observations and from the temporal ordering in the data, where it is assumed that meaningful correlation structure exists across time. A few highly-structured models, such as the linear dynamical system with linear-Gaussian observations, have closed-form inference procedures (e.g. the Kalman Filter), but this case is an exception to the general rule that exact posterior inference in more complex generative models is intractable. Consequently, much work in time-series modeling focuses on approximate inference procedures for one particular class of models. Here, we extend recent developments in stochastic variational inference to develop a ‘black-box’ approximate inference technique for latent variable models with latent dynamical structure. We propose a structured Gaussian variational approximate posterior that carries the same intuition as the standard Kalman filter-smoother but, importantly, permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models. We show that our approach recovers accurate estimates in the case of basic models with closed-form posteriors, and more interestingly performs well in comparison to variational approaches that were designed in a bespoke fashion for specific non-conjugate models.",
"title": ""
},
{
"docid": "f4abfe0bb969e2a6832fa6317742f202",
"text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.",
"title": ""
},
{
"docid": "8b46e6e341f4fdf4eb18e66f237c4000",
"text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bagof-words model.",
"title": ""
},
{
"docid": "d39b15762f300da3d77440955ea6a0a9",
"text": "Microfluidics is undoubtedly an influential technology that is currently revolutionizing the chemical and biological studies by replicating laboratory bench-top technology on a miniature chip-scale device. In the area of drug delivery science, microfluidics offers advantages, such as precise dosage, ideal delivery, target-precise delivery, sustainable and controlled release, multiple dosing, and slight side effects. These advantages bring significant assets to the drug delivery systems. Microfluidic technology has been progressively used for fabrication of drug carriers, direct drug delivery systems, high-throughput screening, and formulation and immobilization of drugs. This review discusses the recent technological progress, outcomes and available opportunities for the usage of microfluidics systems in drug delivery systems.",
"title": ""
},
{
"docid": "bdbd1dd388f30f887ccaeb4a8f9bbb79",
"text": "Figure 1: Autonomous Formula SAE-Electric Car This dissertation describes the development of a high level control system for an autonomous Formula SAE race car featuring fusion of a 6-DOF IMU, a consumer grade GPS and an automotive LIDAR. Formula SAE is a long-running annual competition organised by the Society of Automotive Engineers which has recently seen the introduction of the new class SAE-Electric. The car discussed in this dissertation features electric motors driving each of the two rear wheels via independent controllers and has full drive-by-wire control of the throttle, steering and (hydraulic) braking system. Whilst autonomous driving is outside the scope of the Formula-SAE competition, it has been the subject of significant research interest over the last several decades. It is intended that the Autonomous SAE car developed in this project will provide UWA with a platform for research into driverless performance cars. This project consists of the design and implementation of a navigation control system which uses a Linux PC to interface with a range of sensors as well as the drive-by-wire system, safety systems and a base station. The navigation control system is implemented as a multi-threaded C++ program featuring asynchronous communication with hardware outputs, sensor inputs and user interfaces. The Autonomous SAE Car can drive following a map consisting of “waypoints” and “fence posts” which are recorded by either driving the course manually or through a GoogleMaps based web interface. Mapped driving is augmented by the use of a LIDAR scanner for detection of obstacles including road edges for which a novel algorithm is presented. GPS is used as the primary navigation aid; however sensor fusion algorithms have been implemented in order to improve upon the measurement of the cars position and orientation through the use of a 6-DOF Inertial Measurement Unit.",
"title": ""
},
{
"docid": "f58e8454e97ccd02b327ea86bccee954",
"text": "One of the objectives of METIS-II project is to facilitate discussion on scenarios, use cases, KPIs and requirements for 5G, building upon the comprehensive work conducted in the METIS-I project and taking the work of other European projects as well as other bodies such as ITU-R, NGMN, etc. into account. This paper analyses the landscape of 5G use cases and presents METIS-II 5G use cases that cover the main 5G services, have stringent requirements and whose technical solutions are expected to serve other similar use cases as well. It also links these use cases to the business cases defined by 5G PPP so that requirements of vertical industries can be taken into account when designing the 5G Radio Access Network (RAN).",
"title": ""
},
{
"docid": "4a18861ce15cfae3eaa2519d2fdc98c8",
"text": "This paper presents deadlock prevention are use to solve the deadlock problem of flexible manufacturing systems (FMS). Petri nets have been successfully as one of the most powerful tools for modeling of FMS. Their modeling power and a mathematical arsenal supporting the analysis of the modeled systems stimulate the increasing interest in Petri nets. With the structural object of Petri nets, siphons are important in the analysis and control of deadlocks in Petri nets (PNs) excellent properties. The deadlock prevention method are caused by the unmarked siphons, during the Petri nets are an effective way to model, analyze, simulation and control deadlocks in FMS is presented in this work. The characterization of special structural elements in Petri net so-called siphons has been a major approach for the investigation of deadlock-freeness in the center of FMS. The siphons are structures which allow for some implications on the net's can be well controlled by adding a control place (called monitor) for each uncontrolled siphon in the net in order to become deadlock-free situation in the system. Finally, We proposed method of modeling, simulation, control of FMS by using Petri nets, where deadlock analysis have Production line in parallel processing is demonstrate by a practical example used Petri Net-tool in MATLAB, is effective, and explicitly although its off-line computation.",
"title": ""
},
{
"docid": "1523534d398b4900c90d94e3f1bee422",
"text": "PURPOSE\nThe purpose of this pilot study was to examine the effectiveness of hippotherapy as an intervention for the treatment of postural instability in individuals with multiple sclerosis (MS).\n\n\nSUBJECTS\nA sample of convenience of 15 individuals with MS (24-72 years) were recruited from support groups and assessed for balance deficits.\n\n\nMETHODS\nThis study was a nonequivalent pretest-posttest comparison group design. Nine individuals (4 males, 5 females) received weekly hippotherapy intervention for 14 weeks. The other 6 individuals (2 males, 4 females) served as a comparison group. All participants were assessed with the Berg Balance Scale (BBS) and Tinetti Performance Oriented Mobility Assessment (POMA) at 0, 7, and 14 weeks.\n\n\nRESULTS\nThe group receiving hippotherapy showed statistically significant improvement from pretest (0 week) to posttest (14 week) on the BBS (mean increase 9.15 points (x (2) = 8.82, p = 0.012)) and POMA scores (mean increase 5.13 (x (2) = 10.38, p = 0.006)). The comparison group had no significant changes on the BBS (mean increase 0.73 (x (2) = 0.40, p = 0.819)) or POMA (mean decrease 0.13 (x (2) = 1.41, p = 0.494)). A statistically significant difference was also found between the groups' final BBS scores (treatment group median = 55.0, comparison group median 41.0), U = 7, r = -0.49.\n\n\nDISCUSSION\nHippotherapy shows promise for the treatment of balance disorders in persons with MS. Further research is needed to refine protocols and selection criteria.",
"title": ""
},
{
"docid": "ec5a3a7b2e777f84d2a6d23c3d432eb7",
"text": "Acoustic systems may provide suitable underwater communications because sound propagates well in water. However, the maximum data transmission rates of these systems in shallow littoral waters are ~10 kilobits per second (kbps) which may be achieved only at ranges of less than 100 m. Although underwater (u/w) wireless optical communications systems can have even shorter ranges due to greater attenuation of light propagating through water, they may provide higher bandwidth (up to several hundred kbps) communications as well as covertness. To exploit these potential advantages, we consider the basic design issues for u/w optical communications systems in this paper. In addition to the basic physics of u/w optical communications with environmental noise, we consider system performance with some state-of-the-art commercial off-the-shelf (COTS) components, which have promise for placing u/w optical communications systems in a small package with low power consumption and weight. We discuss light sources which show promise for u/w optical transmitters such as laser diodes (LDs) and light emitting diodes (LEDs). Laser diodes with their output frequency shifted into the 500- to 650-nm range can emit more energy per pulse than LEDs but are more expensive. Currently, LEDs emit substantial amounts of light and are typically very inexpensive. Also, COTS photodiodes can be used as detectors which can respond to pulses several nanoseconds wide. Transmitter broadcast angles and detector fields of view (FOVs) with pointing considerations are discussed. If the transmitter broadcast angle and the detector FOV are both narrow, the signal-to-noise ratio (SNR) of the received pulse is higher but the pointing accuracy of transmitter and receiver is critical. If, however, the transmitter broadcast angle and/or the detector FOV is wide, pointing is less critical but SNR is lower and some covertness may be lost. The propagation of the transmitted light in various clear oceanic and turbid coastal water types is considered with range estimates for some COTS light sources and detectors. We also consider the effects of environmental noise such as background solar radiation, which typically limits performance of these systems",
"title": ""
},
{
"docid": "9a438856b2cce32bf4e9bcbdc93795a2",
"text": "By balancing the spacing effect against the effects of recency and frequency, this paper explains how practice may be scheduled to maximize learning and retention. In an experiment, an optimized condition using an algorithm determined with this method was compared with other conditions. The optimized condition showed significant benefits with large effect sizes for both improved recall and recall latency. The optimization method achieved these benefits by using a modeling approach to develop a quantitative algorithm, which dynamically maximizes learning by determining for each item when the balance between increasing temporal spacing (that causes better long-term recall) and decreasing temporal spacing (that reduces the failure related time cost of each practice) means that the item is at the spacing interval where long-term gain per unit of practice time is maximal. As practice repetitions accumulate for each item, items become stable in memory and this optimal interval increases.",
"title": ""
},
{
"docid": "06ba1eeef81df1b9a8888fd33f29855e",
"text": "Hyperspectral cameras provide useful discriminants for human face recognition that cannot be obtained by other imaging methods.We examine the utility of using near-infrared hyperspectral images for the recognition of faces over a database of 200 subjects. The hyperspectral images were collected using a CCD camera equipped with a liquid crystal tunable filter to provide 31 bands over the near-infrared (0.7 m-1.0 m). Spectral measurements over the near-infrared allow the sensing of subsurface tissue structure which is significantly different from person to person, but relatively stable over time. The local spectral properties of human tissue are nearly invariant to face orientation and expression which allows hyperspectral discriminants to be used for recognition over a large range of poses and expressions. We describe a face recognition algorithm that exploits spectral measurements for multiple facial tissue types. We demonstrate experimentally that this algorithm can be used to recognize faces over time in the presence of changes in facial pose",
"title": ""
},
{
"docid": "17ec8f66fc6822520e2f22bd035c3ba0",
"text": "The paper discusses various phases in Urdu lexicon development from corpus. First the issues related with Urdu orthography such as optional vocalic content, Unicode variations, name recognition, spelling variation etc. have been described, then corpus acquisition, corpus cleaning, tokenization etc has been discussed and finally Urdu lexicon development i.e. POS tags, features, lemmas, phonemic transcription and the format of the lexicon has been discussed .",
"title": ""
},
{
"docid": "3cff653dc452df2163d7cc67cf9e0dd6",
"text": "In this paper we propose the construction of linguistic descriptions of images. This is achieved through the extraction of scene description graphs (SDGs) from visual scenes using an automatically constructed knowledge base. SDGs are constructed using both vision and reasoning. Specifically, commonsense reasoning1 is applied on (a) detections obtained from existing perception methods on given images, (b) a “commonsense” knowledge base constructed using natural language processing of image annotations and (c) lexical ontological knowledge from resources such as WordNet. Amazon Mechanical Turk(AMT)-based evaluations on Flickr8k, Flickr30k and MS-COCO datasets show that in most cases, sentences auto-constructed from SDGs obtained by our method give a more relevant and thorough description of an image than a recent state-of-the-art image caption based approach. Our Image-Sentence Alignment Evaluation results are also comparable to that of the recent state-of-the art approaches.",
"title": ""
},
{
"docid": "20ed67f3f410c3be15c0cabefa4effd8",
"text": "The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm then treat content selection as a collective classification problem and demonstrate that simple ‘grouping’ of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back of specific types of input data, and linking database structures with commonality further increase performance.",
"title": ""
},
{
"docid": "d97a992e8a7275a663883c7ee7e6cb56",
"text": "Mindfulness originated in the Buddhist tradition as a way of cultivating clarity of thought. Despite the fact that this behavior is best captured using critical thinking (CT) assessments, no studies have examined the effects of mindfulness on CT or the mechanisms underlying any such possible relationship. Even so, mindfulness has been suggested as being beneficial for CT in higher education. CT is recognized as an important higher-order cognitive process which involves the ability to analyze and evaluate evidence and arguments. Such non-automatic, reflective responses generally require the engagement of executive functioning (EF) which includes updating, inhibition, and shifting of representations in working memory. Based on research showing that mindfulness enhances aspects of EF and certain higher-order cognitive processes, we hypothesized that individuals higher in facets of dispositional mindfulness would demonstrate greater CT performance, and that this relationship would be mediated by EF. Cross-sectional assessment of these constructs in a sample of 178 university students was achieved using the observing and non-reactivity sub-scales of the Five Factor Mindfulness Questionnaire, a battery of EF tasks and the Halpern Critical Thinking Assessment. Our hypotheses were tested by constructing a multiple meditation model which was analyzed using Structural Equation Modeling. Evidence was found for inhibition mediating the relationships between both observing and non-reactivity and CT in different ways. Indirect-only (or full) mediation was demonstrated for the relationship between observing, inhibition, and CT. Competitive mediation was demonstrated for the relationship between non-reactivity, inhibition, and CT. This suggests additional mediators of the relationship between non-reactivity and CT which are not accounted for in this model and have a negative effect on CT in addition to the positive effect mediated by inhibition. These findings are discussed in the context of the Default Interventionist Dual Process Theory of Higher-order Cognition and previous studies on mindfulness, self-regulation, EF, and higher-order cognition. In summary, dispositional mindfulness appears to facilitate CT performance and this effect is mediated by the inhibition component of EF. However, this relationship is not straightforward which suggests many possibilities for future research.",
"title": ""
},
{
"docid": "c9c9af3680df50d4dd72c73c90a41893",
"text": "BACKGROUND\nVideo games provide extensive player involvement for large numbers of children and adults, and thereby provide a channel for delivering health behavior change experiences and messages in an engaging and entertaining format.\n\n\nMETHOD\nTwenty-seven articles were identified on 25 video games that promoted health-related behavior change through December 2006.\n\n\nRESULTS\nMost of the articles demonstrated positive health-related changes from playing the video games. Variability in what was reported about the games and measures employed precluded systematically relating characteristics of the games to outcomes. Many of these games merged the immersive, attention-maintaining properties of stories and fantasy, the engaging properties of interactivity, and behavior-change technology (e.g., tailored messages, goal setting). Stories in video games allow for modeling, vicarious identifying experiences, and learning a story's \"moral,\" among other change possibilities.\n\n\nCONCLUSIONS\nResearch is needed on the optimal use of game-based stories, fantasy, interactivity, and behavior change technology in promoting health-related behavior change.",
"title": ""
},
{
"docid": "5fc823af9b5df6e65145682fa8a97fc9",
"text": "A deterministic algorithm for computing a minimum spanning tree of a connected graph is presented. Its running time is <italic>0</italic>(<italic>m</italic> α(<italic>m, n</italic>)), where α is the classical functional inverse of Ackermann's function and <italic>n</italic> (respectively, <italic>m</italic>) is the number of vertices (respectively, edges). The algorithm is comparison-based : it uses pointers, not arrays, and it makes no numeric assumptions on the edge costs.",
"title": ""
},
{
"docid": "7dfef5a8009b8ccd9ddd3d60c3d52cdb",
"text": "One long-term goal of machine learning research is to produc e methods that are applicable to highly complex tasks, such as perception ( vision, audition), reasoning, intelligent control, and other artificially intell igent behaviors. We argue that in order to progress toward this goal, the Machine Learn ing community must endeavor to discover algorithms that can learn highly compl ex functions, with minimal need for prior knowledge, and with minimal human interv ention. We present mathematical and empirical evidence suggesting that many p opular approaches to non-parametric learning, particularly kernel methods, are fundamentally limited in their ability to learn complex high-dimensional fun ctions. Our analysis focuses on two problems. First, kernel machines are shallow architectures , in which one large layer of simple template matchers i followed by a single layer of trainable coefficients. We argue that shallow architectu res can be very inefficient in terms of required number of computational elements a d examples. Second, we analyze a limitation of kernel machines with a local k ernel, linked to the curse of dimensionality, that applies to supervised, unsup ervised (manifold learning) and semi-supervised kernel machines. Using empirical esults on invariant image recognition tasks, kernel methods are compared with deep architectures , in which lower-level features or concepts are progressively c ombined into more abstract and higher-level representations. We argue that dee p architectures have the potential to generalize in non-local ways, i.e., beyond imm ediate neighbors, and that this is crucial in order to make progress on the kind of co mplex tasks required for artificial intelligence.",
"title": ""
},
{
"docid": "58eecaba6edd29b548a9755f75f1383f",
"text": "The interdisciplinary work presented here deals with the management of diverse types of information collected during an archaeological excavation, and organized as an XML based data management system. The approach is global, from the consultation of three-dimensional data to simple textual data, and to additional data captured by a digital photogrammetry system called ARPENTEUR, which is now fully integrated to the XML data management system. This work is available on the Internet: http://GrandRibaudF.gamsau.archi.fr A stratigraphic approach and a set of underwater photogrammetric survey was done after the excavation process. The photogrammetric orientation work was done in Photomodeler 4.0 (camera calibration and bundle adjustment). The final plotting of amphoras and other artefacts was done using the ARPENTEUR software after importation of all camera orientation data from Photomodeler. The Arpenteur plotting phase is driven by a theoretical model of the measured artefact. The resulting survey is described in a set of XML files containing all measured data (2D, 3D and computed amphoras or artefacts). At this point, a data management system (based on XML technology) is built in order to access to all the data from 2D or 3D representation of the site on the Internet. In the framework of multimedia data management system as photographs, indexation of metadata available as XML documents is particularly convenient. Thanks to this formalism we can represent in an homogeneous way a set of very different kind of data such as : − Structural description of the image content (given by an expert), − Physical data as color histogram, color zone, (with automatic extraction) − Photogrammetric data − Metadata as : intellectual properties, shooting date, policy of right, original support, etc ... This homogeneous representation management of the data coming from different sources is an opportunity to elaborate a request on the whole set of data. On the other hand the results data (also generated in XML) allows a simple and automatic publication of the result towards different media as HTML or PDF for example. The implementation of such a system has to be done in close collaboration with experts of the investigated domain (here underwater archaeology) in order to build a relevant data model and adapt the request algorithm to the specific problematic. The XML document structure allows different kind of data indexation : − Intuitive way, by interactive navigation, − Simple way as keyword research (for example as search engine, i.e. Google) − Accurate way by request formalisation as we can do in a traditional DBMS with SQL. After a brief introduction of the archaeological context, followed by the photogrammetric aspects of the plotting phase with ARPENTEUR, we will then present the existing system and explain the way to navigate into the heterogeneous data as 3D models, theoretical amphora models, oriented photographs and 2D measured points.",
"title": ""
}
] |
scidocsrr
|
519fd16cb5a5a7ae6f25f1a95890c628
|
Snake Charmer: Physically Enabling Virtual Objects
|
[
{
"docid": "8c9f82b50cd541ed0efe1089b098e426",
"text": "This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and of-ten shape information, and advanced games physics engines. We define a technique for modeling the data sensed from such surfaces as input within a physics simulation. This affords the user the ability to interact with digital objects in ways analogous to manipulation of real objects. Our technique is capable of modeling both multiple contact points and more sophisticated shape information, such as the entire hand or other physical objects, and of mapping this user input to contact forces due to friction and collisions within the physics simulation. This enables a variety of fine-grained and casual interactions, supporting finger-based, whole-hand, and tangible input. We demonstrate how our technique can be used to add real-world dynamics to interactive surfaces such as a vision-based tabletop, creating a fluid and natural experience. Our approach hides from application developers many of the complexities inherent in using physics engines, allowing the creation of applications without preprogrammed interaction behavior or gesture recognition.",
"title": ""
}
] |
[
{
"docid": "92cf6e3fd47d40c52bb80faaafab07c8",
"text": "Graham-Little syndrome, also know as Graham-Little-Piccardi-Lassueur syndrome, is an unusual form of lichen planopilaris, characterized by the presence of cicatricial alopecia on the scalp, keratosis pilaris of the trunk and extremities, and non-cicatricial hair loss of the pubis and axillae. We present the case of a 47-year-old woman whose condition was unusual in that there was a prominence of scalp findings. Her treatment included a topical steroid plus systemic prednisone beginning at 30 mg every morning, which rendered her skin smooth, but did not alter her scalp lopecia.",
"title": ""
},
{
"docid": "5f8b0a15477bf0ee5787269a578988c6",
"text": "Suppose your netmail is being erratically censored by Captain Yossarian. Whenever you send a message, he censors each bit of the message with probability 1/2, replacing each censored bit by some reserved character. Well versed in such concepts as redundancy, this is no real problem to you. The question is, can it actually be turned around and used to your advantage? We answer this question strongly in the affirmative. We show that this protocol, more commonly known as oblivious transfer, can be used to simulate a more sophisticated protocol, known as oblivious circuit evaluation([Y]). We also show that with such a communication channel, one can have completely noninteractive zero-knowledge proofs of statements in NP. These results do not use any complexity-theoretic assumptions. We can show that they have applications to a variety of models in which oblivious transfer can be done.",
"title": ""
},
{
"docid": "f45189bd1b309bd969fcd4e6d8ff473d",
"text": "Existing statistical approaches to natural language problems are very coarse approximations to the true complexity of language processing. As such, no single technique will be best for all problem instances. Many researchers are examining ensemble methods that combine the output of successful, separately developed modules to create more accurate solutions. This paper examines three merging rules for combining probability distributions: the well known mixture rule, the logarithmic rule, and a novel product rule. These rules were applied with state-of-the-art results to two problems commonly used to assess human mastery of lexical semantics—synonym questions and analogy questions. All three merging rules result in ensembles that are more accurate than any of their component modules. The differences among the three rules are not statistically significant, but it is suggestive that the popular mixture rule is not the best rule for either of the two problems.",
"title": ""
},
{
"docid": "56ec3abe17259cae868e17dc2163fc0e",
"text": "This paper reports a case study about lessons learned and usability issues encountered in a usability inspection of a digital library system called the Networked Computer Science Technical Reference Library (NCSTRL). Using a co-discovery technique with a team of three expert usability inspectors (the authors), we performed a usability inspection driven by a broad set of anticipated user tasks. We found many good design features in NCSTRL, but the primary result of a usability inspection is a list of usability problems as candidates for fixing. The resulting problems are organized by usability problem type and by system functionality, with emphasis on the details of problems specific to digital library functions. The resulting usability problem list was used to illustrate a cost/importance analysis technique that trades off importance to fix against cost to fix. The problems are sorted by the ratio of importance to cost, producing a priority ranking for resolution.",
"title": ""
},
{
"docid": "154c40c2fab63ad15ded9b341ff60469",
"text": "ICU mortality risk prediction may help clinicians take effective interventions to improve patient outcome. Existing machine learning approaches often face challenges in integrating a comprehensive panel of physiologic variables and presenting to clinicians interpretable models. We aim to improve both accuracy and interpretability of prediction models by introducing Subgraph Augmented Non-negative Matrix Factorization (SANMF) on ICU physiologic time series. SANMF converts time series into a graph representation and applies frequent subgraph mining to automatically extract temporal trends. We then apply non-negative matrix factorization to group trends in a way that approximates patient pathophysiologic states. Trend groups are then used as features in training a logistic regression model for mortality risk prediction, and are also ranked according to their contribution to mortality risk. We evaluated SANMF against four empirical models on the task of predicting mortality or survival 30 days after discharge from ICU using the observed physiologic measurements between 12 and 24 hours after admission. SANMF outperforms all comparison models, and in particular, demonstrates an improvement in AUC (0.848 vs. 0.827, p<0.002) compared to a state-of-the-art machine learning method that uses manual feature engineering. Feature analysis was performed to illuminate insights and benefits of subgraph groups in mortality risk prediction.",
"title": ""
},
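A rough sketch of the downstream part of the pipeline described above, assuming the frequent-subgraph-mining step has already produced a nonnegative stay-by-trend count matrix. The data here is synthetic, and the component count and solver settings are illustrative assumptions rather than the paper's.

import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 50)).astype(float)   # ICU stays x mined subgraph trends (synthetic)
y = rng.integers(0, 2, size=200)                     # 30-day mortality labels (synthetic)

# Group trends into approximate pathophysiologic states via non-negative matrix factorization.
nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(X)        # per-stay loadings on trend groups
H = nmf.components_             # trend groups expressed over the raw trends

# Use the trend-group loadings as features for mortality risk prediction,
# then rank the groups by their contribution to the risk.
clf = LogisticRegression(max_iter=1000).fit(W, y)
print(np.argsort(-clf.coef_[0])[:5])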
{
"docid": "15ada8f138d89c52737cfb99d73219f0",
"text": "A dual-band circularly polarized stacked annular-ring patch antenna is presented in this letter. This antenna operates at both the GPS L1 frequency of 1575 MHz and L2 frequency of 1227 MHz, whose frequency ratio is about 1.28. The proposed antenna is formed by two concentric annular-ring patches that are placed on opposite sides of a substrate. Wide axial-ratio bandwidths (larger than 2%), determined by 3-dB axial ratio, are achieved at both bands. The measured gains at 1227 and 1575 MHz are about 6 and 7 dBi, respectively, with the loss of substrate taken into consideration. Both simulated and measured results are presented. The method of varying frequency ratio is also discussed.",
"title": ""
},
{
"docid": "9a2a126eecb116f04b501028f92b7736",
"text": "Sleep bruxism (SB) is a common sleep-related motor disorder characterized by tooth grinding and clenching. SB diagnosis is made on history of tooth grinding and confirmed by polysomnographic recording of electromyographic (EMG) episodes in the masseter and temporalis muscles. The typical EMG activity pattern in patients with SB is known as rhythmic masticatory muscle activity (RMMA). The authors observed that most RMMA episodes occur in association with sleep arousal and are preceded by physiologic activation of the central nervous and sympathetic cardiac systems. This article provides a comprehensive review of the cause, pathophysiology, assessment, and management of SB.",
"title": ""
},
{
"docid": "fd652333e274b25440767de985702111",
"text": "The global gold market has recently attracted a lot of attention and the price of gold is relatively higher than its historical trend. For mining companies to mitigate risk and uncertainty in gold price fluctuations, make hedging, future investment and evaluation decisions, depend on forecasting future price trends. The first section of this paper reviews the world gold market and the historical trend of gold prices from January 1968 to December 2008. This is followed by an investigation into the relationship between gold price and other key influencing variables, such as oil price and global inflation over the last 40 years. The second section applies a modified econometric version of the longterm trend reverting jump and dip diffusion model for forecasting natural-resource commodity prices. This method addresses the deficiencies of previous models, such as jumps and dips as parameters and unit root test for long-term trends. The model proposes that historical data of mineral commodities have three terms to demonstrate fluctuation of prices: a long-term trend reversion component, a diffusion component and a jump or dip component. The model calculates each term individually to estimate future prices of mineral commodities. The study validates the model and estimates the gold price for the next 10 years, based on monthly historical data of nominal gold price. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "167fe1a8b8ab5ab298c1a898b558918b",
"text": "Primary brain injury can result from a variety of causes, including trauma, focal or global cerebral ischemia, intraparenchymal or subarachnoid hemorrhage, infection, or toxic-metabolic derangements. Secondary neuronal injury may ensue from a variety of factors, including cerebral edema that may accompany elevated intracranial pressure (ICP), and compromise cerebral blood flow (CBF). The use of osmotic agents constitutes the cornerstone of medical therapy in acute brain resuscitation from cerebral edema and elevated ICP in all brain injury paradigms. While mannitol is the osmotic agent of choice, hypertonic saline (HS) solutions have received renewed attention as agents that hold promise in the future. This article reviews and highlights the pathophysiological principles of osmotherapy and the mechanisms of action of osmotic agents, and elaborates on their use in patients with acute brain injury.",
"title": ""
},
{
"docid": "93458e350a25da0d83f4c90ad22803c1",
"text": "BACKGROUND\nZero-dimensional (lumped parameter) and one dimensional models, based on simplified representations of the components of the cardiovascular system, can contribute strongly to our understanding of circulatory physiology. Zero-D models provide a concise way to evaluate the haemodynamic interactions among the cardiovascular organs, whilst one-D (distributed parameter) models add the facility to represent efficiently the effects of pulse wave transmission in the arterial network at greatly reduced computational expense compared to higher dimensional computational fluid dynamics studies. There is extensive literature on both types of models.\n\n\nMETHOD AND RESULTS\nThe purpose of this review article is to summarise published 0D and 1D models of the cardiovascular system, to explore their limitations and range of application, and to provide an indication of the physiological phenomena that can be included in these representations. The review on 0D models collects together in one place a description of the range of models that have been used to describe the various characteristics of cardiovascular response, together with the factors that influence it. Such models generally feature the major components of the system, such as the heart, the heart valves and the vasculature. The models are categorised in terms of the features of the system that they are able to represent, their complexity and range of application: representations of effects including pressure-dependent vessel properties, interaction between the heart chambers, neuro-regulation and auto-regulation are explored. The examination on 1D models covers various methods for the assembly, discretisation and solution of the governing equations, in conjunction with a report of the definition and treatment of boundary conditions. Increasingly, 0D and 1D models are used in multi-scale models, in which their primary role is to provide boundary conditions for sophisticate, and often patient-specific, 2D and 3D models, and this application is also addressed. As an example of 0D cardiovascular modelling, a small selection of simple models have been represented in the CellML mark-up language and uploaded to the CellML model repository http://models.cellml.org/. They are freely available to the research and education communities.\n\n\nCONCLUSION\nEach published cardiovascular model has merit for particular applications. This review categorises 0D and 1D models, highlights their advantages and disadvantages, and thus provides guidance on the selection of models to assist various cardiovascular modelling studies. It also identifies directions for further development, as well as current challenges in the wider use of these models including service to represent boundary conditions for local 3D models and translation to clinical application.",
"title": ""
},
{
"docid": "ab4cada23ae2142e52c98a271c128c58",
"text": "We introduce an interactive technique for manipulating simple 3D shapes based on extracting them from a single photograph. Such extraction requires understanding of the components of the shape, their projections, and relations. These simple cognitive tasks for humans are particularly difficult for automatic algorithms. Thus, our approach combines the cognitive abilities of humans with the computational accuracy of the machine to solve this problem. Our technique provides the user the means to quickly create editable 3D parts---human assistance implicitly segments a complex object into its components, and positions them in space. In our interface, three strokes are used to generate a 3D component that snaps to the shape's outline in the photograph, where each stroke defines one dimension of the component. The computer reshapes the component to fit the image of the object in the photograph as well as to satisfy various inferred geometric constraints imposed by its global 3D structure. We show that with this intelligent interactive modeling tool, the daunting task of object extraction is made simple. Once the 3D object has been extracted, it can be quickly edited and placed back into photos or 3D scenes, permitting object-driven photo editing tasks which are impossible to perform in image-space. We show several examples and present a user study illustrating the usefulness of our technique.",
"title": ""
},
{
"docid": "ec6fd0bc7f59bdf865b4383a247b984f",
"text": "This paper proposes a novel technique to forecast day-ahead electricity prices based on the wavelet transform and ARIMA models. The historical and usually ill-behaved price series is decomposed using the wavelet transform in a set of better-behaved constitutive series. Then, the future values of these constitutive series are forecast using properly fitted ARIMA models. In turn, the ARIMA forecasts allow, through the inverse wavelet transform, reconstructing the future behavior of the price series and therefore to forecast prices. Results from the electricity market of mainland Spain in year 2002 are reported.",
"title": ""
},
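A compact sketch of the hybrid scheme described above, assuming PyWavelets and statsmodels are available. The wavelet family, decomposition depth and ARIMA order are illustrative placeholders; the paper instead identifies a properly fitted ARIMA model for each constitutive series.

import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_components(series, wavelet="db4", level=3):
    # Additive decomposition: reconstruct each coefficient level on its own, so
    # the components sum back to the original series (waverec is linear).
    coeffs = pywt.wavedec(series, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(keep, wavelet)[: len(series)])
    return comps

def hybrid_forecast(series, horizon=24, order=(2, 1, 1)):
    # Fit an ARIMA model to every constitutive series and add up the forecasts.
    comps = wavelet_components(series)
    return sum(ARIMA(c, order=order).fit().forecast(horizon) for c in comps)

prices = 50.0 + np.cumsum(np.random.default_rng(1).normal(size=500))  # synthetic hourly price series
print(hybrid_forecast(prices, horizon=24)[:5])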
{
"docid": "6a0c269074d80f26453d1fec01cafcec",
"text": "Advances in neurobiology permit neuroscientists to manipulate specific brain molecules, neurons and systems. This has lead to major advances in the neuroscience of reward. Here, it is argued that further advances will require equal sophistication in parsing reward into its specific psychological components: (1) learning (including explicit and implicit knowledge produced by associative conditioning and cognitive processes); (2) affect or emotion (implicit 'liking' and conscious pleasure) and (3) motivation (implicit incentive salience 'wanting' and cognitive incentive goals). The challenge is to identify how different brain circuits mediate different psychological components of reward, and how these components interact.",
"title": ""
},
{
"docid": "01835769f2dc9391051869374e200a6a",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
},
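With a scalar (hence diagonal) Hessian, the ℓ2-ℓ1 instance of the framework sketched above reduces to the familiar iterative shrinkage/thresholding update. The step rule, iteration count and problem sizes below are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, y, tau, n_iter=200):
    # Minimize 0.5*||y - A x||^2 + tau*||x||_1 via separable quadratic subproblems.
    alpha = np.linalg.norm(A, 2) ** 2      # any alpha >= ||A||_2^2 keeps the iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / alpha, tau / alpha)
    return x

# Small compressed-sensing style example with a sparse ground truth.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, size=8, replace=False)] = rng.standard_normal(8)
y = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.linalg.norm(ist(A, y, tau=0.05) - x_true))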
{
"docid": "c1cdc9bb29660e910ccead445bcc896d",
"text": "This paper describes an efficient technique for com' puting a hierarchical representation of the objects contained in a complex 3 0 scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MS'I: Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3 0 scenes are presented.",
"title": ""
},
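A rough SciPy sketch of the pipeline described above, with random object centroids standing in for the 3D scene and Euclidean distance standing in for the grouping cost; both of these, as well as the merge threshold, are assumptions rather than the paper's definitions.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.cluster.hierarchy import linkage, fcluster

centroids = np.random.default_rng(3).random((10, 3))   # hypothetical 3D object centroids
costs = pdist(centroids)                               # pairwise grouping costs

mst = minimum_spanning_tree(squareform(costs))         # MST of the adjacency graph
# Merging MST edges in ascending cost order is exactly single-linkage agglomeration,
# so the binary clustering tree (BCT) can be built directly:
bct = linkage(costs, method="single")
# Final merging stage: collapse nodes with similar costs into an n-ary grouping
# at a coarser level of abstraction (the threshold here is illustrative).
groups = fcluster(bct, t=0.3, criterion="distance")
print(mst.nnz, groups)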
{
"docid": "7fbd687aaea396343740288233225f85",
"text": "We address the problem of answering new questions in community forums, by selecting suitable answers to already asked questions. We approach the task as an answer ranking problem, adopting a pairwise neural network architecture that selects which of two competing answers is better. We focus on the utility of the three types of similarities occurring in the triangle formed by the original question, the related question, and an answer to the related comment, which we call relevance, relatedness, and appropriateness. Our proposed neural network models the interactions among all input components using syntactic and semantic embeddings, lexical matching, and domain-specific features. It achieves state-of-the-art results, showing that the three similarities are important and need to be modeled together. Our experiments demonstrate that all feature types are relevant, but the most important ones are the lexical similarity features, the domain-specific features, and the syntactic and semantic embeddings.",
"title": ""
},
{
"docid": "0ff3e49a700a776c1a8f748d78bc4b73",
"text": "Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.",
"title": ""
},
{
"docid": "166bb3d8e2cd538e694f1b90054a5e97",
"text": "Recently, low-shot learning has been proposed for handling the lack of training data in machine learning. Despite of the importance of this issue, relatively less efforts have been made to study this problem. In this paper, we aim to increase the size of training dataset in various ways to improve the accuracy and robustness of face recognition. In detail, we adapt a generator from the Generative Adversarial Network (GAN) to increase the size of training dataset, which includes a base set, a widely available dataset, and a novel set, a given limited dataset, while adopting transfer learning as a backend. Based on extensive experimental study, we conduct the analysis on various data augmentation methods, observing how each affects the identification accuracy. Finally, we conclude that the proposed algorithm for generating faces is effective in improving the identification accuracy and coverage at the precision of 99% using both the base and novel set.",
"title": ""
},
{
"docid": "b03b34dc9708693f06ee4786c48ce9b5",
"text": "Mobile Cloud Computing (MCC) enables smartphones to offload compute-intensive codes and data to clouds or cloudlets for energy conservation. Thus, MCC liberates smartphones from battery shortage and embraces more versatile mobile applications. Most pioneering MCC research work requires a consistent network performance for offloading. However, such consistency is challenged by frequent mobile user movements and unstable network quality, thereby resulting in a suboptimal offloading decision. To embrace network inconsistency, we propose ENDA, a three-tier architecture that leverages user track prediction, realtime network performance and server loads to optimize offloading decisions. On cloud tier, we first design a greedy searching algorithm to predict user track using historical user traces stored in database servers. We then design a cloud-enabled Wi-Fi access point (AP) selection scheme to find the most energy efficient AP for smartphone offloading. We evaluate the performance of ENDA through simulations under a real-world scenario. The results demonstrate that ENDA can generate offloading decisions with optimized energy efficiency, desirable response time, and potential adaptability to a variety of scenarios. ENDA outperforms existing offloading techniques that do not consider user mobility and server workload balance management.",
"title": ""
},
{
"docid": "6d9f5f9e61c9b94febdd8e04cf999636",
"text": "The Internet oers the hope of a more democratic society. By promoting a decentralized form of social mobilization, it is said, the Internet can help us to renovate our institutions and liberate ourselves from our authoritarian legacies. The Internet does indeed hold these possibilities, but they are hardly inevitable. In order for the Internet to become a tool for social progress, not a tool of oppression or another centralized broadcast medium or simply a waste of money, concerned citizens must understand the dierent ways in which the Internet can become embedded in larger social processes. In thinking about culturally appropriate ways of using technologies like the Internet, the best starting-point is with peopleÐcoherent communities of people and the ways they think together. Let us consider an example. A photocopier company asked an anthropologist named Julian Orr to study its repair technicians and recommend the best ways to use technology in supporting their work. Orr (1996) took a broad view of the technicians' lives, learning some of their skills and following them around. Each morning the technicians would come to work, pick up their company vehicles, and drive to customers' premises where photocopiers needed ®xing; each evening they would return to the company, go to a bar together, and drink beer. Although the company had provided the technicians with formal training, Orr discovered that they actually acquired much of their expertise informally while drinking beer together. Having spent the day contending with dicult repair problems, they would entertain one another with ``war stories'', and these stories often helped them with future repairs. He suggested, therefore, that the technicians be given radio equipment so that they could remain in contact all day, telling stories and helping each other with their repair tasks. As Orr's (1996) story suggests, people think together best when they have something important in common. Networking technologies can often be used to create a Telematics and Informatics 15 (1998) 231±234",
"title": ""
}
] |
scidocsrr
|
c42184ad6de1e2a5fa8db09372975c4c
|
Building of an Information Retrieval System Based on Genetic Algorithms
|
[
{
"docid": "a2fd33f276a336e2a33d84c2a0abc283",
"text": "The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves e ectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window de nition of \\local\", we achieve a 16% improvement.",
"title": ""
}
] |
[
{
"docid": "aebb4ee07fd2b9f804746df85a6af151",
"text": "Markov chain Monte Carlo (MC) simulations started in earnest with the 1953 article by Nicholas Metropolis, Arianna Rosenbluth, Marshall Rosenbluth, Augusta Teller and Edward Teller [18]. Since then MC simulations have become an indispensable tool with applications in many branches of science. Some of those are reviewed in the proceedings [13] of the 2003 Los Alamos conference, which celebrated the 50th birthday of Metropolis simulations. The purpose of this tutorial is to provide an overview of basic concepts, which are prerequisites for an understanding of the more advanced lectures of this volume. In particular the lectures by Prof. Landau are closely related. The theory behind MC simulations is based on statistics and the analy-",
"title": ""
},
{
"docid": "86947b00cfa5ea686526909b0909e1dd",
"text": "Modern cloud infrastructure uses virtualization to isolate applications, optimize the utilization of hardware resources and provide operational flexibility. However, conventional virtualization comes at the cost of resource overhead. Container-based virtualization could be an alternative as it potentially reduces overhead and thus improves the utilization of datacenters. This paper presents the results of a marco-benchmark performance comparison between the two implementations of these technologies, namely Xen and LXC, as well as a discussion on their operational flexibility.",
"title": ""
},
{
"docid": "dd0bbc039e1bbc9e36ffe087e105cf56",
"text": "Using a comparative analysis approach, this article examines the development, characteristics and issues concerning the discourse of modern Asian art in the twentieth century, with the aim of bringing into picture the place of Asia in the history of modernism. The wide recognition of the Western modernist canon as centre and universal displaces the contribution and significance of the non-Western world in the modern movement. From a cross-cultural perspective, this article demonstrates that modernism in the field of visual arts in Asia, while has had been complex and problematic, nevertheless emerged. Rather than treating Asian art as a generalized subject, this article argues that, with their subtly different notions of culture, identity and nationhood, the modernisms that emerged from various nations in this region are diverse and culturally specific. Through the comparison of various art-historical contexts in this region (namely China, India, Japan and Korea), this article attempts to map out some similarities as well as differences in their pursuit of an autonomous modernist representation.",
"title": ""
},
{
"docid": "1e5925569492956c4330d6c260c453e2",
"text": "A simple low loss H-shape hybrid coupler based on the substrate integrated waveguide technology is presented for millimeter-wave applications. The coupler operation is based on the excitation of two different modes, TE10 and TE20. The coupler S-matrix is calculated by using a full-wave solver that uses the even/odd mode (symmetry) analysis to minimize the computational time and provides more physical insight. The simulated return and insertion losses are better than -20 dB and -3.90 dB, respectively over the operating frequency bandwidth of 39-40.50 GHz.",
"title": ""
},
{
"docid": "ead92535c188bebd2285358c83fc0a07",
"text": "BACKGROUND\nIndigenous peoples of Australia, Canada, United States and New Zealand experience disproportionately high rates of suicide. As such, the methodological quality of evaluations of suicide prevention interventions targeting these Indigenous populations should be rigorously examined, in order to determine the extent to which they are effective for reducing rates of Indigenous suicide and suicidal behaviours. This systematic review aims to: 1) identify published evaluations of suicide prevention interventions targeting Indigenous peoples in Australia, Canada, United States and New Zealand; 2) critique their methodological quality; and 3) describe their main characteristics.\n\n\nMETHODS\nA systematic search of 17 electronic databases and 13 websites for the period 1981-2012 (inclusive) was undertaken. The reference lists of reviews of suicide prevention interventions were hand-searched for additional relevant studies not identified by the electronic and web search. The methodological quality of evaluations of suicide prevention interventions was assessed using a standardised assessment tool.\n\n\nRESULTS\nNine evaluations of suicide prevention interventions were identified: five targeting Native Americans; three targeting Aboriginal Australians; and one First Nation Canadians. The main intervention strategies employed included: Community Prevention, Gatekeeper Training, and Education. Only three of the nine evaluations measured changes in rates of suicide or suicidal behaviour, all of which reported significant improvements. The methodological quality of evaluations was variable. Particular problems included weak study designs, reliance on self-report measures, highly variable consent and follow-up rates, and the absence of economic or cost analyses.\n\n\nCONCLUSIONS\nThere is an urgent need for an increase in the number of evaluations of preventive interventions targeting reductions in Indigenous suicide using methodologically rigorous study designs across geographically and culturally diverse Indigenous populations. Combining and tailoring best evidence and culturally-specific individual strategies into one coherent suicide prevention program for delivery to whole Indigenous communities and/or population groups at high risk of suicide offers considerable promise.",
"title": ""
},
{
"docid": "ebf7457391e8f1e728508f9b5af7a19f",
"text": "Argument mining studies in natural language text often use lexical (e.g. n-grams) and syntactic (e.g. grammatical production rules) features with all possible values. In prior work on a corpus of academic essays, we demonstrated that such large and sparse feature spaces can cause difficulty for feature selection and proposed a method to design a more compact feature space. The proposed feature design is based on post-processing a topic model to extract argument and domain words. In this paper we investigate the generality of this approach, by applying our methodology to a new corpus of persuasive essays. Our experiments show that replacing n-grams and syntactic rules with features and constraints using extracted argument and domain words significantly improves argument mining performance for persuasive essays.",
"title": ""
},
{
"docid": "ae5976a021bd0c4ff5ce14525c1716e7",
"text": "We present PARAM 1.0, a model checker for parametric discrete-time Markov chains (PMCs). PARAM can evaluate temporal properties of PMCs and certain extensions of this class. Due to parametricity, evaluation results are polynomials or rational functions. By instantiating the parameters in the result function, one can cheaply obtain results for multiple individual instantiations, based on only a single more expensive analysis. In addition, it is possible to post-process the result function symbolically using for instance computer algebra packages, to derive optimum parameters or to identify worst cases.",
"title": ""
},
{
"docid": "c3cb261d9dc6b92a6e69e4be7ec44978",
"text": "An increasing number of studies in political communication focus on the “sentiment” or “tone” of news content, political speeches, or advertisements. This growing interest in measuring sentiment coincides with a dramatic increase in the volume of digitized information. Computer automation has a great deal of potential in this new media environment. The objective here is to outline and validate a new automated measurement instrument for sentiment analysis in political texts. Our instrument uses a dictionary-based approach consisting of a simple word count of the frequency of keywords in a text from a predefined dictionary. The design of the freely available Lexicoder Sentiment Dictionary (LSD) is discussed in detail here. The dictionary is tested against a body of human-coded news content, and the resulting codes are also compared to results from nine existing content-analytic dictionaries. Analyses suggest that the LSD produces results that are more systematically related to human coding than are results based on the other available dictionaries. The LSD is thus a useful starting point for a revived discussion about dictionary construction and validation in sentiment analysis for political communication.",
"title": ""
},
{
"docid": "473aadc8d69632f810901d6360dd2b0c",
"text": "One of the challenges in developing real-world autonomous robots is the need for integrating and rigorously testing high-level scripting, motion planning, perception, and control algorithms. For this purpose, we introduce an open-source cross-platform software architecture called OpenRAVE, the Open Robotics and Animation Virtual Environment. OpenRAVE is targeted for real-world autonomous robot applications, and includes a seamless integration of 3-D simulation, visualization, planning, scripting and control. A plugin architecture allows users to easily write custom controllers or extend functionality. With OpenRAVE plugins, any planning algorithm, robot controller, or sensing subsystem can be distributed and dynamically loaded at run-time, which frees developers from struggling with monolithic code-bases. Users of OpenRAVE can concentrate on the development of planning and scripting aspects of a problem without having to explicitly manage the details of robot kinematics and dynamics, collision detection, world updates, and robot control. The OpenRAVE architecture provides a flexible interface that can be used in conjunction with other popular robotics packages such as Player and ROS because it is focused on autonomous motion planning and high-level scripting rather than low-level control and message protocols. OpenRAVE also supports a powerful network scripting environment which makes it simple to control and monitor robots and change execution flow during run-time. One of the key advantages of open component architectures is that they enable the robotics research community to easily share and compare algorithms.",
"title": ""
},
{
"docid": "8ffb6fa71c70d32387436a631ceaab30",
"text": "In recent years, with the technological advancements in healthcare electronics, we see a number of digital data acquisition devices that can monitor certain body parameters of a patient and alert the concerned person in case of an emergency. However, if one desires to monitor multiple body parameters, they are forced to use multiple data acquisition devices. This determines the need for a platform that will allow users to configure their own digital data acquisition device and monitor their health parameters in real-time. This paper discusses the architecture and design flow of a Tele-Health Monitoring (THM) platform using effective usage of the computation power and various inbuilt peripherals of STM32 microcontroller that will allow users to have a reliable and interactive health monitoring system. The proposed platform provides a channel to interface various health sensors and data acquisition devices such that the individual himself or an authorized health provider can monitor and analyze the physical activity of an individual on regular basis. This platform finds great use in cases where patients are under transportation, homecare or frequent health checkup.",
"title": ""
},
{
"docid": "7fe6505453be76030d8580e7be5fa8c7",
"text": "Based on experiences with different organizations having insider threat programs, the components needed for an insider threat auditing and mitigation program and methods of program validation that agencies can use when both initiating a program and reviewing an existing program has been described. This paper concludes with descriptions of each of the best practices derived from the model program. This final section is meant to be a standalone section that readers can detach and incorporate into their insider threat mitigation program guidance.",
"title": ""
},
{
"docid": "c5e553148657a26e87f1d20c90b40a1e",
"text": "Literature citation analysis plays a very important role in bibliometrics and scientometrics, such as the Science Citation Index (SCI ) impact factor, h-index. Existing citation analysis methods assume that all citations in a paper are equally important, and they simply count the number of citations. Here we argue that the citations in a paper are not equally important and some citations are more important than the others. We use a strength value to assess the importance of each citation and propose to use the regression method with a few useful features for automatically estimating the strength value of each citation. Evaluation results on a manually labeled data set in the computer science field show that the estimated values can achieve good correlation with human-labeled values. We further apply the estimated citation strength values for evaluating paper influence and author influence, and the preliminary evaluation results demonstrate the usefulness of the citation strength values.",
"title": ""
},
{
"docid": "27b3c795085e395eadfd23e181abedc4",
"text": "Since remote sensing images are captured from the top of the target, such as from a satellite or plane platform, ship targets can be presented at any orientation. When detecting ship targets using horizontal bounding boxes, there will be background clutter in the box. This clutter makes it harder to detect the ship and find its precise location, especially when the targets are in close proximity or staying close to the shore. To solve these problems, this paper proposes a deep learning algorithm using a multiscale rotated bounding box to detect the ship target in a complex background and obtain the location and orientation information of the ship. When labeling the oriented targets, we use the five-parameter method to ensure that the box shape is maintained rectangular. The algorithm uses a pretrained deep network to extract features and produces two divided flow paths to output the result. One flow path predicts the target class, while the other predicts the location and angle information. In the training stage, we match the prior multiscale rotated bounding boxes to the ground-truth bounding boxes to obtain the positive sample information and use it to train the deep learning model. When matching the rotated bounding boxes, we narrow down the selection scope to reduce the amount of calculation. In the testing stage, we use the trained model to predict and obtain the final result after comparing with the score threshold and nonmaximum suppression post-processing. Experiments conducted on a remote sensing dataset show that the algorithm is robust in detecting ship targets under complex conditions, such as wave clutter background, target in close proximity, ship close to the shore, and multiscale varieties. Compared to other algorithms, our algorithm not only exhibits better performance in ship detection but also obtains the precise location and orientation information of the ship.",
"title": ""
},
{
"docid": "43a7e7241f1ce7967cee750eb481ca2b",
"text": "This paper proposes and analyzes the performance of the multihop free-space optical (FSO) communication links using a heterodyne differential phase-shift keying modulation scheme operating over a turbulence induced fading channel. A novel statistical fading channel model for multihop FSO systems using channel-state-information-assisted and fixed-gain relays is developed incorporating the atmospheric turbulence, pointing errors, and path-loss effects. The closed-form expressions for the moment generating function, probability density function, and cumulative distribution function of the multihop FSO channel are derived using Meijer's G-function. They are then used to derive the fundamental limits of the outage probability and average symbol error rate. Results confirm the performance loss as a function of the number of hops. Effects of the turbulence strength varying from weak-to-moderate and moderate-to-strong turbulence, geometric loss, and pointing errors are studied. The pointing errors can be mitigated by widening the beam at the expense of the received power level, whereas narrowing the beam can reduce the geometric loss at the cost of increased misalignment effects.",
"title": ""
},
{
"docid": "00a3eedc8aedacf711e4198193575bde",
"text": "The standard strategies for evaluation based on precision and recall are examined and their relative advantages and disadvantages are discussed. In particular, it is suggested that relevance feedback be evaluated from the perspective of the user. A number of different statistical tests are described for determining if differences in performance between retrieval methods are significant. These tests have often been ignored in the past because most are based on an assumption of normality which is not strictly valid for the standard performance measures. However, one can test this assumption using simple diagnostic plots, and if it is a poor approximation, there are a number of non-parametric alternatives.",
"title": ""
},
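A small SciPy sketch of the kind of significance testing discussed above, comparing two retrieval methods on synthetic per-query average precision scores. The score distributions are made up; the point carried over from the passage is that the non-parametric tests avoid the normality assumption behind the paired t-test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
ap_a = rng.beta(2, 5, size=50)                                   # per-query AP, method A
ap_b = np.clip(ap_a + rng.normal(0.03, 0.05, size=50), 0, 1)     # per-query AP, method B

t_p = stats.ttest_rel(ap_b, ap_a).pvalue          # paired t-test (assumes normal differences)
w_p = stats.wilcoxon(ap_b, ap_a).pvalue           # non-parametric signed-rank alternative
wins = int(np.sum(ap_b > ap_a))
s_p = stats.binomtest(wins, n=50, p=0.5).pvalue   # sign test on per-query wins
print(t_p, w_p, s_p)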
{
"docid": "b7f1af8c7850ee68c19cf5a4588aeb57",
"text": "The ‘ellipsoidal distribution’, in which angles are assumed to be distributed parallel to the surface of an oblate or prolate ellipsoid, has been widely used to describe the leaf angle distribution (LAD) of plant canopies. This ellipsoidal function is constrained to show a probability density of zero at an inclination angle of zero; however, actual LADs commonly show a peak probability density at zero, a pattern consistent with functional models of plant leaf display. A ‘rotated ellipsoidal distribution’ is described here, which geometrically corresponds to an ellipsoid in which small surface elements are rotated normal to the surface. Empirical LADs from canopy and understory species in an old-growth coniferous forest were used to compare the two models. In every case the rotated ellipsoidal function provided a better description of empirical data than did the non-rotated function, while retaining only a single parameter. The ratio of G-statistics for goodness of fit for the two functions ranged from 1.03 to 3.88. The improved fit is due to the fact that the rotated function always shows a probability density greater than zero at inclination angles of zero, can show a mode at zero, and more accurately characterizes the overall shape of empirical distributions. ©2000 Published by Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "832bed06d844fedb2867750bb7ec3989",
"text": "Viral diffusion allows a piece of information to widely and quickly spread within the network of users through word-ofmouth. In this paper, we study the problem of modeling both item and user factors that contribute to viral diffusion in Twitter network. We identify three behaviorial factors, namely user virality, user susceptibility and item virality, that contribute to viral diffusion. Instead of modeling these factors independently as done in previous research, we propose a model that measures all the factors simultaneously considering their mutual dependencies. The model has been evaluated on both synthetic and real datasets. The experiments show that our model outperforms the existing ones for synthetic data with ground truth labels. Our model also performs well for predicting the hashtags that have higher retweet likelihood. We finally present case examples that illustrate how the models differ from one another.",
"title": ""
},
{
"docid": "7b7f5a18bb7629c48c9fbe9475aa0f0c",
"text": "These are the notes for my quarter-long course on basic stability theory at UCLA (MATH 285D, Winter 2015). The presentation highlights some relations to set theory and cardinal arithmetic reflecting my impression about the tastes of the audience. We develop the general theory of local stability instead of specializing to the finite rank case, and touch on some generalizations of stability such as NIP and simplicity. The material in this notes is based on [Pil02, Pil96], [vdD05], [TZ12], [Cas11a, Cas07], [Sim15], [Poi01] and [Che12]. I would also like to thank the following people for their comments and suggestions: Tyler Arant, Madeline Barnicle, Allen Gehret, Omer Ben Neria, Anton Bobkov, Jesse Han, Pietro Kreitlon Carolino, Andrew Marks, Alex Mennen, Assaf Shani, John Susice, Spencer Unger. Comments and corrections are very welcome ([email protected], http://www.math.ucla.edu/~chernikov/).",
"title": ""
},
{
"docid": "eb2459cbb99879b79b94653c3b9ea8ef",
"text": "Extending the success of deep neural networks to natural language understanding and symbolic reasoning requires complex operations and external memory. Recent neural program induction approaches have attempted to address this problem, but are typically limited to differentiable memory, and consequently cannot scale beyond small synthetic tasks. In this work, we propose the Manager-ProgrammerComputer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface. Specifically, we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence neural \"programmer\", and a nondifferentiable \"computer\" that is a Lisp interpreter with code assist. To successfully apply REINFORCE for training, we augment it with approximate gold programs found by an iterative maximum likelihood training process. NSM is able to learn a semantic parser from weak supervision over a large knowledge base. It achieves new state-of-the-art performance on WEBQUESTIONSSP, a challenging semantic parsing dataset, with weak supervision. Compared to previous approaches, NSM is end-to-end, therefore does not rely on feature engineering or domain specific knowledge.",
"title": ""
},
{
"docid": "46f623cea7c1f643403773fc5ed2508d",
"text": "The use of machine learning tools has become widespread in medical diagnosis. The main reason for this is the effective results obtained from classification and diagnosis systems developed to help medical professionals in the diagnosis phase of diseases. The primary objective of this study is to improve the accuracy of classification in medical diagnosis problems. To this end, studies were carried out on 3 different datasets. These datasets are heart disease, Parkinson’s disease (PD) and BUPA liver disorders. Key feature of these datasets is that they have a linearly non-separable distribution. A new method entitled k-medoids clustering-based attribute weighting (kmAW) has been proposed as a data preprocessing method. The support vector machine (SVM) was preferred in the classification phase. In the performance evaluation stage, classification accuracy, specificity, sensitivity analysis, f-measure, kappa statistics value and ROC analysis were used. Experimental results showed that the developed hybrid system entitled kmAW + SVM gave better results compared to other methods described in the literature. Consequently, this hybrid intelligent system can be used as a useful medical decision support tool.",
"title": ""
}
] |
scidocsrr
|
9c8b967a44cfe62d125736ad90b8cd85
|
ParBlockchain: Leveraging Transaction Parallelism in Permissioned Blockchain Systems
|
[
{
"docid": "05f941acd4b2bd1188c7396d7edbd684",
"text": "A blockchain is a distributed ledger for recording transactions, maintained by many nodes without central authority through a distributed cryptographic protocol. All nodes validate the information to be appended to the blockchain, and a consensus protocol ensures that the nodes agree on a unique order in which entries are appended. Consensus protocols for tolerating Byzantine faults have received renewed attention because they also address blockchain systems. This work discusses the process of assessing and gaining confidence in the resilience of a consensus protocols exposed to faults and adversarial nodes. We advocate to follow the established practice in cryptography and computer security, relying on public reviews, detailed models, and formal proofs; the designers of several practical systems appear to be unaware of this. Moreover, we review the consensus protocols in some prominent permissioned blockchain platforms with respect to their fault models and resilience against attacks. 1998 ACM Subject Classification C.2.4 Distributed Systems, D.1.3 Concurrent Programming",
"title": ""
}
] |
[
{
"docid": "bbb91ddd9df0d5f38b8c1317a8e84f60",
"text": "Poisson regression model is widely used in software quality modeling. W h e n the response variable of a data set includes a large number of zeros, Poisson regression model will underestimate the probability of zeros. A zero-inflated model changes the mean structure of the pure Poisson model. The predictive quality is therefore improved. I n this paper, we examine a full-scale industrial software system and develop two models, Poisson regression and zero-inflated Poisson regression. To our knowledge, this is the first study that introduces the zero-inflated Poisson regression model in software reliability. Comparing the predictive qualities of the two competing models, we conclude that for this system, the zero-inflated Poisson regression model is more appropriate in theory and practice.",
"title": ""
},
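A brief statsmodels sketch of the comparison described above, run on synthetic zero-heavy fault counts rather than the paper's industrial data; the predictor, the inflation structure and the fit settings are illustrative assumptions.

import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=(n, 1))                   # e.g. a module size/complexity metric
X = sm.add_constant(x)
lam = np.exp(0.5 + 0.8 * x[:, 0])
structural_zero = rng.random(n) < 0.4         # excess zeros beyond the Poisson part
y = np.where(structural_zero, 0, rng.poisson(lam))

poisson_fit = sm.Poisson(y, X).fit(disp=0)
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)), inflation="logit").fit(disp=0)
# The zero-inflated model should describe the zero-heavy counts better (lower AIC).
print(poisson_fit.aic, zip_fit.aic)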
{
"docid": "11fe82917eb56b1188ddc46cf8b5d0e2",
"text": "We show that to capture the empirical effects of uncertainty on the unemployment rate, it is crucial to study the interactions between search frictions and nominal rigidities. Our argument is guided by empirical evidence showing that an increase in uncertainty leads to a large increase in unemployment and a significant decline in inflation, suggesting that uncertainty partly operates via an aggregate demand channel. To understand the mechanism through which uncertainty generates these macroeconomic effects, we incorporate search frictions and nominal rigidities in a DSGE model. We show that an option-value channel that arises from search frictions interacts with a demand channel that arises from nominal rigidities, and such interactions magnify the effects of uncertainty to generate roughly 60 percent of the observed increase in unemployment following an uncer-",
"title": ""
},
{
"docid": "779c0081af334a597f6ee6942d7e7240",
"text": "We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contratual parties, security of a contract program is of paramount importance. Our lab exposed numerous common pitfalls in designing safe and secure smart contracts. We document several typical classes of mistakes students made, suggest ways to fix/avoid them, and advocate best practices for programming smart contracts. Finally, our pedagogical efforts have also resulted in online open course materials for programming smart contracts, which may be of independent interest to the community.",
"title": ""
},
{
"docid": "3aadfd9d063eeddc09fbd86c82f2bfe4",
"text": "We study the probabilistic generative models parameterized by feedfor-ward neural networks. An attractor dynamics for probabilistic inference in these models is derived from a mean field approximation for large, layered sigmoidal networks. Fixed points of the dynamics correspond to solutions of the mean field equations, which relate the statistics of each unittothoseofits Markovblanket. We establish global convergence of the dynamics by providing a Lyapunov function and show that the dynamics generate the signals required for unsupervised learning. Our results for feedforward networks provide a counterpart to those of Cohen-Grossberg and Hopfield for symmetric networks.",
"title": ""
},
{
"docid": "207892948c80af2f060d49cabd378067",
"text": "In the last decades, an increasing number of employers and job seekers have been relying on Web resources to get in touch and to find a job. If appropriately retrieved and analyzed, the huge number of job vacancies available today on on-line job portals can provide detailed and valuable information about the Web Labor Market dynamics and trends. In particular, this information can be useful to all actors, public and private, who play a role in the European Labor Market. This paper presents WoLMIS, a system aimed at collecting and automatically classifying multilingual Web job vacancies with respect to a standard taxonomy of occupations. The proposed system has been developed for the Cedefop European agency, which supports the development of European Vocational Education and Training (VET) policies and contributes to their implementation. In particular, WoLMIS allows analysts and Labor Market specialists to make sense of Labor Market dynamics and trends of several countries in Europe, by overcoming linguistic boundaries across national borders. A detailed experimental evaluation analysis is also provided for a set of about 2 million job vacancies, collected from a set of UK and Irish Web job sites from June to September 2015.",
"title": ""
},
{
"docid": "ffbcc6070b471bcf86dfb270d5fd2504",
"text": "This paper focuses on the specific problem of multiview learning where samples have the same feature set but different probability distributions, e.g., different viewpoints or different modalities. Since samples lying in different distributions cannot be compared directly, this paper aims to learn a latent subspace shared by multiple views assuming that the input views are generated from this latent subspace. Previous approaches usually learn the common subspace by either maximizing the empirical likelihood, or preserving the geometric structure. However, considering the complementarity between the two objectives, this paper proposes a novel approach, named low-rank discriminant embedding (LRDE), for multiview learning by taking full advantage of both sides. By further considering the duality between data points and features of multiview scene, i.e., data points can be grouped based on their distribution on features, while features can be grouped based on their distribution on the data points, LRDE not only deploys low-rank constraints on both sample level and feature level to dig out the shared factors across different views, but also preserves geometric information in both the ambient sample space and the embedding feature space by designing a novel graph structure under the framework of graph embedding. Finally, LRDE jointly optimizes low-rank representation and graph embedding in a unified framework. Comprehensive experiments in both multiview manner and pairwise manner demonstrate that LRDE performs much better than previous approaches proposed in recent literatures.",
"title": ""
},
{
"docid": "a8ae6f14a7e308b70804e7f898c34876",
"text": "Find the secret to improve the quality of life by reading this architecting dependable systems. This is a kind of book that you need now. Besides, it can be your favorite book to read after having this book. Do you ask why? Well, this is a book that has different characteristic with others. You may not need to know who the author is, how wellknown the work is. As wise word, never judge the words from who speaks, but make the words as your good value to your life.",
"title": ""
},
{
"docid": "b9779b478ee8714d5b0f6ce3e0857c9f",
"text": "Sensor-based motion recognition integrates the emerging area of wearable sensors with novel machine learning techniques to make sense of low-level sensor data and provide rich contextual information in a real-life application. Although Human Activity Recognition (HAR) problem has been drawing the attention of researchers, it is still a subject of much debate due to the diverse nature of human activities and their tracking methods. Finding the best predictive model in this problem while considering different sources of heterogeneities can be very difficult to analyze theoretically, which stresses the need of an experimental study. Therefore, in this paper, we first create the most complete dataset, focusing on accelerometer sensors, with various sources of heterogeneities. We then conduct an extensive analysis on feature representations and classification techniques (the most comprehensive comparison yet with 293 classifiers) for activity recognition. Principal component analysis is applied to reduce the feature vector dimension while keeping essential information. The average classification accuracy of eight sensor positions is reported to be 96.44% ± 1.62% with 10-fold evaluation, whereas accuracy of 79.92% ± 9.68% is reached in the subject-independent evaluation. This study presents significant evidence that we can build predictive models for HAR problem under more realistic conditions, and still achieve highly accurate results.",
"title": ""
},
{
"docid": "3cde70842ee80663cbdc04db6a871d46",
"text": "Artificial perception, in the context of autonomous driving, is the process by which an intelligent system translates sensory data into an effective model of the environment surrounding a vehicle. In this paper, and considering data from a 3D-LIDAR mounted onboard an intelligent vehicle, a 3D perception system based on voxels and planes is proposed for ground modeling and obstacle detection in urban environments. The system, which incorporates time-dependent data, is composed of two main modules: (i) an effective ground surface estimation using a piecewise plane fitting algorithm and RANSAC-method, and (ii) a voxel-grid model for static and moving obstacles detection using discriminative analysis and ego-motion information. This perception system has direct application in safety systems for intelligent vehicles, particularly in collision avoidance and vulnerable road users detection, namely pedestrians and cyclists. Experiments, using point-cloud data from a Velodyne LIDAR and localization data from an Inertial Navigation System were conducted for both a quantitative and a qualitative assessment of the static/moving obstacle detection module and for the surface estimation approach. Reported results, from experiments using the KITTI database, demonstrate the applicability and efficiency of the proposed approach in urban scenarios.",
"title": ""
},
{
"docid": "1c3a87fd2e10a9799e7c0a79be635816",
"text": "According to Network Effect literature network externalities lead to market failure due to Pareto-inferior coordination results. We show that the assumptions and simplifications implicitly used for modeling standardization processes fail to explain the real-world variety of diffusion courses in today’s dynamic IT markets and derive requirements for a more general model of network effects. We argue that Agent-based Computational Economics provides a solid basis for meeting these requirements by integrating evolutionary models from Game Theory and Institutional Economics.",
"title": ""
},
{
"docid": "dcda412c18e92650d9791023f13e4392",
"text": "Graph can straightforwardly represent the relations between the objects, which inevitably draws a lot of attention of both academia and industry. Achievements mainly concentrate on homogeneous graph and bipartite graph. However, it is difficult to use existing algorithm in actual scenarios. Because in the real world, the type of the objects and the relations are diverse and the amount of the data can be very huge. Considering of the characteristics of \"black market\", we proposeHGsuspector, a novel and scalable algorithm for detecting collective fraud in directed heterogeneous graphs.We first decompose directed heterogeneous graphs into a set of bipartite graphs, then we define a metric on each connected bipartite graph and calculate scores of it, which fuse the structure information and event probability. The threshold for distinguishing between normal and abnormal can be obtained by statistic or other anomaly detection algorithms in scores space. We also provide a technical solution for fraud detection in e-commerce scenario, which has been successfully applied in Jingdong e-commerce platform to detect collective fraud in real time. The experiments on real-world datasets, which has billion nodes and edges, demonstrate that HGsuspector is more accurate and fast than the most practical and state-of-the-art approach by far.",
"title": ""
},
{
"docid": "2d845ef6552b77fb4dd0d784233aa734",
"text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.",
"title": ""
},
{
"docid": "ebeed0f16727adff1d6611ba4f48dde1",
"text": "The research reported here integrates computational, visual and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial and temporal dimensions via clustering, sorting and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 contest data set, which contains time-varying, geographically referenced and multivariate data for technology companies in the US",
"title": ""
},
{
"docid": "6cc7205ad19d3de8fab076a752d82284",
"text": "Visual odometry and mapping methods can provide accurate navigation and comprehensive environment (obstacle) information for autonomous flights of Unmanned Aerial Vehicle (UAV) in GPS-denied cluttered environments. This work presents a new light small-scale low-cost ARM-based stereo vision pre-processing system, which not only is used as onboard sensor to continuously estimate 6-DOF UAV pose, but also as onboard assistant computer to pre-process visual information, thereby saving more computational capability for the onboard host computer of the UAV to conduct other tasks. The visual odometry is done by one plugin specifically developed for this new system with a fixed baseline (12cm). In addition, the pre-processed infromation from this new system are sent via a Gigabit Ethernet cable to the onboard host computer of UAV for real-time environment reconstruction and obstacle detection with a octree-based 3D occupancy grid mapping approach, i.e. OctoMap. The visual algorithm is evaluated with the stereo video datasets from EuRoC Challenge III in terms of efficiency, accuracy and robustness. Finally, the new system is mounted and tested on a real quadrotor UAV to carry out the visual odometry and mapping task.",
"title": ""
},
{
"docid": "8db60612ae500ef2df172a6a1736f58b",
"text": "Errors are prevalent in time series data, such as GPS trajectories or sensor readings. Existing methods focus more on anomaly detection but not on repairing the detected anomalies. By simply filtering out the dirty data via anomaly detection, applications could still be unreliable over the incomplete time series. Instead of simply discarding anomalies, we propose to (iteratively) repair them in time series data, by creatively bonding the beauty of temporal nature in anomaly detection with the widely considered minimum change principle in data repairing. Our major contributions include: (1) a novel framework of iterative minimum repairing (IMR) over time series data, (2) explicit analysis on convergence of the proposed iterative minimum repairing, and (3) efficient estimation of parameters in each iteration. Remarkably, with incremental computation, we reduce the complexity of parameter estimation from O(n) to O(1). Experiments on real datasets demonstrate the superiority of our proposal compared to the state-of-the-art approaches. In particular, we show that (the proposed) repairing indeed improves the time series classification application.",
"title": ""
},
{
"docid": "73f5e4d9011ce7115fd7ff0be5974a14",
"text": "In this work we present, apply, and evaluate a novel, interactive visualization model for comparative analysis of structural variants and rearrangements in human and cancer genomes, with emphasis on data integration and uncertainty visualization. To support both global trend analysis and local feature detection, this model enables explorations continuously scaled from the high-level, complete genome perspective, down to the low-level, structural rearrangement view, while preserving global context at all times. We have implemented these techniques in Gremlin, a genomic rearrangement explorer with multi-scale, linked interactions, which we apply to four human cancer genome data sets for evaluation. Using an insight-based evaluation methodology, we compare Gremlin to Circos, the state-of-the-art in genomic rearrangement visualization, through a small user study with computational biologists working in rearrangement analysis. Results from user study evaluations demonstrate that this visualization model enables more total insights, more insights per minute, and more complex insights than the current state-of-the-art for visual analysis and exploration of genome rearrangements.",
"title": ""
},
{
"docid": "0a732282dc782b8893628697e39c9153",
"text": "Neural networks have had many great successes in recent years, particularly with the advent of deep learning and many novel training techniques. One issue that has prevented reinforcement learning from taking full advantage of scalable neural networks is that of catastrophic forgetting. The latter affects supervised learning systems when highly correlated input samples are presented, as well as when input patterns are non-stationary. However, most real-world problems are non-stationary in nature, resulting in prolonged periods of time separating inputs drawn from different regions of the input space. Unfortunately, reinforcement learning presents a worst-case scenario when it comes to precipitating catastrophic forgetting in neural networks. Meaningful training examples are acquired as the agent explores different regions of its state/action space. When the agent is in one such region, only highly correlated samples from that region are typically acquired. Moreover, the regions that the agent is likely to visit will depend on its current policy, suggesting that an agent that has a good policy may avoid exploring particular regions. The confluence of these factors means that without some mitigation techniques, supervised neural networks as function approximation in temporal-difference learning will only be applicable to the simplest test cases. In this work, we develop a feed forward neural network architecture that mitigates catastrophic forgetting by partitioning the input space in a manner that selectively activates a different subset of hidden neurons for each region of the input space. We demonstrate the effectiveness of the proposed framework on a cart-pole balancing problem for which other neural network architectures exhibit training instability likely due to catastrophic forgetting. We demonstrate that our technique produces better results, particularly with respect to a performance-stability measure.",
"title": ""
},
{
"docid": "1782fc75827937c6b31951bfca997f48",
"text": "Registering 2 or more range scans is a fundamental problem, with application to 3D modeling. While this problem is well addressed by existing techniques such as ICP when the views overlap significantly at a good initialization, no satisfactory solution exists for wide baseline registration. We propose here a novel approach which leverages contour coherence and allows us to align two wide baseline range scans with limited overlap from a poor initialization. Inspired by ICP, we maximize the contour coherence by building robust corresponding pairs on apparent contours and minimizing their distances in an iterative fashion. We use the contour coherence under a multi-view rigid registration framework, and this enables the reconstruction of accurate and complete 3D models from as few as 4 frames. We further extend it to handle articulations, and this allows us to model articulated objects such as human body. Experimental results on both synthetic and real data demonstrate the effectiveness and robustness of our contour coherence based registration approach to wide baseline range scans, and to 3D modeling.",
"title": ""
},
{
"docid": "f827c29bb9dd6073e626b7457775000c",
"text": "Inter vehicular communication is a technology where vehicles act as different nodes to form a network. In a vehicular network different vehicles communicate among each other via wireless access .Authentication is very crucial security service for inter vehicular communication (IVC) in Vehicular Information Network. It is because, protecting vehicles from any attempt to cause damage (misuse) to their private data and the attacks on their privacy. In this survey paper, we investigate the authentication issues for vehicular information network architecture based on the communication principle of named data networking (NDN). This paper surveys the most emerging paradigm of NDN in vehicular information network. So, we aims this survey paper helps to improve content naming, addressing, data aggregation and mobility for IVC in the vehicular information network.",
"title": ""
},
{
"docid": "522efee981fb9eb26ba31d02230604fa",
"text": "The lack of an integrated medical information service model has been considered as a main issue in ensuring the continuity of healthcare from doctors, healthcare professionals to patients; the resultant unavailable, inaccurate, or unconformable healthcare information services have been recognized as main causes to the annual millions of medication errors. This paper proposes an Internet computing model aimed at providing an affordable, interoperable, ease of integration, and systematic approach to the development of a medical information service network to enable the delivery of continuity of healthcare. Web services, wireless, and advanced automatic identification technologies are fully integrated in the proposed service model. Some preliminary research results are presented.",
"title": ""
}
] |
scidocsrr
|
0f22db032b990bf1b1514dedeba51d86
|
NoSQL Database Performance Tuning for IoT Data - Cassandra Case Study
|
[
{
"docid": "bbe43ff06e30a5cf2e9477a60c0bb6ff",
"text": "As the Internet of Things (IoT) paradigm gains popularity, the next few years will likely witness 'servitization' of domain sensing functionalities. We envision a cloud-based eco-system in which high quality data from large numbers of independently-managed sensors is shared or even traded in real-time. Such an eco-system will necessarily have multiple stakeholders such as sensor data providers, domain applications that utilize sensor data (data consumers), and cloud infrastructure providers who may collaborate as well as compete. While there has been considerable research on wireless sensor networks, the challenges involved in building cloud-based platforms for hosting sensor services are largely unexplored. In this paper, we present our vision for data quality (DQ)-centric big data infrastructure for federated sensor service clouds. We first motivate our work by providing real-world examples. We outline the key features that federated sensor service clouds need to possess. This paper proposes a big data architecture in which DQ is pervasive throughout the platform. Our architecture includes a markup language called SDQ-ML for describing sensor services as well as for domain applications to express their sensor feed requirements. The paper explores the advantages and limitations of current big data technologies in building various components of the platform. We also outline our initial ideas towards addressing the limitations.",
"title": ""
},
{
"docid": "8c4e02333f466c074ad332d904f655b9",
"text": "Context. The global communication system is in a tremendous growth, leading to wide range of data generation. The Telecom operators in various Telecom Industries, that generate large amount of data has a need to manage these data efficiently. As the technology involved in the database management systems is increasing, there is a remarkable growth of NoSQL databases in the 20 century. Apache Cassandra is an advanced NoSQL database system, which is popular for handling semi-structured and unstructured format of Big Data. Cassandra has an effective way of compressing data by using different compaction strategies. This research is focused on analyzing the performances of different compaction strategies in different use cases for default Cassandra stress model. The analysis can suggest better usage of compaction strategies in Cassandra, for a write heavy workload. Objectives. In this study, we investigate the appropriate performance metrics to evaluate the performance of compaction strategies. We provide the detailed analysis of Size Tiered Compaction Strategy, Date Tiered Compaction Strategy, and Leveled Compaction Strategy for a write heavy (90/10) work load, using default cassandra stress tool. Methods. A detailed literature research has been conducted to study the NoSQL databases, and the working of different compaction strategies in Apache Cassandra. The performances metrics are considered by the understanding of the literature research conducted, and considering the opinions of supervisors and Ericsson’s Apache Cassandra team. Two different tools were developed for collecting the performances of the considered metrics. The first tool was developed using Jython scripting language to collect the cassandra metrics, and the second tool was developed using python scripting language to collect the Operating System metrics. The graphs have been generated in Microsoft Excel, using the values obtained from the scripts. Results. Date Tiered Compaction Strategy and Size Tiered Compaction strategy showed more or less similar behaviour during the stress tests conducted. Level Tiered Compaction strategy has showed some remarkable results that effected the system performance, as compared to date tiered compaction and size tiered compaction strategies. Date tiered compaction strategy does not perform well for default cassandra stress model. Size tiered compaction can be preferred for default cassandra stress model, but not considerable for big data. Conclusions. With a detailed analysis and logical comparison of metrics, we finally conclude that Level Tiered Compaction Strategy performs better for a write heavy (90/10) workload while using default cassandra stress model, as compared to size tiered compaction and date tiered compaction strategies.",
"title": ""
}
] |
[
{
"docid": "94aa0777f80aa25ec854f159dc3e0706",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "b59a2c49364f3e95a2c030d800d5f9ce",
"text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.",
"title": ""
},
{
"docid": "ba8cddc6ed18f941ed7409524137c28c",
"text": "This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent’s past good decisions. This algorithm is designed to verify our hypothesis that exploiting past good experiences can indirectly drive deep exploration. Our empirical results show that SIL significantly improves advantage actor-critic (A2C) on several hard exploration Atari games and is competitive to the state-of-the-art count-based exploration methods. We also show that SIL improves proximal policy optimization (PPO) on MuJoCo tasks.",
"title": ""
},
{
"docid": "c0d203ef23df86f5a3e9f970dfb1d152",
"text": "We propose a deep learning framework for few-shot image classification, which exploits information across label semantics and image domains, so that regions of interest can be properly attended for improved classification. The proposed semantics-guided attention module is able to focus on most relevant regions in an image, while the attended image samples allow data augmentation and alleviate possible overfitting during FSL training. Promising performances are presented in our experiments, in which we consider both closed and open-world settings. The former considers the test input belong to the categories of few shots only, while the latter requires recognition of all categories of interest.",
"title": ""
},
{
"docid": "af84229b7237e9f85f2273896a808b83",
"text": "Distributed word representation is an efficient method for capturing semantic and syntactic word relations. In this work, we introduce an extension to the continuous bag-of-words model for learning word representations efficiently by using implicit structure information. Instead of relying on a syntactic parser which might be noisy and slow to build, we compute weights representing probabilities of syntactic relations based on the Huffman softmax tree in an efficient heuristic. The constructed “implicit graphs” from these weights show that these weights contain useful implicit structure information. Extensive experiments performed on several word similarity and word analogy tasks show gains compared to the basic continuous bag-of-words model.",
"title": ""
},
{
"docid": "da878e8933c276f675aa5db698904c15",
"text": "Eruptive vellus hair cyst (EVHC) is a rare follicular developmental abnormality of the vellus hair follicles. They are usually seen in children, adolescents, or young adults and manifest as reddish-brown smooth papules most commonly involving the chest, limbs, and abdomen. An 18-year-old male presented with asymptomatic papules on the trunk and flexor aspect of both forearms for the past 2 years. There was no family history of similar lesions. His medical history was also not contributory. A clinical diagnosis of steatocystoma multiplex and chronic folliculitis was given, and a punch biopsy from the papule was performed and sent for histopathological examination. On microscopic examination, a final diagnosis of EVHC was rendered. The patient was advised topical treatment of retinoic acid cream (0.05%) for 6 months, and he is currently under follow-up period. Due to its rarity and resemblance to many similar entities, histopathological examination plays a major role in establishing a definite diagnosis and further proper management of the patient. We report this unusual case to generate awareness about this rarely diagnosed condition.",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "5a82fe10b1c7e2f3d4838c91bba9e6a0",
"text": "The ability to assess an area of interest in 3 dimensions might benefit both novice and experienced clinicians alike. High-resolution limited cone-beam volumetric tomography (CBVT) has been designed for dental applications. As opposed to sliced-image data of conventional computed tomography (CT) imaging, CBVT captures a cylindrical volume of data in one acquisition and thus offers distinct advantages over conventional medical CT. These advantages include increased accuracy, higher resolution, scan-time reduction, and dose reduction. Specific endodontic applications of CBVT are being identified as the technology becomes more prevalent. CBVT has great potential to become a valuable tool in the modern endodontic practice. The objectives of this article are to briefly review cone-beam technology and its advantages over medical CT and conventional radiography, to illustrate current and future clinical applications of cone-beam technology in endodontic practice, and to discuss medicolegal considerations pertaining to the acquisition and interpretation of 3-dimensional data.",
"title": ""
},
{
"docid": "72af95617ff081cf773674ed5aaf7a07",
"text": "Reputation systems are crucial for distributed applications in which users have to be made accountable for their actions, such as ecommerce websites. However, existing systems often disclose the identity of the raters, which might deter honest users from submitting reviews out of fear of retaliation from the ratees. While many privacy-preserving reputation systems have been proposed, we observe that none of them is simultaneously truly decentralized, trustless, and suitable for real world usage in, for example, e-commerce applications. In this paper, we present a blockchain based decentralized privacy-preserving reputation system. We demonstrate that our system provides correctness and security while eliminating the need for users to trust any third parties or even fellow users.",
"title": ""
},
{
"docid": "7a55a1ba8f08cc1cec0db60296c5991d",
"text": "Erik H. Erikson's (1902-1994) theory reflects in part bis psychoanalytic training, but , it embraces society's influence and the social aspects of development to a much larger exrefit than did Freud's. With little more than a German high school education, Erikson attended art schools and traveled in ltaly, apparently in search of 4is own identity. Erikson's later writing popularized the concept of \"identity,\" and he applied it especially to the period of adolescence. After Erikson returned to Germany, where he studied art and prepared to teach art, he was offered a teaching position in a private school in Vienna that served the children of patients of Sigmund and Anna Freud. Peter Blos, a friend of Erikson from the rime they attended the Gymnasium together, also worked as a teacher in the same school and it was Blos's idea to offer Erikson the position. During bis tenure as a teacher, Erikson was invited to undergo psychoanalysis with Anna Freud, and during this process bis interest expanded from art and teaching to also include the study of psychoanalysis. While in Vienna, he also studied Montessori education, which later influenced bis psychoanalytic studies, such as the organization of abjects in space. Erikson graduated from the Vienna Psychoanalytic lnstitute in 1933 as a lay analyst since he held no medical or academic degrees. Later that year, he immigrated to the United States and became associated with the Harvard Psychological Clinic. Erikson bas published extensively, bis best known and most widely read book being Chitdhood and Society, published in 1950 and revised in 1963. Of particular significance to an understanding of adolescence is bis ldentity: Youth and Crisis (1968). Erikson's more recent book, The Life Cycle Compteted (1982), encompasses an integration of much of bis earlier work, but with the explicit purpose of exploring development by beginning with old age and to make sense of the \"completed life cycle.\" He also explained that the new organization reflects bis view that, because aIl stages grow out of previous stages, tracing the antecedents backward would highlight these relationships. The idea of identity formation bas remained the focus of much of bis work and appears in other book titles, such as ldentity",
"title": ""
},
{
"docid": "f4cda3090b5fa40360f4f44ecd577c99",
"text": "We present an approach for large-scale modeling of parametric surfaces using spherical harmonics (SHs). A standard least square fitting (LSF) method for SH expansion is not scalable and cannot accurately model large 3D surfaces. We propose an iterative residual fitting (IRF) algorithm, and demonstrate its effectiveness and scalability in creating accurate SH models for large 3D surfaces. These large-scale and accurate parametric models can be used in many applications in computer vision, graphics, and biomedical imaging. As a simple extension of LSF, IRF is very easy to implement and requires few machine resources.",
"title": ""
},
{
"docid": "d2f7f7a355f133a8e5f40c67ca42a076",
"text": "In present times, giving a computer to carry out any task requires a set of specific instructions or the implementation of an algorithm that defines the rules that need to be followed. The present day computer system has no ability to learn from past experiences and hence cannot readily improve on the basis of past mistakes. So, giving a computer or instructing a computer controlled programme to perform a task requires one to define a complete and correct algorithm for task and then programme the algorithm into the computer. Such activities involve tedious and time consuming effort by specially trained teacher or person. Jaime et al (Jaime G. Carbonell, 1983) also explained that the present day computer systems cannot truly learn to perform a task through examples or through previous solved task and they cannot improve on the basis of past mistakes or acquire new abilities by observing and imitating experts. Machine Learning research endeavours to open the possibility of instruction the computer in such a new way and thereby promise to ease the burden of hand writing programmes and growing problems of complex information that get complicated in the computer. When approaching a task-oriented acquisition task, one must be aware that the resultant computer system must interact with human and therefore should closely match human abilities. So, learning machine or programme on the other hand will have to interact with computer users who make use of them and consequently the concept and skills they acquireif not necessarily their internal mechanism must be understandable to humans. Also Alpaydin (Alpaydin, 2004) stated that with advances in computer technology, we currently have the ability to store and process large amount of data, as well as access it from physically distant locations over computer network. Most data acquisition devices are digital now and record reliable data. For example, a supermarket chain that has hundreds of stores all over the country selling thousands of goods to millions of customers. The point of sale terminals record the details of each transaction: date, customer identification code, goods bought and their amount, total money spent and so forth, This typically amounts to gigabytes of data every day. This store data becomes useful only when it is analysed and tuned into information that can be used or be predicted. We do not know exactly which people are likely to buy a particular product or which author to suggest to people who enjoy reading Hemingway. If we knew, we would not need any analysis of the data; we would just go ahead and write down code. But because we do not, we can only collect data and hope to extract the answers to these and similar question from 1",
"title": ""
},
{
"docid": "f5b85ce051a97bee29a1c921e3146bc0",
"text": "BACKGROUND\nUnderstanding how environmental attributes can influence particular physical activity behaviors is a public health research priority. Walking is the most common physical activity behavior of adults; environmental innovations may be able to influence rates of participation.\n\n\nMETHOD\nReview of studies on relationships of objectively assessed and perceived environmental attributes with walking. Associations with environmental attributes were examined separately for exercise and recreational walking, walking to get to and from places, and total walking.\n\n\nRESULTS\nEighteen studies were identified. Aesthetic attributes, convenience of facilities for walking (sidewalks, trails); accessibility of destinations (stores, park, beach); and perceptions about traffic and busy roads were found to be associated with walking for particular purposes. Attributes associated with walking for exercise were different from those associated with walking to get to and from places.\n\n\nCONCLUSIONS\nWhile few studies have examined specific environment-walking relationships, early evidence is promising. Key elements of the research agenda are developing reliable and valid measures of environmental attributes and walking behaviors, determining whether environment-behavior relationships are causal, and developing theoretical models that account for environmental influences and their interactions with other determinants.",
"title": ""
},
{
"docid": "4ad106897a19830c80a40e059428f039",
"text": "In 1972, and later in 1979, at the peak of the golden era of Good Old Fashioned Artificial Intelligence (GOFAI), the voice of philosopher Hubert Dreyfus made itself heard as one of the few calls against the hubristic programme of modelling the human mind as a mechanism of symbolic information processing (Dreyfus, 1979). He did not criticise particular solutions to specific problems; instead his deep concern was with the very foundations of the programme. His critical stance was unusual, at least for most GOFAI practitioners, in that it did not rely on technical issues, but on a philosophical position emanating from phenomenology and existentialism, a fact contributing to his claims being largely ignored or dismissed for a long time by the AI community. But, for the most part, he was eventually proven right. AI’s over-reliance on worldmodelling and planning went against the evidence provided by phenomenology of human activity as situated and with a clear and ever-present focus of practical concern – the body and not some algorithm is the originating locus of intelligent activity (if by intelligent we understand intentional, directed and flexible), and the world is not the sum total of all available facts, but the world-as-it-is-for-this-body. Such concerns were later vindicated by the Brooksian revolution in autonomous robotics with its foundations on embodiment, situatedness and de-centralised mechanisms (Brooks, 1991). Brooks’ practical and methodological preoccupations – building robots largely based on biologically plausible principles and capable of acting in the real world – proved parallel, despite his claim that his approach was not “German philosophy”, to issues raised by Dreyfus. Putting robotics back as the acid test of AI, as oppossed to playing chess and proving theorems, is now often seen as a positive response to Dreyfus’ point that AI was unable to capture true meaning by the summing of meaningless processes. This criticism was later devastatingly recast in Searle’s Chinese Room argument (1980), and extended by Harnad’s Symbol Grounding Problem (1990). Meaningful activity – that is, meaningful for the agent and not only for the designer – must obtain through sensorimotor grounding in the agent’s world, and for this both a body and world are needed. Following these developments, work in autonomous robotics and new AI since the 1990s rebelled against pure connectionism because of its lack of biological plausibility and also because most of connectionist research was carried out in vacuo – it was compellingly argued that neural network models as simple input/output processing units are meaningless for modelling the cognitive capabilities of insects, let alone humans, unless they are embedded in a closed sensorimotor loop of interaction with a world (Cliff, 1991). Objective meaning, that is meaningful internal states and states of the world, can only obtain in an embodied agent whose effector and sensor activities become coordinated",
"title": ""
},
{
"docid": "67925645b590cba622dd101ed52cf9e2",
"text": "This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from \"thin slices\" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used these excerpts to complete assessments of overall psychopathy and its Factor 1 and Factor 2 components, various personality disorders, violence proneness, and attractiveness. Thin-slice ratings of psychopathy correlated moderately and significantly with psychopathy criterion measures, especially those related to interpersonal features of psychopathy, particularly in the 5- and 10-s excerpt conditions and in the video and combined channel conditions. These findings demonstrate that first impressions of psychopathy and related constructs, particularly those pertaining to interpersonal functioning, can be reasonably reliable and valid. They also raise intriguing questions regarding how individuals form first impressions and about the extent to which first impressions may influence the assessment of personality disorders. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "c77494588aa7fb12235e131b20faa4e4",
"text": "A multiband planar monopole antenna fed by microstrip line feed with Defected Ground Structure (DGS) is presented for simultaneously satisfying wireless local area network (WLAN) and worldwide interoperability for microwave access (WiMAX) applications. The proposed antenna consists of a rectangular microstrip patch with rectangular slit, including the circular defect etched on the ground plane forming DGS structure. The soft nature of the DGS facilitates improvement in the performance of microstrip antennas. The simulated -10 dB bandwidth for return loss is from 2. 9-3. 77 GHz, 3. 91-6. 36, covering the WLAN: 5. 15–5. 35 and 5. 725–5. 85 GHz and WiMAX: 3. 3–3. 8 and 5. 25–5. 85 GHz bands. The design and optimization of DGS structures along with the parametric study were carried out using IE3D ZELAND which is based on method of moment.",
"title": ""
},
{
"docid": "acc700d965586f5ea65bdcb67af38fca",
"text": "OBJECTIVE\nAttention deficit hyperactivity disorder (ADHD) symptoms are associated with the deficit in executive functions. Playing Go involves many aspect of cognitive function and we hypothesized that it would be effective for children with ADHD.\n\n\nMETHODS\nSeventeen drug naïve children with ADHD and seventeen age and sex matched comparison subjects were participated. Participants played Go under the instructor's education for 2 hours/day, 5 days/week. Before and at the end of Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with Dupaul's ADHD scale (ARS), Child depression inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea).\n\n\nRESULTS\nThere were significant improvements of ARS total score (z=2.93, p<0.01) and inattentive score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvement of digit total score (z=2.60, p<0.01; z=2.06, p=0.03), digit forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both ADHD and healthy comparisons. In addition, ADHD children showed decreased time of CCTT-2 (z=2.21, p=0.03). The change of theta/beta right of prefrontal cortex during 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change of right theta/beta in prefrontal cortex has a positive correlation with ARS-inattention score in children with ADHD (r=0.44, p=0.03).\n\n\nCONCLUSION\nWe suggest that playing Go would be effective for children with ADHD by activating hypoarousal prefrontal function and enhancing executive function.",
"title": ""
},
{
"docid": "2b9733f936f39d0bb06b8f89a95f31e4",
"text": "In order to improve the three-dimensional (3D) exploration of virtual spaces above a tabletop, we developed a set of navigation techniques using a handheld magic lens. These techniques allow for an intuitive interaction with two-dimensional and 3D information spaces, for which we contribute a classification into volumetric, layered, zoomable, and temporal spaces. The proposed PaperLens system uses a tracked sheet of paper to navigate these spaces with regard to the Z-dimension (height above the tabletop). A formative user study provided valuable feedback for the improvement of the PaperLens system with respect to layer interaction and navigation. In particular, the problem of keeping the focus on selected layers was addressed. We also propose additional vertical displays in order to provide further contextual clues.",
"title": ""
},
{
"docid": "7e5b18a0356a89a0285f80a2224d8b12",
"text": "Machine recognition of a handwritten mathematical expression (HME) is challenging due to the ambiguities of handwritten symbols and the two-dimensional structure of mathematical expressions. Inspired by recent work in deep learning, we present Watch, Attend and Parse (WAP), a novel end-to-end approach based on neural network that learns to recognize HMEs in a two-dimensional layout and outputs them as one-dimensional character sequences in LaTeX format. Inherently unlike traditional methods, our proposed model avoids problems that stem from symbol segmentation, and it does not require a predefined expression grammar. Meanwhile, the problems of symbol recognition and structural analysis are handled, respectively, using a watcher and a parser. We employ a convolutional neural network encoder that takes HME images as input as the watcher and employ a recurrent neural network decoder equipped with an attention mechanism as the parser to generate LaTeX sequences. Moreover, the correspondence between the input expressions and the output LaTeX sequences is learned automatically by the attention mechanism. We validate the proposed approach on a benchmark published by the CROHME international competition. Using the official training dataset, WAP significantly outperformed the state-of-the-art method with an expression recognition accuracy of 46.55% on CROHME 2014 and 44.55% on CROHME 2016. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8eb62d4fdc1be402cd9216352cb7cfc3",
"text": "In an attempt to better understand generalization in deep learning, we study several possible explanations. We show that implicit regularization induced by the optimization method is playing a key role in generalization and success of deep learning models. Motivated by this view, we study how different complexity measures can ensure generalization and explain how optimization algorithms can implicitly regularize complexity measures. We empirically investigate the ability of these measures to explain different observed phenomena in deep learning. We further study the invariances in neural networks, suggest complexity measures and optimization algorithms that have similar invariances to those in neural networks and evaluate them on a number of learning tasks. Thesis Advisor: Nathan Srebro Title: Professor",
"title": ""
}
] |
scidocsrr
|
e7b348bdd5435c5867447254f105b01f
|
Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation
|
[
{
"docid": "4f400f8e774ebd050ba914011da73514",
"text": "This paper summarizes the method of polyp detection in colonoscopy images and provides preliminary results to participate in ISBI 2015 Grand Challenge on Automatic Polyp Detection in Colonoscopy videos. The key aspect of the proposed method is to learn hierarchical features using convolutional neural network. The features are learned in different scales to provide scale-invariant features through the convolutional neural network, and then each pixel in the colonoscopy image is classified as polyp pixel or non-polyp pixel through fully connected network. The result is refined via smooth filtering and thresholding step. Experimental result shows that the proposed neural network can classify patches of polyp and non-polyp region with an accuracy of about 90%.",
"title": ""
}
] |
[
{
"docid": "589078a80d4034d4929676d359c16398",
"text": "This paper describes the University of Sheffield’s submission for the WMT16 Multimodal Machine Translation shared task, where we participated in Task 1 to develop German-to-English and Englishto-German statistical machine translation (SMT) systems in the domain of image descriptions. Our proposed systems are standard phrase-based SMT systems based on the Moses decoder, trained only on the provided data. We investigate how image features can be used to re-rank the n-best list produced by the SMT model, with the aim of improving performance by grounding the translations on images. Our submissions are able to outperform the strong, text-only baseline system for both directions.",
"title": ""
},
{
"docid": "9c05452b964c67b8f79ce7dfda4a72e5",
"text": "The Internet is evolving rapidly toward the future Internet of Things (IoT) which will potentially connect billions or even trillions of edge devices which could generate huge amount of data at a very high speed and some of the applications may require very low latency. The traditional cloud infrastructure will run into a series of difficulties due to centralized computation, storage, and networking in a small number of datacenters, and due to the relative long distance between the edge devices and the remote datacenters. To tackle this challenge, edge cloud and edge computing seem to be a promising possibility which provides resources closer to the resource-poor edge IoT devices and potentially can nurture a new IoT innovation ecosystem. Such prospect is enabled by a series of emerging technologies, including network function virtualization and software defined networking. In this survey paper, we investigate the key rationale, the state-of-the-art efforts, the key enabling technologies and research topics, and typical IoT applications benefiting from edge cloud. We aim to draw an overall picture of both ongoing research efforts and future possible research directions through comprehensive discussions.",
"title": ""
},
{
"docid": "b82c2865524e34fd61f1555fc9ba5fbf",
"text": "Optimization of decision problems in stochastic environments is usually concerned with maximizing the probability of achieving the goal and minimizing the expected episode length. For interacting agents in time-critical applications, learning of the possibility of scheduling of subtasks (events) or the full task is an additional relevant issue. Besides, there exist highly stochastic problems where the actual trajectories show great variety from episode to episode, but completing the task takes almost the same amount of time. The identification of sub-problems of this nature may promote e.g., planning, scheduling and segmenting Markov decision processes. In this work, formulae for the average duration as well as the standard deviation of the duration of events are derived. We show, that the emerging Bellman-type equation is a simple extension of Sobel’s work (1982) and that methods of dynamic programming as well as methods of reinforcement learning can be applied. Computer demonstration on a toy problem serve to highlight the principle.",
"title": ""
},
{
"docid": "0e002aae88332f8143e6f3a19c4c578b",
"text": "While attachment research has demonstrated that parents' internal working models of attachment relationships tend to be transmitted to their children, affecting children's developmental trajectories, this study specifically examines associations between adult attachment status and observable parent, child, and dyadic behaviors among children with autism and associated neurodevelopmental disorders of relating and communicating. The Adult Attachment Interview (AAI) was employed to derive parental working models of attachment relationships. The Functional Emotional Assessment Scale (FEAS) was used to determine the quality of relational and functional behaviors in parents and their children. The sample included parents and their 4- to 16-year-old children with autism and associated neurodevelopmental disorders. Hypothesized relationships between AAI classifications and FEAS scores were supported. Significant correlations were found between AAI classification and FEAS scores, indicating that children with autism spectrum disorders whose parents demonstrated secure attachment representations were better able to initiate and respond in two-way pre-symbolic gestural communication; organize two-way social problem-solving communication; and engage in imaginative thinking, symbolic play, and verbal communication. These findings lend support to the relevance of the parent's state of mind pertaining to attachment status to child and parent relational behavior in cases wherein the child has been diagnosed with autism or an associated neurodevelopmental disorder of relating and communicating. A model emerges from these findings of conceptualizing relationships between parental internal models of attachment relationships and parent-child relational and functional levels that may aid in differentiating interventions.",
"title": ""
},
{
"docid": "cc9741eb6e5841ddf10185578f26a077",
"text": "The context of prepaid mobile telephony is specific in the way that customers are not contractually linked to their operator and thus can cease their activity without notice. In order to estimate the retention efforts which can be engaged towards each individual customer, the operator must distinguish the customers presenting a strong churn risk from the other. This work presents a data mining application leading to a churn detector. We compare artificial neural networks (ANN) which have been historically applied to this problem, to support vectors machines (SVM) which are particularly effective in classification and adapted to noisy data. Thus, the objective of this article is to compare the application of SVM and ANN to churn detection in prepaid cellular telephony. We show that SVM gives better results than ANN on this specific problem.",
"title": ""
},
{
"docid": "752eea750f91318c3c45d250059cb597",
"text": "To estimate the value functions of policies from exploratory data, most model-free offpolicy algorithms rely on importance sampling, where the use of importance sampling ratios often leads to estimates with severe variance. It is thus desirable to learn off-policy without using the ratios. However, such an algorithm does not exist for multi-step learning with function approximation. In this paper, we introduce the first such algorithm based on temporal-difference (TD) learning updates. We show that an explicit use of importance sampling ratios can be eliminated by varying the amount of bootstrapping in TD updates in an action-dependent manner. Our new algorithm achieves stability using a two-timescale gradient-based TD update. A prior algorithm based on lookup table representation called Tree Backup can also be retrieved using action-dependent bootstrapping, becoming a special case of our algorithm. In two challenging off-policy tasks, we demonstrate that our algorithm is stable, effectively avoids the large variance issue, and can perform substantially better than its state-of-the-art counterpart.",
"title": ""
},
{
"docid": "ddc556ae150e165dca607e4a674583ae",
"text": "Increasing patient numbers, changing demographics and altered patient expectations have all contributed to the current problem with 'overcrowding' in emergency departments (EDs). The problem has reached crisis level in a number of countries, with significant implications for patient safety, quality of care, staff 'burnout' and patient and staff satisfaction. There is no single, clear definition of the cause of overcrowding, nor a simple means of addressing the problem. For some hospitals, the option of ambulance diversion has become a necessity, as overcrowded waiting rooms and 'bed-block' force emergency staff to turn patients away. But what are the options when ambulance diversion is not possible? Christchurch Hospital, New Zealand is a tertiary level facility with an emergency department that sees on average 65,000 patients per year. There are no other EDs to whom patients can be diverted, and so despite admission rates from the ED of up to 48%, other options need to be examined. In order to develop a series of unified responses, which acknowledge the multifactorial nature of the problem, the Emergency Department Cardiac Analogy model of ED flow, was developed. This model highlights the need to intervene at each of three key points, in order to address the issue of overcrowding and its associated problems.",
"title": ""
},
{
"docid": "d7e61562c913fa9fa265fd8ef5288cb5",
"text": "For our project, we consider the task of classifying the gender of an author of a blog, novel, tweet, post or comment. Previous attempts have considered traditional NLP models such as bag of words and n-grams to capture gender differences in authorship, and apply it to a specific media (e.g. formal writing, books, tweets, or blogs). Our project takes a novel approach by applying deep learning models developed by Lai et al to directly learn the gender of blog authors. We further refine their models and present a new deep learning model, the Windowed Recurrent Convolutional Neural Network (WRCNN), for gender classification. Our approaches are tested and trained on several datasets: a blog dataset used by Mukherjee et al, and two datasets representing 19th and 20th century authors, respectively. We report an accuracy of 86% on the blog dataset with our WRCNN model, comparable with state-of-the-art implementations.",
"title": ""
},
{
"docid": "7115c7f17faa8712dbdeac631f022ae4",
"text": "Scientific workflows, like other applications, benefit from the cloud computing, which offers access to virtually unlimited resources provisioned elastically on demand. In order to efficiently execute a workflow in the cloud, scheduling is required to address many new aspects introduced by cloud resource provisioning. In the last few years, many techniques have been proposed to tackle different cloud environments enabled by the flexible nature of the cloud, leading to the techniques of different designs. In this paper, taxonomies of cloud workflow scheduling problem and techniques are proposed based on analytical review. We identify and explain the aspects and classifications unique to workflow scheduling in the cloud environment in three categories, namely, scheduling process, task and resource. Lastly, review of several scheduling techniques are included and classified onto the proposed taxonomies. We hope that our taxonomies serve as a stepping stone for those entering this research area and for further development of scheduling technique.",
"title": ""
},
{
"docid": "0d83d1dc97d65d9aa4969e016a360451",
"text": "This paper proposes and evaluates a novel analytical performance model to study the efficiency and scalability of software-defined infrastructure (SDI) to host adaptive applications. The SDI allows applications to communicate their adaptation requirements at run-time. Adaptation scenarios require computing and networking resources to be provided to applications in a timely manner to facilitate seamless service delivery. Our analytical model yields the response time of realizing adaptations on the SDI and reveals the scalability limitations. We conduct extensive testbed experiments on a cloud environment to verify the accuracy and fidelity of the model. Cloud service providers can leverage the proposed model to perform capacity planning and bottleneck analysis when they accommodate adaptive applications.",
"title": ""
},
{
"docid": "bdde191440caa21c1f162ffa70f8075f",
"text": "There is a strong trend in using permanent magnet synchronous machines for very high speed, high power applications due to their high efficiencies, versatility and compact nature. To increase power output for a given speed, rotor design becomes critical in order to maximize rotor volume and hence torque output for a given electrical loading and cooling capability. The two main constraints on rotor volume are mechanical, characterized by stresses in the rotor and resonant speeds of the rotor assembly. The level of mechanical stresses sustained in rotors increases with their radius and speed and, as this is pushed higher, previously minor effects become important in rotor design. This paper describes an observed shear stress concentration in sleeved permanent magnet rotors, caused by the Poisson effect, which can lead to magnet cracking and rotor failure. A simple analytical prediction of the peak shear stress is presented and methods for mitigating it are recommended.",
"title": ""
},
{
"docid": "41b8fb6fd9237c584ce0211f94a828be",
"text": "Over the last few years, two of the main research directions in machine learning of natural language processing have been the study of semi-supervised learning algorithms as a way to train classifiers when the labeled data is scarce, and the study of ways to exploit knowledge and global information in structured learning tasks. In this paper, we suggest a method for incorporating domain knowledge in semi-supervised learning algorithms. Our novel framework unifies and can exploit several kinds of task specific constraints. The experimental results presented in the information extraction domain demonstrate that applying constraints helps the model to generate better feedback during learning, and hence the framework allows for high performance learning with significantly less training data than was possible before on these tasks.",
"title": ""
},
{
"docid": "159222cde67c2d08e0bde7996b422cd6",
"text": "Superficial thrombophlebitis of the dorsal vein of the penis, known as penile Mondor’s disease, is an uncommon genital disease. We report on a healthy 44-year-old man who presented with painful penile swelling, ecchymosis, and penile deviation after masturbation, which initially imitated a penile fracture. Thrombosis of the superficial dorsal vein of the penis without rupture of corpus cavernosum was found during surgical exploration. The patient recovered without erectile dysfunction.",
"title": ""
},
{
"docid": "71ff52158a45b1869500630cd5cb041b",
"text": "Heat shock proteins (HSPs) are a set of highly conserved proteins that can serve as intestinal gate keepers in gut homeostasis. Here, effects of a probiotic, Lactobacillus rhamnosus GG (LGG), and two novel porcine isolates, Lactobacillus johnsonii strain P47-HY and Lactobacillus reuteri strain P43-HUV, on cytoprotective HSP expression and gut barrier function, were investigated in a porcine IPEC-J2 intestinal epithelial cell line model. The IPEC-J2 cells polarized on a permeable filter exhibited villus-like cell phenotype with development of apical microvilli. Western blot analysis detected HSP expression in IPEC-J2 and revealed that L. johnsonii and L. reuteri strains were able to significantly induce HSP27, despite high basal expression in IPEC-J2, whereas LGG did not. For HSP72, only the supernatant of L. reuteri induced the expression, which was comparable to the heat shock treatment, which indicated that HSP72 expression was more stimulus specific. The protective effect of lactobacilli was further studied in IPEC-J2 under an enterotoxigenic Escherichia coli (ETEC) challenge. ETEC caused intestinal barrier destruction, as reflected by loss of cell-cell contact, reduced IPEC-J2 cell viability and transepithelial electrical resistance, and disruption of tight junction protein zonula occludens-1. In contrast, the L. reuteri treatment substantially counteracted these detrimental effects and preserved the barrier function. L. johnsonii and LGG also achieved barrier protection, partly by directly inhibiting ETEC attachment. Together, the results indicate that specific strains of Lactobacillus can enhance gut barrier function through cytoprotective HSP induction and fortify the cell protection against ETEC challenge through tight junction protein modulation and direct interaction with pathogens.",
"title": ""
},
{
"docid": "53c0564d82737d51ca9b7ea96a624be4",
"text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.",
"title": ""
},
{
"docid": "8ff481b3b35b74356d876c28513dc703",
"text": "This paper describes the ScratchJr research project, a collaboration between Tufts University's Developmental Technologies Research Group, MIT's Lifelong Kindergarten Group, and the Playful Invention Company. Over the past five years, dozens of ScratchJr prototypes have been designed and studied with over 300 K-2nd grade students, teachers and parents. ScratchJr allows children ages 5 to 7 years to explore concepts of computer programming and digital content creation in a safe and fun environment. This paper describes the progression of major prototypes leading to the current public version, as well as the educational resources developed for use with ScratchJr. Future directions and educational implications are also discussed.",
"title": ""
},
{
"docid": "7832707feef1e81c3a01e974c37a960b",
"text": "Most current commercial automated fingerprint-authentication systems on the market are based on the extraction of the fingerprint minutiae, and on medium resolution (500 dpi) scanners. Sensor manufacturers tend to reduce the sensing area in order to adapt it to low-power mobile hand-held communication systems and to lower the cost of their devices. An interesting alternative is designing a novel fingerprintauthentication system capable of dealing with an image from a small, high resolution (1000 dpi) sensor area based on combined level 2 (minutiae) and level 3 (sweat pores) feature extraction. In this paper, we propose a new strategy and implementation of a series of techniques for automatic level 2 and level 3 feature extraction in fragmentary fingerprint comparison. The main challenge in achieving high reliability while using a small portion of a fingerprint for matching is that there may not be a sufficient number of minutiae but the uniqueness of the pore configurations provides a powerful means to compensate for this insufficiency. A pilot study performed to test the presented approach confirms the efficacy of using pores in addition to the traditionally used minutiae in fragmentary fingerprint comparison.",
"title": ""
},
{
"docid": "d18d4780cc259da28da90485bd3f0974",
"text": "L'ostéogenèse imparfaite (OI) est un groupe hétérogène de maladies affectant le collagène de type I et caractérisées par une fragilité osseuse. Les formes létales sont rares et se caractérisent par une micromélie avec déformation des membres. Un diagnostic anténatal d'OI létale a été fait dans deux cas, par échographie à 17 et à 25 semaines d'aménorrhée, complélées par un scanner du squelette fœtal dans un cas. Une interruption thérapeutique de grossesse a été indiquée dans les deux cas. Pan African Medical Journal. 2016; 25:88 doi:10.11604/pamj.2016.25.88.5871 This article is available online at: http://www.panafrican-med-journal.com/content/article/25/88/full/ © Houda EL Mhabrech et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access",
"title": ""
},
{
"docid": "bee25514d15321f4f0bdcf867bb07235",
"text": "We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full FrankWolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online algorithm that has the same low iteration complexity as primal stochastic subgradient methods. However, unlike stochastic subgradient methods, the block-coordinate FrankWolfe algorithm allows us to compute the optimal step-size and yields a computable duality gap guarantee. Our experiments indicate that this simple algorithm outperforms competing structural SVM solvers.",
"title": ""
},
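The block-coordinate Frank-Wolfe step described in the record above can be illustrated with a small sketch. This is a minimal illustration, not the paper's structural SVM solver: it minimizes a toy quadratic over a product of probability simplices, uses the simplex linear minimization oracle (put all mass on the coordinate with the smallest partial gradient), and the step size 2n/(k+2n) from the block-coordinate analysis. The objective, dimensions, and seed are invented for demonstration.

```python
import numpy as np

# Toy objective: f(x) = 0.5*||A x - b||^2 with x = (x_1,...,x_n),
# each block x_i constrained to the probability simplex of size d.
rng = np.random.default_rng(0)
n, d = 5, 4                       # number of blocks, block dimension
A = rng.normal(size=(8, n * d))
b = rng.normal(size=8)

def grad(x):
    return A.T @ (A @ x - b)

x = np.tile(np.ones(d) / d, n)    # feasible start: uniform point on every simplex

for k in range(500):
    i = rng.integers(n)                   # pick one block at random
    sl = slice(i * d, (i + 1) * d)
    g = grad(x)[sl]                       # partial gradient for block i
    s = np.zeros(d)
    s[np.argmin(g)] = 1.0                 # simplex LMO: vertex with smallest gradient entry
    gamma = 2 * n / (k + 2 * n)           # step size suggested by the BCFW analysis
    x[sl] = (1 - gamma) * x[sl] + gamma * s

print("objective:", 0.5 * np.sum((A @ x - b) ** 2))
```

For the structural SVM dual discussed in the abstract, the linear minimization oracle would instead be a loss-augmented decoding call for one training example, and the step size can be computed by exact line search.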
{
"docid": "b418734faef12396bbcef4df356c6fb6",
"text": "Active learning techniques were employed for classification of dialogue acts over two dialogue corpora, the English humanhuman Switchboard corpus and the Spanish human-machine Dihana corpus. It is shown clearly that active learning improves on a baseline obtained through a passive learning approach to tagging the same data sets. An error reduction of 7% was obtained on Switchboard, while a factor 5 reduction in the amount of labeled data needed for classification was achieved on Dihana. The passive Support Vector Machine learner used as baseline in itself significantly improves the state of the art in dialogue act classification on both corpora. On Switchboard it gives a 31% error reduction compared to the previously best reported result.",
"title": ""
}
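The active learning setup described above can be sketched with a standard pool-based uncertainty-sampling loop around an SVM, which is one common instantiation of active versus passive learning for classification. The dialogue-act corpora themselves are not reproduced here; synthetic features stand in for Switchboard or Dihana utterances, and the margin-based selection rule is an assumption rather than the paper's exact criterion.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Stand-in for dialogue-act features and labels.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
labeled = list(range(40))                      # small seed set
pool = [i for i in range(len(y)) if i not in labeled]

for round_ in range(20):
    clf = LinearSVC(max_iter=5000).fit(X[labeled], y[labeled])
    scores = clf.decision_function(X[pool])
    ordered = np.sort(scores, axis=1)
    uncertainty = ordered[:, -1] - ordered[:, -2]   # small margin = ambiguous example
    pick = pool[int(np.argmin(uncertainty))]        # query the most ambiguous example
    labeled.append(pick)                            # oracle label looked up from y
    pool.remove(pick)

print("labeled examples used:", len(labeled))
```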
] |
scidocsrr
|
ba845b72e11c6b1011ecd1ef99cf42ba
|
Predicting human resting-state functional connectivity from structural connectivity
|
[
{
"docid": "576f63258e468f2454aa4e8e30a9c770",
"text": "A biomechanical model is presented for the dynamic changes in deoxyhemoglobin content during brain activation. The model incorporates the conflicting effects of dynamic changes in both blood oxygenation and blood volume. Calculations based on the model show pronounced transients in the deoxyhemoglobin content and the blood oxygenation level dependent (BOLD) signal measured with functional MRI, including initial dips and overshoots and a prolonged poststimulus undershoot of the BOLD signal. Furthermore, these transient effects can occur in the presence of tight coupling of cerebral blood flow and oxygen metabolism throughout the activation period. An initial test of the model against experimental measurements of flow and BOLD changes during a finger-tapping task showed good agreement.",
"title": ""
}
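The biomechanical (balloon) model summarized above couples blood volume and deoxyhemoglobin dynamics to inflow and maps them to a BOLD signal. The sketch below uses a common later parameterization of this idea (flow-dependent oxygen extraction, volume and deoxyhemoglobin state equations, and a standard BOLD observation equation); the constants and the stimulus are illustrative and are not taken from the original paper.

```python
import numpy as np

# Illustrative balloon-model simulation; parameter values are examples only.
tau, alpha, E0, V0 = 2.0, 0.32, 0.4, 0.03
k1, k2, k3 = 7 * E0, 2.0, 2 * E0 - 0.2
dt, T = 0.01, 40.0
t = np.arange(0, T, dt)
f_in = 1.0 + 0.6 * ((t > 5) & (t < 25))   # normalized inflow: step increase during activation

v = np.ones_like(t)   # normalized venous blood volume
q = np.ones_like(t)   # normalized deoxyhemoglobin content
for i in range(1, len(t)):
    f_out = v[i - 1] ** (1.0 / alpha)                # outflow driven by balloon volume
    E = 1.0 - (1.0 - E0) ** (1.0 / f_in[i - 1])      # oxygen extraction falls as flow rises
    dv = (f_in[i - 1] - f_out) / tau
    dq = (f_in[i - 1] * E / E0 - f_out * q[i - 1] / v[i - 1]) / tau
    v[i] = v[i - 1] + dt * dv
    q[i] = q[i - 1] + dt * dq

bold = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
print("peak BOLD %.4f, post-stimulus undershoot %.4f" % (bold.max(), bold.min()))
```

Even with inflow and metabolism tightly coupled, the mismatch between fast flow changes and slower volume recovery produces the transients (initial dip, overshoot, post-stimulus undershoot) the abstract describes.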
] |
[
{
"docid": "c82f3a7a6b8670c74d92cbeb7bb0a5da",
"text": "Thesauri for science and technology information are increasingly used in bibliometrics and scientometrics. However, the manual construction and maintenance of thesauri is costly and time consuming, thus, methods for semi-automatic construction and maintenance are being actively studied. We propose a method that expands an existing thesaurus with specified terms extracted from the abstracts of articles. Specifically, we assign the terms to specified subcategories by clustering a word vector space, then determine the hyponyms and hypernyms based on their relations with terms in the sub-categories. The word vectors are constructed from 177,000 IEEE articles archived from 2012 to 2014 in the Scopus dataset. In experiments, the terms were correctly classified into the Japan Science and Technology thesaurus with 70.8% precision and 75.4% recall. In future, we will develop a semiautomatic thesaurus maintenance system that recommends new terms in their proper relative positions.",
"title": ""
},
{
"docid": "1b8afad1b27c5febbd256e00300b3178",
"text": "Psychosocial risks at the workplace is a well-researched subject from a managerial and organisational point of view. However, the relation of psychosocial risks to Information Security has not been formally studied to the extent required by the gravity of the topic. An attempt is made to highlight the nature of psychosocial risks and provide examples of their effects on Information Security. The foundation is thus set for methodologies of assessment and mitigation and suggestions are made on future research directions.",
"title": ""
},
{
"docid": "920b3c1264ad303bbb1a263ecf7c1162",
"text": "Nowadays, operational quality and robustness of cellular networks are among the hottest topics wireless communications research. As a response to a growing need in reduction of expenses for mobile operators, 3rd Generation Partnership Project (3GPP) initiated work on Minimization of Drive Tests (MDT). There are several major areas of standardization related to MDT, such as coverage, capacity, mobility optimization and verification of end user quality [1]. This paper presents results of the research devoted to Quality of Service (QoS) verification for MDT. The main idea is to jointly observe the user experienced QoS in terms of throughput, and corresponding radio conditions. Also the necessity to supplement the existing MDT metrics with the new reporting types is elaborated.",
"title": ""
},
{
"docid": "a4268c77c3f51ca8d05fa0d108682883",
"text": "In this paper, we propose a locality-constrained and sparsity-encouraged manifold fitting approach, aiming at capturing the locally sparse manifold structure into neighborhood graph construction by exploiting a principled optimization model. The proposed model formulates neighborhood graph construction as a sparse coding problem with the locality constraint, therefore achieving simultaneous neighbor selection and edge weight optimization. The core idea underlying our model is to perform a sparse manifold fitting task for each data point so that close-by points lying on the same local manifold are automatically chosen to connect and meanwhile the connection weights are acquired by simple geometric reconstruction. We term the novel neighborhood graph generated by our proposed optimization model M-Fitted Graph since such a graph stems from sparse manifold fitting. To evaluate the robustness and effectiveness of M-fitted graphs, we leverage graph-based semisupervised learning as the testbed. Extensive experiments carried out on six benchmark datasets validate that the proposed M-fitted graph is superior to state-of-the-art neighborhood graphs in terms of classification accuracy using popular graph-based semi-supervised learning methods.",
"title": ""
},
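A compact reading of the graph construction described above is: for each data point, solve a locality-constrained sparse reconstruction over its nearest neighbors and use the resulting coefficients as edge weights. The sketch below is a simplified stand-in for the paper's optimization model; the use of scikit-learn's Lasso, the neighborhood size, the non-negativity constraint, and the symmetrization step are assumptions made for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.linear_model import Lasso

def sparse_fitted_graph(X, k=10, alpha=1e-3):
    """Build a sparse affinity matrix by reconstructing each point from its k neighbors."""
    n = X.shape[0]
    W = np.zeros((n, n))
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    for i in range(n):
        nbrs = idx[i, 1:]                          # locality constraint: only nearby points
        lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
        lasso.fit(X[nbrs].T, X[i])                 # sparse coding of x_i over its neighbors
        W[i, nbrs] = lasso.coef_                   # nonzero codes become edge weights
    return np.maximum(W, W.T)                      # symmetrize for an undirected graph

X = np.random.default_rng(0).normal(size=(200, 20))
W = sparse_fitted_graph(X)
print("average neighbors kept per node:", (W > 0).sum() / len(W))
```

The resulting affinity matrix can then be plugged into any graph-based semi-supervised learner, which is how such graphs are evaluated in the abstract.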
{
"docid": "d057eece8018a905fe1642a1f40de594",
"text": "6 Abstract— Removal of noise from the original signal is still a bottleneck for researchers. There are several methods and techniques published and each method has its own advantages, disadvantages and assumptions. This paper presents a review of some significant work in the field of Image Denoising.The brief introduction of some popular approaches is provided and discussed. Insights and potential future trends are also discussed",
"title": ""
},
{
"docid": "0aa84826291bb9b7a15a1edac43b3b2e",
"text": "Reservoir computing (RC), a computational paradigm inspired on neural systems, has become increasingly popular in recent years for solving a variety of complex recognition and classification problems. Thus far, most implementations have been software-based, limiting their speed and power efficiency. Integrated photonics offers the potential for a fast, power efficient and massively parallel hardware implementation. We have previously proposed a network of coupled semiconductor optical amplifiers as an interesting test case for such a hardware implementation. In this paper, we investigate the important design parameters and the consequences of process variations through simulations. We use an isolated word recognition task with babble noise to evaluate the performance of the photonic reservoirs with respect to traditional software reservoir implementations, which are based on leaky hyperbolic tangent functions. Our results show that the use of coherent light in a well-tuned reservoir architecture offers significant performance benefits. The most important design parameters are the delay and the phase shift in the system's physical connections. With optimized values for these parameters, coherent semiconductor optical amplifier (SOA) reservoirs can achieve better results than traditional simulated reservoirs. We also show that process variations hardly degrade the performance, but amplifier noise can be detrimental. This effect must therefore be taken into account when designing SOA-based RC implementations.",
"title": ""
},
{
"docid": "9bacc1ef43fd8c05dde814a18f59e467",
"text": "The processes that affect removal and retention of nitrogen during wastewater treatment in constructed wetlands (CWs) are manifold and include NH(3) volatilization, nitrification, denitrification, nitrogen fixation, plant and microbial uptake, mineralization (ammonification), nitrate reduction to ammonium (nitrate-ammonification), anaerobic ammonia oxidation (ANAMMOX), fragmentation, sorption, desorption, burial, and leaching. However, only few processes ultimately remove total nitrogen from the wastewater while most processes just convert nitrogen to its various forms. Removal of total nitrogen in studied types of constructed wetlands varied between 40 and 55% with removed load ranging between 250 and 630 g N m(-2) yr(-1) depending on CWs type and inflow loading. However, the processes responsible for the removal differ in magnitude among systems. Single-stage constructed wetlands cannot achieve high removal of total nitrogen due to their inability to provide both aerobic and anaerobic conditions at the same time. Vertical flow constructed wetlands remove successfully ammonia-N but very limited denitrification takes place in these systems. On the other hand, horizontal-flow constructed wetlands provide good conditions for denitrification but the ability of these system to nitrify ammonia is very limited. Therefore, various types of constructed wetlands may be combined with each other in order to exploit the specific advantages of the individual systems. The soil phosphorus cycle is fundamentally different from the N cycle. There are no valency changes during biotic assimilation of inorganic P or during decomposition of organic P by microorganisms. Phosphorus transformations during wastewater treatment in CWs include adsorption, desorption, precipitation, dissolution, plant and microbial uptake, fragmentation, leaching, mineralization, sedimentation (peat accretion) and burial. The major phosphorus removal processes are sorption, precipitation, plant uptake (with subsequent harvest) and peat/soil accretion. However, the first three processes are saturable and soil accretion occurs only in FWS CWs. Removal of phosphorus in all types of constructed wetlands is low unless special substrates with high sorption capacity are used. Removal of total phosphorus varied between 40 and 60% in all types of constructed wetlands with removed load ranging between 45 and 75 g N m(-2) yr(-1) depending on CWs type and inflow loading. Removal of both nitrogen and phosphorus via harvesting of aboveground biomass of emergent vegetation is low but it could be substantial for lightly loaded systems (cca 100-200 g N m(-2) yr(-1) and 10-20 g P m(-2) yr(-1)). Systems with free-floating plants may achieve higher removal of nitrogen via harvesting due to multiple harvesting schedule.",
"title": ""
},
{
"docid": "338324ca3b3d89dc5e0d340cffd069d9",
"text": "Selected hedge funds employ trend-following strategies in an attempt to achieve superior risk adjusted returns. We employ a lookback straddle approach for evaluating the return characteristics of a trend following strategy. The strategies can improve investor performance in the context of a multi-period dynamic portfolio model. The gains are achieved by taking advantage of the funds’ high level of volatility. A set of empirical results confirms the advantages of the lookback straddle for investors at the top end of the multi-period efficient frontier.",
"title": ""
},
{
"docid": "ba55729b62e2232064f070460f48d552",
"text": "A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing is organized. As information and communication technologies continue to address the need for increased computational power through the increase of cores within a digital processor, neuromorphic engineers and scientists can complement this need by building processor architectures where memory is distributed with the processing. In this paper, we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multineuron systems to massively parallel asynchronous ones and from purely digital systems to mixed analog/digital systems which implement more biological-like models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to the ones found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.",
"title": ""
},
{
"docid": "bb2ad600e0e90a1a349e39ce0f097277",
"text": "Tongue drive system (TDS) is a tongue-operated, minimally invasive, unobtrusive, and wireless assistive technology (AT) that infers users' intentions by detecting their voluntary tongue motion and translating them into user-defined commands. Here we present the new intraoral version of the TDS (iTDS), which has been implemented in the form of a dental retainer. The iTDS system-on-a-chip (SoC) features a configurable analog front-end (AFE) that reads the magnetic field variations inside the mouth from four 3-axial magnetoresistive sensors located at four corners of the iTDS printed circuit board (PCB). A dual-band transmitter (Tx) on the same chip operates at 27 and 432 MHz in the Industrial/Scientific/Medical (ISM) band to allow users to switch in the presence of external interference. The Tx streams the digitized samples to a custom-designed TDS universal interface, built from commercial off-the-shelf (COTS) components, which delivers the iTDS data to other devices such as smartphones, personal computers (PC), and powered wheelchairs (PWC). Another key block on the iTDS SoC is the power management integrated circuit (PMIC), which provides individually regulated and duty-cycled 1.8 V supplies for sensors, AFE, Tx, and digital control blocks. The PMIC also charges a 50 mAh Li-ion battery with constant current up to 4.2 V, and recovers data and clock to update its configuration register through a 13.56 MHz inductive link. The iTDS SoC has been implemented in a 0.5-μm standard CMOS process and consumes 3.7 mW on average.",
"title": ""
},
{
"docid": "086269223c00209787310ee9f0bcf875",
"text": "The availability of large annotated datasets and affordable computation power have led to impressive improvements in the performance of CNNs on various object detection and recognition benchmarks. These, along with a better understanding of deep learning methods, have also led to improved capabilities of machine understanding of faces. CNNs are able to detect faces, locate facial landmarks, estimate pose, and recognize faces in unconstrained images and videos. In this paper, we describe the details of a deep learning pipeline for unconstrained face identification and verification which achieves state-of-the-art performance on several benchmark datasets. We propose a novel face detector, Deep Pyramid Single Shot Face Detector (DPSSD), which is fast and capable of detecting faces with large scale variations (especially tiny faces). We give design details of the various modules involved in automatic face recognition: face detection, landmark localization and alignment, and face identification/verification. We provide evaluation results of the proposed face detector on challenging unconstrained face detection datasets. Then, we present experimental results for IARPA Janus Benchmarks A, B and C (IJB-A, IJB-B, IJB-C), and the Janus Challenge Set 5 (CS5).",
"title": ""
},
{
"docid": "90c871f50dc2e4d3caf1eb963e78a4ae",
"text": "Over the past few decades the capabilities of adapting new class of devices for health monitoring system have improved significantly.But the increase in usage of low cost sensors and various communication media for data transmission in health monitoring have lead to a major concern for current existing platforms i.e., inefficiency in processing massive amount of data in real time. To advance this field requires a new look at the computing framework and infrastructure. This paper describes our initial work for Bigdata processing framework for MCPS that combines the real world and cyber world aspects with dynamic provisioning and fully elastic system for decision making in health care application.",
"title": ""
},
{
"docid": "2d955a3e27c6d3419417946066acd9c8",
"text": "Progress in DNA sequencing has revealed the startling complexity of cancer genomes, which typically carry thousands of somatic mutations. However, it remains unclear which are the key driver mutations or dependencies in a given cancer and how these influence pathogenesis and response to therapy. Although tumors of similar types and clinical outcomes can have patterns of mutations that are strikingly different, it is becoming apparent that these mutations recurrently hijack the same hallmark molecular pathways and networks. For this reason, it is likely that successful interpretation of cancer genomes will require comprehensive knowledge of the molecular networks under selective pressure in oncogenesis. Here we announce the creation of a new effort, The Cancer Cell Map Initiative (CCMI), aimed at systematically detailing these complex interactions among cancer genes and how they differ between diseased and healthy states. We discuss recent progress that enables creation of these cancer cell maps across a range of tumor types and how they can be used to target networks disrupted in individual patients, significantly accelerating the development of precision medicine.",
"title": ""
},
{
"docid": "86052e2fc8f89b91f274a607531f536e",
"text": "Existing approaches to analyzing the asymptotics of graph Laplacians typically assume a well-behaved kernel function with smoothness assumptions. We remove the smoothness assumption and generalize the analysis of graph Laplacians to include previously unstudied graphs including kNN graphs. We also introduce a kernel-free framework to analyze graph constructions with shrinking neighborhoods in general and apply it to analyze locally linear embedding (LLE). We also describe how, for a given limit operator, desirable properties such as a convergent spectrum and sparseness can be achieved by choosing the appropriate graph construction.",
"title": ""
},
{
"docid": "e077a3c57b1df490d418a2b06cf14b2c",
"text": "Inductive power transfer (IPT) is widely discussed for the automated opportunity charging of plug-in hybrid and electric public transport buses without moving mechanical components and reduced maintenance requirements. In this paper, the design of an on-board active rectifier and dc–dc converter for interfacing the receiver coil of a 50 kW/85 kHz IPT system is designed. Both conversion stages employ 1.2 kV SiC MOSFET devices for their low switching losses. For the dc–dc conversion, a modular, nonisolated buck+boost-type topology with coupled magnetic devices is used for increasing the power density. For the presented hardware prototype, a power density of 9.5 kW/dm3 (or 156 W/in3) is achieved, while the ac–dc efficiency from the IPT receiver coil to the vehicle battery is 98.6%. Comprehensive experimental results are presented throughout this paper to support the theoretical analysis.",
"title": ""
},
{
"docid": "48393a47c0f977c77ef346ef2432e8f5",
"text": "Information Systems researchers and technologists have built and investigated Decision Support Systems (DSS) for almost 40 years. This article is a narrative overview of the history of Decision Support Systems (DSS) and a means of gathering more first-hand accounts about the history of DSS. Readers are asked to comment upon the stimulus narrative titled “A Brief History of Decision Support Systems” that has been read by thousands of visitors to DSSResources.COM. Also, the stimulus narrative has been reviewed by a number of key actors who created the history of DSS. The narrative is divided into four sections: The Early Years – 1964-1975; Developing DSS Theory – 1976-1982; Expanding the Scope of Decision Support – 1979-1989; and A Technology Shift – 1990-1995.",
"title": ""
},
{
"docid": "39fa66b86ca91c54a2d2020f04ecc7ba",
"text": "We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.",
"title": ""
},
{
"docid": "97838cc3eb7b31d49db6134f8fc81c84",
"text": "We study the problem of semi-supervised question answering—-utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.",
"title": ""
},
{
"docid": "ebc8a48b9664ef2aab9e2933a987ef19",
"text": "We consider the three-stage two-dimensional bin packing problem (2BP) which occurs in real-world applications such as glass, paper, or steel cutting. We present new integer linear programming formulations: Models for a restricted version and the original version of the problem are developed. Both involve polynomial numbers of variables and constraints only and effectively avoid symmetries. Those models are solved using CPLEX. Furthermore, a branch-and-price (B&P) algorithm is presented for a set covering formulation of the unrestricted problem. We consider stabilizing the column generation process of the B&P algorithm using dual-optimal inequalities. Fast column generation is performed by applying a hierarchy of four methods: (a) a fast greedy heuristic, (b) an evolutionary algorithm, (c) solving a restricted form of the pricing problem using CPLEX, and finally (d) solving the complete pricing problem using CPLEX. Computational experiments on standard benchmark instances document the benefits of the new approaches: The restricted version of the ILP model can be used for quickly obtaining nearly optimal solutions. The unrestricted version is computationally more expensive. Column generation provides a strong lower bound for 3-stage 2BP. The combination of all four pricing algorithms and column generation stabilization in the proposed B&P framework yields the best results in terms of the average objective value, the average run-time, and the number of instances solved to proven optimality. 1 This work is supported by the Austrian Science Fund (FWF) under grant P16263-N04. Preprint submitted to Elsevier Science 30 September 2004",
"title": ""
},
{
"docid": "f3467adcca693e015c9dcc85db04d492",
"text": "For urban driving, knowledge of ego-vehicle’s position is a critical piece of information that enables advanced driver-assistance systems or self-driving cars to execute safety-related, autonomous driving maneuvers. This is because, without knowing the current location, it is very hard to autonomously execute any driving maneuvers for the future. The existing solutions for localization rely on a combination of a Global Navigation Satellite System, an inertial measurement unit, and a digital map. However, in urban driving environments, due to poor satellite geometry and disruption of radio signal reception, their longitudinal and lateral errors are too significant to be used for an autonomous system. To enhance the existing system’s localization capability, this work presents an effort to develop a vision-based lateral localization algorithm. The algorithm aims at reliably counting, with or without observations of lane-markings, the number of road-lanes and identifying the index of the road-lane on the roadway upon which our vehicle happens to be driving. Tests of the proposed algorithms against intercity and interstate highway videos showed promising results in terms of counting the number of road-lanes and the indices of the current road-lanes. C © 2015 Wiley Periodicals, Inc.",
"title": ""
}
] |
scidocsrr
|
50477262d8c941c3133dda64487774d5
|
Why are average faces attractive? The effect of view and averageness on the attractiveness of female faces.
|
[
{
"docid": "7440cb90073c8d8d58e28447a1774b2c",
"text": "Common maxims about beauty suggest that attractiveness is not important in life. In contrast, both fitness-related evolutionary theory and socialization theory suggest that attractiveness influences development and interaction. In 11 meta-analyses, the authors evaluate these contradictory claims, demonstrating that (a) raters agree about who is and is not attractive, both within and across cultures; (b) attractive children and adults are judged more positively than unattractive children and adults, even by those who know them; (c) attractive children and adults are treated more positively than unattractive children and adults, even by those who know them; and (d) attractive children and adults exhibit more positive behaviors and traits than unattractive children and adults. Results are used to evaluate social and fitness-related evolutionary theories and the veracity of maxims about beauty.",
"title": ""
},
{
"docid": "b66609e66cc9c3844974b3246b8f737e",
"text": "— Inspired by the evolutionary conjecture that sexually selected traits function as indicators of pathogen resistance in animals and humans, we examined the notion that human facial attractiveness provides evidence of health. Using photos of 164 males and 169 females in late adolescence and health data on these individuals in adolescence, middle adulthood, and later adulthood, we found that adolescent facial attractiveness was unrelated to adolescent health for either males or females, and was not predictive of health at the later times. We also asked raters to guess the health of each stimulus person from his or her photo. Relatively attractive stimulus persons were mistakenly rated as healthier than their peers. The correlation between perceived health and medically assessed health increased when attractiveness was statistically controlled, which implies that attractiveness suppressed the accurate recognition of health. These findings may have important implications for evolutionary models. 0 When social psychologists began in earnest to study physical attractiveness , they were startled by the powerful effect of facial attractiveness on choice of romantic partner (Walster, Aronson, Abrahams, & Rott-mann, 1966) and other aspects of human interaction (Berscheid & Wal-ster, 1974; Hatfield & Sprecher, 1986). More recent findings have been startling again in revealing that infants' preferences for viewing images of faces can be predicted from adults' attractiveness ratings of the faces The assumption that perceptions of attractiveness are culturally determined has thus given ground to the suggestion that they are in substantial part biologically based (Langlois et al., 1987). A biological basis for perception of facial attractiveness is aptly viewed as an evolutionary basis. It happens that evolutionists, under the rubric of sexual selection theory, have recently devoted increasing attention to the origin and function of sexually attractive traits in animal species (Andersson, 1994; Hamilton & Zuk, 1982). Sexual selection as a province of evolutionary theory actually goes back to Darwin (1859, 1871), who noted with chagrin that a number of animals sport an appearance that seems to hinder their survival chances. Although the females of numerous birds of prey, for example, are well camouflaged in drab plum-age, their mates wear bright plumage that must be conspicuous to predators. Darwin divined that the evolutionary force that \" bred \" the males' bright plumage was the females' preference for such showiness in a mate. Whereas Darwin saw aesthetic preferences as fundamental and did not seek to give them adaptive functions, other scholars, beginning …",
"title": ""
},
{
"docid": "1fc10d626c7a06112a613f223391de26",
"text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …",
"title": ""
}
] |
[
{
"docid": "b1383088b26636e6ac13331a2419f794",
"text": "This paper investigates the problem of blurring caused by motion during image capture of text documents. Motion blurring prevents proper optical character recognition of the document text contents. One area of such applications is to deblur name card images obtained from handheld cameras. In this paper, a complete motion deblurring procedure for document images has been proposed. The method handles both uniform linear motion blur and uniform acceleration motion blur. Experiments on synthetic and real-life blurred images prove the feasibility and reliability of this algorithm provided that the motion is not too irregular. The restoration procedure consumes only small amount of computation time.",
"title": ""
},
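The restoration step described above can be illustrated for the simplest case the paper handles, a known uniform linear (horizontal) motion blur, using Wiener deconvolution in the frequency domain. This is a minimal sketch, not the paper's full procedure (which also identifies the blur parameters and handles uniform acceleration); the blur length and the noise-to-signal constant are invented for illustration.

```python
import numpy as np

def motion_psf(shape, length=15):
    """Horizontal uniform linear-motion point spread function, padded to image size."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deblur(blurred, psf, K=0.01):
    """Frequency-domain Wiener deconvolution with noise-to-signal constant K."""
    H = np.fft.fft2(psf)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + K)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
img = rng.random((128, 128))                          # stand-in for a document image
psf = motion_psf(img.shape, length=15)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf)
print("restoration error:", np.mean((restored - img) ** 2))
```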
{
"docid": "85f2e049dc90bf08ecb0d34899d8b3c5",
"text": "Here is little doubt that the Internet represents the spearhead of the industrial revolution. I love new technologies and gadgets that promise new and better ways of doing things. I have many such gadgets myself and I even manage to use a few of them (though not without some pain).A new piece of technology is like a new relationship, fun and exciting at first, but eventually it requires some hard work to maintain, usually in the form of time and energy. I doubt technology’s promise to improve the quality of life and I am still surprised how time-distorting and dissociating the computer and the Internet can be for me, along with the thousands of people I’ve interviewed, studied and treated in my clinical practice. It seems clear that the Internet can be used and abused in a compulsive fashion, and that there are numerous psychological factors that contribute to the Internet’s power and appeal. It appears that the very same features that drive the potency of the Net are potentially habit-forming. This study examined the self-reported Internet behavior of nearly 18,000 people who answered a survey on the ABCNEWS.com web site. Results clearly support the psychoactive nature of the Internet, and the potential for compulsive use and abuse of the Internet for certain individuals. Introduction Technology, and most especially, computers and the Internet, seem to be at best easily overused/abused, and at worst, addictive. The combination of available stimulating content, ease of access, convenience, low cost, visual stimulation, autonomy, and anonymity—all contribute to a highly psychoactive experience. By psychoactive, that is Running Head: Virtual Addiction to say mood altering, and potentially behaviorally impacting. In other words these technologies affect the manner in which we live and love. It is my contention that some of these effects are indeed less than positive, and may contribute to various negative psychological effects. The Internet and other digital technologies are only the latest in a series of “improvements” to our world which may have unintended negative effects. The experience of problems with new and unknown technologies is far from new; we have seen countless examples of newer and better things that have had unintended and unexpected deleterious effects. Remember Thalidomide, PVC/PCB’s, Atomic power, fossil fuels, even television, along with other seemingly innocuous conveniences which have been shown to be conveniently helpful, but on other levels harmful. Some of these harmful effects are obvious and tragic, while others are more subtle and insidious. Even seemingly innocuous advances such as the elevator, remote controls, credit card gas pumps, dishwashers, and drive-through everything, have all had unintended negative effects. They all save time and energy, but the energy they save may dissuade us from using our physical bodies as they were designed to be used. In short we have convenience ourselves to a sedentary lifestyle. Technology is amoral; it is not inherently good or evil, but it is impact on the manner in which we live our lives. American’s love technology and for some of us this trust and blind faith almost parallels a religious fanaticism. Perhaps most of all, we love it Running Head: Virtual Addiction because of the hope for the future it promises; it is this promise of a better today and a longer tomorrow which captivates us to attend to the call for new better things to come. 
We live in an age where computer and digital technology are always on the cusp of great things: newer, better ways of doing things (which in some ways is true). The old becomes obsolete within a year or two. Newer is always better. Computers and the Internet purport to make our lives easier, simpler, and therefore more fulfilling, but it may not be that simple. People have become physically and psychologically dependent on many behaviors and substances for centuries. This compulsive pattern does not reflect a casual interest, but rather consists of a driven pattern of use that can frequently escalate to negatively impact our lives. The key life-areas that seem to be impacted are marriages and relationships, employment, health, and legal/financial status. The fact that substances such as alcohol and other mood-altering drugs can create a physical and/or psychological dependence is well known and accepted. And certain behaviors such as gambling, eating, work, exercise, shopping, and sex have gained more recent acceptance with regard to their addictive potential. More recently, however, there has been an acknowledgement that the compulsive performance of these behaviors may mimic the compulsive process found with drugs, alcohol and other substances. This same process appears to also be found with certain aspects of the Internet. The Internet can and does produce clear alterations in mood; nearly 30 percent of Internet users admit to using the Net to alter their mood so as to relieve a negative mood state. In other words, they use the Internet like a drug (Greenfield, 1999). In addressing the phenomenon of Internet behavior, initial behavioral research (Young, 1996, 1998) focused on conceptual definitions of Internet use and abuse, and demonstrated similar patterns of abuse as found in compulsive gambling. There have been further recent studies on the nature and effects of the Internet. Cooper, Scherer, Boies, and Gordon (1998) examined sexuality on the Internet utilizing an extensive online survey of 9,177 Web users, and Greenfield (1999) surveyed nearly 18,000 Web users on ABCNEWS.com to examine Internet use and abuse behavior. The latter study did yield some interesting trends and patterns, but also raised further areas that require clarification. There has been very little research that actually examined and measured specific behavior related to Internet use. The Carnegie Mellon University study (Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis, 1998) did attempt to examine and verify actual Internet use among 173 people in 73 households. This initial study did seem to demonstrate that there may be some deleterious effects from heavy Internet use, which appeared to increase some measures of social isolation and depression. What seems to be abundantly clear from the limited research to date is that we know very little about the human/Internet interface. Theoretical suppositions abound, but we are only just beginning to understand the nature and implications of Internet use and abuse. There is an abundance of clinical, legal, and anecdotal evidence to suggest that there is something unique about being online that seems to produce a powerful impact on people.
It is my belief that as we expand our analysis of this new and exciting area we will likely discover that there are many subcategories of Internet abuse, some of which will undoubtedly exist as concomitant disorders alongside other addictions including sex, gambling, and compulsive shopping/spending. There are probably two types of Internet-based problems: the first is a primary problem, where the Internet itself becomes the focus of the compulsive pattern; the second is a secondary problem, where a preexisting problem (or compulsive behavior) is exacerbated via the use of the Internet. In a secondary problem, necessity is no longer the mother of invention, but rather convenience is. The Internet simply makes everything easier to acquire, and therefore that much more easily abused. The ease of access, availability, low cost, anonymity, timelessness, disinhibition, and loss of boundaries all appear to contribute to the total Internet experience. This has particular relevance when it comes to well-established forms of compulsive consumer behavior such as gambling, shopping, stock trading, and compulsive sexual behavior, where traditional modalities of engaging in these behaviors pale in comparison to the speed and efficiency of the Internet. There has been considerable debate regarding the terms and definitions used to describe pathological Internet behavior. Many terms have been used, including Internet abuse, Internet addiction, and compulsive Internet use. The concern over terminology seems spurious to me, as it seems irrelevant what the addictive process is labeled. The underlying neurochemical changes (probably dopamine) that occur during any pleasurable act have proven themselves to be potentially habit-forming on a brain-behavior level. The net effect is ultimately the same with regard to potential life impact, which in the case of compulsive behavior can be quite large. Any time there is a highly pleasurable human behavior that can be acquired without human interface (as can be accomplished on the Net) there seems to be greater potential for abuse. The ease of purchasing a stock, gambling, or shopping online allows for a boundless and disinhibited experience. Without the normal human interaction there is a far greater likelihood of abusive and/or compulsive behavior in these areas. Research in the field of Internet behavior is in its relative infancy. This is in part due to the fact that the depth and breadth of the Internet and World Wide Web are changing at exponential rates. With thousands of new subscribers a day and approaching (perhaps exceeding) 200 million worldwide users, the Internet represents a communications, social, and economic revolution. The Net now serves as the pinnacle of the digital industrial revolution, and with any revolution come new problems and difficulties.",
"title": ""
},
{
"docid": "6888b5311d7246c5eb18142d2746ec68",
"text": "Forms of well-being vary in their activation as well as valence, differing in respect of energy-related arousal in addition to whether they are negative or positive. Those differences suggest the need to refine traditional assumptions that poor person-job fit causes lower well-being. More activated forms of well-being were proposed to be associated with poorer, rather than better, want-actual fit, since greater motivation raises wanted levels of job features and may thus reduce fit with actual levels. As predicted, activated well-being (illustrated by job engagement) and more quiescent well-being (here, job satisfaction) were found to be associated with poor fit in opposite directions--positively and negatively, respectively. Theories and organizational practices need to accommodate the partly contrasting implications of different forms of well-being.",
"title": ""
},
{
"docid": "87a6fd003dd6e23f27e791c9de8b8ba6",
"text": "The well-known travelling salesman problem is the following: \" A salesman is required ~,o visit once and only once each of n different cities starting from a base city, and returning to this city. What path minimizes the to ta l distance travelled by the salesman?\" The problem has been treated by a number of different people using a var ie ty of techniques; el. Dantzig, Fulkerson, Johnson [1], where a combination of ingemtity and linear programming is used, and Miller, Tucker and Zemlin [2], whose experiments using an all-integer program of Gomory did not produce results i~ cases with ten cities although some success was achieved in eases of simply four cities. The purpose of this note is to show tha t this problem can easily be formulated in dynamic programming terms [3], and resolved computationally for up to 17 cities. For larger numbers, the method presented below, combined with various simple manipulations, may be used to obtain quick approximate solutions. Results of this nature were independently obtained by M. Held and R. M. Karp, who are in the process of publishing some extensions and computat ional results.",
"title": ""
},
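The dynamic programming formulation referred to in the note above is what is now known as the Held-Karp recursion over subsets of cities. The sketch below solves a small random Euclidean instance exactly; the instance size and cost matrix are illustrative, and no attempt is made at the memory optimizations needed to reach 17 cities on period hardware.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 10                                    # cities; city 0 is the base city
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# C[(S, j)] = cheapest cost of starting at 0, visiting every city in S, and ending at j in S.
C = {(frozenset([j]), j): dist[0, j] for j in range(1, n)}
for size in range(2, n):
    for subset in itertools.combinations(range(1, n), size):
        S = frozenset(subset)
        for j in S:
            C[(S, j)] = min(C[(S - {j}, k)] + dist[k, j] for k in S if k != j)

full = frozenset(range(1, n))
best = min(C[(full, j)] + dist[j, 0] for j in full)   # close the tour back to the base city
print("optimal tour length:", best)
```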
{
"docid": "1c2f873f3fb57de69f5783cc1f9699ed",
"text": "Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.",
"title": ""
},
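The gradient-free genetic algorithm described above (truncation selection plus Gaussian mutation of the full weight vector, with no crossover) can be sketched at toy scale. The fitness function below is a stand-in regression score rather than an RL reward, and the network size, population size, and mutation scale are invented for illustration; the real Deep GA evaluates each genome in an environment such as Atari.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
y = np.sin(X.sum(axis=1))                     # stand-in task (no RL environment needed)

def unpack(w):                                # tiny 8-16-1 MLP from a flat weight vector
    W1, b1 = w[:128].reshape(8, 16), w[128:144]
    W2, b2 = w[144:160].reshape(16, 1), w[160:161]
    return W1, b1, W2, b2

def fitness(w):
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return -np.mean((pred.ravel() - y) ** 2)  # higher is better

dim, pop_size, elite, sigma = 161, 64, 8, 0.05
pop = [rng.normal(scale=0.5, size=dim) for _ in range(pop_size)]
for gen in range(100):
    parents = sorted(pop, key=fitness, reverse=True)[:elite]   # truncation selection
    children = []
    for _ in range(pop_size - elite):
        parent = parents[rng.integers(elite)]
        children.append(parent + rng.normal(scale=sigma, size=dim))   # mutation only
    pop = parents + children

print("best fitness:", fitness(max(pop, key=fitness)))
```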
{
"docid": "fef105b33a85f76f24c468c58a7534a0",
"text": "An aging population in the United States presents important challenges for patients and physicians. The presence of inflammation can contribute to an accelerated aging process, the increasing presence of comorbidities, oxidative stress, and an increased prevalence of chronic pain. As patient-centered care is embracing a multimodal, integrative approach to the management of disease, patients and physicians are increasingly looking to the potential contribution of natural products. Camu camu, a well-researched and innovative natural product, has the potential to contribute, possibly substantially, to this management paradigm. The key issue is to raise camu camu's visibility through increased emphasis on its robust evidentiary base and its various formulations, as well as making consumers, patients, and physicians more aware of its potential. A program to increase the visibility of camu camu can contribute substantially not only to the management of inflammatory conditions and its positive contribution to overall good health but also to its potential role in many disease states.",
"title": ""
},
{
"docid": "d72bb787f20a08e70d5f0294551907d7",
"text": "In this paper we present a novel strategy, DragPushing, for improving the performance of text classifiers. The strategy is generic and takes advantage of training errors to successively refine the classification model of a base classifier. We describe how it is applied to generate two new classification algorithms; a Refined Centroid Classifier and a Refined Naïve Bayes Classifier. We present an extensive experimental evaluation of both algorithms on three English collections and one Chinese corpus. The results indicate that in each case, the refined classifiers achieve significant performance improvement over the base classifiers used. Furthermore, the performance of the Refined Centroid Classifier implemented is comparable, if not better, to that of state-of-the-art support vector machine (SVM)-based classifier, but offers a much lower computational cost.",
"title": ""
},
{
"docid": "60291da2284d7cde487094fff6f8c9c6",
"text": "0959-1524/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.jprocont.2009.02.003 * Tel.: +39 02 2399 3539. E-mail address: [email protected] The aim of this paper is to review and to propose a classification of a number of decentralized, distributed and hierarchical control architectures for large scale systems. Attention is focused on the design approaches based on Model Predictive Control. For the considered architectures, the underlying rationale, the fields of application, the merits and limitations are discussed, the main references to the literature are reported and some future developments are suggested. Finally, a number of open problems is listed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "45fe8a9188804b222df5f12bc9a486bc",
"text": "There is renewed interest in the application of gypsum to agricultural lands, particularly of gypsum produced during flue gas desulfurization (FGD) at coal-burning power plants. We studied the effects of land application of FGD gypsum to corn ( L.) in watersheds draining to the Great Lakes. The FGD gypsum was surface applied at 11 sites at rates of 0, 1120, 2240, and 4480 kg ha after planting to 3-m by 7.6-m field plots. Approximately 12 wk after application, penetration resistance and hydraulic conductivity were measured in situ, and samples were collected for determination of bulk density and aggregate stability. No treatment effect was detected for penetration resistance or hydraulic conductivity. A positive treatment effect was seen for bulk density at only 2 of 10 sites tested. Aggregate stability reacted similarly across all sites and was decreased with the highest application of FGD gypsum, whereas the lower rates were not different from the control. Overall, there were few beneficial effects of the FGD gypsum to soil physical properties in the year of application.",
"title": ""
},
{
"docid": "3d4fa878fe3e4d3cbeb1ccedd75ee913",
"text": "Digital images are widely communicated over the internet. The security of digital images is an essential and challenging task on shared communication channel. Various techniques are used to secure the digital image, such as encryption, steganography and watermarking. These are the methods for the security of digital images to achieve security goals, i.e. confidentiality, integrity and availability (CIA). Individually, these procedures are not quite sufficient for the security of digital images. This paper presents a blended security technique using encryption, steganography and watermarking. It comprises of three key components: (1) the original image has been encrypted using large secret key by rotating pixel bits to right through XOR operation, (2) for steganography, encrypted image has been altered by least significant bits (LSBs) of the cover image and obtained stego image, then (3) stego image has been watermarked in the time domain and frequency domain to ensure the ownership. The proposed approach is efficient, simpler and secured; it provides significant security against threats and attacks. Keywords—Image security; Encryption; Steganography; Watermarking",
"title": ""
},
{
"docid": "0344917c6b44b85946313957a329bc9c",
"text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.",
"title": ""
},
{
"docid": "1be8fa2ade3d8547044d06bd07b6fc1e",
"text": "Gastric rupture with necrosis following acute gastric dilatation (AGD) is a rare and potentially fatal event; usually seen in patients with eating disorders such as anorexia nervosa or bulimia. A 12-year-old lean boy with no remarkable medical history was brought to our Emergency Department suffering acute abdominal symptoms. Emergency laparotomy revealed massive gastric dilatation and partial necrosis, with rupture of the anterior wall of the fundus of the stomach. We performed partial gastrectomy and the patient recovered uneventfully. We report this case to demonstrate that AGD and subsequent gastric rupture can occur in patients without any underlying disorders and that just a low body mass index is a risk factor for this potentially fatal condition.",
"title": ""
},
{
"docid": "595cb7698c38b9f5b189ded9d270fe69",
"text": "Sentiment Analysis can help to extract knowledge related to opinions and emotions from user generated text information. It can be applied in medical field for patients monitoring purposes. With the availability of large datasets, deep learning algorithms have become a state of the art also for sentiment analysis. However, deep models have the drawback of not being non human-interpretable, raising various problems related to model’s interpretability. Very few work have been proposed to build models that explain their decision making process and actions. In this work, we review the current sentiment analysis approaches and existing explainable systems. Moreover, we present a critical review of explainable sentiment analysis models and discussed the insight of applying explainable sentiment analysis in the medical field.",
"title": ""
},
{
"docid": "d84ef527d58d70b3c559d21608901d2f",
"text": "Whistleblowing on organizational wrongdoing is becoming increasingly prevalent. What aspects of the person, the context, and the transgression relate to whistleblowing intentions and to actual whistleblowing on corporate wrongdoing? Which aspects relate to retaliation against whistleblowers? Can we draw conclusions about the whistleblowing process by assessing whistleblowing intentions? Meta-analytic examination of 193 correlations obtained from 26 samples (N = 18,781) reveals differences in the correlates of whistleblowing intentions and actions. Stronger relationships were found between personal, contextual, and wrongdoing characteristics and whistleblowing intent than with actual whistleblowing. Retaliation might best be predicted using contextual variables. Implications for research and practice are discussed.",
"title": ""
},
{
"docid": "a0e7712da82a338fda01e1fd0bb4a44e",
"text": "Compliance specifications concisely describe selected aspects of what a business operation should adhere to. To enable automated techniques for compliance checking, it is important that these requirements are specified correctly and precisely, describing exactly the behavior intended. Although there are rigorous mathematical formalisms for representing compliance rules, these are often perceived to be difficult to use for business users. Regardless of notation, however, there are often subtle but important details in compliance requirements that need to be considered. The main challenge in compliance checking is to bridge the gap between informal description and a precise specification of all requirements. In this paper, we present an approach which aims to facilitate creating and understanding formal compliance requirements by providing configurable templates that capture these details as options for commonly-required compliance requirements. These options are configured interactively with end-users, using question trees and natural language. The approach is implemented in the Process Mining Toolkit ProM.",
"title": ""
},
{
"docid": "5034984717b3528f7f47a1f88a3b1310",
"text": "ALL RIGHTS RESERVED. This document contains material protected under International and Federal Copyright Laws and Treaties. Any unauthorized reprint or use of this material is prohibited. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without express written permission from the author / publisher.",
"title": ""
},
{
"docid": "0ac38422284d164095882a3f3dd74e4f",
"text": "This paper introduces the status of social recommender system research in general and collaborative filtering in particular. For the collaborative filtering, the paper shows the basic principles and formulas of two basic approaches, the user-based collaborative filtering and the item-based collaborative filtering. For the user or item similarity calculation, the paper compares the differences between the cosine-based similarity, the revised cosine-based similarity and the Pearson-based similarity. The paper also analyzes the three main challenges of the collaborative filtering algorithm and shows the related works facing the challenges. To solve the Cold Start problem and reduce the cost of best neighborhood calculation, the paper provides several solutions. At last it discusses the future of the collaborative filtering algorithm in social recommender system.",
"title": ""
},
{
"docid": "ef976fc364d9fdb85c0d34e5b831644c",
"text": "This paper presents a Mars Sample Return (MSR) Sample Acquisition and Caching (SAC) study developed for the three rover platforms: MER, MER+, and MSL. The study took into account 26 SAC requirements provided by the NASA Mars Exploration Program Office. For this SAC architecture, the reduction of mission risk was chosen by us as having greater priority than mass or volume. For this reason, we selected a “One Bit per Core” approach. The enabling technology for this architecture is Honeybee Robotics' “eccentric tubes” core breakoff approach. The breakoff approach allows the drill bits to be relatively small in diameter and in turn lightweight. Hence, the bits could be returned to Earth with the cores inside them with only a modest increase to the total returned mass, but a significant decrease in complexity. Having dedicated bits allows a reduction in the number of core transfer steps and actuators. It also alleviates the bit life problem, eliminates cross contamination, and aids in hermetic sealing. An added advantage is faster drilling time, lower power, lower energy, and lower Weight on Bit (which reduces Arm preload requirements). Drill bits are based on the BigTooth bit concept, which allows re-use of the same bit multiple times, if necessary. The proposed SAC consists of a 1) Rotary-Percussive Core Drill, 2) Bit Storage Carousel, 3) Cache, 4) Robotic Arm, and 5) Rock Abrasion and Brushing Bit (RABBit), which is deployed using the Drill. The system also includes PreView bits (for viewing of cores prior to caching) and Powder bits for acquisition of regolith or cuttings. The SAC total system mass is less than 22 kg for MER and MER+ size rovers and less than 32 kg for the MSL-size rover.",
"title": ""
},
{
"docid": "f3e9858900dd75c86d106856e63f1ab2",
"text": "In the near future, new storage-class memory (SCM) technologies -- such as phase-change memory and memristors -- will radically change the nature of long-term storage. These devices will be cheap, non-volatile, byte addressable, and near DRAM density and speed. While SCM offers enormous opportunities, profiting from them will require new storage systems specifically designed for SCM's properties.\n This paper presents Echo, a persistent key-value storage system designed to leverage the advantages and address the challenges of SCM. The goals of Echo include high performance for both small and large data objects, recoverability after failure, and scalability on multicore systems. Echo achieves its goals through the use of a two-level memory design targeted for memory systems containing both DRAM and SCM, exploitation of SCM's byte addressability for fine-grained transactions in non-volatile memory, and the use of snapshot isolation for concurrency, consistency, and versioning. Our evaluation demonstrates that Echo's SCM-centric design achieves the durability guarantees of the best disk-based stores with the performance characteristics approaching the best in-memory key-value stores.",
"title": ""
},
{
"docid": "97a458ead2bd94775c7d27a6a47ce8e6",
"text": "This article presents an approach to using cognitive models of narrative discourse comprehension to define an explicit computational model of a reader’s comprehension process during reading, predicting aspects of narrative focus and inferencing with precision. This computational model is employed in a narrative discourse generation system to select and sequence content from a partial plan representing story world facts, objects, and events, creating discourses that satisfy comprehension criteria. Cognitive theories of narrative discourse comprehension define explicit models of a reader’s mental state during reading. These cognitive models are created to test hypotheses and explain empirical results about reader comprehension, but do not often contain sufficient precision for implementation on a computer. Therefore, they have not previously been suitable for computational narrative generation. The results of three experiments are presented and discussed, exhibiting empirical support for the approach presented. This work makes a number of contributions that advance the state-of-the-art in narrative discourse generation: a formal model of narrative focus, a formal model of online inferencing in narrative, a method of selecting narrative discourse content to satisfy comprehension criteria, and both implementation and evaluation of these models. .................................................................................................................................................................................",
"title": ""
}
] |
scidocsrr
|
b9ed17a63bbe60e01aab0077f45868f8
|
PaloBoost: An Overfitting-robust TreeBoost with Out-of-Bag Sample Regularization Techniques
|
[
{
"docid": "978b6dfa805b214d95827af6b1d030f9",
"text": "LambdaMART is the boosted tree version of LambdaRank, which is based on RankNet. RankNet, LambdaRank, and LambdaMART have proven to be very successful algorithms for solving real world ranking problems: for example an ensemble of LambdaMART rankers won Track 1 of the 2010 Yahoo! Learning To Rank Challenge. The details of these algorithms are spread across several papers and reports, and so here we give a self-contained, detailed and complete description of them.",
"title": ""
}
] |
[
{
"docid": "1d50c8598a41ed7953e569116f59ae41",
"text": "Several web-based platforms have emerged to ease the development of interactive or near real-time IoT applications by providing a way to connect things and services together and process the data they emit using a data flow paradigm. While these platforms have been found to be useful on their own, many IoT scenarios require the coordination of computing resources across the network: on servers, gateways and devices themselves. To address this, we explore how to extend existing IoT data flow platforms to create a system suitable for execution on a range of run time environments, toward supporting distributed IoT programs that can be partitioned between servers, gateways and devices. Eventually we aim to automate the distribution of data flows using appropriate distribution mechanism, and optimization heuristics based on participating resource capabilities and constraints imposed by the developer.",
"title": ""
},
{
"docid": "9c98685d50238cebb1e23e00201f8c09",
"text": "A frequently asked questions (FAQ) retrieval system improves the access to information by allowing users to pose natural language queries over an FAQ collection. From an information retrieval perspective, FAQ retrieval is a challenging task, mainly because of the lexical gap that exists between a query and an FAQ pair, both of which are typically very short. In this work, we explore the use of supervised learning to rank to improve the performance of domain-specific FAQ retrieval. While supervised learning-to-rank models have been shown to yield effective retrieval performance, they require costly human-labeled training data in the form of document relevance judgments or question paraphrases. We investigate how this labeling effort can be reduced using a labeling strategy geared toward the manual creation of query paraphrases rather than the more time-consuming relevance judgments. In particular, we investigate two such strategies, and test them by applying supervised ranking models to two domain-specific FAQ retrieval data sets, showcasing typical FAQ retrieval scenarios. Our experiments show that supervised ranking models can yield significant improvements in the precision-at-rank-5 measure compared to unsupervised baselines. Furthermore, we show that a supervised model trained using data labeled via a low-effort paraphrase-focused strategy has the same performance as that of the same model trained using fully labeled data, indicating that the strategy is effective at reducing the labeling effort while retaining the performance gains of the supervised approach. To encourage further research on FAQ retrieval we make our FAQ retrieval data set publicly available.",
"title": ""
},
{
"docid": "72b77a9a80d7d26e9c5b0b070f8eceb8",
"text": "3D City models have so far neglected utility networks in built environments, both interior and exterior. Many urban applications, e.g. emergency response or maintenance operations, are looking for such an integration of interior and exterior utility. Interior utility is usually created and maintained using Building Information Model (BIM) systems, while exterior utility is stored, managed and analyzed using GIS. Researchers have suggested that the best approach for BIM/GIS integration is harmonized semantics, which allow formal mapping between the BIM and real world GIS. This paper provides preliminary ideas and directions for how to acquire information from BIM/Industry Foundation Class (IFC) and map it to CityGML utility network Application Domain Extension (ADE). The investigation points out that, in most cases, there is a direct one-to-one mapping between IFC schema and UtilityNetworkADE schema, and only in one case there is one-to-many mapping; related to logical connectivity since there is no exact concept to represent the case in UtilityNetworkADE. Many examples are shown of partial IFC files and their possible translation in order to be represented in UtilityNetworkADE classes. DRAFT VERSION of the paper to be published in Kolbe, T. H.; König, G.; Nagel, C. (Eds.) 2011: Advances in 3D Geo-Information Sciences, ISBN 978-3-642-12669-7 Series Editors: Cartwright, W., Gartner, G., Meng, L., Peterson, M.P., ISSN: 1863-2246 5th International 3D GeoInfo Conference, November 3-4, 2010, Berlin, Germany 1 2 I. Hijazi, M. Ehlers, S. Zlatanova, T. Becker, L.Berlo",
"title": ""
},
{
"docid": "5c2b73276c9f845d7eef5c9dc4cea2a1",
"text": "The detection of QR codes, a type of 2D barcode, as described in the literature consists merely in the determination of the boundaries of the symbol region in images obtained with the specific intent of highlighting the symbol. However, many important applications such as those related with accessibility technologies or robotics, depends on first detecting the presence of a barcode in an environment. We employ Viola-Jones rapid object detection framework to address the problem of finding QR codes in arbitrarily acquired images. This framework provides an efficient way to focus the detection process in promising regions of the image and a very fast feature calculation approach for pattern classification. An extensive study of variations in the parameters of the framework for detecting finder patterns, present in three corners of every QR code, was carried out. Detection accuracy superior to 90%, with controlled number of false positives, is achieved. We also propose a post-processing algorithm that aggregates the results of the first step and decides if the detected finder patterns are part of QR code symbols. This two-step processing is done in real time.",
"title": ""
},
{
"docid": "10b838bb8a0925d0ff90349c14aaad6e",
"text": "Web Service Technology has been developing rapidly as it provides a flexible application-to-application interaction mechanism. Several ongoing research efforts focus on various aspects of web service technology, including the modeling, specification, discovery, composition and verification of web services. The approaches advocated are often conflicting---based as they are on differing expectations on the current status of web services as well as differing models of their future evolution. One way of deciding the relative relevance of the various research directions is to look at their applicability to the currently available web services. To this end, we took a snapshot of the currently publicly available web services. Our aim is to get an idea of the number, type, complexity and composability of these web services and see if this analysis provides useful information about the near-term fruitful research directions.",
"title": ""
},
{
"docid": "7d6cd23ec44d7425b10ed086380bfc14",
"text": "Objectives: To analysis different approaches for taxonomy construction to improve the knowledge classification, information retrieval and other data mining process. Findings: Taxonomies learning keep getting more important process for knowledge sharing about a domain. It is also used for application development such as knowledge searching, information retrieval. The taxonomy can be build manually but it is a complex process when the data are so large and it also produce some errors while taxonomy construction. There is various automatic taxonomy construction techniques are used to learn taxonomy based on keyword phrases, text corpus and from domain specific concepts etc. So it is required to build taxonomy with less human effort and with less error rate. This paper provides detailed information about those techniques. Methods: The methods such as lexico-syntatic pattern, semi supervised methods, graph based methods, ontoplus, TaxoLearn, Bayesian approach, two-step method, ontolearn and Automatic Taxonomy Construction from Text are analyzed in this paper. Application/Improvements: The findings of this work prove that the TaxoFinder approach provides better result than other approaches.",
"title": ""
},
{
"docid": "2d2af2c8054fa11d0f3db3a05a89b0de",
"text": "Object tracking is a reoccurring problem in computer vision. Tracking-by-detection approaches, in particular Struck [20], have shown to be competitive in recent evaluations. However, such approaches fail in the presence of long-term occlusions as well as severe viewpoint changes of the object. In this paper we propose a principled way to combine occlusion and motion reasoning with a tracking-by-detection approach. Occlusion and motion reasoning is based on state-of-the-art long-term trajectories which are labeled as object or background tracks with an energy-based formulation. The overlap between labeled tracks and detected regions allows to identify occlusions. The motion changes of the object between consecutive frames can be estimated robustly from the geometric relation between object trajectories. If this geometric change is significant, an additional detector is trained. Experimental results show that our tracker obtains state-of-the-art results and handles occlusion and viewpoints changes better than competing tracking methods.",
"title": ""
},
{
"docid": "e0633afb6f4dcb1561dbb23b6e3aa713",
"text": "Software security vulnerabilities are one of the critical issues in the realm of computer security. Due to their potential high severity impacts, many different approaches have been proposed in the past decades to mitigate the damages of software vulnerabilities. Machine-learning and data-mining techniques are also among the many approaches to address this issue. In this article, we provide an extensive review of the many different works in the field of software vulnerability analysis and discovery that utilize machine-learning and data-mining techniques. We review different categories of works in this domain, discuss both advantages and shortcomings, and point out challenges and some uncharted territories in the field.",
"title": ""
},
{
"docid": "a25adeae7e1cdc9260c7d059f9fa5f82",
"text": "This work presents a generic computer vision system designed for exploiting trained deep Convolutional Neural Networks (CNN) as a generic feature extractor and mixing these features with more traditional hand-crafted features. Such a system is a single structure that can be used for synthesizing a large number of different image classification tasks. Three substructures are proposed for creating the generic computer vision system starting from handcrafted and non-handcrafter features: i) one that remaps the output layer of a trained CNN to classify a different problem using an SVM; ii) a second for exploiting the output of the penultimate layer of a trained CNN as a feature vector to feed an SVM; and iii) a third for merging the output of some deep layers, applying a dimensionality reduction method, and using these features as the input to an SVM. The application of feature transform techniques to reduce the dimensionality of feature sets coming from the deep layers represents one of the main contributions of this paper. Three approaches are used for the non-handcrafted features: deep",
"title": ""
},
{
"docid": "a784d35f9d7ea612ab4374c6b4060bb2",
"text": "The intelligent vehicle is a complicated nonlinear system, and the design of a path tracking controller is one of the key technologies in intelligent vehicle research. This paper mainly designs a lateral control dynamic model of the intelligent vehicle, which is used for lateral tracking control. Firstly, the vehicle dynamics model (i.e., transfer function) is established according to the vehicle parameters. Secondly, according to the vehicle steering control system and the CARMA (Controlled Auto-Regression and Moving-Average) model, a second-order control system model is built. Using forgetting factor recursive least square estimation (FFRLS), the system parameters are identified. Finally, a neural network PID (Proportion Integral Derivative) controller is established for lateral path tracking control based on the vehicle model and the steering system model. Experimental simulation results show that the proposed model and algorithm have the high real-time and robustness in path tracing control. This provides a certain theoretical basis for intelligent vehicle autonomous navigation tracking control, and lays the foundation for the vertical and lateral coupling control.",
"title": ""
},
{
"docid": "a3a57ab8e6cd8f543e1c8bd43254b413",
"text": "How can anyone be rational in a world where knowledge is limited, time is pressing, and deep thought is often an unattainable luxury? Traditional models of unbounded rationality and optimization in cognitive science, economics, and animal behavior have tended to view decision-makers as possessing supernatural powers of reason, limitless knowledge, and endless time. But understanding decisions in the real world requires a more psychologically plausible notion of bounded rationality. In Simple heuristics that make us smart, we explore fast and frugal heuristics--simple rules in the mind's adaptive toolbox for making decisions with realistic mental resources. These heuristics can enable both living organisms and artificial systems to make smart choices quickly and with a minimum of information by exploiting the way that information is structured in particular environments. In this precis, we show how simple building blocks that control information search, stop search, and make decisions can be put together to form classes of heuristics, including: ignorance-based and one-reason decision making for choice, elimination models for categorization, and satisficing heuristics for sequential search. These simple heuristics perform Simple Heuristics That Make Us Smart http://www.bbsonline.org/documents/a/00/00/04/69/bbs00000469-00/... 2 von 21 23.05.2005 18:16 comparably to more complex algorithms, particularly when generalizing to new data--that is, simplicity leads to robustness. We present evidence regarding when people use simple heuristics and describe the challenges to be addressed by this research program.",
"title": ""
},
{
"docid": "40c9250b3fb527425138bc41acf8fd4e",
"text": "Noise pollution is a major problem in cities around the world. The current methods to assess it neglect to represent the real exposure experienced by the citizens themselves, and therefore could lead to wrong conclusions and a biased representations. In this paper we present a novel approach to monitor noise pollution involving the general public. Using their mobile phones as noise sensors, we provide a low cost solution for the citizens to measure their personal exposure to noise in their everyday environment and participate in the creation of collective noise maps by sharing their geo-localized and annotated measurements with the community. Our prototype, called NoiseTube, can be found online [1].",
"title": ""
},
{
"docid": "29287a078dcd16da737e86f05794f64d",
"text": "One of the most well-known yet perhaps controversial conditions affecting temporomandibular dysfunction (TMD) and the signs and symptoms of facial pain and clinical outcomes after orthognathic surgery procedures is temporomandibular joint internal derangement. This article provides an overview of the mutual relationship between orthognathic surgery and TMD, with especial consideration to internal derangement. The existing literature is reviewed and analyzed and the pertinent findings are summarized. The objective is to guide oral and maxillofacial surgeons in their clinical decision making when contemplating orthognathic surgery in patients with preexisting TMD.",
"title": ""
},
{
"docid": "765db16c14ed82f12755d960a46fd081",
"text": "Managing virtualized services efficiently over the cloud is an open challenge. Traditional models of software development are not appropriate for the cloud computing domain, where software (and other) services are acquired on demand. In this paper, we describe a new integrated methodology for the life cycle of IT services delivered on the cloud and demonstrate how it can be used to represent and reason about services and service requirements and so automate service acquisition and consumption from the cloud. We have divided the IT service life cycle into five phases of requirements, discovery, negotiation, composition, and consumption. We detail each phase and describe the ontologies that we have developed to represent the concepts and relationships for each phase. To show how this life cycle can automate the usage of cloud services, we describe a cloud storage prototype that we have developed. This methodology complements previous work on ontologies for service descriptions in that it is focused on supporting negotiation for the particulars of a service and going beyond simple matchmaking.",
"title": ""
},
{
"docid": "5dee244ee673909c3ba3d3d174a7bf83",
"text": "Fingerprint has remained a very vital index for human recognition. In the field of security, series of Automatic Fingerprint Identification Systems (AFIS) have been developed. One of the indices for evaluating the contributions of these systems to the enforcement of security is the degree with which they appropriately verify or identify input fingerprints. This degree is generally determined by the quality of the fingerprint images and the efficiency of the algorithm. In this paper, some of the sub-models of an existing mathematical algorithm for the fingerprint image enhancement were modified to obtain new and improved versions. The new versions consist of different mathematical models for fingerprint image segmentation, normalization, ridge orientation estimation, ridge frequency estimation, Gabor filtering, binarization and thinning. The implementation was carried out in an environment characterized by Window Vista Home Basic operating system as platform and Matrix Laboratory (MatLab) as frontend engine. Synthetic images as well as real fingerprints obtained from the FVC2004 fingerprint database DB3 set A were used to test the adequacy of the modified sub-models and the resulting algorithm. The results show that the modified sub-models perform well with significant improvement over the original versions. The results also show the necessity of each level of the enhancement. KeywordAFIS; Pattern recognition; pattern matching; fingerprint; minutiae; image enhancement.",
"title": ""
},
{
"docid": "1bd9467a7fafcdb579f8a4cd1d7be4b3",
"text": "OBJECTIVE\nTo determine the diagnostic and triage accuracy of online symptom checkers (tools that use computer algorithms to help patients with self diagnosis or self triage).\n\n\nDESIGN\nAudit study.\n\n\nSETTING\nPublicly available, free symptom checkers.\n\n\nPARTICIPANTS\n23 symptom checkers that were in English and provided advice across a range of conditions. 45 standardized patient vignettes were compiled and equally divided into three categories of triage urgency: emergent care required (for example, pulmonary embolism), non-emergent care reasonable (for example, otitis media), and self care reasonable (for example, viral upper respiratory tract infection).\n\n\nMAIN OUTCOME MEASURES\nFor symptom checkers that provided a diagnosis, our main outcomes were whether the symptom checker listed the correct diagnosis first or within the first 20 potential diagnoses (n=770 standardized patient evaluations). For symptom checkers that provided a triage recommendation, our main outcomes were whether the symptom checker correctly recommended emergent care, non-emergent care, or self care (n=532 standardized patient evaluations).\n\n\nRESULTS\nThe 23 symptom checkers provided the correct diagnosis first in 34% (95% confidence interval 31% to 37%) of standardized patient evaluations, listed the correct diagnosis within the top 20 diagnoses given in 58% (55% to 62%) of standardized patient evaluations, and provided the appropriate triage advice in 57% (52% to 61%) of standardized patient evaluations. Triage performance varied by urgency of condition, with appropriate triage advice provided in 80% (95% confidence interval 75% to 86%) of emergent cases, 55% (47% to 63%) of non-emergent cases, and 33% (26% to 40%) of self care cases (P<0.001). Performance on appropriate triage advice across the 23 individual symptom checkers ranged from 33% (95% confidence interval 19% to 48%) to 78% (64% to 91%) of standardized patient evaluations.\n\n\nCONCLUSIONS\nSymptom checkers had deficits in both triage and diagnosis. Triage advice from symptom checkers is generally risk averse, encouraging users to seek care for conditions where self care is reasonable.",
"title": ""
},
{
"docid": "d781b53fb701468d5c2c8ac9ca741887",
"text": "In this chapter, we suggest PRAXLABS as a framework to reflect on, and implement, elements of sustainable empirical research and design in Living Lab studies. In recent years the Living Lab approach has widely evolved into a common ground for researchers aiming at practice-based collaborative design. This design approach, amongst others, includes intensive stakeholder and end-user participation as well as long-term orientation in R&D projects. For the explication of the PRAXLABS framework, we present a comparative analysis of three Living Lab projects aiming at different design themes in the domestic domain: home entertainment, energy monitoring, and ambient assisted living. In each project users were involved as co-creators in the research and design of new domestic IT artifacts. By analyzing and comparing these cases, we will specify experiences which may be transferred to other projects located in the domestic context. In addition, we discuss conditions which enable the transfer of insights across the borders of the home domain into other fields of application, e.g. work spaces, public spaces or into the field of mobile applications. Overall, we claim that the PRAXLABS framework makes experiences from different Living Lab studies manageable in a systematized and sustainable manner. 1 Researching the home During the 1980’s the home became a prominent field for research of human computer interaction. Empirical work, especially ethnographically oriented methods like observation were applied to the understanding of certain aspects of the role of technology within home life. TV and video, especially, played a major role as a focus of research activities. Lull, for example, investigated the social use of TV within families (Lull, 1990). With the mass adaptation of PC technologies since the mid 90’s, plenty of new research interests became visible. New devices like interactive TV sets changed the way services were used and also influenced usage practices, e.g. those related to an early set-top box trial (O’Brien et al., 1999) or the usage of new services for communication and individual media consumption (e.g. streaming providers like Netflix etc.) on a variety of different devices, e.g. smartphones, tablet PCs (Hess et al., 2012a). Nowadays, smart home devices and wearables with smart functions are further shaping the consumer market and household practices. New devices will continue to impact on usage practices in a long run. Researchers in CSCW and HCI hence have investigated different dimensions of the home ecosystem. A variety of empirical research work focuses on different aspects of how technology is used and managed within families, e.g. computer help at home or technical support (Poole, 2012) home networking (Grinter et al., 2009), home automation (Brush et al., 2011; 2011), and research on the routine nature of communication (Crabtree and Rodden, 2004). To foster practice-based design in the home, the Living Lab approach has evolved in the last years (Eriksson et al., 2005), comprising many different methods and tools applied during the diverse research phases. Ethnographic stances and action research-based methods are commonly during the pre-design phase in order to gain a better understanding of the practice contexts and for the construction of a common understanding of the design goals between design teams, end-users and other involved stakeholders (Müller et al., 2012a, 2015a). For an overview of participatory methods see (Muller, 2001). 
Another pivotal element of Living Labs is that design and development is informed by continuous feedback from (potential) customers and represented in form of mock-ups and other intermediate design products which support ongoing negotiation processes between design team and end-userand stakeholdergroups (Ogonowski et al., 2013). To support such processes, self-documentation methods are widely used, e.g. in form of cultural probes (Gaver et al., 1999), or in variations such as playful and creative probes (Bernhaupt et al., 2008), and diary studies (Sohn et al., 2008). Evaluation of designed products is another important element of Living Labs. In the ideal case, prototypes are given to households as early as possible to learn how technology is being appropriated in the real social practice environment (Stevens et al., 2009). Methodologically speaking, many different methods are being used to gain empirical insights, often in form of a mixture of interviews, workshops, observation studies, usability tests and online questionnaires. However, the many approaches of scholars under the “umbrella” of Living Lab vary considerably, often because of their proximity to real practice contexts varies. Techniques for evaluating the use of IT artefacts in the home, and how such artefacts are appropriated, will be contingent on the degree to which the target group representatives’ real life circumstances are investigated. In addition, little has been done when using Living Lab approaches, thus far, tosystemize research findings so as toto build up of a scientific corpus, e.g. by a systematic comparative approach. That is why we suggest the PRAXLABS framework, one which addresses these shortcomings in the Living Lab research. The PRAXLABS framework is part of a systematic approach to the generation of a scientific corpus of practice-based design work, described as Grounded Design (Stevens et al., this book), in turn based on the Design Case Study approach by Wulf et al. (2015). In this work we describe, compare and analyze the participatory design process across three projects aiming at the development of new technologies for domestic contexts: a cross-platform entertainment concept, an energy management system and an internet platform aiming at supporting informal help for elderly adults in a local neighborhood. Each project entailed setting up a Living Lab and included user representatives as well as stakeholders (service providers, industry partners) in a long-term oriented manner to inform potential design solutions. We discuss the benefits and issues of Living Labs as an innovation infrastructure and research methodology by focusing on the question of how research experience can be made available in a more sustainable manner and strategically allowing future projects to benefit from their predecessors. To create a systematic understanding of design case studies in Living Lab environments we describe the three projects in more detail and develop categories of experiences which may be transferred across Living Lab projects within the home and across the borders of the domestic application field. In the form of the PRAXLABS framework, we bring our results together to provide for the comparative analysis of the three different Living Lab studies. 
2 Practice-based computing 2.1 From participatory design to practice-based computing Participatory design aims to involve different stakeholders in the development process to reach a more democratic design (Bjerknes and Bratteteig, 1995) and – more generally – to develop software that will successfully be accepted by users. Applying the methods and tools of PD aims at fostering a software design process grounded, in some sense, in practice. Traditionally, work in PD addresses the context of the workplace (Bødker et al., 2004). Bødker et al. highlight the importance of a design that really is informed by the needs of actual users. Such process requires establishing a mutual learning process between designer and users (to address questions such as “What is needed?” and “What is possible?”). Instead of involving users as informants only, genuine participation results in a shared understanding of needs, problems and options toward solutions (see Wagner, this book, for a more detailed analysis). Related to PD is the concept of infrastructuring (Pipek and Syrjänen, 2006; Pipek and Wulf, 2009). Infrastructuring refers to a design process informing the development of an “infrastructure in use” rather than purely focusing on the IT artifact itself. Such a process of infrastructuring needs to include knowledge about the context and the sociotechnical environment. Several aspects become important then, such as: How can the artifact support established practices of group work? How can new tools improve the work of a group to be more successful? How can articulation work be managed more easily? To continuously improve the work environment, infrastructuring can be seen as an ongoing process, also including and accompanying the adoption of artifacts and recognizing changes of practices. Thus, the activities of infrastructuring inform a continuous design schedule with respect to existing work culture, tools and practices. “Points of infrastructure” become visible when infrastructure breakdowns occur and when (socio-) technical innovations are introduced (e.g. when a new software is being installed or new practices are being established to support the processes of learning or adaptation of software). With the wider distribution of new technologies the domestic context have come increasingly into focus for PD. In HCI, applying PD for the home has become equally popular, such as in the context of home care, ageing at home, family interactions around new media, or energy management and sustainability (Crabtree and Rodden, 2004; Palen and Aaløkke, 2006). Several concepts have proven successful in providing for transferability from the work to home context, such as the design for social awareness (Crabtree et al., 2003), at least to a degree. However, other authors have claimed that the occupation with the new research domains beyond the workplace revealed the need for acknowledging the unique demands of domestic technology appropriation and use. Rather than designing for efficiency and utilitarian pursuits, home technologies aim at fostering sociability, inclusion and social awareness, thus calling for a taking into account of different underlying design aspec",
"title": ""
},
{
"docid": "70df4eee6d98efdbb741e125271f395c",
"text": "Mobile Ad Hoc networks are autonomously self-organized networks without infrastructure support. Wireless sensor networks are appealing to researchers due to their wide range of application potential in areas such as target detection and tracking, environmental monitoring, industrial process monitoring, and tactical systems. Highly dynamic topology and bandwidth constraint in dense networks, brings the necessity to achieve an efficient medium access protocol subject to power constraints. Various MAC protocols with different objectives were proposed for wireless sensor networks. The aim of this paper is to outline the significance of various MAC protocols along with their merits and demerits.",
"title": ""
},
{
"docid": "6f8e441738a0c045a83f0e1efd4e0bbd",
"text": "Irony and humour are just two of many forms of figurative language. Approaches to identify in vast volumes of data such as the internet humorous or ironic statements is important not only from a theoretical view point but also for their potential applicability in social networks or human-computer interactive systems. In this study we investigate the automatic detection of irony and humour in social networks such as Twitter casting it as a classification problem. We propose a rich set of features for text interpretation and representation to train classification procedures. In cross-domain classification experiments our model achieves and improves state-of-the-art",
"title": ""
}
] |
scidocsrr
|
66418795e9037d036af8379bdeb2b8c5
|
Towards Generic Text-Line Extraction
|
[
{
"docid": "258601c560572a9c43823fe65481a3bf",
"text": "Dewarping of documents captured with hand-held cameras in an uncontrolled environment has triggered a lot of interest in the scientific community over the last few years and many approaches have been proposed. However, there has been no comparative evaluation of different dewarping techniques so far. In an attempt to fill this gap, we have organized a page dewarping contest along with CBDAR 2007. We have created a dataset of 102 documents captured with a hand-held camera and have made it freely available online. We have prepared text-line, text-zone, and ASCII text ground-truth for the documents in this dataset. Three groups participated in the contest with their methods. In this paper we present an overview of the approaches that the participants used, the evaluation measure, and the dataset used in the contest. We report the performance of all participating methods. The evaluation shows that none of the participating methods was statistically significantly better than any other participating method.",
"title": ""
}
] |
[
{
"docid": "b5372d4cad87aab69356ebd72aed0e0b",
"text": "Web content nowadays can also be accessed through new generation of Internet connected TVs. However, these products failed to change users’ behavior when consuming online content. Users still prefer personal computers to access Web content. Certainly, most of the online content is still designed to be accessed by personal computers or mobile devices. In order to overcome the usability problem of Web content consumption on TVs, this paper presents a knowledge graph based video generation system that automatically converts textual Web content into videos using semantic Web and computer graphics based technologies. As a use case, Wikipedia articles are automatically converted into videos. The effectiveness of the proposed system is validated empirically via opinion surveys. Fifty percent of survey users indicated that they found generated videos enjoyable and 42 % of them indicated that they would like to use our system to consume Web content on their TVs.",
"title": ""
},
{
"docid": "6f87969a98451881a9c9da9c8a05f219",
"text": "The possibility of filtering light cloud cover in satellite imagery to expose objects beneath the clouds is discussed. A model of the cloud distortion process is developed and a transformation is introduced which makes the signal and noise additive so that optimum linear filtering techniques can be applied. This homomorphic filtering can be done in the two-dimensional image plane, or it can be extended to include the spectral dimension on multispectral data. The three-dimensional filter is especially promising because clouds tend to follow a common spectral response. The noise statistics can be estimated directly from the noisy data. Results from a computer simulation and from Landsat data are shown.",
"title": ""
},
{
"docid": "0123fd04bc65b8dfca7ff5c058d087da",
"text": "The authors forward the hypothesis that social exclusion is experienced as painful because reactions to rejection are mediated by aspects of the physical pain system. The authors begin by presenting the theory that overlap between social and physical pain was an evolutionary development to aid social animals in responding to threats to inclusion. The authors then review evidence showing that humans demonstrate convergence between the 2 types of pain in thought, emotion, and behavior, and demonstrate, primarily through nonhuman animal research, that social and physical pain share common physiological mechanisms. Finally, the authors explore the implications of social pain theory for rejection-elicited aggression and physical pain disorders.",
"title": ""
},
{
"docid": "1de1324d0f10a0e58c2adccdd8cb2c21",
"text": "In keyword search advertising, many advertisers operate on a limited budget. Yet how limited budgets affect keyword search advertising has not been extensively studied. This paper offers an analysis of the generalized second-price auction with budget constraints. We find that the budget constraint may induce advertisers to raise their bids to the highest possible amount for two different motivations: to accelerate the elimination of the budget-constrained competitor as well as to reduce their own advertising cost. Thus, in contrast to the current literature, our analysis shows that both budget-constrained and unconstrained advertisers could bid more than their own valuation. We further extend the model to consider dynamic bidding and budget-setting decisions.",
"title": ""
},
{
"docid": "d3d478d3e8ef3498b63e7e8803c8cfec",
"text": "INTRODUCTION\nThe International Physical Activity Questionnaire (IPAQ) was developed to measure health-related physical activity (PA) in populations. The short version of the IPAQ has been tested extensively and is now used in many international studies. The present study aimed to explore the validity characteristics of the long-version IPAQ.\n\n\nSUBJECTS AND METHODS\nForty-six voluntary healthy male and female subjects (age, mean +/- standard deviation: 40.7 +/- 10.3 years) participated in the study. PA indicators derived from the long, self-administered IPAQ were compared with data from an activity monitor and a PA log book for concurrent validity, and with aerobic fitness, body mass index (BMI) and percentage body fat for construct validity.\n\n\nRESULTS\nStrong positive relationships were observed between the activity monitor data and the IPAQ data for total PA (rho = 0.55, P < 0.001) and vigorous PA (rho = 0.71, P < 0.001), but a weaker relationship for moderate PA (rho = 0.21, P = 0.051). Calculated MET-h day(-1) from the PA log book was significantly correlated with MET-h day(-1) from the IPAQ (rho = 0.67, P < 0.001). A weak correlation was observed between IPAQ data for total PA and both aerobic fitness (rho = 0.21, P = 0.051) and BMI (rho = 0.25, P = 0.009). No significant correlation was observed between percentage body fat and IPAQ variables. Bland-Altman analysis suggested that the inability of activity monitors to detect certain types of activities might introduce a source of error in criterion validation studies.\n\n\nCONCLUSIONS\nThe long, self-administered IPAQ questionnaire has acceptable validity when assessing levels and patterns of PA in healthy adults.",
"title": ""
},
{
"docid": "0075c4714b8e7bf704381d3a3722ab59",
"text": "This paper surveys the current state of the art in Natural Language Generation (nlg), defined as the task of generating text or speech from non-linguistic input. A survey of nlg is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of nlg technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in nlg and the architectures adopted in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between nlg and other areas of artificial intelligence; (c) draw attention to the challenges in nlg evaluation, relating them to similar challenges faced in other areas of nlp, with an emphasis on different evaluation methods and the relationships between them.",
"title": ""
},
{
"docid": "094fb0a17d6358cc166e43872bc59b09",
"text": "This paper is a review of the evolutionary history of deep learning models. It covers from the genesis of neural networks when associationism modeling of the brain is studied, to the models that dominate the last decade of research in deep learning like convolutional neural networks, deep belief networks, and recurrent neural networks, and extends to popular recent models like variational autoencoder and generative adversarial nets. In addition to a review of these models, this paper primarily focuses on the precedents of the models above, examining how the initial ideas are assembled to construct the early models and how these preliminary models are developed into their current forms. Many of these evolutionary paths last more than half a century and have a diversity of directions. For example, CNN is built on prior knowledge of biological vision system; DBN is evolved from a trade-off of modeling power and computation complexity of graphical models and many nowadays models are neural counterparts of ancient linear models. This paper reviews these evolutionary paths and offers a concise thought flow of how these models are developed, and aims to provide a thorough background for deep learning. More importantly, along with the path, this paper summarizes the gist behind these milestones and proposes many directions to guide the future research of deep learning. 1 ar X iv :1 70 2. 07 80 0v 2 [ cs .L G ] 1 M ar 2 01 7",
"title": ""
},
{
"docid": "001104ca832b10553b28bbd713e6cbd5",
"text": "In this paper we present a tracker, which is radically different from state-of-the-art trackers: we apply no model updating, no occlusion detection, no combination of trackers, no geometric matching, and still deliver state-of-the-art tracking performance, as demonstrated on the popular online tracking benchmark (OTB) and six very challenging YouTube videos. The presented tracker simply matches the initial patch of the target in the first frame with candidates in a new frame and returns the most similar patch by a learned matching function. The strength of the matching function comes from being extensively trained generically, i.e., without any data of the target, using a Siamese deep neural network, which we design for tracking. Once learned, the matching function is used as is, without any adapting, to track previously unseen targets. It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, which only uses the original observation of the target from the first frame, suffices to reach state-of-the-art performance. Further, we show the proposed tracker even allows for target re-identification after the target was absent for a complete video shot.",
"title": ""
},
{
"docid": "c95980f3f1921426c20757e6020f62c2",
"text": "Recent successes of deep learning have been largely driven by the ability to train large models on vast amounts of data. We believe that High Performance Computing (HPC) will play an increasingly important role in helping deep learning achieve the next level of innovation fueled by neural network models that are orders of magnitude larger and trained on commensurately more training data. We are targeting the unique capabilities of both current and upcoming HPC systems to train massive neural networks and are developing the Livermore Big Artificial Neural Network (LBANN) toolkit to exploit both model and data parallelism optimized for large scale HPC resources. This paper presents our preliminary results in scaling the size of model that can be trained with the LBANN toolkit.",
"title": ""
},
{
"docid": "a986826041730d953dfbf9fbc1b115a6",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
},
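The REINFORCE rule summarized in the passage above lends itself to a very small illustration. The sketch below is not Williams' original experimental setup; the single-Bernoulli-unit bandit task, the running-average baseline and every variable name are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task (assumed): a single stochastic Bernoulli unit should learn to fire
# (action 1), which earns reward 1; action 0 earns reward 0.
w = 0.0           # weight of the stochastic unit
alpha = 0.1       # learning rate
baseline = 0.0    # running-average reward baseline b

for step in range(2000):
    p = sigmoid(w)                      # firing probability pi(a=1)
    a = 1 if rng.random() < p else 0    # sample the stochastic action
    r = float(a)                        # immediate reinforcement
    eligibility = a - p                 # d/dw log pi(a) for a Bernoulli unit
    # REINFORCE update: delta_w = alpha * (r - b) * d/dw log pi(a)
    w += alpha * (r - baseline) * eligibility
    baseline = 0.9 * baseline + 0.1 * r

print("learned firing probability:", round(sigmoid(w), 3))   # approaches 1.0
```

The update uses only the sampled action, the received reward and the local firing probability, which is the sense in which no explicit gradient estimate has to be computed or stored.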
{
"docid": "c25bdb567ee525e2ae3416dcf9c42717",
"text": "Despite the efforts that bioengineers have exerted in designing and constructing biological processes that function according to a predetermined set of rules, their operation remains fundamentally circumstantial. The contextual situation in which molecules and single-celled or multi-cellular organisms find themselves shapes the way they interact, respond to the environment and process external information. Since the birth of the field, synthetic biologists have had to grapple with contextual issues, particularly when the molecular and genetic devices inexplicably fail to function as designed when tested in vivo. In this review, we set out to identify and classify the sources of the unexpected divergences between design and actual function of synthetic systems and analyze possible methodologies aimed at controlling, if not preventing, unwanted contextual issues.",
"title": ""
},
{
"docid": "7c974eacb24368a0c5acfeda45d60f64",
"text": "We propose a novel approach for verifying model hypotheses in cluttered and heavily occluded 3D scenes. Instead of verifying one hypothesis at a time, as done by most state-of-the-art 3D object recognition methods, we determine object and pose instances according to a global optimization stage based on a cost function which encompasses geometrical cues. Peculiar to our approach is the inherent ability to detect significantly occluded objects without increasing the amount of false positives, so that the operating point of the object recognition algorithm can nicely move toward a higher recall without sacrificing precision. Our approach outperforms state-of-the-art on a challenging dataset including 35 household models obtained with the Kinect sensor, as well as on the standard 3D object recognition benchmark dataset.",
"title": ""
},
{
"docid": "1b646a8a45b65799bbf2e71108f420e0",
"text": "Dynamic Time Warping (DTW) is a distance measure that compares two time series after optimally aligning them. DTW is being used for decades in thousands of academic and industrial projects despite the very expensive computational complexity, O(n2). These applications include data mining, image processing, signal processing, robotics and computer graphics among many others. In spite of all this research effort, there are many myths and misunderstanding about DTW in the literature, for example \"it is too slow to be useful\" or \"the warping window size does not matter much.\" In this tutorial, we correct these misunderstandings and we summarize the research efforts in optimizing both the efficiency and effectiveness of both the basic DTW algorithm, and of the higher-level algorithms that exploit DTW such as similarity search, clustering and classification. We will discuss variants of DTW such as constrained DTW, multidimensional DTW and asynchronous DTW, and optimization techniques such as lower bounding, early abandoning, run-length encoding, bounded approximation and hardware optimization. We will discuss a multitude of application areas including physiological monitoring, social media mining, activity recognition and animal sound processing. The optimization techniques are generalizable to other domains on various data types and problems.",
"title": ""
},
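As a concrete reference for the constrained-DTW variant mentioned above, here is a minimal sketch of DTW with a Sakoe-Chiba band. The quadratic-time dynamic program and the band half-width are standard; the function name, the test signals and the choice of squared Euclidean local cost are assumptions for illustration only.

```python
import numpy as np

def dtw_distance(x, y, window=None):
    """DTW between 1-D series x and y with an optional Sakoe-Chiba band
    of half-width `window` (None means unconstrained)."""
    n, m = len(x), len(y)
    w = max(window, abs(n - m)) if window is not None else max(n, m)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

a = np.sin(np.linspace(0, 2 * np.pi, 80))
b = np.sin(np.linspace(0, 2 * np.pi, 100) + 0.3)
print(round(dtw_distance(a, b, window=10), 4))
```

Narrowing the band reduces the work from O(n^2) toward O(n*w), one of the efficiency levers the tutorial surveys alongside lower bounding and early abandoning.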
{
"docid": "7dd62985fc9349b87b2d239e01ccd5b5",
"text": "The goal of pattern-based classification of functional neuroimaging data is to link individual brain activation patterns to the experimental conditions experienced during the scans. These brain-reading analyses advance functional neuroimaging on three fronts. From a technical standpoint, pattern-based classifiers overcome fatal f laws in the status quo inferential and exploratory multivariate approaches by combining pattern-based analyses with a direct link to experimental variables. In theoretical terms, the results that emerge from pattern-based classifiers can offer insight into the nature of neural representations. This shifts the emphasis in functional neuroimaging studies away from localizing brain activity toward understanding how patterns of brain activity encode information. From a practical point of view, pattern-based classifiers are already well established and understood in many areas of cognitive science. These tools are familiar to many researchers and provide a quantitatively sound and qualitatively satisfying answer to most questions addressed in functional neuroimaging studies. Here, we examine the theoretical, statistical, and practical underpinnings of pattern-based classification approaches to functional neuroimaging analyses. Pattern-based classification analyses are well positioned to become the standard approach to analyzing functional neuroimaging data.",
"title": ""
},
{
"docid": "af03474957035ad189d47f3bee959cda",
"text": "Fully convolutional neural network (FCN) has been dominating the game of face detection task for a few years with its congenital capability of sliding-window-searching with shared kernels, which boiled down all the redundant calculation, and most recent state-of-the-art methods such as Faster-RCNN, SSD, YOLO and FPN use FCN as their backbone. So here comes one question: Can we find a universal strategy to further accelerate FCN with higher accuracy, so could accelerate all the recent FCN-based methods? To analyze this, we decompose the face searching space into two orthogonal directions, 'scale' and 'spatial'. Only a few coordinates in the space expanded by the two base vectors indicate foreground. So if FCN could ignore most of the other points, the searching space and false alarm should be significantly boiled down. Based on this philosophy, a novel method named scale estimation and spatial attention proposal (S2AP) is proposed to pay attention to some specific scales in image pyramid and valid locations in each scales layer. Furthermore, we adopt a masked-convolution operation based on the attention result to accelerate FCN calculation. Experiments show that FCN-based method RPN can be accelerated by about 4× with the help of S2AP and masked-FCN and at the same time it can also achieve the state-of-the-art on FDDB, AFW and MALF face detection benchmarks as well.",
"title": ""
},
{
"docid": "a76a1aea4861dfd1e1f426ce55747b2a",
"text": "Which topics spark the most heated debates in social media? Identifying these topics is a first step towards creating systems which pierce echo chambers. In this paper, we perform a systematic methodological study of controversy detection using social media network structure and content.\n Unlike previous work, rather than identifying controversy in a single hand-picked topic and use domain-specific knowledge, we focus on comparing topics in any domain. Our approach to quantifying controversy is a graph-based three-stage pipeline, which involves (i) building a conversation graph about a topic, which represents alignment of opinion among users; (ii) partitioning the conversation graph to identify potential sides of the controversy; and (iii)measuring the amount of controversy from characteristics of the~graph.\n We perform an extensive comparison of controversy measures, as well as graph building approaches and data sources. We use both controversial and non-controversial topics on Twitter, as well as other external datasets. We find that our new random-walk-based measure outperforms existing ones in capturing the intuitive notion of controversy, and show that content features are vastly less helpful in this task.",
"title": ""
},
{
"docid": "b4c25df52a0a5f6ab23743d3ca9a3af2",
"text": "Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.",
"title": ""
},
{
"docid": "48a0e75b97fdaa734f033c6b7791e81f",
"text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.",
"title": ""
},
{
"docid": "17deb6c21da616a73a6daedf971765c3",
"text": "Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-theart ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraintbased methods in accuracy.",
"title": ""
}
] |
scidocsrr
|
7567d41aeec41e49dfa9bda17ef19c59
|
Recognizing realistic actions from videos “in the wild”
|
[
{
"docid": "b9a893fb526955b5131860a1402e2f7c",
"text": "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.",
"title": ""
},
{
"docid": "1557db582fbcf5e17c2b021b6d37b03a",
"text": "Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.",
"title": ""
}
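To make the codebook pipeline discussed in the passage above concrete, the sketch below clusters local descriptors with k-means and quantizes a new image into a bag-of-visual-words histogram. The random stand-in descriptors, the codebook size and the scikit-learn-based implementation are illustrative assumptions; note that the cited work argues a fixed acceptance-radius clusterer can produce better centres than plain k-means for densely sampled images, which this sketch does not reproduce.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for densely sampled local descriptors (e.g. 128-D SIFT) from training images.
train_descriptors = rng.normal(size=(5000, 128))

# Build the visual codebook by clustering descriptor space.
codebook = KMeans(n_clusters=200, n_init=4, random_state=0).fit(train_descriptors)

def bag_of_words(descriptors, codebook):
    """Quantize one image's descriptors and return an L1-normalized histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

image_descriptors = rng.normal(size=(800, 128))
print(bag_of_words(image_descriptors, codebook)[:10])
```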
] |
[
{
"docid": "e2cd9538192d717a9eaef6344cf0371e",
"text": "Device-to-device (D2D) communication commonly refers to a type of technology that enable devices to communicate directly with each other without communication infrastructures such as access points (APs) or base stations (BSs). Bluetooth and WiFi-Direct are the two most popular D2D techniques, both working in the unlicensed industrial, scientific and medical (ISM) bands. Cellular networks, on the other hand, do not support direct over-the-air communications between users and devices. However, with the emergence of context-aware applications and the accelerating growth of Machine-to-Machine (M2M) applications, D2D communication plays an increasingly important role. It facilitates the discovery of geographically close devices, and enables direct communications between these proximate devices, which improves communication capability and reduces communication delay and power consumption. To embrace the emerging market that requires D2D communications, mobile operators and vendors are accepting D2D as a part of the fourth generation (4G) Long Term Evolution (LTE)-Advanced standard in 3rd Generation Partnership Project (3GPP) Release 12.",
"title": ""
},
{
"docid": "b0988b5d33bf97ac4eba7365bce055bd",
"text": "This research investigates audience experience of empathy with a performer during a digitally mediated performance. Theatrical performance necessitates social interaction between performers and audience. We present a performance-based study that explores audience awareness of performer's kinaesthetic activity in 2 ways: by isolating the audience's senses (visual, auditory, and kinaesthetic) and by focusing audience perception through defamiliarization. By positioning the performer behind the audience: in their 'backspace', we focus the audience's attention to the performer in an unfamiliar way. We describe two research contributions to the study of audience empathic experience during performance. The first is the development of a phenomenological interview method designed for extracting empirical evaluations of experience of audience members in a performance scenario. The second is a descriptive model for a poetics of reception. Our model is based on an empathetic audience-performer relationship that includes 3 components of audience awareness: contextual, interpersonal, and sense-based. Our research contributions are of particular benefit to performances involving digital media, and can provide insight into audience experience of empathy.",
"title": ""
},
{
"docid": "3623bb72ecc6c178c1b9412745025354",
"text": "Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely-used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.",
"title": ""
},
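The claim that iterated learning by posterior sampling converges to the learners' prior can be checked with a tiny simulation. The two-language world, the number of utterances per generation and all parameter values below are assumptions chosen only to make the Gibbs-sampling behaviour visible.

```python
import numpy as np

rng = np.random.default_rng(0)

heads = np.array([0.2, 0.8])     # P(utterance = 1 | language h)
prior = np.array([0.7, 0.3])     # learners' inductive bias P(h)

def posterior(data, prior):
    """P(h | data) for binary utterances, up to the shared binomial coefficient."""
    k, n = data.sum(), len(data)
    like = heads ** k * (1 - heads) ** (n - k)
    post = like * prior
    return post / post.sum()

language, counts = 0, np.zeros(2)
for generation in range(20000):
    data = (rng.random(5) < heads[language]).astype(int)   # teacher produces 5 utterances
    language = rng.choice(2, p=posterior(data, prior))     # learner samples from posterior
    counts[language] += 1

print("fraction of generations per language:", np.round(counts / counts.sum(), 2))
print("prior:", prior)   # with sampling learners, the two should roughly match
```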
{
"docid": "166230b235fe0c18a80041741a7c5e4a",
"text": "Most existing neural network models for music generation use recurrent neural networks. However, the recent WaveNet model proposed by DeepMind shows that convolutional neural networks (CNNs) can also generate realistic musical waveforms in the audio domain. Following this light, we investigate using CNNs for generating melody (a series of MIDI notes) one bar after another in the symbolic domain. In addition to the generator, we use a discriminator to learn the distributions of melodies, making it a generative adversarial network (GAN). Moreover, we propose a novel conditional mechanism to exploit available prior knowledge, so that the model can generate melodies either from scratch, by following a chord sequence, or by conditioning on the melody of previous bars (e.g. a priming melody), among other possibilities. The resulting model, named MidiNet, can be expanded to generate music with multiple MIDI channels (i.e. tracks). We conduct a user study to compare the melody of eight-bar long generated by MidiNet and by Google’s MelodyRNN models, each time using the same priming melody. Result shows that MidiNet performs comparably with MelodyRNN models in being realistic and pleasant to listen to, yet MidiNet’s melodies are reported to be much more interesting.",
"title": ""
},
{
"docid": "4dda22757c56723b434afeab7457a6d4",
"text": "The treatment of incomplete data is an important step in the pre-processing of data. We propose a novel nonparametric algorithm Generalized regression neural network Ensemble for Multiple Imputation (GEMI). We also developed a single imputation (SI) version of this approach—GESI. We compare our algorithms with 25 popular missing data imputation algorithms on 98 real-world and synthetic terms of (i) the accuracy of output classification: three classifiers (a generalized regression neural network, a multilayer perceptron and a logistic regression technique) are separately trained and tested on the dataset imputed with each imputation algorithm, (ii) interval analysis with missing observations and (iii) point estimation accuracy of the missing value imputation. GEMI outperformed GESI and all the conventional imputation algorithms in terms of all three criteria considered. & 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7d232cd3fd69bbe33add101551dfdf25",
"text": "The vector space model is one of the classical and widely applied information retrieval models to rank the web page based on similarity values. The retrieval operations consist of cosine similarity function to compute the similarity values between a given query and the set of documents retrieved and then rank the documents according to the relevance. In this paper, we are presenting different approaches of vector space model to compute similarity values of hits from search engine for given queries based on terms weight. In order to achieve the goal of an effective evaluation algorithm, our work intends to extensive analysis of the main aspects of Vector space model, its approaches and provides a comprehensive comparison for Term-Count",
"title": ""
},
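A minimal cosine-similarity ranker in the spirit of the passage above is sketched here using raw term counts as the weighting scheme. The toy query, documents and function names are assumptions; the paper compares several weighting variants rather than this single one.

```python
import math
from collections import Counter

def term_count_vector(text):
    """Raw term-count weighting over whitespace-tokenized, lower-cased text."""
    return Counter(text.lower().split())

def cosine_similarity(q_vec, d_vec):
    dot = sum(q_vec[t] * d_vec[t] for t in set(q_vec) & set(d_vec))
    norm_q = math.sqrt(sum(v * v for v in q_vec.values()))
    norm_d = math.sqrt(sum(v * v for v in d_vec.values()))
    return dot / (norm_q * norm_d) if norm_q and norm_d else 0.0

query = term_count_vector("information retrieval ranking")
documents = [
    "vector space model for information retrieval",
    "neural networks for image recognition",
]
ranked = sorted(documents,
                key=lambda d: cosine_similarity(query, term_count_vector(d)),
                reverse=True)
print(ranked)
```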
{
"docid": "ca5ad8301e3a37a6d2749bb27ede1d7a",
"text": "Data and connectivity between users form the core of social networks. Every status, post, friendship, tweet, re-tweet, tag or image generates a massive amount of structured and unstructured data. Deriving meaning from this data and, in particular, extracting behavior and emotions of individual users, as well as of user communities, is the goal of sentiment analysis and affective computing and represents a significant challenge. Social networks also represent a potentially infinite source of applications for both research and commercial purposes and are adaptable to many different areas, including life science. Nevertheless, collecting, sharing, storing and analyzing social networks data pose several challenges to computer scientists, such as the management of highly unstructured data, big data, and the need for real-time computation. In this paper we give a brief overview of some concrete examples of applying sentiment analysis to social networks for healthcare purposes, we present the current type of tools existing for sentiment analysis, and summarize the challenges involved in this process focusing on the role of high performance computing.",
"title": ""
},
{
"docid": "cfc4dc24378c5b7b83586db56fad2cac",
"text": "This study investigated the effects of proximal and distal constructs on adolescent's academic achievement through self-efficacy. Participants included 482 ninth- and tenth- grade Norwegian students who completed a questionnaire designed to assess school-goal orientations, organizational citizenship behavior, academic self-efficacy, and academic achievement. The results of a bootstrapping technique used to analyze relationships between the constructs indicated that school-goal orientations and organizational citizenship predicted academic self-efficacy. Furthermore, school-goal orientation, organizational citizenship, and academic self-efficacy explained 46% of the variance in academic achievement. Mediation analyses revealed that academic self-efficacy mediated the effects of perceived task goal structure, perceived ability structure, civic virtue, and sportsmanship on adolescents' academic achievements. The results are discussed in reference to current scholarship, including theories underlying our hypothesis. Practical implications and directions for future research are suggested.",
"title": ""
},
{
"docid": "cec35452b7a691be5141ec02fb1b3292",
"text": "Confidentiality of training data induced by releasing machine-learning models, and has recently received increasing attention. Motivated by existing MI attacks and other previous attacks that turn out to be MI \"in disguise,\" this paper initiates a formal study of MI attacks by presenting a game-based methodology. Our methodology uncovers a number of subtle issues, and devising a rigorous game-based definition, analogous to those in cryptography, is an interesting avenue for future work. We describe methodologies for two types of attacks. The first is for black-box attacks, which consider an adversary who infers sensitive values with only oracle access to a model. The second methodology targets the white-box scenario where an adversary has some additional knowledge about the structure of a model. For the restricted class of Boolean models and black-box attacks, we characterize model invertibility using the concept of influence from Boolean analysis in the noiseless case, and connect model invertibility with stable influence in the noisy case. Interestingly, we also discovered an intriguing phenomenon, which we call \"invertibility interference,\" where a highly invertible model quickly becomes highly non-invertible by adding little noise. For the white-box case, we consider a common phenomenon in machine-learning models where the model is a sequential composition of several sub-models. We show, quantitatively, that even very restricted communication between layers could leak a significant amount of information. Perhaps more importantly, our study also unveils unexpected computational power of these restricted communication channels, which, to the best of our knowledge, were not previously known.",
"title": ""
},
{
"docid": "81d50714ba7a53d908f6b3e3030499c2",
"text": "Bit coin is widely regarded as the first broadly successful e-cash system. An oft-cited concern, though, is that mining Bit coins wastes computational resources. Indeed, Bit coin's underlying mining mechanism, which we call a scratch-off puzzle (SOP), involves continuously attempting to solve computational puzzles that have no intrinsic utility. We propose a modification to Bit coin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme Perm coin. Unlike Bit coin and its proposed alternatives, Perm coin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative scratch-off puzzle for Bit coin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. Given the competition among mining clients in Bit coin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bit coin. Using a model of rational economic agents we show that our modified SOP preserves the essential properties of the original Bit coin puzzle. We also provide parameterizations and calculations based on realistic hardware constraints to demonstrate the practicality of Perm coin as a whole.",
"title": ""
},
{
"docid": "b72faf101696a1c9175bb1117a072135",
"text": "The rapid deployment of smartphones as all-purpose mobile computing systems has led to a wide adoption of wireless communication systems such as Wi-Fi and Bluetooth in mobile scenarios. Both communication systems leak information to the surroundings during operation. This information has been used for tracking and crowd density estimations in literature. However, an estimation of pedestrian flows has not yet been evaluated with respect to a known ground truth and, thus, a reliable adoption in real world scenarios is rather difficult. With this paper, we fill in this gap. Using ground truth provided by the security check process at a major German airport, we discuss the quality and feasibility of pedestrian flow estimations for both WiFi and Bluetooth captures. We present and evaluate three approaches in order to improve the accuracy in comparison to a naive count of captured MAC addresses. Such counts only showed an impractical Pearson correlation of 0.53 for Bluetooth and 0.61 for Wi-Fi compared to ground truth. The presented extended approaches yield a superior correlation of 0.75 in best case. This indicates a strong correlation and an improvement of accuracy. Given these results, the presented approaches allow for a practical estimation of pedestrian flows.",
"title": ""
},
{
"docid": "6a84f902a18256f0d7101b0ec8767422",
"text": "A novel personalized approach has recently been presented to prevent credit card fraud. This new approach proposes to prevent fraud before initial use of a new card, even users without any real transaction data. This approach shows potential, nevertheless, there are some problems needed solving. A main issue is how to predict accurately with only few data, since it collects quasi-real transaction data via an online questionnaire system and thus respondents are commonly unwilling to spend too much time to reply questionnaires. This study employs both support vector machines (SVM) and artificial neural networks (ANN) to investigate the time-varying fraud problem. The performance of ANN is compared with that from SVM. Results show that SVM and ANN are comparable in training but ANN can have highest training accuracy. However, ANN seems to overfit training data and thus has worse performance of predicting the future data when data number is small",
"title": ""
},
{
"docid": "fba109e4627d4bb580d07368e3c00cc1",
"text": "-Wheeled-tracked vehicles are undoubtedly the most popular means of transportation. However, these vehicles are mainly suitable for relatively flat terrain. Legged vehicles, on the other hand, have the potential to handle wide variety of terrain. Robug IIs is a legged climbing robot designed to work in relatively unstructured and rough terrain. It has the capability of walking, climbing vertical surfaces and performing autonomous floor to wall transfer. The sensing technique used in Robug IIs is mainly tactile and ultrasonic sensing. A set of reflexive rules have been developed for the robot to react to the uncertainty of the working environment. The robot also has the intelligence to seek and verify its own foot-holds. It is envisaged that the main application of robot is for remote inspection and maintenance in hazardous environments. Keywords—Legged robot, climbing service robot, insect inspired robot, pneumatic control, fuzzy logic.",
"title": ""
},
{
"docid": "e911045eb1c6469fdaa38102901f104f",
"text": "Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. A network based on our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 — it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ∼10%. Code and models will be made publicly available.",
"title": ""
},
{
"docid": "33c89872c2a1e5b1b2417c58af616560",
"text": "We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. [21], reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.",
"title": ""
},
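For readers who want the iteration whose convergence rate the passage above analyses, here is a standard scaled-form ADMM for the lasso, where one objective term becomes strongly convex once the ridge-like system is formed. This is a generic textbook-style instance, not the paper's dynamical-systems analysis; the penalty rho, problem sizes and names are assumptions.

```python
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
    """Scaled-form ADMM for min 0.5||Ax - b||^2 + lam*||z||_1 subject to x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached factor for the x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))                                   # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)   # soft-threshold
        u = u + x - z                                                   # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(lasso_admm(A, b, lam=0.5), 2))
```

Sweeping rho and timing the iterations is exactly the kind of trial-and-error parameter selection that an analytical bound on the convergence rate is meant to replace.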
{
"docid": "3bc897662b39bcd59b7c7831fb1df091",
"text": "The proliferation of wearable devices has contributed to the emergence of mobile crowdsensing, which leverages the power of the crowd to collect and report data to a third party for large-scale sensing and collaborative learning. However, since the third party may not be honest, privacy poses a major concern. In this paper, we address this concern with a two-stage privacy-preserving scheme called RG-RP: the first stage is designed to mitigate maximum a posteriori (MAP) estimation attacks by perturbing each participant's data through a nonlinear function called repeated Gompertz (RG); while the second stage aims to maintain accuracy and reduce transmission energy by projecting high-dimensional data to a lower dimension, using a row-orthogonal random projection (RP) matrix. The proposed RG-RP scheme delivers better recovery resistance to MAP estimation attacks than most state-of-the-art techniques on both synthetic and real-world datasets. For collaborative learning, we proposed a novel LSTM-CNN model combining the merits of Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). Our experiments on two representative movement datasets captured by wearable sensors demonstrate that the proposed LSTM-CNN model outperforms standalone LSTM, CNN and Deep Belief Network. Together, RG+RP and LSTM-CNN provide a privacy-preserving collaborative learning framework that is both accurate and privacy-preserving.",
"title": ""
},
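The second stage of the scheme described above, projecting perturbed sensor windows onto a lower dimension with a row-orthogonal random matrix, can be sketched as follows. The abstract does not spell out the repeated-Gompertz function, so the Gompertz-shaped perturbation below is a stand-in assumption, as are all dimensions and constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def gompertz_perturb(x, a=1.0, b=2.0, c=3.0):
    """Stand-in nonlinear perturbation shaped like a Gompertz curve plus noise.
    The paper's actual 'repeated Gompertz' function is not given in the abstract."""
    return a * np.exp(-b * np.exp(-c * x)) + 0.05 * rng.normal(size=x.shape)

def row_orthogonal_projection(d_in, d_out):
    """Random projection matrix with orthonormal rows (d_out < d_in), via QR."""
    q, _ = np.linalg.qr(rng.normal(size=(d_in, d_out)))
    return q.T                       # shape (d_out, d_in)

windows = rng.random((100, 64))      # 100 sensor windows, 64 features each
P = row_orthogonal_projection(64, 16)
released = gompertz_perturb(windows) @ P.T   # perturb, then reduce to 16 dimensions
print(released.shape)                # (100, 16): what a participant would report
```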
{
"docid": "cff9a7f38ca6699b235c774232a56f54",
"text": "This paper presents a Miniature Aerial Vehicle (MAV) capable of handsoff autonomous operation within indoor environments. Our prototype is a Quadrotor weighing approximately 600g, with a diameter of 550mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7x6-m room.",
"title": ""
},
{
"docid": "8b949b03afbdb5e7e393e52b753426d8",
"text": "We present a novel approach to estimate the distance between a generic point in the Cartesian space and objects detected with a depth sensor. This information is crucial in many robotic applications, e.g., for collision avoidance, contact point identification, and augmented reality. The key idea is to perform all distance evaluations directly in the depth space. This allows distance estimation by considering also the frustum generated by the pixel on the depth image, which takes into account both the pixel size and the occluded points. Different techniques to aggregate distance data coming from multiple object points are proposed. We compare the Depth space approach with the commonly used Cartesian space or Configuration space approaches, showing that the presented method provides better results and faster execution times. An application to human-robot collision avoidance using a KUKA LWR IV robot and a Microsoft Kinect sensor illustrates the effectiveness of the approach.",
"title": ""
},
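For orientation, the sketch below shows the plain Cartesian-space computation that the depth-space method above is designed to improve on: back-project every depth pixel with pinhole intrinsics and take the minimum Euclidean distance to a point of interest. The Kinect-like intrinsics, the synthetic depth image and the absence of the paper's frustum and occlusion reasoning are all assumptions.

```python
import numpy as np

# Assumed pinhole intrinsics (Kinect-like); real values come from calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project a depth image (meters) into camera-frame 3-D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]               # drop invalid zero-depth pixels

def min_distance(point, depth):
    """Minimum distance from a camera-frame point to the observed surface."""
    pts = depth_to_points(depth)
    return float(np.min(np.linalg.norm(pts - point, axis=1)))

depth = np.full((480, 640), 2.0)            # a flat wall 2 m in front of the camera
print(min_distance(np.array([0.0, 0.0, 1.0]), depth))   # about 1.0 m
```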
{
"docid": "337a37fab4eb5ed603dac81697be58eb",
"text": "Hazard analysis was conducted to identify critical control points (CCPs) during cocoa processing and milk chocolate manufacture and applied into a hazard analysis and critical control point (HACCP) plan. During the process, the different biological, physical and chemical hazards identified at each processing stage in the hazard analysis worksheet were incorporated into the HACCP plan to assess the risks associated with the processes. Physical hazards such as metals, stones, fibres, plastics and papers; chemical hazards such as pesticide residues, mycotoxins and heavy metals; and microbiological hazards such as Staphyloccous aureus, coliforms, Salmonella, Aspergillus and Penicillium were identified. ISO 22000 analysis was conducted for the determination of some pre-requisite programmes (PrPs) during the chocolate processing and compared with the HACCP system. The ISO 22000 Analysis worksheet reduced the CCPs for both cocoa processing and chocolate manufacture due to the elimination of the pre-requisite programmes (PrPs). Monitoring systems were established for the CCPs identified and these included preventive measures, critical limits, corrective actions, assignment of responsibilities and verification procedures. The incorporation of PrPs in the ISO 22000 made the system simple, more manageable and effective since a smaller number of CCPs were obtained.",
"title": ""
}
] |
scidocsrr
|
4643bf2faad33226a0e2303ca45df60e
|
A Customized Attention-Based Long Short-Term Memory Network for Distant Supervised Relation Extraction
|
[
{
"docid": "d71f2693331ecef85af77c122ee47496",
"text": "Deep Learning is a new area of Machine Learning research, which mainly addresses the problem of time consuming, often incomplete feature engineering in machine learning. Recursive Neural Network (RNN) is a new deep learning architecture that has been highly successful in several Natural Language Processing tasks. We propose a new approach for relation classification, using an RNN, based on the shortest path between two entities in the dependency graph. Most previous works on RNN are based on constituency-based parsing because phrasal nodes in a parse tree can capture compositionality in a sentence. Compared with constituency-based parse trees, dependency graphs can represent the relation more compactly. This is particularly important in sentences with distant entities, where the parse tree spans words that are not relevant to the relation. In such cases RNN cannot be trained effectively in a timely manner. On the other hand, dependency graphs lack phrasal nodes that complicates the application of RNN. In order to tackle this problem, we employ dependency constituent units called chains. Further, we devise two methods to incorporate chains into an RNN. The first model uses a fixed tree structure based on a heuristic, while the second one predicts the structure by means of a recursive autoencoder. Chain based RNN provides a smaller network which performs considerably faster, and achieves better classification results. Experiments on SemEval 2010 relation classification task and SemEval 2013 drug drug interaction task demonstrate the effectiveness of our approach compared with the state-of-the-art models.",
"title": ""
},
{
"docid": "517916f4c62bc7b5766efa537359349d",
"text": "Document-level sentiment classification aims to predict user’s overall sentiment in a document about a product. However, most of existing methods only focus on local text information and ignore the global user preference and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider wordlevel preference rather than semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is considered via attentions over different semantic levels due to its ability of capturing crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-theart methods. The source code of this paper can be obtained from https://github. com/thunlp/NSC.",
"title": ""
},
{
"docid": "9c44aba7a9802f1fe95fbeb712c23759",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
}
] |
[
{
"docid": "17d0975d7bccf98c4ff3792d5687b3c1",
"text": "The research on two-wheel inverted pendulum or commonly call balancing robot has gained momentum over the last decade in a number of robotic laboratories around the world This paper deals with the modeling of 2-wheels Inverted Pendulum and the design of Full Order Sliding Mode Control (FOSMC) for the system. The mathematical model of 2-wheels inverted pendulum system that is highly nonlinear is derived. The final model is then represented in state-space form and the system suffers from mismatched condition. A robust controller based on Sliding Mode Control is proposed to perform the robust stabilization and disturbance rejection of the system. A computer simulation study is carried out to access the performance of the proposed control law.",
"title": ""
},
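A conventional first-order sliding-mode controller on a crude linearized pendulum stand-in gives a feel for the kind of control law the abstract refers to; it is not the paper's full-order SMC or its two-wheel model, and the plant coefficients, gains and boundary layer below are assumed for illustration.

```python
import numpy as np

# Linearized stand-in plant: theta'' = a*theta + b*u (coefficients assumed).
a_coef, b_coef = 15.0, 3.0
dt, T = 0.001, 3.0

def smc_input(theta, theta_dot, lam=8.0, k=40.0, phi=0.05):
    """Sliding surface s = theta_dot + lam*theta; the input drives s to zero."""
    s = theta_dot + lam * theta
    sat = np.clip(s / phi, -1.0, 1.0)        # boundary layer to limit chattering
    # Equivalent control cancels the model term; the switching term enforces s -> 0.
    return (-a_coef * theta - lam * theta_dot - k * sat) / b_coef

theta, theta_dot = 0.3, 0.0                  # start with a 0.3 rad tilt
for _ in range(int(T / dt)):
    u = smc_input(theta, theta_dot)
    theta_ddot = a_coef * theta + b_coef * u
    theta_dot += theta_ddot * dt
    theta += theta_dot * dt

print("final tilt (rad):", round(theta, 4))  # close to zero
```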
{
"docid": "8780b620d228498447c4f1a939fa5486",
"text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.",
"title": ""
},
{
"docid": "a36032c72d485d89e7eb5de784962a65",
"text": "OBJECTIVE\nThe past two decades have seen dramatic progress in our ability to model brain signals recorded by electroencephalography, functional near-infrared spectroscopy, etc., and to derive real-time estimates of user cognitive state, response, or intent for a variety of purposes: to restore communication by the severely disabled, to effect brain-actuated control and, more recently, to augment human-computer interaction. Continuing these advances, largely achieved through increases in computational power and methods, requires software tools to streamline the creation, testing, evaluation and deployment of new data analysis methods.\n\n\nAPPROACH\nHere we present BCILAB, an open-source MATLAB-based toolbox built to address the need for the development and testing of brain-computer interface (BCI) methods by providing an organized collection of over 100 pre-implemented methods and method variants, an easily extensible framework for the rapid prototyping of new methods, and a highly automated framework for systematic testing and evaluation of new implementations.\n\n\nMAIN RESULTS\nTo validate and illustrate the use of the framework, we present two sample analyses of publicly available data sets from recent BCI competitions and from a rapid serial visual presentation task. We demonstrate the straightforward use of BCILAB to obtain results compatible with the current BCI literature.\n\n\nSIGNIFICANCE\nThe aim of the BCILAB toolbox is to provide the BCI community a powerful toolkit for methods research and evaluation, thereby helping to accelerate the pace of innovation in the field, while complementing the existing spectrum of tools for real-time BCI experimentation, deployment and use.",
"title": ""
},
{
"docid": "1f3985e9c8bbad7279ee7ebfda74a8a8",
"text": "Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10.",
"title": ""
},
{
"docid": "327f9ca4c80bb9b6efb4c386d9155aaa",
"text": "PV cell model is necessary both for software and hardware simulators in analyzing and testing the performance of PV generation systems. To get the characteristic non-linear I-V Curve of a PV cell, 5 characteristic parameters Rs, Rsh, I0 and vt should be extracted from the three remarkable operation points of the manufacture's datasheets, Isc, Voc, IMPP and VMPP. This paper analyzes conventional three representative PV modeling algorithms by comparison, and proposes a novel PV modeling algorithm which is fast, accurate and applicable to all kinds of PV cells such as Cr-Si type and thin-film type PV cells. Proposed theory is verified by simulations for various PV cell types such as Cr-Si type and thin-film type PV cells.",
"title": ""
},
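The non-linear I-V relation mentioned above is usually written with the single-diode equation, which the sketch below evaluates by fixed-point iteration. The parameter values are illustrative, not extracted from any datasheet, and this is not the extraction algorithm the paper proposes.

```python
import numpy as np

# Illustrative single-diode parameters (not taken from a real datasheet).
IPH = 5.0                     # photo-generated current [A]
I0 = 1e-9                     # diode saturation current [A]
RS = 0.01                     # series resistance [ohm]
RSH = 200.0                   # shunt resistance [ohm]
VT = 1.3 * 0.02585 * 60       # ideality factor * kT/q * cells in series [V]

def cell_current(v, n_iter=200):
    """Solve I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh by fixed point."""
    i = IPH
    for _ in range(n_iter):
        i = IPH - I0 * (np.exp((v + i * RS) / VT) - 1.0) - (v + i * RS) / RSH
    return i

voltages = np.linspace(0.0, 45.0, 200)
currents = np.array([cell_current(v) for v in voltages])
powers = voltages * currents
print("Isc ~", round(currents[0], 2), "A   max power ~", round(powers.max(), 1), "W")
```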
{
"docid": "45986bb7bb041f50fac577e562347b61",
"text": "In this paper, we study the human locomotor adaptation to the action of a powered exoskeleton providing assistive torque at the user's hip during walking. To this end, we propose a controller that provides the user's hip with a fraction of the nominal torque profile, adapted to the specific gait features of the user from Winter's reference data . The assistive controller has been implemented on the ALEX II exoskeleton and tested on ten healthy subjects. Experimental results show that when assisted by the exoskeleton, users can reduce the muscle effort compared to free walking. Despite providing assistance only to the hip joint, both hip and ankle muscles significantly reduced their activation, indicating a clear tradeoff between hip and ankle strategy to propel walking.",
"title": ""
},
{
"docid": "fee10f3826337c0f901030be7fd32d28",
"text": "A temporal graph is a graph in which connections between vert ices are active at specific times, and such temporal information l eads to completely new patterns and knowledge that are not present i n a non-temporal graph. In this paper, we study traversal probl ems in a temporal graph. Graph traversals, such as DFS and BFS, are ba sic operations for processing and studying a graph. While both D FS and BFS are well-known simple concepts, it is non-trivial to adopt the same notions from a non-temporal graph to a temporal grap h. We analyze the difficulties of defining temporal graph traver sals and propose new definitions of DFS and BFS for a temporal graph . We investigate the properties of temporal DFS and BFS, and pr opose efficient algorithms with optimal complexity. In parti cular, we also study important applications of temporal DFS and BFS . We verify the efficiency and importance of our graph traversa l algorithms in real world temporal graphs.",
"title": ""
},
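One natural way to pin down a temporal traversal, in the spirit of the abstract above, is an earliest-arrival search that only follows edges whose timestamps do not go backwards in time. The definition, the instantaneous-traversal assumption and the toy edge list below are illustrative choices, not necessarily the definitions adopted in the paper.

```python
import heapq
from collections import defaultdict

def temporal_earliest_arrival(edges, source, t_start=0):
    """Earliest-arrival traversal of a temporal graph.

    `edges` is an iterable of (u, v, t): edge u -> v active at time t.
    An edge is usable only if t >= the current arrival time at u, so every
    path respects time ordering (one possible notion of temporal BFS)."""
    out = defaultdict(list)
    for u, v, t in edges:
        out[u].append((t, v))
    arrival = {source: t_start}
    heap = [(t_start, source)]
    while heap:
        t_u, u = heapq.heappop(heap)
        if t_u > arrival.get(u, float("inf")):
            continue                       # stale heap entry
        for t, v in out[u]:
            if t >= t_u and t < arrival.get(v, float("inf")):
                arrival[v] = t
                heapq.heappush(heap, (t, v))
    return arrival

edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 3), ("b", "d", 9)]
print(temporal_earliest_arrival(edges, "a"))   # {'a': 0, 'b': 1, 'c': 2, 'd': 3}
```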
{
"docid": "a74880697c58a2c4cb84ef1626344316",
"text": "This article provides an overview of contemporary and forward looking inter-cell interference coordination techniques for 4G OFDM systems with a specific emphasis on implementations for LTE. Viable approaches include the use of power control, opportunistic spectrum access, intra and inter-base station interference cancellation, adaptive fractional frequency reuse, spatial antenna techniques such as MIMO and SDMA, and adaptive beamforming, as well as recent innovations in decoding algorithms. The applicability, complexity, and performance gains possible with each of these techniques based on simulations and empirical measurements will be highlighted for specific cellular topologies relevant to LTE macro, pico, and femto deployments for both standalone and overlay networks.",
"title": ""
},
{
"docid": "73adcdf18b86ab3598731d75ac655f2c",
"text": "Many individuals exhibit unconscious body movements called mannerisms while speaking. These repeated changes often distract the audience when not relevant to the verbal context. We present an intelligent interface that can automatically extract human gestures using Microsoft Kinect to make speakers aware of their mannerisms. We use a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract the patterns of body movements. These patterns are displayed in an interface with subtle question and answer-based feedback scheme that draws attention to the speaker's body language. Our formal evaluation with 27 participants shows that the users became aware of their body language after using the system. In addition, when independent observers annotated the accuracy of the algorithm for every extracted pattern, we find that the patterns extracted by our algorithm is significantly (p<0.001) more accurate than just random selection. This represents a strong evidence that the algorithm is able to extract human-interpretable body movement patterns. An interactive demo of AutoManner is available at http://tinyurl.com/AutoManner.",
"title": ""
},
{
"docid": "29df7f7e7739bd78f0d72986d43e3adf",
"text": "2009;53;992-1002; originally published online Feb 19, 2009; J. Am. Coll. Cardiol. and Leonard S. Gettes E. William Hancock, Barbara J. Deal, David M. Mirvis, Peter Okin, Paul Kligfield, International Society for Computerized Electrocardiology Endorsed by the Cardiology Foundation; and the Heart Rhythm Society Committee, Council on Clinical Cardiology; the American College of the American Heart Association Electrocardiography and Arrhythmias Associated With Cardiac Chamber Hypertrophy A Scientific Statement From Interpretation of the Electrocardiogram: Part V: Electrocardiogram Changes AHA/ACCF/HRS Recommendations for the Standardization and This information is current as of August 2, 2011 http://content.onlinejacc.org/cgi/content/full/53/11/992 located on the World Wide Web at: The online version of this article, along with updated information and services, is",
"title": ""
},
{
"docid": "450842d87097d457c94ec6f5729b547d",
"text": "Web crawlers are program, designed to fetch web pages for information retrieval system. Crawlers facilitate this process by following hyperlinks in web pages to automatically download new or update existing web pages in the repository. A web crawler interacts with millions of hosts, fetches millions of page per second and updates these pages into a database, creating a need for maintaining I/O performance, network resources within OS limit, which are essential in order to achieve high performance at a reasonable cost. This paper aims to showcase efficient techniques to develop a scalable web crawling system, addressing challenges which deals with issues related to the structure of the web, distributed computing, job scheduling, spider traps, canonicalizing URLs and inconsistent data formats on the web. A brief discussion on new web crawler architecture is done in this paper.",
"title": ""
},
{
"docid": "d9aac3e00316f9970d04eb5c46d16b4c",
"text": "Cannabis (Cannabis sativa, or hemp) and its constituents-in particular the cannabinoids-have been the focus of extensive chemical and biological research for almost half a century since the discovery of the chemical structure of its major active constituent, Δ9-tetrahydrocannabinol (Δ9-THC). The plant's behavioral and psychotropic effects are attributed to its content of this class of compounds, the cannabinoids, primarily Δ9-THC, which is produced mainly in the leaves and flower buds of the plant. Besides Δ9-THC, there are also non-psychoactive cannabinoids with several medicinal functions, such as cannabidiol (CBD), cannabichromene (CBC), and cannabigerol (CBG), along with other non-cannabinoid constituents belonging to diverse classes of natural products. Today, more than 560 constituents have been identified in cannabis. The recent discoveries of the medicinal properties of cannabis and the cannabinoids in addition to their potential applications in the treatment of a number of serious illnesses, such as glaucoma, depression, neuralgia, multiple sclerosis, Alzheimer's, and alleviation of symptoms of HIV/AIDS and cancer, have given momentum to the quest for further understanding the chemistry, biology, and medicinal properties of this plant.This contribution presents an overview of the botany, cultivation aspects, and the phytochemistry of cannabis and its chemical constituents. Particular emphasis is placed on the newly-identified/isolated compounds. In addition, techniques for isolation of cannabis constituents and analytical methods used for qualitative and quantitative analysis of cannabis and its products are also reviewed.",
"title": ""
},
{
"docid": "4621856b479672433f9f9dff86d4f4da",
"text": "Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.",
"title": ""
},
{
"docid": "9d0ed62f210d0e09db0cc6735699f5b3",
"text": "The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein-ligand docking, pharmacophore modeling and QSAR techniques are reviewed.",
"title": ""
},
{
"docid": "16a745f989f13cd16bec2b78d30806e1",
"text": "In this paper, a Smart Parking System (SPS) based on the integration of Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) and IEEE 802.15.4 Wireless Sensor Network (WSN) technologies is presented. The system is able to collect information about the occupancy state of parking spaces, and to direct drivers to the nearest vacant parking spot by using a customized software application. Such application also leverages an NFC-based e-wallet system to allow users to pay for the parking fee. Furthermore, a software application based on RESTful Java and Google Cloud Messaging (GCM) technologies has been installed on a Central Server in order to manage alert events (e.g. improper use of a reserved space or expiration of the purchased time). In such a case, it promptly informs the traffic cops through an Android mobile app, which has been designed ad hoc for the considered scenario. A proof-of-concept has demonstrated that the proposed solution can meet the real requirements of a SPS.",
"title": ""
},
{
"docid": "3ae9d56474b91243c8d4244db9a25809",
"text": "Approximately one quarter of the food supplied for human consumption is wasted across the food supply chain. In the high income countries, the food waste generated at the household level represents about half of the total food waste, making this level one of the biggest contributors to food waste. Yet, there is still little evidence regarding the determinants of consumers' food waste behaviour. The present study examines the effect of psycho-social factors, food-related routines, household perceived capabilities and socio-demographic characteristics on self-reported food waste. Survey data gathered among 1062 Danish respondents measured consumers' intentions not to waste food, planning, shopping and reuse of leftovers routines, perceived capability to deal with household food-related activities, injunctive and moral norms, attitudes towards food waste, and perceived behavioural control. Results show that perceived behavioural control and routines related to shopping and reuse of leftovers are the main drivers of food waste, while planning routines contribute indirectly. In turn, the routines are related to consumers' perceived capabilities to deal with household related activities. With regard to intentional processes, injunctive norms and attitudes towards food waste have an impact while moral norms and perceived behavioural control make no significant contribution. Implications of the study for initiatives aimed at changing consumers' food waste behaviour are discussed.",
"title": ""
},
{
"docid": "ffafffd33a69dbf4f04f6f7b67b3b56b",
"text": "Significant advances have been made in Natural Language Processing (NLP) mod1 elling since the beginning of 2018. The new approaches allow for accurate results, 2 even when there is little labelled data, because these NLP models can benefit from 3 training on both task-agnostic and task-specific unlabelled data. However, these 4 advantages come with significant size and computational costs. 5 This workshop paper outlines how our proposed convolutional student architec6 ture, having been trained by a distillation process from a large-scale model, can 7 achieve 300× inference speedup and 39× reduction in parameter count. In some 8 cases, the student model performance surpasses its teacher on the studied tasks. 9",
"title": ""
},
{
"docid": "637d700bcb162dff3e6342cab1bc0f85",
"text": "This paper introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices-one consisting of the representative features and the other containing the weights of representative features at each pixel used for linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. The experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.",
"title": ""
},
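The segmentation approach in the preceding abstract reduces to factoring a nonnegative M x N feature matrix into representative features and per-pixel weights. The sketch below shows one plausible way to carry out that factorization step with scikit-learn's NMF and to label each pixel by its dominant representative feature; the feature extraction itself (local spectral histograms) is only stubbed out with random data, and the parameter choices are assumptions rather than the authors' settings.

```python
# Sketch: factor an M x N feature matrix into representative features and
# per-pixel weights, then label pixels by their dominant component.
# Assumes scikit-learn is available; the "features" here are random stand-ins
# for local spectral histograms.
import numpy as np
from sklearn.decomposition import NMF

M, H, W = 24, 64, 64           # feature dimension and image size
N = H * W                      # number of pixels
rng = np.random.default_rng(0)
features = rng.random((M, N))  # placeholder for local spectral histograms

n_regions = 4                  # assumed number of segments
model = NMF(n_components=n_regions, init="nndsvda", max_iter=500, random_state=0)
# W_rep (M x n_regions): representative features; H_weights (n_regions x N): weights.
W_rep = model.fit_transform(features)
H_weights = model.components_

# Each pixel is assigned to the representative feature with the largest weight.
labels = H_weights.argmax(axis=0).reshape(H, W)
print("segment sizes:", np.bincount(labels.ravel()))
```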
{
"docid": "53d48fc9cbc1c1371a7c2c22852fb880",
"text": "Advances in medicine have changed how patients experience the end of life. With longer life spans, there has also been an increase in years lived with disability. The clustering of illnesses in the last years of life is particularly pronounced in patients with cardiovascular disease. At the end of life, patients with cardiovascular disease are more symptomatic, less likely to die at home, and less likely to receive high-quality palliative care. Social determinants have created widening disparities in end-of-life care. The increasing complexity and duration of care have resulted in an epidemic of caregiver burden. Modern medical care has also resulted in new ethical challenges, for example, those related to deactivation of cardiac devices, such as pacemakers, defibrillators, and mechanical circulatory support. Recommendations to improve end-of-life care for patients with cardiovascular disease include optimizing metrics to assess quality, ameliorating disparities, enhancing education and research in palliative care, overcoming disparities, and innovating palliative care delivery and reimbursement.",
"title": ""
},
{
"docid": "1b2fcf85bc73f3249d8685e0063aaa3a",
"text": "In our present society, the cinema has become one of the major forms of entertainment providing unlimited contexts of emotion elicitation for the emotional needs of human beings. Since emotions are universal and shape all aspects of our interpersonal and intellectual experience, they have proved to be a highly multidisciplinary research field, ranging from psychology, sociology, neuroscience, etc., to computer science. However, affective multimedia content analysis work from the computer science community benefits but little from the progress achieved in other research fields. In this paper, a multidisciplinary state-of-the-art for affective movie content analysis is given, in order to promote and encourage exchanges between researchers from a very wide range of fields. In contrast to other state-of-the-art papers on affective video content analysis, this work confronts the ideas and models of psychology, sociology, neuroscience, and computer science. The concepts of aesthetic emotions and emotion induction, as well as the different representations of emotions are introduced, based on psychological and sociological theories. Previous global and continuous affective video content analysis work, including video emotion recognition and violence detection, are also presented in order to point out the limitations of affective video content analysis work.",
"title": ""
}
] |
scidocsrr
|
05f474b848a81bd2b9ea512dab8fb179
|
What's Wrong with my Solar Panels: a Data-Driven Approach
|
[
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] |
[
{
"docid": "937dec4b11b3d039c81ca258283f82e8",
"text": "Nonnegative matrix factorization (NMF) provides a lower rank approximation of a matrix by a product of two nonnegative factors. NMF has been shown to produce clustering results that are often superior to those by other methods such as K-means. In this paper, we provide further interpretation of NMF as a clustering method and study an extended formulation for graph clustering called Symmetric NMF (SymNMF). In contrast to NMF that takes a data matrix as an input, SymNMF takes a nonnegative similarity matrix as an input, and a symmetric nonnegative lower rank approximation is computed. We show that SymNMF is related to spectral clustering, justify SymNMF as a general graph clustering method, and discuss the strengths and shortcomings of SymNMF and spectral clustering. We propose two optimization algorithms for SymNMF and discuss their convergence properties and computational efficiencies. Our experiments on document clustering, image clustering, and image segmentation support SymNMF as a graph clustering method that captures latent linear and nonlinear relationships in the data.",
"title": ""
},
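As a rough illustration of the symmetric factorization the preceding abstract describes, the following Python sketch minimizes ||A - H Hᵀ||² over nonnegative H by projected gradient descent and then reads clusters off the largest entry in each row of H. This is a simple stand-in, not the authors' optimization algorithms; the step size, iteration count, and toy similarity matrix are arbitrary assumptions.

```python
# Sketch of Symmetric NMF for graph clustering: approximate a nonnegative
# similarity matrix A by H @ H.T with H >= 0, then cluster by argmax over rows.
# Plain projected gradient descent; not the paper's (more refined) algorithms.
import numpy as np

def symnmf(A, k, lr=1e-3, iters=5000, seed=0):
    """Return H (n x k) approximately minimizing ||A - H H^T||_F^2 with H >= 0."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    H = rng.random((n, k)) * np.sqrt(A.mean() / k)
    for _ in range(iters):
        grad = 4.0 * (H @ H.T - A) @ H        # gradient of the squared error
        H = np.maximum(H - lr * grad, 0.0)    # project back onto H >= 0
    return H

if __name__ == "__main__":
    # Toy similarity matrix with two obvious blocks (clusters).
    A = np.array([[1.0, 0.9, 0.1, 0.0],
                  [0.9, 1.0, 0.0, 0.1],
                  [0.1, 0.0, 1.0, 0.8],
                  [0.0, 0.1, 0.8, 1.0]])
    H = symnmf(A, k=2)
    print("cluster labels:", H.argmax(axis=1))
```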
{
"docid": "c979c978f1b8c82c2b0b8235464e2bf1",
"text": "Cloud Computing is one of the biggest buzzwords in the computer world these days. It allows resource sharing that includes software, platform and infrastructure by means of virtualization. Virtualization is the core technology behind cloud resource sharing. This environment strives to be dynamic, reliable, and customizable with a guaranteed quality of service. Security is as much of an issue in the cloud as it is anywhere else. Different people share different point of view on cloud computing. Some believe it is unsafe to use cloud. Cloud vendors go out of their way to ensure security. This paper investigates few major security issues with cloud computing and the existing counter measures to those security challenges in the world of cloud computing..",
"title": ""
},
{
"docid": "67bc6aa954413241827114fd20686355",
"text": "Hardware-based Trusted Execution Environments (TEEs) are widely deployed in mobile devices. Yet their use has been limited primarily to applications developed by the device vendors. Recent standardization of TEE interfaces by GlobalPlatform (GP) promises to partially address this problem by enabling GP-compliant trusted applications to run on TEEs from different vendors. Nevertheless ordinary developers wishing to develop trusted applications face significant challenges. Access to hardware TEE interfaces are difficult to obtain without support from vendors. Tools and software needed to develop and debug trusted applications may be expensive or non-existent. In this paper, we describe Open-TEE, a virtual, hardware-independent TEE implemented in software. Open-TEE conforms to GP specifications. It allows developers to develop and debug trusted applications with the same tools they use for developing software in general. Once a trusted application is fully debugged, it can be compiled for any actual hardware TEE. Through performance measurements and a user study we demonstrate that Open-TEE is efficient and easy to use. We have made Open-TEE freely available as open source.",
"title": ""
},
{
"docid": "209903813ce1e8d630bcde29f5666906",
"text": "Online reviews provide consumers with valuable information that guides their decisions on a variety of fronts: from entertainment and shopping to medical services. Although the proliferation of online reviews gives insights about different aspects of a product, it can also prove a serious drawback: consumers cannot and will not read thousands of reviews before making a purchase decision. This need to extract useful information from large review corpora has spawned considerable prior work, but so far all have drawbacks. Review summarization (generating statistical descriptions of review sets) sacrifices the immediacy and narrative structure of reviews. Likewise, review selection (identifying a subset of 'helpful' or 'important' reviews) leads to redundant or non-representative summaries. In this paper, we fill the gap between existing review-summarization and review-selection methods by selecting a small subset of reviews that together preserve the statistical properties of the entire review corpus. We formalize this task as a combinatorial optimization problem and show that it NP-hard both tosolve and approximate. We also design effective algorithms that prove to work well in practice. Our experiments with real review corpora on different types of products demonstrate the utility of our methods, and our user studies indicate that our methods provide a better summary than prior approaches.",
"title": ""
},
{
"docid": "c2b1dd2d2dd1835ed77cf6d43044eed8",
"text": "The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be used to learn features that output a whole vector of instantiation parameters and we argue that this is a much more promising way of dealing with variations in position, orientation, scale and lighting than the methods currently employed in the neural networks community. It is also more promising than the handengineered features currently used in computer vision because it provides an efficient way of adapting the features to the domain.",
"title": ""
},
{
"docid": "7002ccec7f0959ec6faf81f924aa23e5",
"text": "Relighting of human images has various applications in image synthesis. For relighting, we must infer albedo, shape, and illumination from a human portrait. Previous techniques rely on human faces for this inference, based on spherical harmonics (SH) lighting. However, because they often ignore light occlusion, inferred shapes are biased and relit images are unnaturally bright particularly at hollowed regions such as armpits, crotches, or garment wrinkles. This paper introduces the first attempt to infer light occlusion in the SH formulation directly. Based on supervised learning using convolutional neural networks (CNNs), we infer not only an albedo map, illumination but also a light transport map that encodes occlusion as nine SH coefficients per pixel. The main difficulty in this inference is the lack of training datasets compared to unlimited variations of human portraits. Surprisingly, geometric information including occlusion can be inferred plausibly even with a small dataset of synthesized human figures, by carefully preparing the dataset so that the CNNs can exploit the data coherency. Our method accomplishes more realistic relighting than the occlusion-ignored formulation.",
"title": ""
},
{
"docid": "511991822f427c3f62a4c091594e89e3",
"text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multiagent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although, the outcome of this project cannot yet be considered sufficient for moving the simulation into reallife, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.",
"title": ""
},
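Since the preceding abstract compares tabular Q-learning with deep Q-learning only in prose, the short sketch below shows the standard tabular Q-learning update it relies on, applied to a toy one-dimensional "warehouse corridor". The environment, reward values, and hyperparameters are invented for illustration and are not taken from the project.

```python
# Tabular Q-learning sketch on a toy "warehouse corridor": the agent starts at
# cell 0 and is rewarded for reaching the last cell. Illustrative only.
import numpy as np

n_states, n_actions = 6, 2              # actions: 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Deterministic toy dynamics: reward 1 only on reaching the last cell."""
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1

def choose_action(state):
    """Epsilon-greedy with random tie-breaking among equally good actions."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    best = np.flatnonzero(Q[state] == Q[state].max())
    return int(rng.choice(best))

for episode in range(300):
    state, done, steps = 0, False, 0
    while not done and steps < 100:
        action = choose_action(state)
        nxt, reward, done = step(state, action)
        # Core Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        target = reward + gamma * (0.0 if done else Q[nxt].max())
        Q[state, action] += alpha * (target - Q[state, action])
        state, steps = nxt, steps + 1

print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))
```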
{
"docid": "51ea8936c266077b1522d1d953d356ec",
"text": "Speech data typically contains task irrelevant information lying within features. Specifically, phonetic information, speaker characteristic information, emotional information and noise are always mixed together and tend to impair one another for certain task. We propose a new type of auto-encoder for feature learning called contrastive auto-encoder. Unlike other variants of auto-encoders, contrastive auto-encoder is able to leverage class labels in constructing its representation layer. We achieve this by modeling two autoencoders together and making their differences contribute to the total loss function. The transformation built with contrastive auto-encoder can be seen as a task-specific and invariant feature learner. Our experiments on TIMIT clearly show the superiority of the feature extracted from contrastive auto-encoder over original acoustic feature, feature extracted from deep auto-encoder, and feature extracted from a model that contrastive auto-encoder originates from.",
"title": ""
},
{
"docid": "db26d71ec62388e5367eb0f2bb45ad40",
"text": "The linear programming (LP) is one of the most popular necessary optimization tool used for data analytics as well as in various scientific fields. However, the current state-of-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution to a BP converges to that of a corresponding LP formulation. Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions in which BP can solve LP [1,2]. Although there has been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop a practical BP-based parallel algorithms for solving generic LPs, and it shows 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3] and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimizations in various fields, the proposed method has great potentials applicable to various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performances on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our DISTRIBUTION A. Approved for public release: distribution unlimited. conditions not only cover various prior studies including maximum weight matching, mincost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short in slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work appeared in UAI (Uncertainty in Artificial Intelligence) and IEEE Conference in Big Data [1,3] and follow up work [2,4] under submission to major journals. 
Experiment: We first establish theoretical conditions when Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide-applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions that BP converges to the solution of LP. Our theoretical result unify almost all prior result about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithm for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic, thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state of the art algorithms for several combinatorial optimization problems. -------------------------------------------------------Study 1 -------------------------------------------------------We first introduce the background for our contributions. A joint distribution of � (binary) variables � = [��] ∈ {0,1}� is called graphical model (GM) if it factorizes as follows: for � = [��] ∈ {0,1}�, where ψψ� ,�� are some non-negative functions so called factors; � is a collection of subsets (each αα� is a subset of {1,⋯ ,�} with |��| ≥ 2; �� is the projection of � onto dimensions included in αα. Assignment �∗ is called maximum-a-posteriori (MAP) assignment if �∗maximizes the probability. The following figure depicts the graphical relation between factors � and variables �. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 1: Factor graph for the graphical model with factors αα1 = {1,3},�2 = {1,2,4},�3 = {2,3,4} Now we introduce the algorithm, (max-product) BP, for approximating MAP assignment in a graphical model. BP is an iterative procedure; at each iteration �, there are four messages between each variable �� and every associated αα ∈ ��, where ��: = {� ∈ �:� ∈ �}. Then, messages are updated as follows: Finally, given messages, BP marginal beliefs are computed as follows: Then, BP outputs the approximated MAP assignment ��� = [��] as Now, we are ready to introduce the main result of Study 1. Consider the following GM: for � = [��] ∈ {0,1}� and � = [��] ∈ ��, where the factor function ψψαα for αα ∈ � is defined as for some matrices ��,�� and vectors ��,��. Consider the Linear Programming (LP) corresponding the above GM: One can easily observe that the MAP assignments for GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution �∗ ∈ {0,1}�. The following theorem is our main result of Study 1 which provide sufficient conditions so that BP can indeed find the LP solution DISTRIBUTION A. Approved for public release: distribution unlimited. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. 
-------------------------------------------------------Study 2 -------------------------------------------------------Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world data-sets. Our evaluation shows that the framework shows higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. However (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might be not correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 2: Overview of our generic BP-based framework To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph � = (�,�) and edge weights � = [��] ∈ �|�|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming): where δδ(�) is the set of edges incident to vertex � ∈ �. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issue of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics to the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. function of (3). After th",
"title": ""
},
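To make the two-phase idea in the preceding report concrete, here is a small Python sketch of one plausible instantiation for maximum weight matching: run the simplified scalar max-product messages for matching (in the style used in the BP-for-matching literature) for a fixed number of iterations, then feed the resulting edge beliefs, instead of the raw weights, to a greedy heuristic that always returns a feasible matching. The message schedule, damping, iteration count, and the use of raw beliefs rather than their logarithm are assumptions for illustration and not the report's exact algorithm.

```python
# Sketch: two-phase BP-based solver for maximum weight matching.
# Phase 1: fixed number of damped max-product message updates (no convergence
#          check). Phase 2: greedy matching on BP beliefs, which is always
#          feasible. Illustrative only; not the report's code.
from collections import defaultdict

def bp_matching(edges, n_iters=50, damping=0.5):
    """edges: dict {(u, v): weight} with u < v. Returns a feasible matching."""
    neighbors = defaultdict(list)
    for (u, v) in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)

    def weight(a, b):
        return edges[(a, b)] if (a, b) in edges else edges[(b, a)]

    # m[(i, j)]: node i's best "outside option" value, excluding neighbor j.
    m = {(i, j): 0.0 for (u, v) in edges for (i, j) in ((u, v), (v, u))}

    # Phase 1: synchronous, damped message updates.
    for _ in range(n_iters):
        new_m = {}
        for (i, j) in m:
            best = 0.0  # staying unmatched is always an option
            for k in neighbors[i]:
                if k != j:
                    best = max(best, weight(i, k) - m[(k, i)])
            new_m[(i, j)] = damping * m[(i, j)] + (1 - damping) * best
        m = new_m

    # Edge beliefs: a positive belief suggests the edge belongs to the matching.
    beliefs = {e: w - m[(e[0], e[1])] - m[(e[1], e[0])] for e, w in edges.items()}

    # Phase 2: greedy post-processing on beliefs guarantees feasibility.
    matched, matching = set(), []
    for (u, v) in sorted(beliefs, key=beliefs.get, reverse=True):
        if beliefs[(u, v)] > 0 and u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

if __name__ == "__main__":
    toy = {(0, 1): 3.0, (1, 2): 2.0, (2, 3): 3.0, (0, 3): 1.0}
    print(bp_matching(toy))  # a maximum matching: {(0, 1), (2, 3)}
```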
{
"docid": "48aa68862748ab502f3942300b4d8e1e",
"text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.",
"title": ""
},
{
"docid": "58cfc1f2f7c56794cdf0d81133253c00",
"text": "Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred. In addition to extract answers, previous works usually predict an additional “no-answer” probability to detect unanswerable cases. However, they fail to validate the answerability of the question by verifying the legitimacy of the predicted answer. To address this problem, we propose a novel read-then-verify system, which not only utilizes a neural reader to extract candidate answers and produce noanswer probabilities, but also leverages an answer verifier to decide whether the predicted answer is entailed by the input snippets. Moreover, we introduce two auxiliary losses to help the reader better handle answer extraction as well as noanswer detection, and investigate three different architectures for the answer verifier. Our experiments on the SQuAD 2.0 dataset show that our system obtains a score of 74.2 F1 on test set, achieving state-of-the-art results at the time of submission (Aug. 28th, 2018).",
"title": ""
},
{
"docid": "d5b20e250e28cae54a7f3c868f342fc5",
"text": "Context: Reusing software by means of copy and paste is a frequent activity in software development. The duplicated code is known as a software clone and the activity is known as code cloning. Software clones may lead to bug propagation and serious maintenance problems. Objective: This study reports an extensive systematic literature review of software clones in general and software clone detection in particular. Method: We used the standard systematic literature review method based on a comprehensive set of 213 articles from a total of 2039 articles published in 11 leading journals and 37 premier conferences and",
"title": ""
},
{
"docid": "32f1361440bd78021aef847a3ffe1c3f",
"text": "Data analysis and presentation, together with interpretation of the results and report writing, form the last step in the water quality assessment process (see Figure 2.2). It is this phase that shows how successful the monitoring activities have been in attaining the objectives of the assessment. It is also the step that provides the information needed for decision making, such as choosing the most appropriate solution to a water quality problem, assessing the state of the environment or refining the water quality assessment process itself. Although computers now help the process of data analysis and presentation considerably, these activities are still very labour intensive. In addition, they require a working knowledge of all the preceding steps of the water quality assessment (see Figure 2.2), as well as a good understanding of statistics as it applies to the science of water quality assessment. This is perhaps one of the reasons why data analysis and interpretation do not always receive proper attention when water quality studies are planned and implemented. Although the need to integrate this activity with all the other activities of the assessment process seems quite obvious, achievement of this is often difficult. The \" data rich, information poor \" syndrome is common in many agencies, both in developed and developing countries. This chapter gives some guidelines and techniques for water quality data analysis and presentation. Emphasis is placed on the simpler methods, although the more complex procedures are mentioned to serve as a starting point for those who may want to use them, or to help in understanding published material which has used these techniques. For those individuals with limited knowledge of statistical procedures, caution is recommended before applying some of the techniques described in this chapter. With the advent of computers and the associated statistical software, it is often too easy to invoke techniques which are inappropriate to the data, without considering whether they are actually suitable, simply because the statistical tests are readily available on the computer and involve no computational effort. If in doubt, consult a statistician, preferably before proceeding too far with the data collection, i.e. at the planning and design phase of the assessment programme. The collection of appropriate numbers of samples from representative locations is particularly important for the final stages of data analysis and interpretation of results. The subject of statistical sampling and programme design is complex and cannot be discussed in …",
"title": ""
},
{
"docid": "0f48d860b9ab4527293ae53b3c3092fe",
"text": "6 Relationships between common water bacteria and pathogens in drinking-water H. Leclerc 6.1 INTRODUCTION To perform a risk analysis for pathogens in drinking-water, it is necessary, on the one hand, to promote epidemiological studies, such as prospective cohort and case–control studies. It is also appropriate, on the other hand, to better understand the ecology of these microorganisms, especially in analysing in detail the interactions between common water bacteria and pathogens in such diverse habitats as free water and biofilms. It appears essential to distinguish two categories of drinking-water sources: surface water and groundwater under the direct influence of surface water",
"title": ""
},
{
"docid": "d4cd0dabcf4caa22ad92fab40844c786",
"text": "NA",
"title": ""
},
{
"docid": "3f1ebe976a39dabe2270d4c882b9bcea",
"text": "We propose a new active learning algorithm for parametric linear regression with random design. We provide finite sample convergence guarantees for general distributions in the misspecified model. This is the first active learner for this setting that provably can improve over passive learning. Unlike other learning settings (such as classification), in regression the passive learning rate of O(1/ ) cannot in general be improved upon. Nonetheless, the so-called ‘constant’ in the rate of convergence, which is characterized by a distribution-dependent risk, can be improved in many cases. For a given distribution, achieving the optimal risk requires prior knowledge of the distribution. Following the stratification technique advocated in Monte-Carlo function integration, our active learner approaches the optimal risk using piecewise constant approximations.",
"title": ""
},
{
"docid": "65f520d865de2ce9cfbed043c0822228",
"text": "Container based virtualization is rapidly growing in popularity for cloud deployments and applications as a virtualization alternative due to the ease of deployment coupled with high-performance. Emerging byte-addressable, nonvolatile memories, commonly called Storage Class Memory or SCM, technologies are promising both byte-addressability and persistence near DRAM speeds operating on the main memory bus. These new memory alternatives open up a new realm of applications that no longer have to rely on slow, block-based persistence, but can rather operate directly on persistent data using ordinary loads and stores through the cache hierarchy coupled with transaction techniques. However, SCM presents a new challenge for container-based applications, which typically access persistent data through layers of block based file isolation. Traditional persistent data accesses in containers are performed through layered file access, which slows byte-addressable persistence and transactional guarantees, or through direct access to drivers, which do not provide for isolation guarantees or security. This paper presents a high-performance containerized version of byte-addressable, non-volatile memory (SCM) for applications running inside a container that solves performance challenges while providing isolation guarantees. We created an open-source container-aware Linux loadable Kernel Module (LKM) called Containerized Storage Class Memory, or CSCM, that presents SCM for application isolation and ease of portability. We performed evaluation using microbenchmarks, STREAMS, and Redis, a popular in-memory data structure store, and found our CSCM driver has near the same memory throughput for SCM applications as a non-containerized application running on a host and much higher throughput than persistent in-memory applications accessing SCM through Docker Storage or Volumes.",
"title": ""
},
{
"docid": "0854f7b29b54d6610540d903b0117920",
"text": "The interactions in the world of the web and the possibilities offered by information and communication technologies are becoming increasingly unlimited and at the same time produce large volumes of data. The management and classification of these data becomes difficult in Many areas. The smart city seeks nature to facilitate the complexity of urban life which faces the same problems and challenges posed by large data due to the diversity and plurality of sectors (political, economic, social, administrative, cultural, etc.), Identify the problem of producing and selecting relevant and customized services, real-time decision making is a problem for users.\n In this contribution, we propose semantic recommendation architecture for service in the smart city, based on the user profile in order to produce relevant services adapted to their preferences, using semantic Web technologies and the Recommendation system.",
"title": ""
},
{
"docid": "4b4dc34feba176a30bced5b7dbe4fe7b",
"text": "The Bitcoin ecosystem has suffered frequent thefts and losses affecting both businesses and individuals. The insider threat faced by a business is particularly serious. Due to the irreversibility, automation, and pseudonymity of transactions, Bitcoin currently lacks support for the sophisticated internal control systems deployed by modern businesses to deter fraud. We seek to bridge this gap. We show that a thresholdsignature scheme compatible with Bitcoin’s ECDSA signatures can be used to enforce complex yet useful security policies including: (1) shared control of a wallet, (2) secure bookkeeping, a Bitcoin-specific form of accountability, (3) secure delegation of authority, and (4) two-factor security for personal wallets.",
"title": ""
},
{
"docid": "c8cb32e37aa01b712c7e6921800fbe60",
"text": "Risky families are characterized by conflict and aggression and by relationships that are cold, unsupportive, and neglectful. These family characteristics create vulnerabilities and/or interact with genetically based vulnerabilities in offspring that produce disruptions in psychosocial functioning (specifically emotion processing and social competence), disruptions in stress-responsive biological regulatory systems, including sympathetic-adrenomedullary and hypothalamic-pituitary-adrenocortical functioning, and poor health behaviors, especially substance abuse. This integrated biobehavioral profile leads to consequent accumulating risk for mental health disorders, major chronic diseases, and early mortality. We conclude that childhood family environments represent vital links for understanding mental and physical health across the life span.",
"title": ""
}
] |
scidocsrr
|
db850941ed3879148b758ec1c7f6c5ed
|
A portable 24-GHz FMCW radar based on six-port for short-range human tracking
|
[
{
"docid": "a80a539bf4e233e9dbde52426bf890d3",
"text": "Innovative technology approaches have been increasingly investigated for the last two decades aiming at human-being long-term monitoring. However, current solutions suffer from critical limitations. In this paper, a complete system for contactless health-monitoring in home environment is presented. For the first time, radar, wireless communications, and data processing techniques are combined, enabling contactless fall detection and tagless localization. Practical limitations are considered and properly dealt with. Experimental tests, conducted with human volunteers in a realistic room setting, demonstrate an adequate detection of the target's absolute distance and a success rate of 94.3% in distinguishing fall events from normal movements. The volunteers were free to move about the whole room with no constraints in their movements.",
"title": ""
}
] |
[
{
"docid": "3b988fe1c91096f67461dc9fc7bb6fae",
"text": "The paper analyzes the test setup required by the International Electrotechnical Commission (IEC) 61000-4-4 to evaluate the immunity of electronic equipment to electrical fast transients (EFTs), and proposes an electrical model of the capacitive coupling clamp, which is employed to add disturbances to nominal signals. The study points out limits on accuracy of this model, and shows how it can be fruitfully employed to predict the interference waveform affecting nominal system signals through computer simulations.",
"title": ""
},
{
"docid": "95d24478b92f8e5d096481bac0622d53",
"text": "We present MultiPoint, a set of perspective-based remote pointing techniques that allows users to perform bimanual and multi-finger remote manipulation of graphical objects on large displays. We conducted two empirical studies that compared remote pointing techniques performed using fingers and laser pointers, in single and multi-finger pointing interactions. We explored three types of manual selection gestures: squeeze, breach and trigger. The fastest and most preferred technique was the trigger gesture in the single point experiment and the unimanual breach gesture in the multi-finger pointing study. The laser pointer obtained mixed results: it is fast, but inaccurate in single point, and it obtained the lowest ranking and performance in the multipoint experiment. Our results suggest MultiPoint interaction techniques are superior in performance and accuracy to traditional laser pointers for interacting with graphical objects on a large display from a distance. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "98ead4f3cee84b4db8be568ec125c786",
"text": "This paper assesses the potential impact of FinTech on the finance industry, focusing on financial stability and access to services. I document first that financial services remain surprisingly expensive, which explains the emergence of new entrants. I then argue that the current regulatory approach is subject to significant political economy and coordination costs, and therefore unlikely to deliver much structural change. FinTech, on the other hand, can bring deep changes but is likely to create significant regulatory challenges.",
"title": ""
},
{
"docid": "fb223abb83654f316da33d9c97f3173f",
"text": "Online peer-to-peer (P2P) lending services are a new type of social platform that enables individuals borrow and lend money directly from one to another. In this paper, we study the dynamics of bidding behavior in a P2P loan auction website, Prosper.com. We investigate the change of various attributes of loan requesting listings over time, such as the interest rate and the number of bids. We observe that there is herding behavior during bidding, and for most of the listings, the numbers of bids they receive reach spikes at very similar time points. We explain these phenomena by showing that there are economic and social factors that lenders take into account when deciding to bid on a listing. We also observe that the profits the lenders make are tied with their bidding preferences. Finally, we build a model based on the temporal progression of the bidding, that reliably predicts the success of a loan request listing, as well as whether a loan will be paid back or not.",
"title": ""
},
{
"docid": "e0eb48acede0ae18b7dfdd9ecf97448d",
"text": "We investigate the idea of using a topic model such as the popular Latent Dirichlet Allocation model as a feature selection step for unsupervised document clustering, where documents are clustered using the proportion of the various topics that are present in each document. One concern with using “vanilla” LDA as a feature selection method for input to a clustering algorithm is that the Dirichlet prior on the topic mixing proportions is too smooth and well-behaved. It does not encourage a “bumpy” distribution of topic mixing proportion vectors, which is what one would desire as input to a clustering algorithm. As such, we propose two variant topic models that are designed to do a better job of producing topic mixing proportions that have a good clustering structure.",
"title": ""
},
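The preceding abstract's baseline, "vanilla" LDA followed by clustering of topic proportions, can be sketched very compactly with scikit-learn; the snippet below shows that pipeline. The proposed variant topic models are not reproduced here, and the corpus, topic count, and cluster count are placeholder assumptions.

```python
# Sketch: use LDA topic proportions as features for document clustering.
# This is the "vanilla LDA + clustering" baseline the abstract starts from,
# not the paper's proposed variant topic models.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

docs = [
    "the striker scored a late goal in the match",
    "the team won the league after a penalty shootout",
    "the central bank raised interest rates again",
    "inflation and interest rates worry the markets",
]

# Bag-of-words counts feed the LDA model.
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Fit LDA and represent each document by its topic mixing proportions.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # shape: (n_docs, n_topics), rows sum to ~1

# Cluster documents in topic-proportion space.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(theta)
print(labels)
```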
{
"docid": "ed8d116bf4ade5003506914bbb1db750",
"text": "User interfaces in modeling have traditionally followed the WIMP (Window, Icon, Menu, Pointer) paradigm. Though functional and very powerful, they can also be cumbersome and daunting to a novice user, and creating a complex model requires considerable expertise and effort. A recent trend is toward more accessible and natural interfaces, which has lead to sketch-based interfaces for modeling (SBIM). The goal is to allow sketches—hasty freehand drawings—to be used in the modeling process, from rough model creation through to fine detail construction. Mapping a 2D sketch to a 3D modeling operation is a difficult task, rife with ambiguity. To wit, we present a categorization based on how a SBIM application chooses to interpret a sketch, of which there are three primary methods: to create a 3D model, to add details to an existing model, or to deform and manipulate a model. Additionally, in this paper we introduce a survey of sketch-based interfaces focused on 3D geometric modeling applications. The canonical and recent works are presented and classified, including techniques for sketch acquisition, filtering, and interpretation. The survey also provides an overview of some specific applications of SBIM and a discussion of important challenges and open problems for researchers to tackle in the coming years. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a33f862d0b7dfde7b9f18aa193db9acf",
"text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor [email protected] Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). 
Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments J. Bio. Env. Sci. 2017 90 | Shakoor et al. and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). Phytoremediation to facilitate the concentration of pollutants in contaminated soil, water or air is composed, and plants able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation have several techniques and these techniques depend on different factors, like soil type, contaminant type, soil depth and level of ground water. Special operation situations and specific technology applied at the contaminated site (Hyman and Dupont 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. 
Phytoextraction Phytoextraction is also called phytoabsorption or phytoaccumulation, in this technique heavy metals are removed by up taking through root form the water and soil environment, and accumulated into the shoot part (Rafati et al., 2011). Phytostabilisation Phytostabilisation is also known as phytoimmobilization. In this technique different type of plants are used for stabilization the contaminants from the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the different contaminants are reduced. So, this technique is help to avoiding their movement into food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, Phytostabilisation is the technique by which movement of heavy metals can be stop but its not permanent solution to remove the contamination from the soil. Basically, phytostabilisation is the management approach for inactivating the potential of toxic heavy metals form the soil environment contaminants (Vangronsveld et al., 2009).",
"title": ""
},
{
"docid": "a16be992aa947c8c5d2a7c9899dfbcd8",
"text": "The effect of the Eureka Spring (ES) appliance was investigated on 37 consecutively treated, noncompliant patients with bilateral Class II malocclusions. Lateral cephalographs were taken at the start of orthodontic treatment (T1), at insertion of the ES (T2), and at removal of the ES (T3). The average treatment interval between T2 and T3 was four months. The Class II correction occurred almost entirely by dentoalveolar movement and was almost equally distributed between the maxillary and mandibular dentitions. The rate of molar correction was 0.7 mm/mo. There was no change in anterior face height, mandibular plane angle, palatal plane angle, or gonial angle with treatment. There was a 2 degrees change in the occlusal plane resulting from intrusion of the maxillary molar and the mandibular incisor. Based on the results in this sample, the ES appliance was very effective in correcting Class II malocclusions in noncompliant patients without increasing the vertical dimension.",
"title": ""
},
{
"docid": "800befb527094bc6169809c6765d5d15",
"text": "The problem of scheduling a weighted directed acyclic graph (DAG) to a set of homogeneous processors to minimize the completion time has been extensively studied. The NPcompleteness of the problem has instigated researchers to propose a myriad of heuristic algorithms. While these algorithms are individually reported to be efficient, it is not clear how effective they are and how well they compare against each other. A comprehensive performance evaluation and comparison of these algorithms entails addressing a number of difficult issues. One of the issues is that a large number of scheduling algorithms are based upon radically different assumptions, making their comparison on a unified basis a rather intricate task. Another issue is that there is no standard set of benchmarks that can be used to evaluate and compare these algorithms. Furthermore, most algorithms are evaluated using small problem sizes, and it is not clear how their performance scales with the problem size. In this paper, we first provide a taxonomy for classifying various algorithms into different categories according to their assumptions and functionalities. We then propose a set of benchmarks which are of diverse structures without being biased towards a particular scheduling technique and still allow variations in important parameters. We have evaluated 15 scheduling algorithms, and compared them using the proposed benchmarks. Based upon the design philosophies and principles behind these algorithms, we interpret the results and discuss why some algorithms perform better than the others.",
"title": ""
},
{
"docid": "8953837ae11284b4be15d0abbaf7db77",
"text": "UAV has been a popular piece of equipment both in military and civilian applications. Groups of UAVs can form an UAV network and accomplish complicated missions such as rescue, searching, patrolling and mapping. One of the most active areas of research in UAV networks is that of area coverage problem which is usually defined as a problem of how well the UAV networks are able to monitor the given space, and how well the UAVs inside a network are able to cooperate with each other. Area coverage problem in cooperative UAV networks is the very base of many applications. In this paper, we take a representative survey of the current work that has been done about this problem via discussion of different classifications. This study serves as an overview of area coverage problem, and give some inspiration to related researchers.",
"title": ""
},
{
"docid": "225fa1a3576bc8cea237747cb25fc38d",
"text": "Common video systems for laparoscopy provide the surgeon a two-dimensional image (2D), where information on spatial depth can be derived only from secondary spatial depth cues and experience. Although the advantage of stereoscopy for surgical task efficiency has been clearly shown, several attempts to introduce three-dimensional (3D) video systems into clinical routine have failed. The aim of this study is to evaluate users’ performances in standardised surgical phantom model tasks using 3D HD visualisation compared with 2D HD regarding precision and working speed. This comparative study uses a 3D HD video system consisting of a dual-channel laparoscope, a stereoscopic camera, a camera controller with two separate outputs and a wavelength multiplex stereoscopic monitor. Each of 20 medical students and 10 laparoscopically experienced surgeons (more than 100 laparoscopic cholecystectomies each) pre-selected in a stereo vision test were asked to perform one task to familiarise themselves with the system and subsequently a set of five standardised tasks encountered in typical surgical procedures. The tasks were performed under either 3D or 2D conditions at random choice and subsequently repeated under the other vision condition. Predefined errors were counted, and time needed was measured. In four of the five tasks the study participants made fewer mistakes in 3D than in 2D vision. In four of the tasks they needed significantly more time in the 2D mode. Both the student group and the surgeon group showed similarly improved performance, while the surgeon group additionally saved more time on difficult tasks. This study shows that 3D HD using a state-of-the-art 3D monitor permits superior task efficiency, even as compared with the latest 2D HD video systems.",
"title": ""
},
{
"docid": "f571329b93779ae073184d9d63eb0c6c",
"text": "Retailers are now the dominant partners in most suply systems and have used their positions to re-engineer operations and partnership s with suppliers and other logistic service providers. No longer are retailers the pass ive recipients of manufacturer allocations, but instead are the active channel con trollers organizing supply in anticipation of, and reaction to consumer demand. T his paper reflects on the ongoing transformation of retail supply chains and logistics. If considers this transformation through an examination of the fashion, grocery and selected other retail supply chains, drawing on practical illustrations. Current and fut ure challenges are then discussed. Introduction Retailers were once the passive recipients of produ cts allocated to stores by manufacturers in the hope of purchase by consumers and replenished o nly at the whim and timing of the manufacturer. Today, retailers are the controllers of product supply in anticipation of, and reaction to, researched, understood, and real-time customer demand. Retailers now control, organise, and manage the supply chain from producti on to consumption. This is the essence of the retail logistics and supply chain transforma tion that has taken place since the latter part of the twentieth century. Retailers have become the channel captains and set the pace in logistics. Having extended their channel control and focused on corporate effi ci ncy and effectiveness, retailers have",
"title": ""
},
{
"docid": "c0dbb410ebd6c84bd97b5f5e767186b3",
"text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.",
"title": ""
},
{
"docid": "f48d02ff3661d3b91c68d6fcf750f83e",
"text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.",
"title": ""
},
{
"docid": "e86ce9f0a1beb982f8358930e8ef776d",
"text": "We study the function g(n, y) := i≤n P (i)≤y gcd(i, n), where P (n) denotes the largest prime factor of n, and we derive some estimates for its summatory function.",
"title": ""
},
{
"docid": "07651e4941e453f8dcbdaf30e9e690e6",
"text": "BACKGROUND\nAllocation of scarce resources presents an increasing challenge to hospital administrators and health policy makers. Intensive care units can present bottlenecks within busy hospitals, but their expansion is costly and difficult to gauge. Although mathematical tools have been suggested for determining the proper number of intensive care beds necessary to serve a given demand, the performance of such models has not been prospectively evaluated over significant periods.\n\n\nMETHODS\nThe authors prospectively collected 2 years' admission, discharge, and turn-away data in a busy, urban intensive care unit. Using queuing theory, they then constructed a mathematical model of patient flow, compared predictions from the model to observed performance of the unit, and explored the sensitivity of the model to changes in unit size.\n\n\nRESULTS\nThe queuing model proved to be very accurate, with predicted admission turn-away rates correlating highly with those actually observed (correlation coefficient = 0.89). The model was useful in predicting both monthly responsiveness to changing demand (mean monthly difference between observed and predicted values, 0.4+/-2.3%; range, 0-13%) and the overall 2-yr turn-away rate for the unit (21%vs. 22%). Both in practice and in simulation, turn-away rates increased exponentially when utilization exceeded 80-85%. Sensitivity analysis using the model revealed rapid and severe degradation of system performance with even the small changes in bed availability that might result from sudden staffing shortages or admission of patients with very long stays.\n\n\nCONCLUSIONS\nThe stochastic nature of patient flow may falsely lead health planners to underestimate resource needs in busy intensive care units. Although the nature of arrivals for intensive care deserves further study, when demand is random, queuing theory provides an accurate means of determining the appropriate supply of beds.",
"title": ""
},
{
"docid": "1bdf406fd827af2dddcecef934e291d4",
"text": "This study was conducted to collect data on specific volatile fatty acids (produced from soft tissue decomposition) and various anions and cations (liberated from soft tissue and bone), deposited in soil solution underneath decomposing human cadavers as an aid in determining the \"time since death.\" Seven nude subjects (two black males, a white female and four white males) were placed within a decay research facility at various times of the year and allowed to decompose naturally. Data were amassed every three days in the spring and summer, and weekly in the fall and winter. Analyses of the data reveal distinct patterns in the soil solution for volatile fatty acids during soft tissue decomposition and for specific anions and cations once skeletonized, when based on accumulated degree days. Decompositional rates were also obtained, providing valuable information for estimating the \"maximum time since death.\" Melanin concentrations observed in soil solution during this study also yields information directed at discerning racial affinities. Application of these data can significantly enhance \"time since death\" determinations currently in use.",
"title": ""
},
{
"docid": "fa42192f3ffd08332e35b98019e622ff",
"text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.",
"title": ""
},
{
"docid": "1a3357aff8569e691f619a5ace483585",
"text": "Mesenchymal stromal cells (MSCs) are explored as a novel treatment for a variety of medical conditions. Their fate after infusion is unclear, and long-term safety regarding malignant transformation and ectopic tissue formation has not been addressed in patients. We examined autopsy material from 18 patients who had received human leukocyte antigen (HLA)-mismatched MSCs, and 108 tissue samples from 15 patients were examined by PCR. No signs of ectopic tissue formation or malignant tumors of MSC-donor origin were found on macroscopic or histological examination. MSC donor DNA was detected in one or several tissues including lungs, lymph nodes, and intestine in eight patients at levels from 1/100 to <1/1,000. Detection of MSC donor DNA was negatively correlated with time from infusion to sample collection, as DNA was detected from nine of 13 MSC infusions given within 50 days before sampling but from only two of eight infusions given earlier. There was no correlation between MSC engraftment and treatment response. We conclude that MSCs appear to mediate their function through a \"hit and run\" mechanism. The lack of sustained engraftment limits the long-term risks of MSC therapy.",
"title": ""
},
{
"docid": "9e303bade1b9ef60839de21ee08eb211",
"text": "In the present study, the frequency distributions of 20 discrete cranial traits in 70 major human populations from around the world were analyzed. The principal-coordinate and neighbor-joining analyses of Smith's mean measure of divergence (MMD), based on trait frequencies, indicate that 1). the clustering pattern is similar to those based on classic genetic markers, DNA polymorphisms, and craniometrics; 2). significant interregional separation and intraregional diversity are present in Subsaharan Africans; 3). clinal relationships exist among regional groups; 4). intraregional discontinuity exists in some populations inhabiting peripheral or isolated areas. For example, the Ainu are the most distinct outliers of the East Asian populations. These patterns suggest that founder effects, genetic drift, isolation, and population structure are the primary causes of regional variation in discrete cranial traits. Our results are compatible with a single origin for modern humans as well as the multiregional model, similar to the results of Relethford and Harpending ([1994] Am. J. Phys. Anthropol. 95:249-270). The results presented here provide additional measures of the morphological variation and diversification of modern human populations.",
"title": ""
}
] |
scidocsrr
|
730fbecebf520e548d36961d68aeaba7
|
Metric Learning in Codebook Generation of Bag-of-Words for Person Re-identification
|
[
{
"docid": "a2b2607e4af771632912900d63999f40",
"text": "In this work, we propose a method for simultaneously learning features and a corresponding similarity metric for person re-identification. We present a deep convolutional architecture with layers specially designed to address the problem of re-identification. Given a pair of images as input, our network outputs a similarity value indicating whether the two input images depict the same person. Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships between the two input images based on mid-level features from each input image. A high-level summary of the outputs of this layer is computed by a layer of patch summary features, which are then spatially integrated in subsequent layers. Our method significantly outperforms the state of the art on both a large data set (CUHK03) and a medium-sized data set (CUHK01), and is resistant to over-fitting. We also demonstrate that by initially training on an unrelated large data set before fine-tuning on a small target data set, our network can achieve results comparable to the state of the art even on a small data set (VIPeR).",
"title": ""
}
] |
[
{
"docid": "7e2f05bd2af5dab6ea6c38780a889ea3",
"text": "Given the ubiquity of time series data, the data mining community has spent significant time investigating the best time series similarity measure to use for various tasks and domains. After more than a decade of extensive efforts, there is increasing evidence that Dynamic Time Warping (DTW) is very difficult to beat. Given that, recent efforts have focused on making the intrinsically slow DTW algorithm faster. For the similarity-search task, an important subroutine in many data mining algorithms, significant progress has been made by replacing the vast majority of expensive DTW calculations with cheap-to-compute lower bound calculations. However, these lower bound based optimizations do not directly apply to clustering, and thus for some realistic problems, clustering with DTW can take days or weeks. In this work, we show that we can mitigate this untenable lethargy by casting DTW clustering as an anytime algorithm. At the heart of our algorithm is a novel data-adaptive approximation to DTW which can be quickly computed, and which produces approximations to DTW that are much better than the best currently known linear-time approximations. We demonstrate our ideas on real world problems showing that we can get virtually all the accuracy of a batch DTW clustering algorithm in a fraction of the time.",
"title": ""
},
{
"docid": "571c33de3dc46d553bbf0c7dd180686a",
"text": "To ensure flight safety of aircraft structures, it is necessary to have regular maintenance using visual and nondestructive inspection (NDI) methods. In this paper, we propose an automatic image-based aircraft defect detection using Deep Neural Networks (DNNs). To the best of our knowledge, this is the first work for aircraft defect detection using DNNs. We perform a comprehensive evaluation of state-of-the-art feature descriptors and show that the best performance is achieved by vgg-f DNN as feature extractor with a linear SVM classifier. To reduce the processing time, we propose to apply SURF key point detector to identify defect patch candidates. Our experiment results suggest that we can achieve over 96% accuracy at around 15s processing time for a high-resolution (20-megapixel) image on a laptop.",
"title": ""
},
{
"docid": "191d247a3d4a5c469adc352f22f75b56",
"text": "Read and write assist techniques are now commonly used to lower the minimum operating voltage (Vmin) of an SRAM. In this paper, we review the efficacy of four leading write-assist (WA) techniques and their behavior at lower supply voltages in commercial SRAMs from 65nm, 45nm and 32nm low power technology nodes. In particular, the word-line boosting and negative bit-line WA techniques seem most promising at lower voltages. These two techniques help reduce the value of WLcrit by a factor of ~2.5X at 0.7V and also decrease the 3σ spread by ~3.3X, thus significantly reducing the impact of process variations. These write-assist techniques also impact the dynamic read noise margin (DRNM) of half-selected cells during the write operation. The negative bit-line WA technique has virtually no impact on the DRNM but all other WA techniques degrade the DRNM by 10--15%. In conjunction with the benefit (decrease in WLcrit) and the negative impact (decrease in DRNM), overhead of implementation in terms of area and performance must be analyzed to choose the best write-assist technique for lowering the SRAM Vmin.",
"title": ""
},
{
"docid": "f127063be30f4a39b1b8d7ef6a9e9d28",
"text": "Open story generation is the problem of automatically creating a story for any domain without retraining. Neural language models can be trained on large corpora across many domains and then used to generate stories. However, stories generated via language models tend to lack direction and coherence. We introduce a policy gradient reinforcement learning approach to open story generation that learns to achieve a given narrative goal state. In this work, the goal is for a story to end with a specific type of event, given in advance. However, a reward based on achieving the given goal is too sparse for effective learning. We use reward shaping to provide the reinforcement learner with a partial reward at every step. We show that our technique can train a model that generates a story that reaches the goal 94% of the time and reduces model perplexity. A human subject evaluation shows that stories generated by our technique are perceived to have significantly higher plausible event ordering and plot coherence over a baseline language modeling technique without perceived degradation of overall quality, enjoyability, or local causality.",
"title": ""
},
{
"docid": "2e32606df9b1750b9abb03d450051d16",
"text": "This research investigates two major aspects of homeschooling. Factors determining parental motivations to homeschool and the determinants of the student achievement of home-educated children are identified. Original survey data from an organized group of homeschoolers is analyzed. Regression models are employed to predict parents’ motivations and their students’ standardized test achievement. Four sets of homeschooling motivations are identified. Academic and pedagogical concerns are most important, and it appears that the religious base of the movement is subsiding. Several major demographic variables have no impact upon parental motivations, indicating that this is a diverse group. Parents’ educational attainment and political identification are consistent predictors of their students’ achievement. Race and class—the two major divides in public education—are not significant determinants of standardized test achievement, suggesting that homeschooling is efficacious. It is concluded that homeschoolers are a heterogeneous population with varying and overlapping motivations.",
"title": ""
},
{
"docid": "617c4da4ce82b2cb5f4d0e6fb61f87b9",
"text": "PURPOSE\nRecent studies have suggested that microRNA biomarkers could be useful for stratifying lung cancer subtypes, but microRNA signatures varied between different populations. Squamous cell carcinoma (SCC) is one major subtype of lung cancer that urgently needs biomarkers to aid patient management. Here, we undertook the first comprehensive investigation on microRNA in Chinese SCC patients.\n\n\nEXPERIMENTAL DESIGN\nMicroRNA expression was measured in cancerous and noncancerous tissue pairs strictly collected from Chinese SCC patients (stages I-III), who had not been treated with chemotherapy or radiotherapy prior to surgery. The molecular targets of proposed microRNA were further examined.\n\n\nRESULTS\nWe identified a 5-microRNA classifier (hsa-miR-210, hsa-miR-182, hsa-miR-486-5p, hsa-miR-30a, and hsa-miR-140-3p) that could distinguish SCC from normal lung tissues. The classifier had an accuracy of 94.1% in a training cohort (34 patients) and 96.2% in a test cohort (26 patients). We also showed that high expression of hsa-miR-31 was associated with poor survival in these 60 SCC patients by Kaplan-Meier analysis (P = 0.007), by univariate Cox analysis (P = 0.011), and by multivariate Cox analysis (P = 0.011). This association was independently validated in a separate cohort of 88 SCC patients (P = 0.008, 0.011, and 0.003 in Kaplan-Meier analysis, univariate Cox analysis, and multivariate Cox analysis, respectively). We then determined that the tumor suppressor DICER1 is a target of hsa-miR-31. Expression of hsa-miR-31 in a human lung cancer cell line repressed DICER1 activity but not PPP2R2A or LATS2.\n\n\nCONCLUSIONS\nOur results identified a new diagnostic microRNA classifier for SCC among Chinese patients and a new prognostic biomarker, hsa-miR-31.",
"title": ""
},
{
"docid": "fca66085984bf1fe513080e70c3fafc2",
"text": "In this letter, a two-layer metasurface is proposed to achieve radar cross-section (RCS) reduction of a stacked patch antenna at a broadband. The lower layer metasurface is composed of four square patches loaded with four resistors, which is utilized to reduce RCS in the operation band (2.75-3.4 GHz) of the patch antenna. The periodic square loops with four resistors mounted on each side are adopted to construct the upper layer metasurface for absorbing the incoming wave out of band. We first investigate the effectiveness of the proposed metasurface on the RCS reduction of the single stacked patch and then apply this strategy to the 1 ×4 stacked patch array. The proposed low RCS stacked patch array antenna is fabricated and measured. The experimental results show that the designed metasurface makes the antenna RCS dramatically reduced in a broadband covering the operation band and out-of-band from 5.5-16 GHz. Moreover, the introduction of metasurface is demonstrated to have little influence on the antenna performance.",
"title": ""
},
{
"docid": "9b959976af688aa7f00fc21cd14ad7f9",
"text": "Article history: Received 7 October 2007 Received in revised form 27 February 2008 Accepted 16 May 2008",
"title": ""
},
{
"docid": "ea28d601dfbf1b312904e39802ce25b8",
"text": "In this paper, we present the implementation and performance evaluation of security functionalities at the link layer of IEEE 802.15.4-compliant IoT devices. Specifically, we implement the required encryption and authentication mechanisms entirely in software and as well exploit the hardware ciphers that are made available by our IoT platform. Moreover, we present quantitative results on the memory footprint, the execution time and the energy consumption of selected implementation modes and discuss some relevant tradeoffs. As expected, we find that hardware-based implementations are not only much faster, leading to latencies shorter than two orders of magnitude compared to software-based security suites, but also provide substantial savings in terms of ROM memory occupation, i.e. up to six times, and energy consumption. Furthermore, the addition of hardware-based security support at the link layer only marginally impacts the network lifetime metric, leading to worst-case reductions of just 2% compared to the case where no security is employed. This is due to the fact that energy consumption is dominated by other factors, including the transmission and reception of data packets and the control traffic that is required to maintain the network structures for routing and data collection. On the other hand, entirely software-based implementations are to be avoided as the network lifetime reduction in this case can be as high as 25%.",
"title": ""
},
{
"docid": "c2ebe0fa42e3ca8e2b4b560a9dd0f1af",
"text": "We present a simulation model of the Bitcoin peer-to-peer network, a widely deployed distributed electronic currency system. The model enables evaluations of the feasibility and cost of attacks on the Bitcoin network at full scale of 6,000 nodes. The simulation model is based on unmodified code from core segments of the Bitcoin reference implementation used by 99% of nodes. Parametrization of the model is performed based on large-scale measurements of the real-world network. We present preliminary validation results showing a reasonable correspondence of the propagation of messages in the Bitcoin network compared with simulation results. We apply the model to study the feasibility of a partitioning attack on the network and show that the attack is sensitive to the churn of the attacking nodes.",
"title": ""
},
{
"docid": "0349bef88d7dd5ca012fd4d2fd28cf0d",
"text": "Impedance-source converters, an emerging technology in electric energy conversion, overcome limitations of conventional solutions by the use of specific impedance-source networks. Focus of this paper is on the topologies of galvanically isolated impedance-source dc-dc converters. These converters are particularly appropriate for distributed generation systems with renewable or alternative energy sources, which require input voltage and load regulation in a wide range. We review here the basic topologies for researchers and engineers, and classify all the topologies of the impedance-source galvanically isolated dc-dc converters according to the element that transfers energy from the input to the output: a transformer, a coupled inductor, or their combination. This classification reveals advantages and disadvantages, as well as a wide space for further research. This paper also outlines the most promising research directions in this field.",
"title": ""
},
{
"docid": "9193aad006395bd3bd76cabf44012da5",
"text": "In recent years, there is growing evidence that plant-foods polyphenols, due to their biological properties, may be unique nutraceuticals and supplementary treatments for various aspects of type 2 diabetes mellitus. In this article we have reviewed the potential efficacies of polyphenols, including phenolic acids, flavonoids, stilbenes, lignans and polymeric lignans, on metabolic disorders and complications induced by diabetes. Based on several in vitro, animal models and some human studies, dietary plant polyphenols and polyphenol-rich products modulate carbohydrate and lipid metabolism, attenuate hyperglycemia, dyslipidemia and insulin resistance, improve adipose tissue metabolism, and alleviate oxidative stress and stress-sensitive signaling pathways and inflammatory processes. Polyphenolic compounds can also prevent the development of long-term diabetes complications including cardiovascular disease, neuropathy, nephropathy and retinopathy. Further investigations as human clinical studies are needed to obtain the optimum dose and duration of supplementation with polyphenolic compounds in diabetic patients.",
"title": ""
},
{
"docid": "a31af87d915b383fe6a359d652ddc563",
"text": "Graduate unemployment and its management are challenges that leaders of the economy, managers and policy analysts grapple with on a daily basis. As a result, economic leaders and managers of economies have sought theoretical explanations to guide their management strategies of graduate unemployment. Th ere are two competing theses to explain the problem: skills mismatch and skills oversupply. However, due to the seeming simplicity of basic tenets and policy implications of the skills mismatch thesis, many governments and laypersons have blamed graduate unemployment on it. Th is paper argues that policy solutions based entirely on skills mismatch may trigger another form of unemployment, oversupply of skilled graduates. Furthermore, oversupply of graduates is more likely to be the signifi cant cause of graduate unemployment than skills mismatch. An eff ective policy, therefore, is one that takes into account interventions to stimulate demand for labor while at the same time manages the supply of skilled labor. Such an approach will provide more sustainable solutions to graduate unemployment. In addition, the potential contributions of psychologists in the eff orts towards the management of graduate unemployment are also outlined.",
"title": ""
},
{
"docid": "058cc1d2e459c987d7a53e02428c98a5",
"text": "Sediment cores from nine lakes in southern Norway (N) and six in northern New England (NE) were dated by 137Cs, 210Pb and in NE also by pollen, and were analyzed geochemically and for diatoms. Cores from two N and three NE lakes were analyzed for cladocerans. 137Cs dating is unreliable in these lakes, probably due to mobility of Cs in the sediment. In Holmvatn sediment, an up-core increase in Fe, starting ca. 1900, correlates with geochemical indications of decreasing mechanical erosion of soils. Diatoms indicate a lake acidification starting in the 1920's. We propose that soil Fe was mobilized and runoff acidified by acidic precipitation and/or by soil acidification resulting from vegetational succession following reduced grazing. Even minor land use changes or disturbances in lake watersheds introduce ambiguity to the sedimentary evidence relating to atmospheric influences. Diatom counts from surface sediments in 36 N and 31 NE lakes were regressed against contemporary water pH to obtain coefficients for computing past pH from subsurface counts. Computed decreases of 0.3–0.8 pH units start between I890 and I930 in N lakes already acidic (pH 5.0–5.5) before the decrease. These and lesser decreases in other lakes start decades to over a century after the first sedimentary indications of atmospheric heavy metal pollution. It is proposed that the acidification of precipitation accompanied the metal pollution. The delays in lake acidification may be due to buffering by the lakes and watersheds. The magnitude of acidification and heavy metal loading of the lakes parallels air pollution gradients. Shift in cladoceran remains are contemporary with acidification, preceding elimination of fishes.",
"title": ""
},
{
"docid": "7288a312b26c6c3281cef7ecf7be8f44",
"text": "This paper discusses an important issue in computational linguistics: classifying texts as formal or informal style. Our work describes a genreindependent methodology for building classifiers for formal and informal texts. We used machine learning techniques to do the automatic classification, and performed the classification experiments at both the document level and the sentence level. First, we studied the main characteristics of each style, in order to train a system that can distinguish between them. We then built two datasets: the first dataset represents general-domain documents of formal and informal style, and the second represents medical texts. We tested on the second dataset at the document level, to determine if our model is sufficiently general, and that it works on any type of text. The datasets are built by collecting documents for both styles from different sources. After collecting the data, we extracted features from each text. The features that we designed represent the main characteristics of both styles. Finally, we tested several classification algorithms, namely Decision Trees, Naïve Bayes, and Support Vector Machines, in order to choose the classifier that generates the best classification results. 1 LiLT Volume 8, Issue 1, March 2012. Learning to Classify Documents According to Formal and Informal Style. Copyright c © 2012, CSLI Publications. 2 / LiLT volume 8, issue 1 March 2012",
"title": ""
},
{
"docid": "d7dde22af9c95b77b84d11a015561b4c",
"text": "Clustering of measurement data is an important task in digital signal processing. Especially in the case of radar signal processing the need of clustering detection points becomes obvious when high-resolution radar sensor systems are used. Clustering is usually used as a preprocessing step for classification of the measured data. In this paper a new approach for automotive radar data clustering is presented. A shape finding technique from image signal processing, called border following, is used to perform this task. Some adjustments and modifications of the method are required to get it working with radar measurements. The adapted algorithm is proven in three different measurement spaces and rated for the best performance by focusing on clustering of cyclists. It is showed, that the technique produces clustered radar data appropriate to their physical appearance.",
"title": ""
},
{
"docid": "6509150b9a7fcf201eb19b98d88adc4f",
"text": "The main aim of the present experiment was to determine whether extensive musical training facilitates pitch contour processing not only in music but also in language. We used a parametric manipulation of final notes' or words' fundamental frequency (F0), and we recorded behavioral and electrophysiological data to examine the precise time course of pitch processing. We compared professional musicians and nonmusicians. Results revealed that within both domains, musicians detected weak F0 manipulations better than nonmusicians. Moreover, F0 manipulations within both music and language elicited similar variations in brain electrical potentials, with overall shorter onset latency for musicians than for nonmusicians. Finally, the scalp distribution of an early negativity in the linguistic task varied with musical expertise, being largest over temporal sites bilaterally for musicians and largest centrally and over left temporal sites for nonmusicians. These results are taken as evidence that extensive musical training influences the perception of pitch contour in spoken language.",
"title": ""
},
{
"docid": "a22a319fedc1392ff21dcfa4ad92b82e",
"text": "This paper investigates the possible causes for high attrition rates for Computer Science students. It is a serious problem in universities that must be addressed if the need for technologically competent professionals is to be met.",
"title": ""
},
{
"docid": "08196718e17bfcdcecea60b0fb735638",
"text": "Atari games are an excellent testbed for studying intelligent behavior, as they offer a range of tasks that differ widely in their visual representation, game dynamics, and goals presented to an agent. The last two years have seen a spate of research into artificial agents that use a single algorithm to learn to play these games. The best of these artificial agents perform at better-than-human levels on most games, but require hundreds of hours of game-play experience to produce such behavior. Humans, on the other hand, can learn to perform well on these tasks in a matter of minutes. In this paper we present data on human learning trajectories for several Atari games, and test several hypotheses about the mechanisms that lead to such rapid learning.",
"title": ""
}
] |
scidocsrr
|
c4b1910b5a0c81548b9b7e1fda599fc8
|
The self-programming thermostat: optimizing setback schedules based on home occupancy patterns
|
[
{
"docid": "3d3fa5295bfa02ae27ae01adfcc0b560",
"text": "In this paper we introduce the simultaneous tracking and activity recognition (STAR) problem, which exploits the synergy between location and activity to provide the information necessary for automatic health monitoring. Automatic health monitoring can potentially help the elderly population live safely and independently in their own homes by providing key information to caregivers. Our goal is to perform accurate tracking and activity recognition for multiple people in a home environment. We use a “bottom-up” approach that primarily uses information gathered by many minimally invasive sensors commonly found in home security systems. We describe a Rao-Blackwellised particle filter for roomlevel tracking, rudimentary activity recognition (i.e., whether or not an occupant is moving), and data association. We evaluate our approach with experiments in a simulated environment and in a real instrumented home.",
"title": ""
}
] |
[
{
"docid": "bde253462808988038235a46791affc1",
"text": "Power electronic Grid-Connected Converters (GCCs) are widely applied as grid interface in renewable energy sources. This paper proposes an extended Direct Power Control with Space Vector Modulation (DPC-SVM) scheme with improved operation performance under grid distortions. The real-time operated DPC-SVM scheme has to execute several important tasks as: space vector pulse width modulation, active and reactive power feedback control, grid current harmonics and voltage dips compensation. Thus, development and implementation of the DPC-SVM algorithm using single chip floating-point microcontroller TMS320F28335 is described. It combines large peripheral equipment, typical for microcontrollers, with high computation capacity characteristic for Digital Signal Processors (DSPs). The novelty of the proposed system lies in extension of the generic DPC-SVM scheme by additional higher harmonic and voltage dips compensation modules and implementation of the whole algorithm in a single chip floating point microcontroller. Overview of the laboratory setup, description of basic algorithm subtasks sequence, software optimization as well as execution time of specific program modules on fixed-point and floating-point processors are discussed. Selected oscillograms illustrating operation and robustness of the developed algorithm used in 5 kVA laboratory model of the GCC are presented.",
"title": ""
},
{
"docid": "6c8e1e77efea6fd82f9ec6146689a011",
"text": "BACKGROUND\nHigh incidences of neck pain morbidity are challenging in various situations for populations based on their demographic, physiological and pathological characteristics. Chinese proprietary herbal medicines, as Complementary and Alternative Medicine (CAM) products, are usually developed from well-established and long-standing recipes formulated as tablets or capsules. However, good quantification and strict standardization are still needed for implementation of individualized therapies. The Qishe pill was developed and has been used clinically since 2009. The Qishe pill's personalized medicine should be documented and administered to various patients according to the ancient TCM system, a classification of personalized constitution types, established to determine predisposition and prognosis to diseases as well as therapy and life-style administration. Therefore, we describe the population pharmacokinetic profile of the Qishe pill and compare its metabolic rate in the three major constitution types (Qi-Deficiency, Yin-Deficiency and Blood-Stasis) to address major challenges to individualized standardized TCM.\n\n\nMETHODS/DESIGN\nHealthy subjects (N = 108) selected based on constitutional types will be assessed, and standardized pharmacokinetic protocol will be used for assessing demographic, physiological, and pathological data. Laboratory biomarkers will be evaluated and blood samples collected for pharmacokinetics(PK) analysis and second-generation gene sequencing. In single-dose administrations, subjects in each constitutional type cohort (N = 36) will be randomly divided into three groups to receive different Qishe pill doses (3.75, 7.5 and 15 grams). Multiomics, including next generation sequencing, metabolomics, and proteomics, will complement the Qishe pill's multilevel assessment, with cytochrome P450 genes as targets. In a comparison with the general population, a systematic population pharmacokinetic (PopPK) model for the Qishe pill will be established and verified.\n\n\nTRIAL REGISTRATION\nThis study is registered at ClinicalTrials.gov, NCT02294448 .15 November 2014.",
"title": ""
},
{
"docid": "6d8156b2952cc83701b06c24c2e7b162",
"text": "Even when working on a well-modularized software system, programmers tend to spend more time navigating the code than working with it. This phenomenon arises because it is impossible to modularize the code for all tasks that occur over the lifetime of a system. We describe the use of a degree-of-interest (DOI) model to capture the task context of program elements scattered across a code base. The Mylar tool that we built encodes the DOI of program elements by monitoring the programmer's activity, and displays the encoded DOI model in views of Java and AspectJ programs. We also present the results of a preliminary diary study in which professional programmers used Mylar for their daily work on enterprise-scale Java systems.",
"title": ""
},
{
"docid": "3128ce664080927afcd78b57935012ef",
"text": "The availability of big data sets in research, industry and society in general has opened up many possibilities of how to use this data. In many applications, however, it is not the data itself that is of interest but rather we want to answer some question about it. These answers may sometimes be phrased as solutions to an optimization problem. We survey some algorithmic methods that optimize over large-scale data sets, beyond the realm of machine learning.",
"title": ""
},
{
"docid": "cc968b7d0feeefe4c210717580dc80c8",
"text": "This paper discusses the back-end-of-line (BEOL) layers for a 7 nm predictive process design kit (PDK). The rationale behind choosing a particular lithographic process—EUV lithography, self-aligned double patterning (SADP), and litho-etch litho-etch (LELE)—for different layers, in addition to some design rule values, is described. The rules are based on the literature and on design technology co-optimization (DTCO) evaluation of standard cell based designs and automated place-and-route experiments. Decomposition criteria and design rules to ensure conflict-free coloring of SADP metal topologies and manufacturable SADP photolithography masks are discussed in detail. Their efficacy is demonstrated through successful coloring and photolithography mask derivation for target metal shape layouts, which represent corner cases, by using the Mentor Graphics Calibre and multi-patterning tools. Edge placement errors, misalignment, and critical dimension uniformity are included in the analysis.",
"title": ""
},
{
"docid": "62f455d95a65eb2454753414f01d8435",
"text": "Metabolic glycoengineering is a technique introduced in the early 90s of the last century by Reutter et al.. It utilises the ability of cells to metabolically convert sugar derivatives with bioorthogonal side chains like azides or alkynes and by that incorporation into several glyco structures. Afterwards, the carbohydrates can be labelled to study their distribution, dynamics and roles in different biological processes. So far many studies were performed on mammal cell lines as well as in small animals. Very recently, bacterial glyco-structures were targeted by glycoengineering, showing promising results in infection prevention by reducing pathogen adhesion towards human epithelial cells. Introduction Bacteria were among the first life forms to appear on earth, and are present in most habitats on the planet, e. g., they live in symbiosis with plants and animals. Compared to human cells there are ten times as many bacterial cells in our body. Most of them are harmless or even beneficial. But some species are pathogenic and cause infectious diseases with more than 1.2 million deaths each year [1]. Those infections include cholera, syphilis, anthrax, leprosy, and bubonic plague as well as respiratory infections like tuberculosis. 1 This article is part of the Proceedings of the Beilstein Glyco-Bioinformatics Symposium 2013. www.proceedings.beilstein-symposia.org Discovering the Subtleties of Sugars June 10 – 14, 2013, Potsdam, Germany",
"title": ""
},
{
"docid": "f4c51f4790114c42bef19ff421c83f0d",
"text": "Real-time systems are growing in complexity and realtime and soft real-time applications are becoming common in general-purpose computing environments. Thus, there is a growing need for scheduling solutions that simultaneously support processes with a variety of different timeliness constraints. Toward this goal we have developed the Resource Allocation/Dispatching (RAD) integrated scheduling model and the Rate-Based Earliest Deadline (RBED) integrated multi-class real-time scheduler based on this model. We present RAD and the RBED scheduler and formally prove the correctness of the operations that RBED employs. We then describe our implementation of RBED and present results demonstrating how RBED simultaneously and seamlessly supports hard real-time, soft real-time, and best-effort processes.",
"title": ""
},
{
"docid": "137b9760d265304560f1cac14edb7f21",
"text": "Gallstones are solid particles formed from bile in the gall bladder. In this paper, we propose a technique to automatically detect Gallstones in ultrasound images, christened as, Automated Gallstone Segmentation (AGS) Technique. Speckle Noise in the ultrasound image is first suppressed using Anisotropic Diffusion Technique. The edges are then enhanced using Unsharp Filtering. NCUT Segmentation Technique is then put to use to segment the image. Afterwards, edges are detected using Sobel Edge Detection. Further, Edge Thickening Process is used to smoothen the edges and probability maps are generated using Floodfill Technique. Then, the image is scribbled using Automatic Scribbling Technique. Finally, we get the segmented gallstone within the gallbladder using the Closed Form Matting Technique.",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "cd2fd948b08fd8b187cc9615d9bee8f1",
"text": "The spacing effect in list learning occurs because identical massed items suffer encoding deficits and because spaced items benefit from retrieval and increased time in working memory. Requiring the retrieval of identical items produced a spacing effect for recall and recognition, both for intentional and incidental learning. Not requiring retrieval produced spacing only for intentional learning because intentional learning encourages retrieval. Once-presented words provided baselines for these effects. Next, massed and spaced word pairs were judged for matches on their first three letters, forcing retrieval. The words were not identical, so there was no encoding deficit. Retrieval could and did cause spacing only for the first word of each pair; time in working memory, only for the second.",
"title": ""
},
{
"docid": "6c89c95f3fcc3c0f1da3f4ae16e0475e",
"text": "Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. Here, we introduce a new task of identifying a well-formed natural language question. We construct and release a dataset of 25,100 publicly available questions classified into well-formed and non-wellformed categories and report an accuracy of 70.7% on the test set. We also show that our classifier can be used to improve the performance of neural sequence-to-sequence models for generating questions for reading comprehension.",
"title": ""
},
{
"docid": "4def0dc478dfb5ddb5a0ec59ec7433f5",
"text": "A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.",
"title": ""
},
{
"docid": "c3182fada2dc486fb338654b885cbbfe",
"text": "Traditional syllogisms involve sentences of the following simple forms: All X are Y , Some X are Y , No X are Y ; similar sentences with proper names as subjects, and identities between names. These sentences come with the natural semantics using subsets of a given universe, and so it is natural to ask about complete proof systems. Logical systems are important in this area due to the prominence of syllogistic arguments in human reasoning, and also to the role they have played in logic from Aristotle onwards. We present complete systems for the entire syllogistic fragment and many sub-fragments. These begin with the fragment of All sentences, for which we obtain one of the easiest completeness theorems in logic. The last system extends syllogistic reasoning with the classical boolean operations and cardinality comparisons.",
"title": ""
},
{
"docid": "afec9b75987e95752dcb1392de1c48a0",
"text": "With advancement of technologies and services, data with high velocity, variety and volume is produced which cannot be handled by traditional architectures, algorithms or databases. So, there is a need of new architecture that finds the hidden threads and trends from different structured or unstructured sources and that technique is called BIG DATA. This Review paper presents different Methodologies to implement Big Data that is HPCC (older one) and HADOOP. The whole process of deployment can be divided into 5 phasesData Distillation, Model Deployment, Validation and Deployment, Real time Scoring, Model Refresh. Along with that it concentrates on comparing HPCC , HADOOP and their components also.",
"title": ""
},
{
"docid": "d32fdc6d5dd535079b93b2695ca917d5",
"text": "We present a discrete spectral framework for the sparse or cardinality-constrained solution of a generalized Rayleigh quotient. This NP-hard combinatorial optimization problem is central to supervised learning tasks such as sparse LDA, feature selection and relevance ranking for classification. We derive a new generalized form of the Inclusion Principle for variational eigenvalue bounds, leading to exact and optimal sparse linear discriminants using branch-and-bound search. An efficient greedy (approximate) technique is also presented. The generalization performance of our sparse LDA algorithms is demonstrated with real-world UCI ML benchmarks and compared to a leading SVM-based gene selection algorithm for cancer classification.",
"title": ""
},
{
"docid": "1a8acc86f518712c6f5cfd5adf0b8fb9",
"text": "Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method allowed not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Noteworthy, in children playing a musical instrument, after three and a half years of training the observed interhemispheric asynchronies were reduced by about 2/3, thus suggesting a strong beneficial influence of music experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.",
"title": ""
},
{
"docid": "8a8acb74a69005a37a0adbb3b6e45746",
"text": "We introduce Similarity Group Proposal Network (SGPN), a simple and intuitive deep learning framework for 3D object instance segmentation on point clouds. SGPN uses a single network to predict point grouping proposals and a corresponding semantic class for each proposal, from which we can directly extract instance segmentation results. Important to the effectiveness of SGPN is its novel representation of 3D instance segmentation results in the form of a similarity matrix that indicates the similarity between each pair of points in embedded feature space, thus producing an accurate grouping proposal for each point. Experimental results on various 3D scenes show the effectiveness of our method on 3D instance segmentation, and we also evaluate the capability of SGPN to improve 3D object detection and semantic segmentation results. We also demonstrate its flexibility by seamlessly incorporating 2D CNN features into the framework to boost performance.",
"title": ""
},
{
"docid": "1b98568349b1a1e8239013385e9c6023",
"text": "We present fast and robust algorithms for the inverse kinematics of serial manipulators consisting of six or fewer joints. When stated mathematically, the problem of inverse kinematics reduces to simultaneously solving a system of algebraic equations. In this paper, we use a series of algebraic and numeric transformations to reduce the problem to computing the eigenstructure of a matrix pencil. To e ciently compute the eigenstructure, we make use of the symbolic formulation of the matrix and use a number of techniques from linear algebra and matrix computations. The resulting algorithm computes all the solution of a serial manipulator with six or fewer joints in the order of tens of milliseconds on the current workstations. It has been implemented as part of a generic package, KINEM, for the inverse kinematics of serial manipulators.",
"title": ""
},
{
"docid": "6bb4600498b34121c32b5d428ec3e49f",
"text": "Parametric surfaces are an essential modeling tool in computer aided design and movie production. Even though their use is well established in industry, generating ray-traced images adds significant cost in time and memory consumption. Ray tracing such surfaces is usually accomplished by subdividing the surfaces on the fly, or by conversion to a polygonal representation. However, on-the-fly subdivision is computationally very expensive, whereas polygonal meshes require large amounts of memory. This is a particular problem for parametric surfaces with displacement, where very fine tessellation is required to faithfully represent the shape. Hence, memory restrictions are the major challenge in production rendering. In this article, we present a novel solution to this problem. We propose a compression scheme for a priori Bounding Volume Hierarchies (BVHs) on parametric patches, that reduces the data required for the hierarchy by a factor of up to 48. We further propose an approximate evaluation method that does not require leaf geometry, yielding an overall reduction of memory consumption by a factor of 60 over regular BVHs on indexed face sets and by a factor of 16 over established state-of-the-art compression schemes. Alternatively, our compression can simply be applied to a standard BVH while keeping the leaf geometry, resulting in a compression rate of up to 2:1 over current methods. Although decompression generates additional costs during traversal, we can manage very complex scenes even on the memory restrictive GPU at competitive render times.",
"title": ""
},
{
"docid": "050443f5d84369f942c3f611775d37ed",
"text": "A variety of methods for computing factor scores can be found in the psychological literature. These methods grew out of a historic debate regarding the indeterminate nature of the common factor model. Unfortunately, most researchers are unaware of the indeterminacy issue and the problems associated with a number of the factor scoring procedures. This article reviews the history and nature of factor score indeterminacy. Novel computer programs for assessing the degree of indeterminacy in a given analysis, as well as for computing and evaluating different types of factor scores, are then presented and demonstrated using data from the Wechsler Intelligence Scale for Children-Third Edition. It is argued that factor score indeterminacy should be routinely assessed and reported as part of any exploratory factor analysis and that factor scores should be thoroughly evaluated before they are reported or used in subsequent statistical analyses.",
"title": ""
}
] |
scidocsrr
|
0669ff4114dd6f1fc2a53b4e2d390bfd
|
Annotating evidence-based argumentation in biomedical text
|
[
{
"docid": "b77bf3a4cfba0033a7fcdf777c803da4",
"text": "Argumentation mining involves automatically identifying the premises, conclusion, and type of each argument as well as relationships between pairs of arguments in a document. We describe our plan to create a corpus from the biomedical genetics research literature, annotated to support argumentation mining research. We discuss the argumentation elements to be annotated, theoretical challenges, and practical issues in creating such a corpus.",
"title": ""
},
{
"docid": "15518edc9bde13f55df3192262c3a9bf",
"text": "Under the framework of the argumentation scheme theory (Walton, 1996), we developed annotation protocols for an argumentative writing task to support identification and classification of the arguments being made in essays. Each annotation protocol defined argumentation schemes (i.e., reasoning patterns) in a given writing prompt and listed questions to help evaluate an argument based on these schemes, to make the argument structure in a text explicit and classifiable. We report findings based on an annotation of 600 essays. Most annotation categories were applied reliably by human annotators, and some categories significantly contributed to essay score. An NLP system to identify sentences containing scheme-relevant critical questions was developed based on the human annotations.",
"title": ""
}
] |
[
{
"docid": "c33d102b53a22887c09e5d9a6f95daf5",
"text": "Title: Prevalence of low back pain and its risk factors among secondary school teachers at Bentong, Pahang Objective: The purpose of this study is to determine the prevalence of low back pain among Secondary School Teachers and to investigate the associated risk factors at Bentong, Pahang. Methodology: A self-administered questionnaire was distributed to 260 subjects through random sampling in 5 secondary schools. Seven female teaches were excluded because they never meet with the inclusion criteria, where at the end of the study only 253 subjects was included. Result: In the study, I found that prevalence of low back pain is high among secondary school teachers. Female teachers reported a significantly higher prevalence of low back pain when compared to male teachers. And the middle age group of teachers has reported high prevalence of pain compare to the younger and older age group. The highest risk factor for the low back pain among teachers is prolong standing, followed by prolong sitting and working with computer. Conclusion: We found a high prevalence of low back pain among school teachers with most female and middle age group people affected and they are related with highest risk factor. There is a need to develop specific strategies on ergonomics educate, regular physical exercises and occupational stress in the schools to reduce the occurrence of Work-related Musculoskeletal Disorders (WMSDs) of the low back pain among teachers.",
"title": ""
},
{
"docid": "7844d2e53deba7bcfef03f06a6bced59",
"text": "In power line communications (PLCs), the multipath-induced dispersion and the impulsive noise are the two fundamental impediments in the way of high-integrity communications. The conventional orthogonal frequency-division multiplexing (OFDM) system is capable of mitigating the multipath effects in PLCs, but it fails to suppress the impulsive noise effects. Therefore, in order to mitigate both the multipath effects and the impulsive effects in PLCs, in this paper, a compressed impairment sensing (CIS)-assisted and interleaved-double-FFT (IDFFT)-aided system is proposed for indoor broadband PLC. Similar to classic OFDM, data symbols are transmitted in the time-domain, while the equalization process is employed in the frequency domain in order to achieve the maximum attainable multipath diversity gain. In addition, a specifically designed interleaver is employed in the frequency domain in order to mitigate the impulsive noise effects, which relies on the principles of compressed sensing (CS). Specifically, by taking advantage of the interleaving process, the impairment impulsive samples can be estimated by exploiting the principle of CS and then cancelled. In order to improve the estimation performance of CS, we propose a beneficial pilot design complemented by a pilot insertion scheme. Finally, a CIS-assisted detector is proposed for the IDFFT system advocated. Our simulation results show that the proposed CIS-assisted IDFFT system is capable of achieving a significantly improved performance compared with the conventional OFDM. Furthermore, the tradeoffs to be struck in the design of the CIS-assisted IDFFT system are also studied.",
"title": ""
},
{
"docid": "a11c3f75f6ced7f43e3beeb795948436",
"text": "A new concept of building the controller of a thyristor based three-phase dual converter is presented in this paper. The controller is implemented using mixed mode digital-analog circuitry to achieve optimized performance. The realtime six state pulse patterns needed for the converter are generated by a specially designed ROM based circuit synchronized to the power frequency by a phase-locked-loop. The phase angle and other necessary commands for the converter are managed by an AT89C51 microcontroller. The proposed architecture offers 128-steps in the phase angle control, a resolution sufficient for most converter applications. Because of the hybrid nature of the implementation, the controller can change phase angles online smoothly. The computation burden on the microcontroller is nominal and hence it can easily undertake the tasks of monitoring diagnostic data like overload, loss of excitation and phase sequence. Thus a full fledged system is realizable with only one microcontroller chip, making the control system economic, reliable and efficient.",
"title": ""
},
{
"docid": "5ed0c2b69af2ac1845a58689a43ef1b7",
"text": "Gaze estimation is the process of determining the point of gaze in the space, or the visual axis of an eye. It plays an important role in representing human attention; therefore, it can be most appropriately used in Human Computer Interaction as a means of an advance computer input. Here, the focus is to develop a gaze estimation method for Human Computer Interaction using an ordinary webcam mounted on the top of the computer screen without any additional or specialized hardware. The eye center coordinates are obtained with the geometrical eye model and edge gradients. To improve the reliability, the estimates from two eye centers are combined to reduce the noise and improve the accuracy. Facial land marking is done to identify a precise reference point on the face between the nose. The ellipse fitting and RANSAC method is used to estimate the gaze coordinates and to reject the outliers. This approach can estimate the gaze coordinates with high degree of accuracy even when significant numbers of outliers are present in the data set. Several refinements such as feedback and masking, queuing and averaging are proposed to make the system more stable and useful practically. The results show that the proposed method can be successfully applied to commercial gaze tracking systems using ordinary webcams.",
"title": ""
},
{
"docid": "0b6693195ef302e2c160d65956d80eea",
"text": "Let f : Sd−1 × Sd−1 → R be a function of the form f(x,x′) = g(〈x,x′〉) for g : [−1, 1] → R. We give a simple proof that shows that poly-size depth two neural networks with (exponentially) bounded weights cannot approximate f whenever g cannot be approximated by a low degree polynomial. Moreover, for many g’s, such as g(x) = sin(πdx), the number of neurons must be 2 . Furthermore, the result holds w.r.t. the uniform distribution on Sd−1 × Sd−1. As many functions of the above form can be well approximated by poly-size depth three networks with polybounded weights, this establishes a separation between depth two and depth three networks w.r.t. the uniform distribution on Sd−1 × Sd−1.",
"title": ""
},
{
"docid": "ea95e7602bd35abe9f5df26ddd3a2110",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.01.073 * Corresponding author. Tel.: +886 3 3706190. E-mail addresses: [email protected] (C.-W. L (G.-H. Tzeng). 1 Distinguished Chair Professor. To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can confirm interdependence among variables and aid in the development of a chart to reflect interrelationships between variables, and can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation— the impact-relations map—by which respondents organize their own actions in the world. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In the existing literature, the threshold value has been determined through interviews with respondents or judged by the researcher. In most cases, it is hard and time-consuming to aggregate the respondents and make a consistent decision. In addition, in order to avoid subjective judgments, a theoretical method to select the threshold value is necessary. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using a real case to find the interrelationships between the services of a Semiconductor Intellectual Property Mall as an example, we will compare the results obtained from the respondents and from our method, and show that the impact-relations maps from these two methods could be the same. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f8fa7b3b1f8caf12540a98073939db1f",
"text": "Respiratory rate and body position are two major physiological parameters in sleep study, and monitoring them during sleep can provide helpful information for health care. In this paper, we present SleepMonitor, a smartwatch based system which leverages the built-in accelerometer to monitor the respiratory rate and body position. To calculate respiratory rate, we design a filter to extract the weak respiratory signal from the noisy accelerometer data collected on the wrist, and use frequency analysis to estimate the respiratory rate from the data along each axis. Further, we design a multi-axis fusion approach which can adaptively adjust the estimates from the three axes and then significantly improve the estimation accuracy. To detect the body position, we apply machine learning techniques based on the features extracted from the accelerometer data. We have implemented our system on Android Wear based smartwatches and evaluated its performance in real experiments. The results show that our system can monitor respiratory rate and body position during sleep with high accuracy under various conditions.",
"title": ""
},
{
"docid": "3c7d1826dae9b251b1ddd7d3a6837d8a",
"text": "A chatbot named Freudbot was constructed using the open source architecture of AIML to determine if a famous person application of chatbot technology could improve student-content interaction in distance education. Fifty-three students in psychology completed a study in which they chatted with Freudbot over the web for 10 minutes under one of two instructional sets. They then completed a questionnaire to provide information about their experience and demographic variables. The results from the questionnaire indicated a neutral evaluation of the chat experience although participants positively endorsed the expansion of chatbot technology and provided clear direction for future development and improvement. A basic analysis of the chatlogs indicated a high proportion of on-task behaviour. There was no effect of instructional set. Altogether, the findings indicate that famous person applications of chatbot technology may be promising as a teaching and learning tool in distance and online education. Chatbots are agents programmed to mimic human conversationalists. The first and still quite successful chatbot was ELIZA (Weizenbaum, 1966), a computer program designed to emulate a Rogerian therapist, a type of self-directed therapy where the patient’s discourse is redirected back to the patient by the therapist usually in the form of a question. “Its name was chosen to emphasize that it may be incrementally improved by its users, since its language abilities may be continually improved by a \"teacher\". Like the ELIZA of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.” (Weizenbaum, 1966, p.2) The playwright in this case is the programmer but instead of classic Artificial Intelligence, ELIZA was programmed with rules to give the illusion of understanding. Essentially, ELIZA was programmed to recognize keywords and choose an appropriate transformation based on the immediate linguist context. Weizenbaum used the term ‘script’ to refer to the collection of keywords and associated transformation rules. Even though ELIZA is easily exposed as a fraud in the Turing sense, the popularity of the Rogerian therapist script remains high and there are a number of sites that allow you access to ELIZA. It is interesting to note that of all the scripts planned and developed by Weisenbaum, the Rogerian therapist script was the most enduring. Arguably the most successful chatbot today is ALICE (Artificial Linguistic Internet Chat Entity), 3 time winner of the Loebner Prize, the holy grail for chatbots. ALICE was written by Richard Wallace and although no chatbot has passed the Turing test in the Loebner competition, ALICE has been judged the most human-like chatbot in 2000, 2001, and 2004. Like ELIZA, ALICE has no true understanding and is programmed to recognize templates and respond with patterns according to the context. Moreover, like ELIZA, ALICE is incrementally improved with the addition of new responses. Unlike ELIZA, ALICE is programmed to talk to people on the web for as long as possible on any topic. Compared to the ELIZA’s knowledge of 200 keywords and rules, ALICE is embodied by approximately 41,000 templates and associated patterns. Perhaps the most important difference between ALICE and ELIZA is that ALICE is written in AIML (Artificial Intelligence Markup Language), an XML-based open source language with a reasonably active development community. 
There are also a variety of AIML parsers available written in Java, Perl, PHP, and C++ that permit interaction through a variety of interfaces, from simple web pages to Flash-based (or other) animation, instant messaging, and even voice input/output. In addition, Pandorabots, a web service that promotes and supports the use of ALICE and AIML is reporting support for over 20,000 chatbots on their site (http://www.pandorabots.com). At Pandorabots, would-be botmasters can easily create their own chatbot by modifying the personality of ALICE or by starting from scratch. An AIML chatbot is suitable for many educational applications but our interest was in the famous personality application. Specifically, we were interested in whether students would enjoy and benefit from chatting with famous historical figures in psychology. As a distance education provider, we are always looking for ways to improve the interaction between student and course content over the web. Chatting with an historical figure via the internet may be intrinsically more interesting than the same information presented in a standard third party format over the web. In terms of a theoretical rationale, there are several bases for investigating a famous personality application of chatbot technology as learning tool in distance education. Social constructionist theories of learning emphasize collaboration and conversation as a natural and effective means of knowledge construction and elaboration. The work of Graesser and colleagues on AutoTutor is based largely on these theories (see Graesser,Wiemer-Hastings, Wiemer-Hastings, Kreuz, & Tutoring Research Group 1999). A second rationale is found in the work of Cassell and colleagues on Embodied Conversational Agents (ECA). Cassell indicates that motivation for their research is based on the primacy of conversation as a natural skill learned early and effortlessly in life (Cassell, Bickmore, Campbell, Vilhjalmsson, & Yan, 2000). A conversational interface to a famous psychologist should be engaging and intuitive. A third rationale is provided through cognitive resource theory that argues linguistic rules governing conversational exchanges are automatic in nature due to frequency of use and consequently, free up additional resources to devote to encoding, understanding, and learning. Finally, according to the media equation (Reeves & Nass, 1996), people are predisposed to treat computers, television and other instances of media as people. They describe a number of experimental studies that generally show no differences in how media is ‘treated’ in comparison to people. The social rules that govern human-human interactions appear to govern human-computer interactions as well. If this is the case, then people may be predisposed to interact with a famous person application on the computer given the close fit of the application to human and conversational characteristics.",
"title": ""
},
{
"docid": "bb72e4d6f967fb88473756cdcbb04252",
"text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.",
"title": ""
},
{
"docid": "345e46da9fc01a100f10165e82d9ca65",
"text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.",
"title": ""
},
{
"docid": "abff55f0189ac9aff9db78212c88abf0",
"text": "The climatic modifications lead to global warming; favouring the risk of the appearance and development of diseases are considered until now tropical diseases. Another important factor is the workers' immigration, the economic crisis favouring the passive transmission of new species of culicidae from different areas. Malaria is the disease with the widest distribution in the globe. Millions of people are infected every year in Africa, India, South-East Asia, Middle East, and Central and South America, with more than 41% of the global population under the risk of infestation with malaria. The increase of the number of local cases reported in 2007-2011 indicates that the conditions can favour the high local transmission in the affected areas. In the situation presented, the establishment of the level of risk concerning the reemergence of malaria in Romania becomes a priority.",
"title": ""
},
{
"docid": "f9bf332a0c2c278415a2815b5637ec75",
"text": "The Full Adder circuit is an important component in application such as Digital Signal Processing (DSP) architecture, microprocessor, and microcontroller and data processing units. This paper discusses the evolution of full adder circuits in terms of lesser power consumption, higher speed. Starting with the most conventional 28 transistor full adder and then gradually studied full adders consisting of as less as 8 transistors. We have also included some of the most popular full adder cells like dynamic CMOS [9], Dual rail domino logic[14], Static Energy Recovery Full Adder (SERF) [7] [8], Adder9A, Adder9B, GDI based full adder.",
"title": ""
},
{
"docid": "26cecceea22566025c22e66376dbb138",
"text": "The development of technologies related to the Internet of Things (IoT) provides a new perspective on applications pertaining to smart cities. Smart city applications focus on resolving issues facing people in everyday life, and have attracted a considerable amount of research interest. The typical issue encountered in such places of daily use, such as stations, shopping malls, and stadiums is crowd dynamics management. Therefore, we focus on crowd dynamics management to resolve the problem of congestion using IoT technologies. Real-time crowd dynamics management can be achieved by gathering information relating to congestion and propose less crowded places. Although many crowd dynamics management applications have been proposed in various scenarios and many models have been devised to this end, a general model for evaluating the control effectiveness of crowd dynamics management has not yet been developed in IoT research. Therefore, in this paper, we propose a model to evaluate the performance of crowd dynamics management applications. In other words, the objective of this paper is to present the proof-of-concept of control effectiveness of crowd dynamics management. Our model uses feedback control theory, and enables an integrated evaluation of the control effectiveness of crowd dynamics management methods under various scenarios. We also provide extensive numerical results to verify the effectiveness of the model.",
"title": ""
},
{
"docid": "2dd9bb2536fdc5e040544d09fe3dd4fa",
"text": "Low 1/f noise, low-dropout (LDO) regulators are becoming critical for the supply regulation of deep-submicron analog baseband and RF system-on-chip designs. A low-noise, high accuracy LDO regulator (LN-LDO) utilizing a chopper stabilized error amplifier is presented. In order to achieve fast response during load transients, a current-mode feedback amplifier (CFA) is designed as a second stage driving the regulation FET. In order to reduce clock feed-through and 1/f noise accumulation at the chopping frequency, a first-order digital SigmaDelta noise-shaper is used for chopping clock spectral spreading. With up to 1 MHz noise-shaped modulation clock, the LN-LDO achieves a noise spectral density of 32 nV/radic(Hz) and a PSR of 38 dB at 100 kHz. The proposed LDO is shown to reduce the phase noise of an integrated 32 MHz temperature compensated crystal oscillator (TCXO) at 10 kHz offset by 15 dB. Due to reduced 1/f noise requirements, the error amplifier silicon area is reduced by 75%, and the overall regulator area is reduced by 50% with respect to an equivalent noise static regulator. The current-mode feedback second stage buffer reduces regulator settling time by 60% in comparison to an equivalent power consumption voltage mode buffer, achieving 0.6 mus settling time for a 25-mA load step. The LN-LDO is designed and fabricated on a 0.25 mum CMOS process with five layers of metal, occupying 0.88 mm2.",
"title": ""
},
{
"docid": "fec4f80f907d65d4b73480b9c224d98a",
"text": "This paper presents a novel finite position set-phase locked loop (FPS-PLL) for sensorless control of surface-mounted permanent-magnet synchronous generators (PMSGs) in variable-speed wind turbines. The proposed FPS-PLL is based on the finite control set-model predictive control concept, where a finite number of rotor positions are used to estimate the back electromotive force of the PMSG. Then, the estimated rotor position, which minimizes a certain cost function, is selected to be the optimal rotor position. This eliminates the need of a fixed-gain proportional-integral controller, which is commonly utilized in the conventional PLL. The performance of the proposed FPS-PLL has been experimentally investigated and compared with that of the conventional one using a 14.5 kW PMSG with a field-oriented control scheme utilized as the generator control strategy. Furthermore, the robustness of the proposed FPS-PLL is investigated against PMSG parameters variations.",
"title": ""
},
{
"docid": "7b806cbde7cd0c2682402441a578ec9c",
"text": "We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to diierent classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the diierent classes of basis functions correspond to diierent classes of prior probabilities on the approximating function spaces, and therefore to diierent types of smoothness assumptions. In summary, diierent multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to diierent classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer.",
"title": ""
},
{
"docid": "80912c6ff371cdc47ef92e793f2497a0",
"text": "Since the explosion of the Web as a business medium, one of its primary uses has been for marketing. Soon, the Web will become a critical distribution channel for the majority of successful enterprises. The mass media, consumer marketers and advertising agencies seem to be in the midst of Internet discovery and exploitation. Before a company can envision what might sell online in the coming years, it must ®rst understand the attitudes and behaviour of its potential customers. Hence, this study examines attitudes toward various aspects of online shopping and provides a better understanding of the potential of electronic commerce for both researchers and practitioners.",
"title": ""
},
{
"docid": "0b59b6f7e24a4c647ae656a0dc8cc3ab",
"text": "Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval and information extraction; and as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. r 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a1f60b03cf3a7dde3090cbf0a926a7e9",
"text": "Secondary analyses of Revised NEO Personality Inventory data from 26 cultures (N = 23,031) suggest that gender differences are small relative to individual variation within genders; differences are replicated across cultures for both college-age and adult samples, and differences are broadly consistent with gender stereotypes: Women reported themselves to be higher in Neuroticism, Agreeableness, Warmth, and Openness to Feelings, whereas men were higher in Assertiveness and Openness to Ideas. Contrary to predictions from evolutionary theory, the magnitude of gender differences varied across cultures. Contrary to predictions from the social role model, gender differences were most pronounced in European and American cultures in which traditional sex roles are minimized. Possible explanations for this surprising finding are discussed, including the attribution of masculine and feminine behaviors to roles rather than traits in traditional cultures.",
"title": ""
},
{
"docid": "3ff58e78ac9fe623e53743ad05248a30",
"text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock-gating at gate-level not only saves time compared to implementing clock-gating in the RTL code but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock-gating at different hierarchical levels on a serial peripheral interface (SPI) design. In general power savings of about 30% and 36% reduction on toggle rate can be seen with different complex clock- gating methods with respect to no clock-gating in the design.",
"title": ""
}
] |
scidocsrr
|
515a00396b86e1b7c75d103e47ea01d9
|
TwoStep: An Authentication Method Combining Text and Graphical Passwords
|
[
{
"docid": "25fdc0032236131be6e266c6bdac37d1",
"text": "Shoulder-surfing -- using direct observation techniques, such as looking over someone's shoulder, to get passwords, PINs and other sensitive personal information -- is a problem that has been difficult to overcome. When a user enters information using a keyboard, mouse, touch screen or any traditional input device, a malicious observer may be able to acquire the user's password credentials. We present EyePassword, a system that mitigates the issues of shoulder surfing via a novel approach to user input.\n With EyePassword, a user enters sensitive input (password, PIN, etc.) by selecting from an on-screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical. We present a number of design choices and discuss their effect on usability and security. We conducted user studies to evaluate the speed, accuracy and user acceptance of our approach. Our results demonstrate that gaze-based password entry requires marginal additional time over using a keyboard, error rates are similar to those of using a keyboard and subjects preferred the gaze-based password entry approach over traditional methods.",
"title": ""
},
{
"docid": "0c7512ac95d72436e31b9b05199eefdd",
"text": "Usable security has unique usability challenges bec ause the need for security often means that standard human-comput er-in eraction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in s electing better passwords, thus increasing security by expanding th e effective password space. In click-based graphical passwords, poorly chosen passwords lead to the emergence of hotspots – portions of the image where users are more likely to select cli ck-points, allowing attackers to mount more successful diction ary attacks. We use persuasion to influence user choice in click -based graphical passwords, encouraging users to select mo re random, and hence more secure, click-points. Our approach i s to introduce persuasion to the Cued Click-Points graphical passw ord scheme (Chiasson, van Oorschot, Biddle, 2007) . Our resulting scheme significantly reduces hotspots while still maintain ing its usability.",
"title": ""
},
{
"docid": "46a66d6d3d4ad927deb96d8d15af6669",
"text": "Security questions (or challenge questions) are commonly used to authenticate users who have lost their passwords. We examined the password retrieval mechanisms for a number of personal banking websites, and found that many of them rely in part on security questions with serious usability and security weaknesses. We discuss patterns in the security questions we observed. We argue that today's personal security questions owe their strength to the hardness of an information-retrieval problem. However, as personal information becomes ubiquitously available online, the hardness of this problem, and security provided by such questions, will likely diminish over time. We supplement our survey of bank security questions with a small user study that supplies some context for how such questions are used in practice.",
"title": ""
}
] |
[
{
"docid": "727c36aac7bd0327f3edb85613dcf508",
"text": "The interpretation of adjective-noun pairs plays a crucial role in tasks such as recognizing textual entailment. Formal semantics often places adjectives into a taxonomy which should dictate adjectives’ entailment behavior when placed in adjective-noun compounds. However, we show experimentally that the behavior of subsective adjectives (e.g. red) versus non-subsective adjectives (e.g. fake) is not as cut and dry as often assumed. For example, inferences are not always symmetric: while ID is generally considered to be mutually exclusive with fake ID, fake ID is considered to entail ID. We discuss the implications of these findings for automated natural language understanding.",
"title": ""
},
{
"docid": "391f9b889b1c3ffe3e8ee422d108edcd",
"text": "Does the brain of a bilingual process language differently from that of a monolingual? We compared how bilinguals and monolinguals recruit classic language brain areas in response to a language task and asked whether there is a neural signature of bilingualism. Highly proficient and early-exposed adult Spanish-English bilinguals and English monolinguals participated. During functional magnetic resonance imaging (fMRI), participants completed a syntactic sentence judgment task [Caplan, D., Alpert, N., & Waters, G. Effects of syntactic structure and propositional number on patterns of regional cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541552, 1998]. The sentences exploited differences between Spanish and English linguistic properties, allowing us to explore similarities and differences in behavioral and neural responses between bilinguals and monolinguals, and between a bilingual's two languages. If bilinguals' neural processing differs across their two languages, then differential behavioral and neural patterns should be observed in Spanish and English. Results show that behaviorally, in English, bilinguals and monolinguals had the same speed and accuracy, yet, as predicted from the Spanish-English structural differences, bilinguals had a different pattern of performance in Spanish. fMRI analyses revealed that both monolinguals (in one language) and bilinguals (in each language) showed predicted increases in activation in classic language areas (e.g., left inferior frontal cortex, LIFC), with any neural differences between the bilingual's two languages being principled and predictable based on the morphosyntactic differences between Spanish and English. However, an important difference was that bilinguals had a significantly greater increase in the blood oxygenation level-dependent signal in the LIFC (BA 45) when processing English than the English monolinguals. The results provide insight into the decades-old question about the degree of separation of bilinguals' dual-language representation. The differential activation for bilinguals and monolinguals opens the question as to whether there may possibly be a neural signature of bilingualism. Differential activation may further provide a fascinating window into the language processing potential not recruited in monolingual brains and reveal the biological extent of the neural architecture underlying all human language.",
"title": ""
},
{
"docid": "a33348ee1396be9be333eb3be8dadb39",
"text": "In the multi-MHz low voltage, high current applications, Synchronous Rectification (SR) is strongly needed due to the forward recovery and the high conduction loss of the rectifier diodes. This paper applies the SR technique to a 10-MHz isolated class-Φ2 resonant converter and proposes a self-driven level-shifted Resonant Gate Driver (RGD) for the SR FET. The proposed RGD can reduce the average on-state resistance and the associated conduction loss of the MOSFET. It also provides precise switching timing for the SR so that the body diode conduction time of the SR FET can be minimized. A 10-MHz prototype with 18 V input, 5 V/2 A output was built to verify the advantage of the SR with the proposed RGD. At full load of 2 A, the SR with the proposed RGD improves the converter efficiency from 80.2% using the SR with the conventional RGD to 82% (an improvement of 1.8%). Compared to the efficiency of 77.3% using the diode rectification, the efficiency improvement is 4.7%.",
"title": ""
},
{
"docid": "60cbe9d8e1cbc5dd87c8f438cc766a0b",
"text": "Drosophila mounts a potent host defence when challenged by various microorganisms. Analysis of this defence by molecular genetics has now provided a global picture of the mechanisms by which this insect senses infection, discriminates between various classes of microorganisms and induces the production of effector molecules, among which antimicrobial peptides are prominent. An unexpected result of these studies was the discovery that most of the genes involved in the Drosophila host defence are homologous or very similar to genes implicated in mammalian innate immune defences. Recent progress in research on Drosophila immune defence provides evidence for similarities and differences between Drosophila immune responses and mammalian innate immunity.",
"title": ""
},
{
"docid": "c8af04fbdc92bfe5b9d35220f6ee6c61",
"text": "The academic and professional literature offer many different definitions and models of IT governance (ITG). Considerable advancements have been made in identifying the components and mechanisms of ITG. However, much of the research to date has followed an empirical approach, using case studies to examine how contemporary organizations are implementing effective governance arrangements. This paper seeks to propose a theoretical model of the corporate governance of IT using the principles of cybernetics as embodied in Stafford Beer's Viable System Model (VSM) As this paper is primarily concerned with corporate governance of IT, only System 5 of the VSM is examined in detail.",
"title": ""
},
{
"docid": "59ddabc255d07fe6b8fb13082c8dd62d",
"text": "Mambo is a full-system simulator for modeling PowerPC-based systems. It provides building blocks for creating simulators that range from purely functional to timing-accurate. Functional versions support fast emulation of individual PowerPC instructions and the devices necessary for executing operating systems. Timing-accurate versions add the ability to account for device timing delays, and support the modeling of the PowerPC processor microarchitecture. We describe our experience in implementing the simulator and its uses within IBM to model future systems, support early software development, and design new system software.",
"title": ""
},
{
"docid": "b117e0e32d754f59c7d3eacdc609f63b",
"text": "Mass media campaigns are widely used to expose high proportions of large populations to messages through routine uses of existing media, such as television, radio, and newspapers. Exposure to such messages is, therefore, generally passive. Such campaigns are frequently competing with factors, such as pervasive product marketing, powerful social norms, and behaviours driven by addiction or habit. In this Review we discuss the outcomes of mass media campaigns in the context of various health-risk behaviours (eg, use of tobacco, alcohol, and other drugs, heart disease risk factors, sex-related behaviours, road safety, cancer screening and prevention, child survival, and organ or blood donation). We conclude that mass media campaigns can produce positive changes or prevent negative changes in health-related behaviours across large populations. We assess what contributes to these outcomes, such as concurrent availability of required services and products, availability of community-based programmes, and policies that support behaviour change. Finally, we propose areas for improvement, such as investment in longer better-funded campaigns to achieve adequate population exposure to media messages.",
"title": ""
},
{
"docid": "a1cd4a4ce70c9c8672eee5ffc085bf63",
"text": "Ternary logic is a promising alternative to conventional binary logic, since it is possible to achieve simplicity and energy efficiency due to the reduced circuit overhead. In this paper, a ternary magnitude comparator design based on Carbon Nanotube Field Effect Transistors (CNFETs) is presented. This design eliminates the usage of complex ternary decoder which is a part of existing designs. Elimination of decoder results in reduction of delay and power. Simulations of proposed and existing designs are done on HSPICE and results proves that the proposed 1-bit comparator consumes 81% less power and shows delay advantage of 41.6% compared to existing design. Further a methodology to extend the 1-bit comparator design to n-bit comparator design is also presented.",
"title": ""
},
{
"docid": "40a03f90a9d32ae71946ac8d4d456fca",
"text": "This paper presents a recommender system for tourism based on the tastes of the users, their demographic classification and the places they have visited in former trips. The system is able to offer recommendations for a single user or a group of users. The group recommendation is elicited out of the individual personal recommendations through the application of mechanisms such as aggregation and intersection. The elicitation mechanism is implemented as an extension of e-Tourism, a user-adapted tourism and leisure application whose main component is the Generalist Recommender System Kernel (GRSK), a domain-independent taxonomy-driven recommender system.",
"title": ""
},
{
"docid": "367d1b8e188231145824d0577ab6bd40",
"text": "This paper describes the experiences of introducing ISO 9000 into Taiwan's higher education systems. Based on an empirical investigation and a case study, the authors argue that the implementation of ISO 9000 quality systems has a positive impact on the education quality. The benefits of ISO 9000 certification are further depicted for those interested in complying with the Standard. We also justify the current progress of the ISO 9000 implementation in Taiwan with recommendations for improvement.",
"title": ""
},
{
"docid": "309a5105be37cbbae67619eac6874f12",
"text": "PURPOSE\nTo conduct a systematic review of prospective studies assessing the association of vitamin D intake or blood levels of 25-hydroxyvitamin D [25(OH)D] with the risk of colorectal cancer using meta-analysis.\n\n\nMETHODS\nRelevant studies were identified by a search of MEDLINE and EMBASE databases before October 2010 with no restrictions. We included prospective studies that reported relative risk (RR) estimates with 95% CIs for the association between vitamin D intake or blood 25(OH)D levels and the risk of colorectal, colon, or rectal cancer. Approximately 1,000,000 participants from several countries were included in this analysis.\n\n\nRESULTS\nNine studies on vitamin D intake and nine studies on blood 25(OH)D levels were included in the meta-analysis. The pooled RRs of colorectal cancer for the highest versus lowest categories of vitamin D intake and blood 25(OH)D levels were 0.88 (95% CI, 0.80 to 0.96) and 0.67 (95% CI, 0.54 to 0.80), respectively. There was no heterogeneity among studies of vitamin D intake (P = .19) or among studies of blood 25(OH)D levels (P = .96). A 10 ng/mL increment in blood 25(OH)D level conferred an RR of 0.74 (95% CI, 0.63 to 0.89).\n\n\nCONCLUSION\nVitamin D intake and blood 25(OH)D levels were inversely associated with the risk of colorectal cancer in this meta-analysis.",
"title": ""
},
{
"docid": "d7ca5db3257c5aaf0524cd3a855ac2a7",
"text": "This paper presented the clinical results of breast cancer detection using a radar-based UWB microwave system developed at the University of Bristol. Additionally, the system overview and some experimental laboratory results are presented as well. For the clinical result shown in this contribution, we compare images obtained using the standard X-ray mammography and the radar-based microwave system. The developed microwave system has apparently successfully detected the tumor in correct position, as confirmed on the X-ray image, although the compression suffered by the breast during X-ray makes a precise positional determination impossible.",
"title": ""
},
{
"docid": "619b39299531f126769aa96b3e0e84e1",
"text": "In this paper, we focus on the opinion target extraction as part of the opinion mining task. We model the problem as an information extraction task, which we address based on Conditional Random Fields (CRF). As a baseline we employ the supervised algorithm by Zhuang et al. (2006), which represents the state-of-the-art on the employed data. We evaluate the algorithms comprehensively on datasets from four different domains annotated with individual opinion target instances on a sentence level. Furthermore, we investigate the performance of our CRF-based approach and the baseline in a singleand cross-domain opinion target extraction setting. Our CRF-based approach improves the performance by 0.077, 0.126, 0.071 and 0.178 regarding F-Measure in the single-domain extraction in the four domains. In the crossdomain setting our approach improves the performance by 0.409, 0.242, 0.294 and 0.343 regarding F-Measure over the baseline.",
"title": ""
},
{
"docid": "ba3f0e792b896b38f8844807a8d8e80e",
"text": "In this paper, we present a novel self-learning single image super-resolution (SR) method, which restores a high-resolution (HR) image from self-examples extracted from the low-resolution (LR) input image itself without relying on extra external training images. In the proposed method, we directly use sampled image patches as the anchor points, and then learn multiple linear mapping functions based on anchored neighborhood regression to transform LR space into HR space. Moreover, we utilize the flipped and rotated versions of the self-examples to expand the internal patch space. Experimental comparison on standard benchmarks with state-of-the-art methods validates the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "74308938c3fbfd7ee22093d7df36009e",
"text": "BACKGROUND\nThe relationship of gray and white matter atrophy in multiple sclerosis (MS) to neuropsychological and neuropsychiatric impairment has not been examined.\n\n\nMETHODS\nIn 40 patients with MS and 15 age-/sex-matched normal controls, the authors used SPM99 to obtain whole brain normalized volumes of gray and white matter, as well as measured conventional lesion burden (total T1 hypointense and FLAIR hyperintense lesion volume). The whole brain segmentation was corrected for misclassification related to MS brain lesions. To compare the effects of gray matter, white matter, and lesion volumes with respect to brain-behavior relationships, the MS group (disease duration = 11.2 +/- 8.8 years; EDSS score = 3.3 +/- 1.9) underwent neuropsychological assessment, and was compared to a separate, larger group of age-/sex-matched normal controls (n = 83).\n\n\nRESULTS\nThe MS group had smaller gray (p = 0.009) and white matter volume (p = 0.018), impaired cognitive performance (verbal memory, visual memory, processing speed, and working memory) (all p < 0.0001), and greater neuropsychiatric symptoms (depression, p < 0.0001; dysphoria, p < 0.0001; irritability, p < 0.0001; anxiety, p < 0.0001; euphoria, p = 0.006; agitation, p = 0.02; apathy, p = 0.02; and disinhibition, p = 0.11) vs controls. Hierarchical stepwise regression analysis revealed that whole gray and white matter volumes accounted for greater variance than lesion burden in explaining cognitive performance and neuropsychiatric symptoms. White matter volume was the best predictor of mental processing speed and working memory, whereas gray matter volume predicted verbal memory, euphoria, and disinhibition.\n\n\nCONCLUSION\nBoth gray and white brain matter atrophy contribute to neuropsychological deficits in multiple sclerosis.",
"title": ""
},
{
"docid": "473f80115b7fa9979d6d6ffa2995c721",
"text": "Context Olive oil, the main fat in the Mediterranean diet, contains polyphenols, which have antioxidant properties and may affect serum lipid levels. Contribution The authors studied virgin olive oil (high in polyphenols), refined olive oil (low in polyphenols), and a mixture of the 2 oils in equal parts. Two hundred healthy young men consumed 25 mL of an olive oil daily for 3 weeks followed by the other olive oils in a randomly assigned sequence. Olive oils with greater polyphenol content increased high-density lipoprotein (HDL) cholesterol levels and decreased serum markers of oxidation. Cautions The increase in HDL cholesterol level was small. Implications Virgin olive oil might have greater health benefits than refined olive oil. The Editors Polyphenol intake has been associated with low cancer and coronary heart disease (CHD) mortality rates (1). Antioxidant and anti-inflammatory properties and improvements in endothelial dysfunction and the lipid profile have been reported for dietary polyphenols (2). Studies have recently suggested that Mediterranean health benefits may be due to a synergistic combination of phytochemicals and fatty acids (3). Olive oil, rich in oleic acid (a monounsaturated fatty acid), is the main fat of the Mediterranean diet (4). To date, most of the protective effect of olive oil within the Mediterranean diet has been attributed to its high monounsaturated fatty acid content (5). However, if the effect of olive oil can be attributed solely to its monounsaturated fatty acid content, any type of olive oil, rapeseed or canola oil, or monounsaturated fatty acidenriched fat would provide similar health benefits. Whether the beneficial effects of olive oil on the cardiovascular system are exclusively due to oleic acid remains to be elucidated. The minor components, particularly the phenolic compounds, in olive oil may contribute to the health benefits derived from the Mediterranean diet. Among olive oils usually present on the market, virgin olive oils produced by direct-press or centrifugation methods have higher phenolic content (150 to 350 mg/kg of olive oil) (6). In experimental studies, phenolic compounds in olive oil showed strong antioxidant properties (7, 8). Oxidized low-density lipoprotein (LDL) is currently thought to be more damaging to the arterial wall than native LDL cholesterol (9). Results of randomized, crossover, controlled clinical trials on the antioxidant effect of polyphenols from real-life daily doses of olive oil in humans are, however, conflicting (10). Growing evidence suggests that dietary phenols (1115) and plant-based diets (16) can modulate lipid and lipoprotein metabolism. The Effect of Olive Oil on Oxidative Damage in European Populations (EUROLIVE) Study is a multicenter, randomized, crossover, clinical intervention trial that aims to assess the effect of sustained daily doses of olive oil, as a function of its phenolic content, on the oxidative damage to lipid and LDL cholesterol levels and the lipid profile as cardiovascular risk factors. Methods Participants We recruited healthy men, 20 to 60 years of age, from 6 European cities through newspaper and university advertisements. Of the 344 persons who agreed to be screened, 200 persons were eligible (32 men from Barcelona, Spain; 33 men from Copenhagen, Denmark; 30 men from Kuopio, Finland; 31 men from Bologna, Italy; 40 men from Postdam, Germany; and 34 men from Berlin, Germany) and were enrolled from September 2002 through June 2003 (Figure 1). 
Participants were eligible for study inclusion if they provided written informed consent, were willing to adhere to the protocol, and were in good health. We preselected volunteers when clinical record, physical examination, and blood pressure were strictly normal and the candidate was a nonsmoker. Next, we performed a complete blood count, biochemical laboratory analyses, and urinary dipstick tests to measure levels of serum glucose, total cholesterol, creatinine, alanine aminotransferase, and triglycerides. We included candidates with values within the reference range. Exclusion criteria were smoking; use of antioxidant supplements, aspirin, or drugs with established antioxidant properties; hyperlipidemia; obesity; diabetes; hypertension; intestinal disease; or any other disease or condition that would impair adherence. We excluded women to avoid the possible interference of estrogens, which are considered to be potential antioxidants (17). All participants provided written informed consent, and the local institutional ethics committees approved the protocol. Figure 1. Study flow diagram. Sequence of olive oil administration: 1) high-, medium-, and low-polyphenol olive oil; 2) medium-, low-, and high-polyphenol olive oil; and 3) low-, high-, and medium-polyphenol olive oil. Design and Study Procedure The trial was a randomized, crossover, controlled study. We randomly assigned participants consecutively to 1 of 3 sequences of olive oil administration. Participants received a daily dose of 25 mL (22 g) of 3 olive oils with high (366 mg/kg), medium (164 mg/kg), and low (2.7 mg/kg) polyphenol content (Figure 1) in replacement of other raw fats. Sequences were high-, medium-, and low-polyphenol olive oil (sequence 1); medium-, low-, and high-polyphenol olive oil (sequence 2); and low-, high-, and medium-polyphenol olive oil (sequence 3). In the coordinating center, we prepared random allocation to each sequence, taken from a Latin square, for each center by blocks of 42 participants (14 persons in each sequence), using specific software that was developed at the Municipal Institute for Medical Research, Barcelona, Spain (Aleator, Municipal Institute for Medical Research). The random allocation was faxed to the participating centers upon request for each individual included in the study. Treatment containers were assigned a code number that was concealed from participants and investigators, and the coordinating center disclosed the code number only after completion of statistical analyses. Olive oils were specially prepared for the trial. We selected a virgin olive oil with high natural phenolic content (366 mg/kg) and measured its fatty acid and vitamin E composition. We tested refined olive oil harvested from the same cultivar and soil to find an olive oil with similar quantities of fatty acid and a similar micronutrient profile. Vitamin E was adjusted to values similar to those of the selected virgin olive oil. Because phenolic compounds are lost in the refinement process, the refined olive oil had a low phenolic content (2.7 mg/kg). By mixing virgin and refined olive oil, we obtained an olive oil with an intermediate phenolic content (164 mg/kg). Olive oils did not differ in fat and micronutrient composition (that is, vitamin E, triterpenes, and sitosterols), with the exception of phenolic content. Three-week interventions were preceded by 2-week washout periods, in which we requested that participants avoid olive and olive oil consumption. 
We chose the 2-week washout period to reach the equilibrium in the plasma lipid profile because longer intervention periods with fat-rich diets did not modify the lipid concentrations (18). Daily doses of 25 mL of olive oil were blindly prepared in containers delivered to the participants at the beginning of each intervention period. We instructed participants to return the 21 containers at the end of each intervention period so that the daily amount of unconsumed olive oil could be registered. Dietary Adherence We measured tyrosol and hydroxytyrosol, the 2 major phenolic compounds in olive oil as simple forms or conjugates (7), by gas chromatography and mass spectrometry in 24-hour urine before and after each intervention period as biomarkers of adherence to the type of olive oil ingested. We asked participants to keep a 3-day dietary record at baseline and after each intervention period. We requested that participants in all centers avoid a high intake of foods that contain antioxidants (that is, vegetables, legumes, fruits, tea, coffee, chocolate, wine, and beer). A nutritionist also personally advised participants to replace all types of habitually consumed raw fats with the olive oils (for example, spread the assigned olive oil on bread instead of butter, put the assigned olive oil on boiled vegetables instead of margarine, and use the assigned olive oil on salads instead of other vegetable oils or standard salad dressings). Data Collection Main outcome measures were changes in biomarkers of oxidative damage to lipids. Secondary outcomes were changes in lipid levels and in biomarkers of the antioxidant status of the participants. We assessed outcome measures at the beginning of the study (baseline) and before (preintervention) and after (postintervention) each olive oil intervention period. We collected blood samples at fasting state together with 24-hour urine and recorded anthropometric variables. We measured blood pressure with a mercury sphygmomanometer after at least a 10-minute rest in the seated position. We recorded physical activity at baseline and at the end of the study and assessed it by using the Minnesota Leisure Time Physical Activity Questionnaire (19). We measured 1) glucose and lipid profile, including serum glucose, total cholesterol, high-density lipoprotein (HDL) cholesterol, and triglyceride levels determined by enzymatic methods (2023) and LDL cholesterol levels calculated by the Friedewald formula; 2) oxidative damage to lipids, including plasma-circulating oxidized LDL measured by enzyme immunoassay, plasma total F2-isoprostanes determined by using high-performance liquid chromatography and stable isotope-dilution and mass spectrometry, plasma C18 hydroxy fatty acids measured by gas chromatography and mass spectrometry, and serum LDL cholesterol uninduced conjugated dienes measured by spectrophotometry and adjusted for the cholesterol concentration in LDL cholesterol levels; 3) antioxidant sta",
"title": ""
},
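The trial described in the passage above reports that LDL cholesterol levels were calculated with the Friedewald formula rather than measured directly. A minimal sketch of that calculation is given below; the function name and example numbers are illustrative assumptions, and the estimate is conventionally not applied when triglycerides reach 400 mg/dL or more.

```python
def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    """Estimate LDL cholesterol from a standard lipid panel (all values in mg/dL).

    Friedewald formula: LDL = total cholesterol - HDL - triglycerides / 5.
    """
    if triglycerides >= 400:
        raise ValueError("Friedewald estimate is unreliable for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

# Example: TC = 190, HDL = 55, TG = 100  ->  190 - 55 - 20 = 115 mg/dL
print(friedewald_ldl(190, 55, 100))
```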
{
"docid": "8c679f94e31dc89787ccff8e79e624b5",
"text": "This paper presents a radar sensor package specifically developed for wide-coverage sounding and imaging of polar ice sheets from a variety of aircraft. Our instruments address the need for a reliable remote sensing solution well-suited for extensive surveys at low and high altitudes and capable of making measurements with fine spatial and temporal resolution. The sensor package that we are presenting consists of four primary instruments and ancillary systems with all the associated antennas integrated into the aircraft to maintain aerodynamic performance. The instruments operate simultaneously over different frequency bands within the 160 MHz-18 GHz range. The sensor package has allowed us to sound the most challenging areas of the polar ice sheets, ice sheet margins, and outlet glaciers; to map near-surface internal layers with fine resolution; and to detect the snow-air and snow-ice interfaces of snow cover over sea ice to generate estimates of snow thickness. In this paper, we provide a succinct description of each radar and associated antenna structures and present sample results to document their performance. We also give a brief overview of our field measurement programs and demonstrate the unique capability of the sensor package to perform multifrequency coincidental measurements from a single airborne platform. Finally, we illustrate the relevance of using multispectral radar data as a tool to characterize the entire ice column and to reveal important subglacial features.",
"title": ""
},
{
"docid": "db9cd84961b0fe2032ecbb52e7cc65ba",
"text": "In this paper, we describe the design of a real time water balance monitoring system, suitable for large campuses. The battery operated sensor nodes consist of an ultra-sound level sensor, a 16-bit microcontroller and a sub-gigahertz radio to setup a hub and spoke system. Real time data from the sensors is pushed to a server on the cloud to log as well as perform analytics. Industrial design of the device allows flexible mounting on a variety of tanks. Experimental results from a trial deployment in a medium sized campus are shown to illustrate the usefulness of such a system towards better management of campus water resources.",
"title": ""
},
{
"docid": "3655319a1d2ff7f4bc43235ba02566bd",
"text": "In high-performance systems, stencil computations play a crucial role as they appear in a variety of different fields of application, ranging from partial differential equation solving, to computer simulation of particles’ interaction, to image processing and computer vision. The computationally intensive nature of those algorithms created the need for solutions to efficiently implement them in order to save both execution time and energy. This, in combination with their regular structure, has justified their widespread study and the proposal of largely different approaches to their optimization.\n However, most of these works are focused on aggressive compile time optimization, cache locality optimization, and parallelism extraction for the multicore/multiprocessor domain, while fewer works are focused on the exploitation of custom architectures to further exploit the regular structure of Iterative Stencil Loops (ISLs), specifically with the goal of improving power efficiency.\n This work introduces a methodology to systematically design power-efficient hardware accelerators for the optimal execution of ISL algorithms on Field-programmable Gate Arrays (FPGAs). As part of the methodology, we introduce the notion of Streaming Stencil Time-step (SST), a streaming-based architecture capable of achieving both low resource usage and efficient data reuse thanks to an optimal data buffering strategy, and we introduce a technique called SSTs queuing that is capable of delivering a pseudolinear execution time speedup with constant bandwidth.\n The methodology has been validated on significant benchmarks on a Virtex-7 FPGA using the Xilinx Vivado suite. Results demonstrate how the efficient usage of the on-chip memory resources realized by an SST allows one to treat problem sizes whose implementation would otherwise not be possible via direct synthesis of the original, unmanipulated code via High-Level Synthesis (HLS). We also show how the SSTs queuing effectively ensures a pseudolinear throughput speedup while consuming constant off-chip bandwidth.",
"title": ""
},
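The passage above targets Iterative Stencil Loops (ISLs) with a streaming FPGA architecture. The snippet below is only a plain software illustration of the computation pattern such accelerators optimize — a 5-point Jacobi stencil iterated over a 2D grid; it is not the SST hardware design, and the grid size and boundary handling are assumptions.

```python
import numpy as np

def jacobi_step(grid: np.ndarray) -> np.ndarray:
    """One time-step of a 5-point Jacobi stencil; boundary values are kept fixed."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

# Iterate the stencil: heat diffusing from a hot top boundary.
grid = np.zeros((8, 8))
grid[0, :] = 1.0
for _ in range(50):
    grid = jacobi_step(grid)
print(grid.round(2))
```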
{
"docid": "64bbb86981bf3cc575a02696f64109f6",
"text": "We use computational techniques to extract a large number of different features from the narrative speech of individuals with primary progressive aphasia (PPA). We examine several different types of features, including part-of-speech, complexity, context-free grammar, fluency, psycholinguistic, vocabulary richness, and acoustic, and discuss the circumstances under which they can be extracted. We consider the task of training a machine learning classifier to determine whether a participant is a control, or has the fluent or nonfluent variant of PPA. We first evaluate the individual feature sets on their classification accuracy, then perform an ablation study to determine the optimal combination of feature sets. Finally, we rank the features in four practical scenarios: given audio data only, given unsegmented transcripts only, given segmented transcripts only, and given both audio and segmented transcripts. We find that psycholinguistic features are highly discriminative in most cases, and that acoustic, context-free grammar, and part-of-speech features can also be important in some circumstances.",
"title": ""
}
] |
scidocsrr
|
43efc73e3ec85ff463c701d624d58820
|
A Light-weight Compaction Tree to Reduce I / O Amplification toward Efficient Key-Value Stores
|
[
{
"docid": "a2e8f67417c676eeb6dad21f186e018d",
"text": "We present FlashStore, a high throughput persistent keyvalue store, that uses flash memory as a non-volatile cache between RAM and hard disk. FlashStore is designed to store the working set of key-value pairs on flash and use one flash read per key lookup. As the working set changes over time, space is made for the current working set by destaging recently unused key-value pairs to hard disk and recycling pages in the flash store. FlashStore organizes key-value pairs in a log-structure on flash to exploit faster sequential write performance. It uses an in-memory hash table to index them, with hash collisions resolved by a variant of cuckoo hashing. The in-memory hash table stores compact key signatures instead of full keys so as to strike tradeoffs between RAM usage and false flash read operations. FlashStore can be used as a high throughput persistent key-value storage layer for a broad range of server class applications. We compare FlashStore with BerkeleyDB, an embedded key-value store application, running on hard disk and flash separately, so as to bring out the performance gain of FlashStore in not only using flash as a cache above hard disk but also in its use of flash aware algorithms. We use real-world data traces from two data center applications, namely, Xbox LIVE Primetime online multi-player game and inline storage deduplication, to drive and evaluate the design of FlashStore on traditional and low power server platforms. FlashStore outperforms BerkeleyDB by up to 60x on throughput (ops/sec), up to 50x on energy efficiency (ops/Joule), and up to 85x on cost efficiency (ops/sec/dollar) on the evaluated datasets.",
"title": ""
}
] |
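The FlashStore passage above keeps only compact key signatures in an in-memory index that points into a log-structured store, accepting occasional false matches that are filtered out by checking the full key on the single flash read. The sketch below illustrates that signature-plus-verification idea only; it is not the FlashStore implementation (which resolves collisions with a cuckoo-hashing variant), and the 16-bit signature width and the in-memory list standing in for the flash log are assumptions.

```python
import hashlib
from collections import defaultdict

class SignatureIndex:
    """Toy key-value index that keeps only 16-bit key signatures in RAM."""

    def __init__(self):
        self.index = defaultdict(list)  # signature -> candidate log offsets
        self.log = []                   # append-only stand-in for the flash log

    @staticmethod
    def _signature(key: bytes) -> int:
        return int.from_bytes(hashlib.sha1(key).digest()[:2], "big")

    def put(self, key: bytes, value: bytes) -> None:
        self.log.append((key, value))                        # sequential write
        self.index[self._signature(key)].append(len(self.log) - 1)

    def get(self, key: bytes):
        # Usually one candidate, i.e. roughly one "flash" read per lookup.
        for offset in reversed(self.index[self._signature(key)]):
            stored_key, value = self.log[offset]
            if stored_key == key:                            # reject false positives
                return value
        return None

store = SignatureIndex()
store.put(b"user:42", b"alice")
print(store.get(b"user:42"), store.get(b"user:43"))
```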
[
{
"docid": "0ab14a40df6fe28785262d27a4f5b8ce",
"text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.",
"title": ""
},
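The covariance-descriptor passage above compares shapes with geodesic distances on the manifold of symmetric positive definite (SPD) matrices instead of Euclidean metrics. The excerpt does not say which Riemannian metric is used, so the sketch below shows one common choice, the affine-invariant metric d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, computed via a generalized eigenvalue problem; the function name and example matrices are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def airm_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Affine-invariant Riemannian distance between two SPD matrices.

    Equals sqrt(sum_i log(lambda_i)^2), where lambda_i are the generalized
    eigenvalues of B v = lambda A v.
    """
    eigvals = eigh(B, A, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(eigvals) ** 2)))

X = np.array([[2.0, 0.3], [0.3, 1.0]])
Y = np.array([[1.5, -0.2], [-0.2, 2.5]])
print(airm_distance(X, Y))
```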
{
"docid": "c96db1ec48caa57a9cbebe62545c5e01",
"text": "Searching for a cure for cancer is one of the most vital pursuits in modern medicine. In that aspect microRNA research plays a key role. Keeping track of the shifts and changes in established knowledge in the microRNA domain is very important. In this paper, we introduce an Ontology-Based Information Extraction method to detect occurrences of inconsistencies in microRNA research paper abstracts. We propose a method to first use the Ontology for MIcroRNA Targets (OMIT) to extract triples from the abstracts. Then we introduce a new algorithm to calculate the oppositeness of these candidate relationships. Finally we present the discovered inconsistencies in an easy to read manner to be used by medical professionals. To our best knowledge, this study is the first ontology-based information extraction model introduced to find shifts in the established knowledge in the medical domain using research paper abstracts. We downloaded 36877 abstracts from the PubMed database. From those, we found 102 inconsistencies relevant to the microRNA domain.",
"title": ""
},
{
"docid": "aeb9a3b1de003f87f6260f1cbe1e16d9",
"text": "As learning environments are gaining in features and in complexity, the e-learning industry is more and more interested in features easing teachers’ work. Learning design being a critical and time consuming task could be facilitated by intelligent components helping teachers build their learning activities. The Intelligent Learning Design Recommendation System (ILD-RS) is such a software component, designed to recommend learning paths during the learning design phase in a Learning Management System (LMS). Although ILD-RS exploits several parameters which are sometimes subject to controversy, such as learning styles and teaching styles, the main interest of the component lies on its algorithm based on Markov decision processes that takes into account the teacher’s use to refine its accuracy.",
"title": ""
},
{
"docid": "41b6a43f720fc67a3bf0b8136d7a8db9",
"text": "☆ The authors would like to acknowledge the financial ash Research Graduate School (MRGS) and the Faculty Monash University. ☆☆The authors would like to thank the two anonymou and invaluable feedback. ⁎ Corresponding author at: Department of Marketin 197, Caulfield East, Victoria, 3145. Tel.: +61 3 9903 256 E-mail addresses: [email protected] [email protected] (M.J. Matanda), michae (M.T. Ewing). 1 Tel.: +61 3 990 31286. 2 Tel.: +61 3 990 44021.",
"title": ""
},
{
"docid": "56a35139eefd215fe83811281e4e2279",
"text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "db53ffe2196586d570ad636decbf67de",
"text": "We present PredRNN++, a recurrent network for spatiotemporal predictive learning. In pursuit of a great modeling capability for short-term video dynamics, we make our network deeper in time by leveraging a new recurrent structure named Causal LSTM with cascaded dual memories. To alleviate the gradient propagation difficulties in deep predictive models, we propose a Gradient Highway Unit, which provides alternative quick routes for the gradient flows from outputs back to long-range previous inputs. The gradient highway units work seamlessly with the causal LSTMs, enabling our model to capture the short-term and the long-term video dependencies adaptively. Our model achieves state-of-the-art prediction results on both synthetic and real video datasets, showing its power in modeling entangled motions.",
"title": ""
},
{
"docid": "4e0108df18154d4d7d90203ad7ba2156",
"text": "Multi-stage programming languages provide a convenient notation for explicitly staging programs. Staging a definitional interpreter for a domain specific language is one way of deriving an implementation that is both readable and efficient. In an untyped setting, staging an interpreter \"removes a complete layer of interpretive overhead\", just like partial evaluation. In a typed setting however, Hindley-Milner type systems do not allow us to exploit typing information in the language being interpreted. In practice, this can mean a slowdown cost by a factor of three or mor.Previously, both type specialization and tag elimination were applied to this problem. In this paper we propose an alternative approach, namely, expressing the definitional interpreter in a dependently typed programming language. We report on our experience with the issues that arise in writing such an interpreter and in designing such a language. .To demonstrate the soundness of combining staging and dependent types in a general sense, we formalize our language (called Meta-D) and prove its type safety. To formalize Meta-D, we extend Shao, Saha, Trifonov and Papaspyrou's λH language to a multi-level setting. Building on λH allows us to demonstrate type safety in a setting where the type language contains all the calculus of inductive constructions, but without having to repeat the work needed for establishing the soundness of that system.",
"title": ""
},
{
"docid": "f9aae8fda6d94f5060835e9bf0d2ac8a",
"text": "Image retrieval is an active area of research, which is growing very rapidly. Indeed, stimulated by the rapid growth in storage capacity and processing speed, the number of images in electronic collections and the World Wide Web has considerably increased over the last few years. However, with this abundance of information, people are continuously looking for tools that help them find the image(s) they are looking for within a reasonable amount of time. These tools are image retrieval engines. When using an image retrieval engine, the user is continuously interacting with the machine. First, he1 uses the system’s interface to formulate a query that expresses his needs. Second, he provides feedback about the retrieved results at each search iteration. This allows the engine to provide more accurate results by using relevance feedback (RF) techniques. Third, he may be asked to assign a goodness score or weight to each image retrieved, which helps evaluating the system’s performance. In this chapter, we will review the main interactions between human and the machine in the context of image retrieval. We will address several issues, including: Query formulation: • How the user expresses his needs and what he is looking for • The different ways the query can be formulated: keywords-based, sentence-based, query by example image, query by sketch, query by feature values, composite queries, etc. • Query by region of interest (ROI) vs. global query. • Queries with positive example only vs. queries with both positive and negative examples. • Page zero problem: finding a good image to initiate a retrieval session. Relevance feedback: we will try to answer questions like: • Why do systems use relevance feedback? • How can the user express his needs during the relevance feedback process • How this information is exploited by the system to perform operations like feature selection or the identification of the sought image. 1 Note that the masculine gender has been used strictly to facilitate reading, and is to be understood to include the feminine.",
"title": ""
},
{
"docid": "6d80e845abb2f448a02ba8db8292835b",
"text": "We study the inverse optimal control problem in social sciences: we aim at learning a user’s true cost function from the observed temporal behavior. In contrast to traditional phenomenological works that aim to learn a generative model to fit the behavioral data, we propose a novel variational principle and treat user as a reinforcement learning algorithm, which acts by optimizing his cost function. We first propose a unified KL framework that generalizes existing maximum entropy inverse optimal control methods. We further propose a two-step Wasserstein inverse optimal control framework. In the first step, we compute the optimal measure with a novel mass transport equation. In the second step, we formulate the learning problem as a generative adversarial network. In two real world experiments — recommender systems and social networks, we show that our framework obtains significant performance gains over both existing inverse optimal control methods and point process based generative models.",
"title": ""
},
{
"docid": "fe37f6705928600f490ec87c09414451",
"text": "This work proposes a simple pipeline to classify and temporally localize activities in untrimmed videos. Our system uses features from a 3D Convolutional Neural Network (C3D) as input to train a a recurrent neural network (RNN) that learns to classify video clips of 16 frames. After clip prediction, we post-process the output of the RNN to assign a single activity label to each video, and determine the temporal boundaries of the activity within the video. We show how our system can achieve competitive results in both tasks with a simple architecture. We evaluate our method in the ActivityNet Challenge 2016, achieving a 0.5874 mAP and a 0.2237 mAP in the classification and detection tasks, respectively. Our code and models are publicly available at at: https://github.com/imatge-upc/ activitynet-2016-cvprw",
"title": ""
},
{
"docid": "305084bdd1a4a33c8d9fd102f864fb52",
"text": "We present a method for hierarchical image segmentation that defines a disaffinity graph on the image, over-segments it into watershed basins, defines a new graph on the basins, and then merges basins with a modified, size-dependent version of single linkage clustering. The quasilinear runtime of the method makes it suitable for segmenting large images. We illustrate the method on the challenging problem of segmenting 3D electron microscopic brain images.",
"title": ""
},
{
"docid": "45cee79008d25916e8f605cd85dd7f3a",
"text": "In exploring the emotional climate of long-term marriages, this study used an observational coding system to identify specific emotional behaviors expressed by middle-aged and older spouses during discussions of a marital problem. One hundred and fifty-six couples differing in age and marital satisfaction were studied. Emotional behaviors expressed by couples differed as a function of age, gender, and marital satisfaction. In older couples, the resolution of conflict was less emotionally negative and more affectionate than in middle-aged marriages. Differences between husbands and wives and between happy and unhappy marriages were also found. Wives were more affectively negative than husbands, whereas husbands were more defensive than wives, and unhappy marriages involved greater exchange of negative affect than happy marriages.",
"title": ""
},
{
"docid": "3e805d6724dc400d681b3b42393d5ebe",
"text": "This paper introduces a framework for conducting and writing an effective literature review. The target audience for the framework includes information systems (IS) doctoral students, novice IS researchers, and other IS researchers who are constantly struggling with the development of an effective literature-based foundation for a proposed research. The proposed framework follows the systematic data processing approach comprised of three major stages: 1) inputs (literature gathering and screening), 2) processing (following Bloom’s Taxonomy), and 3) outputs (writing the literature review). This paper provides the rationale for developing a solid literature review including detailed instructions on how to conduct each stage of the process proposed. The paper concludes by providing arguments for the value of an effective literature review to IS research.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "e2ac2569aeec7c81eeaf526fe20b49b0",
"text": "In the present study, we examined the composition, amount, and uptake of yolk nutrients [fat, protein, water, and carbohydrates (COH)] during incubation of eggs from 30- and 50-wk-old broiler breeder hens. Eggs were sampled at embryonic d 0 (fresh eggs), 13, 15, 17, 19, and 21 (hatch). Egg, embryo, yolk content, and yolk sac membrane were weighed, and the yolk sac (YS; i.e., yolk content + yolk sac membrane) composition was analyzed. From 30 to 50 wk of age, the albumen weight increased by 13.3%, whereas the yolk increased by more than 40%. The proportion of fat in the fresh yolk of the 30-wk-old group was 23.8% compared with 27.4% in the 50-wk-old group, whereas the proportion of protein was 17.9% compared with 15.6%, respectively. During incubation, results indicated that water and protein infiltrated from other egg compartments to the YS. Accordingly, the calculated change in the content of water and protein between fresh yolk and sampled YS does not represent the true uptake of these components from the YS to the embryo, and only fat uptake from the YS can be accurately estimated. By embryonic d 15, fat uptake relative to embryo weight was lower in the 30-wk-old group than in the 50-wk-old group. However, by embryonic d 21, embryos of both groups reached similar relative fat uptake, suggesting that to hatch, embryos must attain a certain amount of fat as a source of energy for the hatching process. The amount of COH in the YS increased similarly during incubation in eggs from hens of both ages, reaching a peak at embryonic d 19, suggesting COH synthesis in the YS. At hatch, the amount of protein, water, and COH in the residual YS, relative to the weight of the yolk-free chick, was similar in eggs from young and old hens. However, chicks from the younger hens had less fat in the YS for their immediate posthatch nutrition compared with those from the older hens.",
"title": ""
},
{
"docid": "8996068836559be2b253cd04aeaa285b",
"text": "We present AutonoVi-Sim, a novel high-fidelity simulation platform for autonomous driving data generation and driving strategy testing. AutonoVi-Sim is a collection of high-level extensible modules which allows the rapid development and testing of vehicle configurations and facilitates construction of complex traffic scenarios. Autonovi-Sim supports multiple vehicles with unique steering or acceleration limits, as well as unique tire parameters and dynamics profiles. Engineers can specify the specific vehicle sensor systems and vary time of day and weather conditions to generate robust data and gain insight into how conditions affect the performance of a particular algorithm. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians, allowing engineers to specify routes for these actors, or to create scripted scenarios which place the vehicle in dangerous reactive situations. Autonovi-Sim facilitates training of deep-learning algorithms by enabling data export from the vehicle's sensors, including camera data, LIDAR, relative positions of traffic participants, and detection and classification results. Thus, AutonoVi-Sim allows for the rapid prototyping, development and testing of autonomous driving algorithms under varying vehicle, road, traffic, and weather conditions. In this paper, we detail the simulator and provide specific performance and data benchmarks.",
"title": ""
},
{
"docid": "a61ae3623a0ba25e38828f3fe225a633",
"text": "Manufacturers always face cost-reduction and efficiency challenges in their operations. Industries require improvement in Production Lead Times, costs and customer service levels to survive. Because of this, companies have become more customers focused. The result is that companies have been putting in significant effort to improve their efficiency. In this paper Value Stream Mapping (VSM) tool is used in bearing manufacturing industry by focusing both on processes and their cycle times for a product UC208 INNER which is used in plumber block. In order to use the value stream mapping, relevant data has been collected and analyzed. After collecting the data customer need was identified. Current state map was draw by defining the resources and activities needed to manufacture, deliver the product. The study of current state map shows the areas for improvement and identifying the different types of wastes. From the current state map, it was noticeable that Annealing and CNC Machining processing have higher cycle time and work in process. The lean principles and techniques implemented or suggested and future state map was created and the total lead time was reduced from 7.3 days to 3.8 days. The WIP at each work station has also been reduced. The production lead time was reduced from 409 seconds to 344 seconds.",
"title": ""
},
{
"docid": "32f0cc62e05f18e60f39d0c0595129e2",
"text": "Learning from multi-view data is important in many applications. In this paper, we propose a novel convex subspace representation learning method for unsupervised multi-view clustering. We first formulate the subspace learning with multiple views as a joint optimization problem with a common subspace representation matrix and a group sparsity inducing norm. By exploiting the properties of dual norms, we then show a convex min-max dual formulation with a sparsity inducing trace norm can be obtained. We develop a proximal bundle optimization algorithm to globally solve the minmax optimization problem. Our empirical study shows the proposed subspace representation learning method can effectively facilitate multi-view clustering and induce superior clustering results than alternative multiview clustering methods.",
"title": ""
},
{
"docid": "72bc2130b650ec95c459507eb1159323",
"text": "Prior work has identified several optimal algorithms for scheduling independent, implicit-deadline sporadic (or periodic) real-time tasks on identical multiprocessors. These algorithms, however, are subject to high conceptual complexity and typically incur considerable runtime overheads. This paper establishes that, empirically, near-optimal schedulability can also be achieved with a far simpler approach that combines three well-known techniques (reservations, semi-partitioned scheduling, and period transformation) with some novel task-placement heuristics.In large-scale schedulability experiments, the proposed approach is shown to achieve near-optimal hard real-time schedulability (99+% schedulable utilization) across a wide range of processor and task counts. With an implementation in LITMUSRT, the proposed approach is shown to be practical and to incur only low runtime overheads, comparable to a conventional partitioned scheduler. It is further shown that basic slack management techniques can help to avoid more than 50% of all migrations of semi-partitioned reservations if tasks execute on average for less than their provisioned worst-case execution time.Two main conclusions are drawn: pragmatically speaking, global scheduling is not required to support static workloads of independent, implicit-deadline sporadic (or periodic) tasks; and since such simple workloads are well supported, future research on multiprocessor real-time scheduling should consider more challenging workloads (e.g., adaptive workloads, dynamic task arrivals or mode changes, shared resources, precedence constraints, etc.).",
"title": ""
},
{
"docid": "775182872259257a0abff42d53b7bb04",
"text": "Matriptase is an epithelial-derived, cell surface serine protease. This protease activates hepatocyte growth factor (HGF) and urokinase plasminogen activator (uPA), two proteins thought to be involved in the growth and motility of cancer cells, particularly carcinomas, and in the vascularization of tumors. Thus, matriptase may play an important role in the progression of carcinomas, such as breast cancer. We examined the regulation of activation of matriptase in human breast cancer cells, in comparison to non-transformed mammary epithelial cells 184A1N4 and MCF-10A. Results clearly indicated that unlike non-transformed mammary epithelial cells, breast cancer cells do not respond to the known activators of matriptase, serum and sphingosine 1-phosphate (S1P). Similar levels of activated matriptase were detected in breast cancer cells, grown in the presence or absence of S1P. However, up to five-fold higher levels of activated matriptase were detected in the conditioned media from the cancer cells grown in the absence of serum and S1P, when compared to non-transformed mammary epithelial cells. S1P also induces formation of cortical actin structures in non-transformed cells, but not in breast cancer cells. These results show that in non-transformed cells, S1P induces a rearrangement of the actin cytoskeleton and stimulates proteolytic activity on cell surfaces. In contrast, S1P treatment of breast cancer cells does not activate matriptase, and instead these cells constitutively activate the protease. In addition, breast cancer cells respond differently to S1P in terms of the regulation of actin cytoskeletal structures. Matriptase and its cognate inhibitor, HGF activator inhibitor 1 (HAI-1) colocalize on the cell periphery of breast cancer cells and form stable complexes in the extracellular milieu, suggesting that the inhibitor serves to prevent undesired proteolysis in these cells. Finally, we demonstrate that treatment of T-47D cells with epidermal growth factor (EGF), which promotes cell ruffling, stimulates increased accumulation of activated matriptase at the sites of membrane ruffling, suggesting a possible functional role at these sites.",
"title": ""
}
] |
scidocsrr
|
292a1ecb109ec9f0a1c59433d8b4b81c
|
Finexus: Tracking Precise Motions of Multiple Fingertips Using Magnetic Sensing
|
[
{
"docid": "39180c1e2636a12a9d46d94fe3ebfa65",
"text": "We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation, and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.",
"title": ""
},
{
"docid": "3b2607bda35e535c2c4410e4c2b21a4f",
"text": "There has been recent interest in designing systems that use the tongue as an input interface. Prior work however either require surgical procedures or in-mouth sensor placements. In this paper, we introduce TongueSee, a non-intrusive tongue machine interface that can recognize a rich set of tongue gestures using electromyography (EMG) signals from the surface of the skin. We demonstrate the feasibility and robustness of TongueSee with experimental studies to classify six tongue gestures across eight participants. TongueSee achieves a classification accuracy of 94.17% and a false positive probability of 0.000358 per second using three-protrusion preamble design.",
"title": ""
}
] |
[
{
"docid": "c014a0d9f75570af7734a0dfb0f2b535",
"text": "This paper introduces a modified PSO, Non-dominated Sorting Particle Swarm Optimizer (NSPSO), for better multiobjective optimization. NSPSO extends the basic form of PSO by making a better use of particles’ personal bests and offspring for more effective nondomination comparisons. Instead of a single comparison between a particle’s personal best and its offspring, NSPSO compares all particles’ personal bests and their offspring in the entire population. This proves to be effective in providing an appropriate selection pressure to propel the swarm population towards the Pareto-optimal front. By using the non-dominated sorting concept and two parameter-free niching methods, NSPSO and its variants have shown remarkable performance against a set of well-known difficult test functions (ZDT series). Our results and comparison with NSGA II show that NSPSO is highly competitive with existing evolutionary and PSO multiobjective algorithms.",
"title": ""
},
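The NSPSO passage above hinges on non-dominated sorting applied jointly to the particles' personal bests and their offspring. The sketch below shows only that sorting step for a minimization problem; the O(n^2)-per-front implementation and the toy objective vectors are illustrative assumptions, not the paper's full swarm update.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Group point indices into successive non-dominated fronts."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Two-objective example: the first front holds the mutually non-dominated points.
print(nondominated_sort([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]))
```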
{
"docid": "7c10c80327bdf96c96016d787051afac",
"text": "A biofilm is a structured consortium of bacteria embedded in a self-produced polymer matrix consisting of polysaccharide, protein and DNA. Bacterial biofilms cause chronic infections because they show increased tolerance to antibiotics and disinfectant chemicals as well as resisting phagocytosis and other components of the body's defence system. The persistence of, for example, staphylococcal infections related to foreign bodies is due to biofilm formation. Likewise, chronic Pseudomonas aeruginosa lung infection in cystic fibrosis patients is caused by biofilm-growing mucoid strains. Characteristically, gradients of nutrients and oxygen exist from the top to the bottom of biofilms and these gradients are associated with decreased bacterial metabolic activity and increased doubling times of the bacterial cells; it is these more or less dormant cells that are responsible for some of the tolerance to antibiotics. Biofilm growth is associated with an increased level of mutations as well as with quorum-sensing-regulated mechanisms. Conventional resistance mechanisms such as chromosomal beta-lactamase, upregulated efflux pumps and mutations in antibiotic target molecules in bacteria also contribute to the survival of biofilms. Biofilms can be prevented by early aggressive antibiotic prophylaxis or therapy and they can be treated by chronic suppressive therapy. A promising strategy may be the use of enzymes that can dissolve the biofilm matrix (e.g. DNase and alginate lyase) as well as quorum-sensing inhibitors that increase biofilm susceptibility to antibiotics.",
"title": ""
},
{
"docid": "07cbbb184a627456922a1e66ae54d3d2",
"text": "A maximum likelihood (ML) acoustic source location estimation method is presented for the application in a wireless ad hoc sensor network. This method uses acoustic signal energy measurements taken at individual sensors of an ad hoc wireless sensor network to estimate the locations of multiple acoustic sources. Compared to the existing acoustic energy based source localization methods, this proposed ML method delivers more accurate results and offers the enhanced capability of multiple source localization. A multiresolution search algorithm and an expectation-maximization (EM) like iterative algorithm are proposed to expedite the computation of source locations. The Crame/spl acute/r-Rao Bound (CRB) of the ML source location estimate has been derived. The CRB is used to analyze the impacts of sensor placement to the accuracy of location estimates for single target scenario. Extensive simulations have been conducted. It is observed that the proposed ML method consistently outperforms existing acoustic energy based source localization methods. An example applying this method to track military vehicles using real world experiment data also demonstrates the performance advantage of this proposed method over a previously proposed acoustic energy source localization method.",
"title": ""
},
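The localization passage above estimates source positions from per-sensor acoustic energy readings by searching candidate locations (with a multiresolution refinement in the paper). The sketch below is a single-source, single-resolution grid search under the common inverse-square energy decay model E_i ≈ S / d_i^2; the decay exponent, noise handling, and toy sensor layout are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def ml_single_source(sensor_xy, energies, grid, alpha=2.0):
    """Grid-search estimate of one acoustic source from energy readings.

    For each candidate location, the best-fit source power S has a closed
    form; the candidate with the smallest squared residual is returned.
    """
    best_loc, best_cost = None, np.inf
    for p in grid:
        d = np.maximum(np.linalg.norm(sensor_xy - p, axis=1), 1e-6)
        g = 1.0 / d ** alpha                 # predicted per-sensor gain at p
        s_hat = (g @ energies) / (g @ g)     # least-squares source power
        cost = np.sum((energies - s_hat * g) ** 2)
        if cost < best_cost:
            best_loc, best_cost = p, cost
    return best_loc

# Toy setup: 3 sensors, noiseless readings from a source of power 5 at (2, 3).
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
readings = 5.0 / np.linalg.norm(sensors - np.array([2.0, 3.0]), axis=1) ** 2
grid = np.array([[x, y] for x in np.linspace(0, 4, 41) for y in np.linspace(0, 4, 41)])
print(ml_single_source(sensors, readings, grid))
```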
{
"docid": "fb44e3c2624d92c9ed408ebd00bdb793",
"text": "A novel method for online data acquisition of cursive handwriting is described. A video camera is used to record the handwriting of a user. From the acquired sequence of images, the movement of the tip of the pen is reconstructed. A prototype of the system has been implemented and tested. In one series of tests, the performance of the system was visually assessed. In another series of experiments, the system was combined with an existing online handwriting recognizer. Good results have been obtained in both sets of experiments.",
"title": ""
},
{
"docid": "cf52fd01af4e01f28eeb14e0c6bce7e9",
"text": "Most applications manipulate persistent data, yet traditional systems decouple data manipulation from persistence in a two-level storage model. Programming languages and system software manipulate data in one set of formats in volatile main memory (DRAM) using a load/store interface, while storage systems maintain persistence in another set of formats in non-volatile memories, such as Flash and hard disk drives in traditional systems, using a file system interface. Unfortunately, such an approach suffers from the system performance and energy overheads of locating data, moving data, and translating data between the different formats of these two levels of storage that are accessed via two vastly different interfaces. Yet today, new non-volatile memory (NVM) technologies show the promise of storage capacity and endurance similar to or better than Flash at latencies comparable to DRAM, making them prime candidates for providing applications a persistent single-level store with a single load/store interface to access all system data. Our key insight is that in future systems equipped with NVM, the energy consumed executing operating system and file system code to access persistent data in traditional systems becomes an increasingly large contributor to total energy. The goal of this work is to explore the design of a Persistent Memory Manager that coordinates the management of memory and storage under a single hardware unit in a single address space. Our initial simulation-based exploration shows that such a system with a persistent memory can improve energy efficiency and performance by eliminating the instructions and data movement traditionally used to perform I/O operations.",
"title": ""
},
{
"docid": "c61e64ebef3ec28622732dd3a85f602d",
"text": "BACKGROUND: Systematic Literature Reviews (SLRs) have gained significant popularity among software engineering (SE) researchers since 2004. Several researchers have also been working on improving the scientific and technological support for SLRs in SE. We argue that there is also an essential need for evidence-based body of knowledge about different aspects of the adoption of SLRs in SE. OBJECTIVE: The main objective of this research is to empirically investigate the adoption and use of SLRs in SE research from various perspectives. METHOD: We used multi-method approach as it is based on a combination of complementary research methods which are expected to compensate each others' limitations. RESULTS: A large majority of the participants are convinced of the value of using a rigorous and systematic methodology for literature reviews. However, there are concerns about the required time and resources for SLRs. One of the most important motivators for performing SLRs is new findings and inception of innovative ideas for further research. The reported SLRs are more influential compared to the traditional literature reviews in terms of number of citations. One of the main challenges of conducting SLRs is drawing a balance between rigor and required effort. CONCLUSIONS: SLR has become a popular research methodology for conducting literature review and evidence aggregation in SE. There is an overall positive perception about this methodology. The findings provide interesting insights into different aspects of SLRs. We expect that the findings can provide valuable information to readers on what can be expected from conducting SLRs and the potential impact of such reviews.",
"title": ""
},
{
"docid": "8f47dc7401999924dba5cb3003194071",
"text": "Few types of signal streams are as ubiquitous as music. Here we consider the problem of extracting essential ingredients of music signals, such as well-defined global temporal structure in the form of nested periodicities (or meter). Can we construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style? Because recurrent neural networks can in principle learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard recurrent neural networks (RNNs) often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long Short-Term Memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing & counting and learning of context sensitive languages. In the current study we show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.",
"title": ""
},
{
"docid": "fd48614d255b7c7bc7054b4d5de69a15",
"text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009",
"title": ""
},
{
"docid": "c1cdb2ab2a594e7fbb1dfdb261f0910c",
"text": "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.",
"title": ""
},
{
"docid": "e3739a934ecd7b99f2d35a19f2aed5cf",
"text": "We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations, the timing of information exchange, the amount of local information needed at each computation node, and the initial conditions for the algorithm. The class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to a shortest path problem the algorithm reduces to the algorithm originally implemented for routing of messages in the ARPANET.",
"title": ""
},
{
"docid": "c5f90692d25f2703f46a5b82f16c1918",
"text": "This paper describes a grip force analysis of a tendon-driven prosthetic hand. The grip force is equivalent to tensile force transmitted by a link mechanism. Assuming that the tensile force to pull tendon is constant, the grip force according to the angle of MCP joint is analyzed by the statics. From experimental results, we show the maximum grip force of tendon-driven finger is 6N when the constant tensile force is 14N.",
"title": ""
},
{
"docid": "f1d67673483176bd6e596e4f078c17b4",
"text": "The current web suffers information overloading: it is increasingly difficult and time consuming to obtain information desired. Ontologies, the key concept behind the Semantic Web, will provide the means to overcome such problem by providing meaning to the available data. An ontology provides a shared and common understanding of a domain and information machine-processable semantics. To make the Semantic Web a reality and lift current Web to its full potential, powerful and expressive languages are required. Such web ontology languages must be able to describe and organize knowledge in the Web in a machine understandable way. However, organizing knowledge requires the facilities of a logical formalism which can deal with temporal, spatial, epistemic, and inferential aspects of knowledge. Implementations of Web ontology languages must provide these inference services, making them much more than just simple data storage and retrieval systems. This paper presents a state of the art for the most relevant Semantic Web Languages: XML, RDF(s), OIL, DAML+OIL, and OWL, together with a detailed comparison based on modeling primitives and language to language characteristics.",
"title": ""
},
{
"docid": "0432fe84f5d73dd8d220ec00f6dab426",
"text": "Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect a direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures via simple feature coloring.",
"title": ""
},
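The style-transfer passage above centers on a pair of whitening and coloring transforms that match the channel covariance of content features to that of the style features. The NumPy sketch below applies that pair of transforms to flattened feature maps of shape (channels, pixels); the variable names and the small eigenvalue regularizer are assumptions, and in the paper the transforms sit inside a trained encoder-decoder rather than acting on raw arrays.

```python
import numpy as np

def whiten_color(content_feat: np.ndarray, style_feat: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Whitening-coloring transform on feature maps of shape (channels, pixels)."""
    c_mean = content_feat.mean(axis=1, keepdims=True)
    s_mean = style_feat.mean(axis=1, keepdims=True)
    fc = content_feat - c_mean
    fs = style_feat - s_mean

    # Whitening: decorrelate the content features.
    cov_c = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(fc.shape[0])
    dc, Ec = np.linalg.eigh(cov_c)
    whitened = Ec @ np.diag(dc ** -0.5) @ Ec.T @ fc

    # Coloring: impose the style covariance and mean.
    cov_s = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(fs.shape[0])
    ds, Es = np.linalg.eigh(cov_s)
    return Es @ np.diag(ds ** 0.5) @ Es.T @ whitened + s_mean

# Example on random "features": 64 channels, 32x32 spatial positions flattened.
rng = np.random.default_rng(0)
out = whiten_color(rng.normal(size=(64, 1024)), rng.normal(size=(64, 1024)))
print(out.shape)
```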
{
"docid": "d6d8ef59feb54c76fdcc43b31b9bf5f8",
"text": "We consider the classical TD(0) algorithm implemented on a network of agents wherein the agents also incorporate updates received from neighboring agents using a gossip-like mechanism. The combined scheme is shown to converge for both discounted and average cost problems.",
"title": ""
},
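The passage above combines the classical tabular TD(0) update with gossip-style exchange of estimates between neighboring agents. The toy sketch below shows those two ingredients — a TD(0) step per agent followed by averaging with a row-stochastic mixing matrix; the step size, mixing weights, and two-agent setup are assumptions and not the scheme analyzed in the paper.

```python
import numpy as np

def td0_step(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """Tabular TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

def gossip_step(values, W):
    """Mix the agents' value tables with a row-stochastic weight matrix W."""
    return W @ values                    # values has shape (num_agents, num_states)

# Two agents, three states, one shared transition (s=0, reward=1.0, s'=1).
values = np.zeros((2, 3))
W = np.array([[0.7, 0.3], [0.3, 0.7]])
for agent in range(2):
    values[agent] = td0_step(values[agent], s=0, r=1.0, s_next=1)
values = gossip_step(values, W)
print(values)
```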
{
"docid": "256376e1867ee923ff72d3376c3be918",
"text": "Driven by recent vision and graphics applications such as image segmentation and object recognition, computing pixel-accurate saliency values to uniformly highlight foreground objects becomes increasingly important. In this paper, we propose a unified framework called pixelwise image saliency aggregating (PISA) various bottom-up cues and priors. It generates spatially coherent yet detail-preserving, pixel-accurate, and fine-grained saliency, and overcomes the limitations of previous methods, which use homogeneous superpixel based and color only treatment. PISA aggregates multiple saliency cues in a global context, such as complementary color and structure contrast measures, with their spatial priors in the image domain. The saliency confidence is further jointly modeled with a neighborhood consistence constraint into an energy minimization formulation, in which each pixel will be evaluated with multiple hypothetical saliency levels. Instead of using global discrete optimization methods, we employ the cost-volume filtering technique to solve our formulation, assigning the saliency levels smoothly while preserving the edge-aware structure details. In addition, a faster version of PISA is developed using a gradient-driven image subsampling strategy to greatly improve the runtime efficiency while keeping comparable detection accuracy. Extensive experiments on a number of public data sets suggest that PISA convincingly outperforms other state-of-the-art approaches. In addition, with this work, we also create a new data set containing 800 commodity images for evaluating saliency detection.",
"title": ""
},
{
"docid": "8a363302ed84e4d0d67850e1ddc947ee",
"text": "Deep learning has significantly advanced the state of the art in machine learning. However, neural networks are often considered black boxes. There is significant effort to develop techniques that explain a classifier’s decisions. Although some of these approaches have resulted in compelling visualisations, there is a lack of theory of what is actually explained. Here we present an analysis of these methods and formulate a quality criterion for explanation methods. On this ground, we propose an improved method that may serve as an extension for existing backprojection and decomposition techniques.",
"title": ""
},
{
"docid": "3f13d10dd2db9f52f903e293ed80ff13",
"text": "Mitigating the impact of computer failure is possible if accurate failure predictions are provided. Resources, applications, and services can be scheduled around predicted failure and limit the impact. Such strategies are especially important for multi-computer systems, such as compute clusters, that experience a higher rate failure due to the large number of components. However providing accurate predictions with sufficient lead time remains a challenging problem. This paper describes a new spectrum-kernel Support Vector Machine (SVM) approach to predict failure events based on system log files. These files contain messages that represent a change of system state. While a single message in the file may not be sufficient for predicting failure, a sequence or pattern of messages may be. The approach described in this paper will use a sliding window (sub-sequence) of messages to predict the likelihood of failure. The a frequency representation of the message sub-sequences observed are then used as input to the SVM. The SVM then associates the messages to a class of failed or non-failed system. Experimental results using actual system log files from a Linux-based compute cluster indicate the proposed spectrum-kernel SVM approach has promise and can predict hard disk failure with an accuracy of 73% two days in advance.",
"title": ""
},
{
"docid": "df82963e0320f46f46498d81a8a324e9",
"text": "Today, most high-performance computing (HPC) platforms have heterogeneous hardware resources (CPUs, GPUs, storage, etc.) A Graphics Processing Unit (GPU) is a parallel computing coprocessor specialized in accelerating vector operations. The prediction of application execution times over these devices is a great challenge and is essential for efficient job scheduling. There are different approaches to do this, such as analytical modeling and machine learning techniques. Analytic predictive models are useful, but require manual inclusion of interactions between architecture and software, and may not capture the complex interactions in GPU architectures. Machine learning techniques can learn to capture these interactions without manual intervention, but may require large training sets. In this paper, we compare three different machine learning approaches: linear regression, support vector machines and random forests with a BSP-based analytical model, to predict the execution time of GPU applications. As input to the machine learning algorithms, we use profiling information from 9 different applications executed over 9 different GPUs. We show that machine learning approaches provide reasonable predictions for different cases. Although the predictions were inferior to the analytical model, they required no detailed knowledge of application code, hardware characteristics or explicit modeling. Consequently, whenever a database with profile information is available or can be generated, machine learning techniques can be useful for deploying automated on-line performance prediction for scheduling applications on heterogeneous architectures containing GPUs.",
"title": ""
},
{
"docid": "11de03383fbd4178613eb4bdf47b90be",
"text": "Question Generation (QG) and Question Answering (QA) are some of the many challenges for natural language understanding and interfaces. As humans need to ask good questions, the potential benefits from automated QG systems may assist them in meeting useful inquiry needs. In this paper, we consider an automatic Sentence-to-Question generation task, where given a sentence, the Question Generation (QG) system generates a set of questions for which the sentence contains, implies, or needs answers. To facilitate the question generation task, we build elementary sentences from the input complex sentences using a syntactic parser. A named entity recognizer and a part of speech tagger are applied on each of these sentences to encode necessary information. We classify the sentences based on their subject, verb, object and preposition for determining the possible type of questions to be generated. We use the TREC-2007 (Question Answering Track) dataset for our experiments and evaluation. Mots-clés : Génération de questions, Analyseur syntaxique, Phrases élémentaires, POS Tagging.",
"title": ""
},
{
"docid": "82e6da590f8f836c9a06c26ef4440005",
"text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on highdimensional RL benchmarks.",
"title": ""
}
] |
scidocsrr
|
bcaefaa91111e493b790e4bfe0b06758
|
Survey of Visual Question Answering: Datasets and Techniques
|
[
{
"docid": "8b998b9f8ea6cfe5f80a5b3a1b87f807",
"text": "We describe a very simple bag-of-words baseline for visual question answering. This baseline concatenates the word features from the question and CNN features from the image to predict the answer. When evaluated on the challenging VQA dataset [2], it shows comparable performance to many recent approaches using recurrent neural networks. To explore the strength and weakness of the trained model, we also provide an interactive web demo1, and open-source code2.",
"title": ""
},
{
"docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
}
] |
[
{
"docid": "4faef20f6f8807f500b0a555f0f0ed2b",
"text": "Online search and item recommendation systems are often based on being able to correctly label items with topical keywords. Typically, topical labelers analyze the main text associated with the item, but social media posts are often multimedia in nature and contain contents beyond the main text. Topic labeling for social media posts is therefore an important open problem for supporting effective social media search and recommendation. In this work, we present a novel solution to this problem for Google+ posts, in which we integrated a number of different entity extractors and annotators, each responsible for a part of the post (e.g. text body, embedded picture, video, or web link). To account for the varying quality of different annotator outputs, we first utilized crowdsourcing to measure the accuracy of individual entity annotators, and then used supervised machine learning to combine different entity annotators based on their relative accuracy. Evaluating using a ground truth data set, we found that our approach substantially outperforms topic labels obtained from the main text, as well as naive combinations of the individual annotators. By accurately applying topic labels according to their relevance to social media posts, the results enables better search and item recommendation.",
"title": ""
},
{
"docid": "718f06e935df4ac319177d8a3a995da6",
"text": "A lot of real-world data is spread across multiple domains. Handling such data has been a challenging task. Heterogeneous face biometrics has begun to receive attention in recent years. In real-world scenarios, many surveillance cameras capture data in the NIR (near infrared) spectrum. However, most datasets accessible to law enforcement have been collected in the VIS (visible light) domain. Thus, there exists a need to match NIR to VIS face images. In this paper, we approach the problem by developing a method to reconstruct VIS images in the NIR domain and vice-versa. This approach is more applicable to real-world scenarios since it does not involve having to project millions of VIS database images into learned common subspace for subsequent matching. We present a cross-spectral joint ℓ0 minimization based dictionary learning approach to learn a mapping function between the two domains. One can then use the function to reconstruct facial images between the domains. Our method is open set and can reconstruct any face not present in the training data. We present results on the CASIA NIR-VIS v2.0 database and report state-of-the-art results.",
"title": ""
},
{
"docid": "ba92025b0930fa0182053f3d51fe131b",
"text": "In this paper we present two path planning algorithms based on Bézier curves for autonomous vehicles with waypoints and corridor constraints. Bézier curves have useful properties for the path generation problem. The paper describes how the algorithms apply these properties to generate the reference trajectory for vehicles to satisfy the path constraints. Both algorithms join cubic Bézier curve segments smoothly to generate the path. Additionally, we discuss the constrained optimization problem that optimizes the resulting path for user-defined cost function. The simulation shows the generation of successful routes for autonomous vehicles using these algorithms as well as control results for a simple kinematic vehicle. Extensions of these algorithms towards navigating through an unstructured environment with limited sensor range are discussed.",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "36152b59aaaaa7e3a69ac57db17e44b8",
"text": "In this paper, a reliable road/obstacle detection with 3D point cloud for intelligent vehicle on a variety of challenging environments (undulated road and/or uphill/ downhill) is handled. For robust detection of road we propose the followings: 1) correction of 3D point cloud distorted by the motion of vehicle (high speed and heading up and down) incorporating vehicle posture information; 2) guideline for the best selection of the proper features such as gradient value, height average of neighboring node; 3) transformation of the road detection problem into a classification problem of different features; and 4) inference algorithm based on MRF with the loopy belief propagation for the area that the LIDAR does not cover. In experiments, we use a publicly available dataset as well as numerous scans acquired by the HDL-64E sensor mounted on experimental vehicle in inner city traffic scenes. The results show that the proposed method is more robust and reliable than the conventional approach based on the height value on the variety of challenging environment. Jaemin Byun Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: [email protected], Ki-in Na Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: [email protected] Beom-su Seo Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: [email protected] MyungChan Roh Robot and Cognitive System Research Department(RCSRD) in Electronicsand Telecommunications Research Institute (ETRI), daejeon, south korea, e-mail: [email protected]",
"title": ""
},
{
"docid": "de67aeb2530695bcc6453791a5fa8c77",
"text": "Sebaceous carcinoma is a rare adenocarcinoma with variable degrees of sebaceous differentiation, most commonly found on periocular skin, but also occasionally occur extraocular. It can occur in isolation or as part of the MuirTorre syndrome. Sebaceous carcinomas are yellow or red nodules or plaques often with a friable surface, ulceration, or crusting. On histological examination, sebaceous carcinomas are typically poorly circumscribed, asymmetric, and infiltrative. Individual cells are pleomorphic with atypical nuclei, mitoses, and a coarsely vacuolated cytoplasm.",
"title": ""
},
{
"docid": "d3b0957b31f47620c0fa8e65a1cc086a",
"text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.",
"title": ""
},
{
"docid": "39562cb335eafabbe32fa483c4febc02",
"text": "Previous studies on social enterprises reported that unlike private enterprise consumers, social enterprise consumers appreciate the social value of the enterprise products and that social value affects customer satisfaction and repurchase intention. However, previous literature also pointed out that focusing only on social value as the factor affecting purchase behavior does not reflect the change in the situation of social enterprises. We expect that not only social but also a variety of other value consumers perceive from the products of social enterprises influence consumer satisfaction and repurchase intention. The purpose of this study is as follows. First, we intend to find the customer value for the products and services of social enterprises. Second, we intend to examine whether the positive relationships between quality and value of products/services reported in numerous previous studies applies to social enterprises. Third, we would like to find out whether satisfaction from social enterprise products and services affect the actual repurchase intention. Finally, in order to find dynamic interaction among the variables, this study models the key flow of the factors influencing the social enterprise consumers’ repurchase intention: perceived quality perceived value customer satisfaction repurchase intention. The results show that there are positive relationships between the consumer perception of quality and that of functional, emotional and social value. We also find positive relationships between the perception of functional, emotional and social value and customer satisfaction. Our findings show that the consumers of social enterprises perceive social value, along with the functional and emotional value, through the quality of products and services. The perceived value has positive effects on customer satisfaction and repurchase intention in the future. This study shows that the positive relationships between quality and value and customer satisfaction and repurchase intention found in numerous previous studies also exist in the context of social enterprises. In the last section, we discuss the practical implications for social enterprises based on the findings of our study and present the directions for future studies.",
"title": ""
},
{
"docid": "6d65238e93aa1a9a0e5e522af8ecb2e0",
"text": "We introduce end-to-end neural network based models for simulating users of task-oriented dialogue systems. User simulation in dialogue systems is crucial from two different perspectives: (i) automatic evaluation of different dialogue models, and (ii) training task-oriented dialogue systems. We design a hierarchical sequence-to-sequence model that first encodes the initial user goal and system turns into fixed length representations using Recurrent Neural Networks (RNN). It then encodes the dialogue history using another RNN layer. At each turn, user responses are decoded from the hidden representations of the dialogue level RNN. This hierarchical user simulator (HUS) approach allows the model to capture undiscovered parts of the user goal without the need of an explicit dialogue state tracking. We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal. We evaluate the proposed models on movie ticket booking domain by systematically interacting each user simulator with various dialogue system policies trained with different objectives and users.",
"title": ""
},
{
"docid": "b677a4762ceb4ec6f9f1fc418a701982",
"text": "NoSQL databases are the new breed of databases developed to overcome the drawbacks of RDBMS. The goal of NoSQL is to provide scalability, availability and meet other requirements of cloud computing. The common motivation of NoSQL design is to meet scalability and fail over. In most of the NoSQL database systems, data is partitioned and replicated across multiple nodes. Inherently, most of them use either Google's MapReduce or Hadoop Distributed File System or Hadoop MapReduce for data collection. Cassandra, HBase and MongoDB are mostly used and they can be termed as the representative of NoSQL world. This tutorial discusses the features of NoSQL databases in the light of CAP theorem.",
"title": ""
},
{
"docid": "af0dfe672a8828587e3b27ef473ea98e",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
{
"docid": "766b18cdae33d729d21d6f1b2b038091",
"text": "1.1 Terminology Intercultural communication or communication between people of different cultural backgrounds has always been and will probably remain an important precondition of human co-existance on earth. The purpose of this paper is to provide a framework of factors thatare important in intercultural communication within a general model of human, primarily linguistic, communication. The term intercultural is chosen over the largely synonymousterm cross-cultural because it is linked to language use such as “interdisciplinary”, that is cooperation between people with different scientific backgrounds. Perhaps the term also has somewhat fewer connotations than crosscultural. It is not cultures that communicate, whatever that might imply, but people (and possibly social institutions) with different cultural backgrounds that do. In general, the term”cross-cultural” is probably best used for comparisons between cultures (”crosscultural comparison”).",
"title": ""
},
{
"docid": "33da6c0dcc2e62059080a4b1d220ef8b",
"text": "We generalize the scattering transform to graphs and consequently construct a convolutional neural network on graphs. We show that under certain conditions, any feature generated by such a network is approximately invariant to permutations and stable to graph manipulations. Numerical results demonstrate competitive performance on relevant datasets.",
"title": ""
},
{
"docid": "823de7c39d197f6dc8efd452b36de82c",
"text": "Sialic acid (SA), N-acetylated derivatives of neuraminic acid, play a central role in the biomedical functioning of humans. The normal range of total sialic acid (TSA) level in serum/plasma is 1.58-2.22 mmol L-1, the free form of SA only constituting 0.5-3 mumol L-1 and the lipid-associated (LSA) forms 10-50 mumol L-1. Notably, considerably higher amounts of free SA are found in urine than in serum/plasma (approximately 50% of the total SA). In inherited SA storage diseases such as Salla's disease, SA levels are elevated many times over, and their determination during clinical investigation is well established. Furthermore, a number of reports describe elevated SA levels in various other diseases, tentatively suggesting broader clinical utility for SA markers. Increased SA concentrations have been reported during inflammatory processes, probably resulting from increased levels of richly sialylated acute-phase glycoproteins. A connection between increased SA levels and elevated stroke and cardiovascular mortality risk has also been reported. In addition, SA levels are slightly increased in cancer, positively correlating with the degree of metastasis, as well as in alcohol abuse, diabetes, chronic renal failure and chronic glomerulonephritis. Several different mechanisms are assumed to underlie the elevated SA concentrations in these disorders. The apparent non-specificity of SA to a given disease limits the potential clinical usefulness of SA determination. In addition, some non-pathological factors, such as aging, pregnancy and smoking, may cause changes in SA concentrations. The absolute increases in SA levels are also rather small (save those in inherited SA storage disorders); this further limits the clinical potential of SA as a marker. Tentatively, SA markers might serve as adjuncts, when combined with other markers, in disease screening, disease progression follow-up, and in the monitoring of treatment response. To become clinically useful, however, the existing SA determination assays need to be considerably refined to reduce interferences, to be specific for certain SA forms, and to be more easy to use.",
"title": ""
},
{
"docid": "c25af20f13575e34070d2025f4542416",
"text": "Link prediction is one of the fundamental tools in social network analysis, used to identify relationships that are not otherwise observed. Commonly, link prediction is performed by means of a similarity metric, with the idea that a pair of similar nodes are likely to be connected. However, traditional link prediction based on similarity metrics assumes that available network data is accurate. We study the problem of adversarial link prediction, where an adversary aims to hide a target link by removing a limited subset of edges from the observed subgraph. We show that optimal attacks on local similarity metrics—that is, metrics which use only the information about the node pair and their network neighbors—can be found in linear time. In contrast, attacking Katz and ACT metrics which use global information about network topology is NP-Hard. We present an approximation algorithm for optimal attacks on Katz similarity, and a principled heuristic for ACT attacks. Extensive experiments demonstrate the efficacy of our methods.",
"title": ""
},
{
"docid": "0f10aa71d58858ea1d8d7571a7cbfe22",
"text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.",
"title": ""
},
{
"docid": "6387707b2aba0400e517e427b26e4589",
"text": "This thesis investigates the phase noise of two different 2-stage cross-coupled pair unsaturated ring oscillators with no tail current source. One oscillator consists of top crosscoupled pair delay cells, and the other consists of top cross-coupled pair and bottom crosscoupled pair delay cells. Under a low supply voltage restriction, a phase noise model is developed and applied to both ring oscillators. Both top cross-coupled pair and top and bottom cross-coupled pair oscillators are fabricated with 0.13 μm CMOS technology. Phase noise measurements of -92 dBc/Hz and -89 dBc/Hz ,respectively, at 1 MHz offset is obtained from the chip, which agree with theory and simulations. Top cross-coupled ring oscillator, with phase noise of -92 dBc/Hz at 1 MHz offset, is implemented in a second order sigma-delta time to digital converter. System level and transistor level functional simulation and timing jitter simulation are obtained.",
"title": ""
},
{
"docid": "f92e4ca37d29c1f564f155a783b1606c",
"text": "If we are to believe the technology hype cycle, we are entering a new era of Cognitive Computing, enabled by advances in natural language processing, machine learning, and more broadly artificial intelligence. These advances, combined with evolutionary progress in areas such as knowledge representation, automated planning, user experience technologies, software-as-a-service and crowdsourcing, have the potential to transform many industries. In this paper, we discuss transformations of BPM that advances in the Cognitive Computing will bring. We focus on three of the most signficant aspects of this transformation, namely: (a) Cognitive Computing will enable ”knowledge acquisition at scale”, which will lead to a transformation in Knowledge-intensive Processes (KiP’s); (b) We envision a new process meta-model will emerge that is centered around a “Plan-Act-Learn” cycle; and (c) Cognitive Computing can enable learning about processes from implicit descriptions (at both designand run-time), opening opportunities for new levels of automation and business process support, for both traditional business processes and KiP’s. We use the term cognitive BPM to refer to a new BPM paradigm encompassing all aspects of BPM that are impacted and enabled by Cognitive Computing. We argue that a fundamental understanding of cognitive BPM requires a new research framing of the business process ecosystem. The paper presents a conceptual framework for cognitive BPM, a brief survey of state of the art in emerging areas of Cognitive BPM, and discussion of key directions for further research.",
"title": ""
},
{
"docid": "ae97effd4e999ccf580d32c8522b6f59",
"text": "Eight isolates of cellulose-degrading bacteria (CDB) were isolated from four different invertebrates (termite, snail, caterpillar, and bookworm) by enriching the basal culture medium with filter paper as substrate for cellulose degradation. To indicate the cellulase activity of the organisms, diameter of clear zone around the colony and hydrolytic value on cellulose Congo Red agar media were measured. CDB 8 and CDB 10 exhibited the maximum zone of clearance around the colony with diameter of 45 and 50 mm and with the hydrolytic value of 9 and 9.8, respectively. The enzyme assays for two enzymes, filter paper cellulase (FPC), and cellulase (endoglucanase), were examined by methods recommended by the International Union of Pure and Applied Chemistry (IUPAC). The extracellular cellulase activities ranged from 0.012 to 0.196 IU/mL for FPC and 0.162 to 0.400 IU/mL for endoglucanase assay. All the cultures were also further tested for their capacity to degrade filter paper by gravimetric method. The maximum filter paper degradation percentage was estimated to be 65.7 for CDB 8. Selected bacterial isolates CDB 2, 7, 8, and 10 were co-cultured with Saccharomyces cerevisiae for simultaneous saccharification and fermentation. Ethanol production was positively tested after five days of incubation with acidified potassium dichromate.",
"title": ""
},
{
"docid": "dd60c1f0ae3707cbeb24da1137ee327d",
"text": "Silicone oils have wide range of applications in personal care products due to their unique properties of high lubricity, non-toxicity, excessive spreading and film formation. They are usually employed in the form of emulsions due to their inert nature. Until now, different conventional emulsification techniques have been developed and applied to prepare silicone oil emulsions. The size and uniformity of emulsions showed important influence on stability of droplets, which further affect the application performance. Therefore, various strategies were developed to improve the stability as well as application performance of silicone oil emulsions. In this review, we highlight different factors influencing the stability of silicone oil emulsions and explain various strategies to overcome the stability problems. In addition, the silicone deposition on the surface of hair substrates and different approaches to increase their deposition are also discussed in detail.",
"title": ""
}
] |
scidocsrr
|
428e83dbffee76b39fdb238cb44b15dd
|
High-Resolution Direct Position Determination Using MVDR
|
[
{
"docid": "27488ded8276967b9fd71ec40eec28d8",
"text": "This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it is a comprehensive comparison of 2D spectral estimation methods for SAR imaging. It provides a synopsis of the algorithms available, discusses their relative merits for SAR imaging, and illustrates their performance on simulated and collected SAR imagery. Some of the algorithms presented or their derivations are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR) and space variant apodization (SVA) to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric contest, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.",
"title": ""
},
{
"docid": "950a6a611f1ceceeec49534c939b4e0f",
"text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].",
"title": ""
}
] |
[
{
"docid": "f68b11af8958117f75fc82c40c51c395",
"text": "Uncertainty accompanies our life processes and covers almost all fields of scientific studies. Two general categories of uncertainty, namely, aleatory uncertainty and epistemic uncertainty, exist in the world. While aleatory uncertainty refers to the inherent randomness in nature, derived from natural variability of the physical world (e.g., random show of a flipped coin), epistemic uncertainty origins from human's lack of knowledge of the physical world, as well as ability of measuring and modeling the physical world (e.g., computation of the distance between two cities). Different kinds of uncertainty call for different handling methods. Aggarwal, Yu, Sarma, and Zhang et al. have made good surveys on uncertain database management based on the probability theory. This paper reviews multidisciplinary uncertainty processing activities in diverse fields. Beyond the dominant probability theory and fuzzy theory, we also review information-gap theory and recently derived uncertainty theory. Practices of these uncertainty handling theories in the domains of economics, engineering, ecology, and information sciences are also described. It is our hope that this study could provide insights to the database community on how uncertainty is managed in other disciplines, and further challenge and inspire database researchers to develop more advanced data management techniques and tools to cope with a variety of uncertainty issues in the real world.",
"title": ""
},
{
"docid": "561b37c506657693d27fa65341faf51e",
"text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.",
"title": ""
},
{
"docid": "c7eceedbb7c6665dca1db772a22452dc",
"text": "This paper proposes a quadruped walking robot that has high performance as a working machine. This robot is needed for various tasks controlled by tele-operation, especially for humanitarian mine detection and removal. Since there are numerous personnel landmines that are still in place from many wars, it is desirable to provide a safe and inexpensive tool that civilians can use to remove those mines. The authors have been working on the concept of the humanitarian demining robot systems for 4 years and have performed basic experiments with the rst prototype VK-I using the modi ed quadruped walking robot, TITAN-VIII. After those experiments, it was possible to re ne some concepts and now the new robot has a tool (end-effector)changing system on its back, so that by utilizing the legs as manipulation arms and connecting various tools to the foot, it can perform mine detection and removal tasks. To accomplish these tasks, we developed various end-effectors that can be attached to the working leg. In this paper we will discuss the mechanical design of the new walking robot called TITAN-IX to be applied to the new system VK-II.",
"title": ""
},
{
"docid": "e601c68a6118139c1183ba4abd012183",
"text": "Robert M. Golub, MD, Editor The JAMA Patient Page is a public service of JAMA. The information and recommendations appearing on this page are appropriate in most instances, but they are not a substitute for medical diagnosis. For specific information concerning your personal medical condition, JAMA suggests that you consult your physician. This page may be photocopied noncommercially by physicians and other health care professionals to share with patients. To purchase bulk reprints, call 312/464-0776. C H IL D H E A TH The Journal of the American Medical Association",
"title": ""
},
{
"docid": "301de1ec8dc10d8962a346c49eb5a65f",
"text": "An in-pipe inspection robot is designed in this paper for which its pitch rate is controllable and an optimal control is implemented for it subject to input minimization. In-pipe inspection robots are requisite mobile robots to investigate the pipelines. Most of the in-pipe inspection robots are supposed to move with constant pitch of rate. An in-pipe inspection robot is proposed in this paper based on screw locomotion which is steerable in order to handle the pitch rate of the movement and bypass the probable obstacles. Considering the fact that for this robot the number of actuators of the system is more than the Degrees of Freedom (DOFs) of the system, optimization of its control inputs is performed using optimal control approach. In this paper the dynamic model of the mentioned steerable screw in-pipe inspection robot is extracted and it is controlled within a predefined trajectory in an optimal way. The proper mechanism is designed and its related kinematics and kinetics are derived. Then the objective function is defined to optimize the controlling input error simultaneously. The nonlinear state space is linearized around its operating point and optimization is implemented using Linear Quadratic Regulator (LQR). Validity and efficiency of the designed robot and controller are verified using MATLAB simulations.",
"title": ""
},
{
"docid": "e72f8ad61a7927fee8b0a32152b0aa4b",
"text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratiobased approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.",
"title": ""
},
{
"docid": "1a91e143f4430b11f3af242d6e07cbba",
"text": "Random graph matching refers to recovering the underlying vertex correspondence between two random graphs with correlated edges; a prominent example is when the two random graphs are given by Erdős-Rényi graphs G(n, d n ). This can be viewed as an average-case and noisy version of the graph isomorphism problem. Under this model, the maximum likelihood estimator is equivalent to solving the intractable quadratic assignment problem. This work develops an Õ(nd + n)-time algorithm which perfectly recovers the true vertex correspondence with high probability, provided that the average degree is at least d = Ω(log n) and the two graphs differ by at most δ = O(log−2(n)) fraction of edges. For dense graphs and sparse graphs, this can be improved to δ = O(log−2/3(n)) and δ = O(log−2(d)) respectively, both in polynomial time. The methodology is based on appropriately chosen distance statistics of the degree profiles (empirical distribution of the degrees of neighbors). Before this work, the best known result achieves δ = O(1) and n ≤ d ≤ n for some constant c with an n-time algorithm [BCL18] and δ = Õ((d/n)) and d = Ω̃(n) with a polynomial-time algorithm [DCKG18].",
"title": ""
},
{
"docid": "4ef6a80f243305b4c26d12684118cc2d",
"text": "A wide variety of neural-network architectures have been proposed for the task of Chinese word segmentation. Surprisingly, we find that a bidirectional LSTM model, when combined with standard deep learning techniques and best practices, can achieve better accuracy on many of the popular datasets as compared to models based on more complex neuralnetwork architectures. Furthermore, our error analysis shows that out-of-vocabulary words remain challenging for neural-network models, and many of the remaining errors are unlikely to be fixed through architecture changes. Instead, more effort should be made on exploring resources for further improvement.",
"title": ""
},
{
"docid": "e43056aad827cd5eea146418aa89ec09",
"text": "The detection and analysis of clusters has become commonplace within geographic information science and has been applied in epidemiology, crime prevention, ecology, demography and other fields. One of the many methods for detecting and analyzing these clusters involves searching the dataset with a flock of boids (bird objects). While boids are effective at searching the dataset once their behaviors are properly configured, it can be difficult to find the proper configuration. Since genetic algorithms have been successfully used to configure neural networks, they may also be useful for configuring parameters guiding boid behaviors. In this paper, we develop a genetic algorithm to evolve the ideal boid behaviors. Preliminary results indicate that, even though the genetic algorithm does not return the same configuration each time, it does converge on configurations that improve over the parameters used when boids were initially proposed for geographic cluster detection. Also, once configured, the boids perform as well as other cluster detection methods. Continued work with this system could determine which parameters have a greater effect on the results of the boid system and could also discover rules for configuring a flock of boids directly from properties of the dataset, such as point density, rather than requiring the time-consuming process of optimizing the parameters for each new dataset.",
"title": ""
},
{
"docid": "897fb39d295defc4b6e495236a2c74b1",
"text": "Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results.",
"title": ""
},
{
"docid": "8f9e3bb85b4a2fcff3374fd700ac3261",
"text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.",
"title": ""
},
{
"docid": "242746fd37b45c83d8f4d8a03c1079d3",
"text": "BACKGROUND\nThe use of wheat grass (Triticum aestivum) juice for treatment of various gastrointestinal and other conditions had been suggested by its proponents for more than 30 years, but was never clinically assessed in a controlled trial. A preliminary unpublished pilot study suggested efficacy of wheat grass juice in the treatment of ulcerative colitis (UC).\n\n\nMETHODS\nA randomized, double-blind, placebo-controlled study. One gastroenterology unit in a tertiary hospital and three study coordinating centers in three major cities in Israel. Twenty-three patients diagnosed clinically and sigmoidoscopically with active distal UC were randomly allocated to receive either 100 cc of wheat grass juice, or a matching placebo, daily for 1 month. Efficacy of treatment was assessed by a 4-fold disease activity index that included rectal bleeding and number of bowel movements as determined from patient diary records, a sigmoidoscopic evaluation, and global assessment by a physician.\n\n\nRESULTS\nTwenty-one patients completed the study, and full information was available on 19 of them. Treatment with wheat grass juice was associated with significant reductions in the overall disease activity index (P=0.031) and in the severity of rectal bleeding (P = 0.025). No serious side effects were found. Fresh extract of wheat grass demonstrated a prominent tracing in cyclic voltammetry methodology, presumably corresponding to four groups of compounds that exhibit anti-oxidative properties.\n\n\nCONCLUSION\nWheat grass juice appeared effective and safe as a single or adjuvant treatment of active distal UC.",
"title": ""
},
{
"docid": "118f6ab5b61a334ff8a23f5c139c110c",
"text": "Many tasks in the biomedical domain require the assignment of one or more predefined labels to input text, where the labels are a part of a hierarchical structure (such as a taxonomy). The conventional approach is to use a one-vs.-rest (OVR) classification setup, where a binary classifier is trained for each label in the taxonomy or ontology where all instances not belonging to the class are considered negative examples. The main drawbacks to this approach are that dependencies between classes are not leveraged in the training and classification process, and the additional computational cost of training parallel classifiers. In this paper, we apply a new method for hierarchical multi-label text classification that initializes a neural network model final hidden layer such that it leverages label co-occurrence relations such as hypernymy. This approach elegantly lends itself to hierarchical classification. We evaluated this approach using two hierarchical multi-label text classification tasks in the biomedical domain using both sentenceand document-level classification. Our evaluation shows promising results for this approach.",
"title": ""
},
{
"docid": "1ad08b9ecc0a08f5e0847547c55ea90d",
"text": "Text summarization is the process of creating a shorter version of one or more text documents. Automatic text summarization has become an important way of finding relevant information in large text libraries or in the Internet. Extractive text summarization techniques select entire sentences from documents according to some criteria to form a summary. Sentence scoring is the technique most used for extractive text summarization, today. Depending on the context, however, some techniques may yield better results than some others. This paper advocates the thesis that the quality of the summary obtained with combinations of sentence scoring methods depend on text subject. Such hypothesis is evaluated using three different contexts: news, blogs and articles. The results obtained show the validity of the hypothesis formulated and point at which techniques are more effective in each of those contexts studied.",
"title": ""
},
{
"docid": "95a845c61fd1e98d62f1ab175d167276",
"text": "The ability to transfer knowledge from previous experiences is critical for an agent to rapidly adapt to different environments and effectively learn new tasks. In this paper we conduct an empirical study of Deep Q-Networks (DQNs) where the agent is evaluated on previously unseen environments. We show that we can train a robust network for navigation in 3D environments and demonstrate its effectiveness in generalizing to unknown maps with unknown background textures. We further investigate the effectiveness of pretraining and finetuning for transferring knowledge between various scenarios in 3D environments. In particular, we show that the features learnt by the navigation network can be effectively utilized to transfer knowledge between a diverse set of tasks, such as object collection, deathmatch, and self-localization.",
"title": ""
},
{
"docid": "e88f19cdd7f21c5aafedc13143bae00f",
"text": "For a long time, the term virtualization implied talking about hypervisor-based virtualization. However, in the past few years container-based virtualization got mature and especially Docker gained a lot of attention. Hypervisor-based virtualization provides strong isolation of a complete operating system whereas container-based virtualization strives to isolate processes from other processes at little resource costs. In this paper, hypervisor and container-based virtualization are differentiated and the mechanisms behind Docker and LXC are described. The way from a simple chroot over a container framework to a ready to use container management solution is shown and a look on the security of containers in general is taken. This paper gives an overview of the two different virtualization approaches and their advantages and disadvantages.",
"title": ""
},
{
"docid": "a60a8c3ee4e95f9e4cf0b29908404dcd",
"text": "This paper successfully implements compressed sensing (CS) to a near-field wideband 3-D synthetic aperture radar (SAR) imaging system. SAR data are measured at a low percentage of random-selected positions on a uniform grid of planar aperture in the stripmap mode. The near-field 3-D range migration algorithm (RMA) is used in combination with the CS principle to reconstruct the 3-D image via l1 regularized least-square approach. Experiments were performed with Q-band stepped-frequency monostatic stripmap SAR imaging system on a blue foam embedded with eight rubber pads and one copper square chip. The results of the experiments show near-field 3-D image of the specimen under test (SUT) can be reconstructed efficiently from low percentage of the full measurement positions, which largely lessens the data collection load. The reconstructed image was better focused and denoised.",
"title": ""
},
{
"docid": "f1166b493020d5c1f54fca517662eb40",
"text": "It is important for researchers to efficiently conduct quality literature studies. Hence, a structured and efficient approach is essential. We overview work that has demonstrated the potential for using software tools in literature reviews. We highlight the untapped opportunities in using an end-to-end tool-supported literature review methodology. Qualitative data-analysis tools such as NVivo are immensely useful as a means to analyze, synthesize, and write up literature reviews. In this paper, we describe how to organize and prepare papers for analysis and provide detailed guidelines for actually coding and analyzing papers, including detailed illustrative strategies to effectively write up and present the results. We present a detailed case study as an illustrative example of the proposed approach put into practice. We discuss the means, value, and also pitfalls of applying tool-supported literature review approaches. We contribute to the literature by proposing a four-phased tool-supported methodology that serves as best practice in conducting literature reviews in IS. By viewing the literature review process as a qualitative study and treating the literature as the “data set”, we address the complex puzzle of how best to extract relevant literature and justify its scope, relevance, and quality. We provide systematic guidelines for novice IS researchers seeking to conduct a robust literature review.",
"title": ""
},
{
"docid": "2e0b3d2b61e7cccf725202f73275dffb",
"text": "Introduction.........................................................................................................76 Scope and purpose of the chapter....................................................................79 Sustainability, globalization and organic agriculture ..........................................79 Dimensions of sustainability ...........................................................................80 Different meanings of globalization and sustainability...................................82 Sustainability and organic agriculture.............................................................83 The ethics and justice of ecological justice .........................................................84 Ecological justice as an ethical concept ..........................................................85 The justice of ecological justice ......................................................................87 Summing up ....................................................................................................89 Challenges for organic agriculture ......................................................................90 Commodification of commons........................................................................91 How to address externalities ...........................................................................92 Growing distances...........................................................................................94 Putting ecological justice into organic practice...................................................97 The way of certified organic agriculture .........................................................98 The way of non-certified organic agriculture................................................102 Organic agriculture as an alternative example ..............................................106 Conclusions.......................................................................................................108",
"title": ""
},
{
"docid": "d00957d93af7b2551073ba84b6c0f2a6",
"text": "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN’s evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%, which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by ∼ 1%. Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn",
"title": ""
}
] |
scidocsrr
|
b463551e63fd71a70993b2e2542bc48e
|
Distributed e-voting using the Smart Card Web Server
|
[
{
"docid": "02bc71435bd53d8331e3ad2b30588c6d",
"text": "Voting with cryptographic auditing, sometimes called open-audit voting, has remained, for the most part, a theoretical endeavor. In spite of dozens of fascinating protocols and recent ground-breaking advances in the field, there exist only a handful of specialized implementations that few people have experienced directly. As a result, the benefits of cryptographically audited elections have remained elusive. We present Helios, the first web-based, open-audit voting system. Helios is publicly accessible today: anyone can create and run an election, and any willing observer can audit the entire process. Helios is ideal for online software communities, local clubs, student government, and other environments where trustworthy, secretballot elections are required but coercion is not a serious concern. With Helios, we hope to expose many to the power of open-audit elections.",
"title": ""
}
] |
[
{
"docid": "cc5ede31b7dd9faa2cce9d2aa8819a3c",
"text": "Despite considerable research on systems, algorithms and hardware to speed up deep learning workloads, there is no standard means of evaluating end-to-end deep learning performance. Existing benchmarks measure proxy metrics, such as time to process one minibatch of data, that do not indicate whether the system as a whole will produce a high-quality result. In this work, we introduce DAWNBench, a benchmark and competition focused on end-to-end training time to achieve a state-of-the-art accuracy level, as well as inference time with that accuracy. Using time to accuracy as a target metric, we explore how different optimizations, including choice of optimizer, stochastic depth, and multi-GPU training, affect end-to-end training performance. Our results demonstrate that optimizations can interact in non-trivial ways when used in conjunction, producing lower speed-ups and less accurate models. We believe DAWNBench will provide a useful, reproducible means of evaluating the many trade-offs in deep learning systems.",
"title": ""
},
{
"docid": "dfb13625c6c03932b6dd83a77a782073",
"text": "Location Based Service (LBS), although it greatly benefits the daily life of mobile device users, has introduced significant threats to privacy. In an LBS system, even under the protection of pseudonyms, users may become victims of inference attacks, where an adversary reveals a user's real identity and complete moving trajectory with the aid of side information, e.g., accidental identity disclosure through personal encounters. To enhance privacy protection for LBS users, a common approach is to include extra fake location information associated with different pseudonyms, known as dummy users, in normal location reports. Due to the high cost of dummy generation using resource constrained mobile devices, self-interested users may free-ride on others' efforts. The presence of such selfish behaviors may have an adverse effect on privacy protection. In this paper, we study the behaviors of self-interested users in the LBS system from a game-theoretic perspective. We model the distributed dummy user generation as Bayesian games in both static and timing-aware contexts, and analyze the existence and properties of the Bayesian Nash Equilibria for both models. Based on the analysis, we propose a strategy selection algorithm to help users achieve optimized payoffs. Leveraging a beta distribution generalized from real-world location privacy data traces, we perform simulations to assess the privacy protection effectiveness of our approach. The simulation results validate our theoretical analysis for the dummy user generation game models.",
"title": ""
},
{
"docid": "5f513e3d58a10d2748983bfa06c11df2",
"text": "AIM\nThe aim of this study is to report a clinical case of oral nevus.\n\n\nBACKGROUND\nNevus is a congenital or acquired benign neoplasia that can be observed in the skin or mucous membranes. It is an uncommon condition in the oral mucosa. When it does occur, the preferred location is on the palate, followed by the cheek mucosa, lip and tongue.\n\n\nCASE REPORT\nIn this case study, we relate the diagnosis and treatment of a 23-year-old female patient with an irregular, pigmented lesion of the oral mucosa that underwent excisional biopsy resulting in a diagnosis of intramucosal nevus.\n\n\nCONCLUSION\nNevus can appear in the oral mucosa and should be removed.\n\n\nCLINICAL SIGNIFICANCE\nIt is important for dental professionals to adequately categorize and treat pigmented lesions in the mouth.",
"title": ""
},
{
"docid": "0d370adcb9194467dbfce118e9d8344c",
"text": "The hippocampus has contributed enormously to our understanding of the operation of elemental brain circuits, not least through the classification of forebrain interneurons. Understanding the operation of interneuron networks however requires not only a wiring diagram that describes the innervation and postsynaptic targets of different GABAergic cells, but also an appreciation of the temporal dimension. Interneurons differ extensively in their intrinsic firing rates, their recruitment in different brain rhythms, and in their synaptic kinetics. Furthermore, in common with principal neurons, both the synapses innervating interneurons and the synapses made by these cells are highly modifiable, reflecting both their recent or remote use (short-term and long-term plasticity) and the action of extracellular messengers. This review examines recent progress in understanding how different hippocampal interneuron networks contribute to feedback and feed-forward inhibition at different timescales.",
"title": ""
},
{
"docid": "6333c96a209a08268adffd2dce4751e8",
"text": "Induction machines with pole-phase modulation (PPM) can extend speed / torque capabilities for applications in integrated starter/generator and hybrid electric vehicles. In this paper, a general winding design rule for the PPM of induction machines is proposed. A prototype is used to verify the proposed method and the feasibility of the designed pole-changing winding. Besides, the characteristics of three different structure machines — conventional winding machine, toroidal winding machine and dual-rotor torodial winding machine operated in the same mode are compared by using the finite element software JMAG-studio. The results show that both conventional winding machine and dual-rotor torodial winding machine present good performances. Moreover, conventional winding machine has many advantages such as a simple structure, so its experimental prototype is designed and manufactured for next control work.",
"title": ""
},
{
"docid": "56a52c6a6b1815daee9f65d8ffc2610e",
"text": "State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark.",
"title": ""
},
{
"docid": "4dc05debbbe6c8103d772d634f91c86c",
"text": "In this paper we shows the experimental results using a microcontroller and hardware integration with the EMC2 software, using the Fuzzy Gain Scheduling PI Controller in a mechatronic prototype. The structure of the fuzzy 157 Research in Computing Science 116 (2016) pp. 157–169; rec. 2016-03-23; acc. 2016-05-11 controller is composed by two-inputs and two-outputs, is a TITO system. The error control feedback and their derivative are the inputs, while the proportional and integral gains are the fuzzy controller outputs. Was defined five Gaussian membership functions for the fuzzy sets by each input, the product fuzzy logic operator (AND connective) and the centroid defuzzifier was used to infer the gains outputs. The structure of fuzzy rule base are type Sugeno, zero-order. The experimental result in closed-loop shows the viability end effectiveness of the position fuzzy controller strategy. To verify the robustness of this controller structure, two different experiments was making: undisturbed and disturbance both in closed-loop. This work presents comparative experimental results, using the Classical tune rule of Ziegler-Nichols and the Fuzzy Gain Scheduling PI Controller, for a mechatronic system widely used in various industries applications.",
"title": ""
},
{
"docid": "998c631c38d49705994e85252b500882",
"text": "The botnet, as one of the most formidable threats to cyber security, is often used to launch large-scale attack sabotage. How to accurately identify the botnet, especially to improve the performance of the detection model, is a key technical issue. In this paper, we propose a framework based on generative adversarial networks to augment botnet detection models (Bot-GAN). Moreover, we explore the performance of the proposed framework based on flows. The experimental results show that Bot-GAN is suitable for augmenting the original detection model. Compared with the original detection model, the proposed approach improves the detection performance, and decreases the false positive rate, which provides an effective method for improving the detection performance. In addition, it also retains the primary characteristics of the original detection model, which does not care about the network payload information, and has the ability to detect novel botnets and others using encryption or proprietary protocols.",
"title": ""
},
{
"docid": "eaed6338dd4d25307aab04cb1441844b",
"text": "In the network communications, network intrusion is the most important concern nowadays. The increasing occurrence of network attacks is a devastating problem for network services. Various research works are already conducted to find an effective and efficient solution to prevent intrusion in the network in order to ensure network security and privacy. Machine learning is an effective analysis tool to detect any anomalous events occurred in the network traffic flow. In this paper, a combination of two machine learning algorithms is proposed to classify any anomalous behavior in the network traffic. The overall efficiency of the proposed method is dignified by evaluating the detection accuracy, false positive rate, false negative rate and time taken to detect the intrusion. The proposed method demonstrates the effectiveness of the algorithm in detecting the intrusion with higher detection accuracy of 98.76% and lower false positive rate of 0.09% and false negative rate of 1.15%, whereas the normal SVM based scheme achieved a detection accuracy of 88.03% and false positive rate of 4.2% and false negative rate of 7.77%. Keywords—Intrusion Detection; Machine Learning; Support Vector Machine, Supervised Learning",
"title": ""
},
{
"docid": "1f8a8604c82de9a863646c581eccc3fa",
"text": "In this paper, we introduce an enhancement for speech recognition systems using an unsupervised speaker clustering technique. The proposed technique is mainly based on I-vectors and Self-Organizing Map Neural Network (SOM). The input to the proposed algorithm is a set of speech utterances. For each utterance, we extract 100-dimensional I-vector and then SOM is used to group the utterances to different speakers. In our experiments, we compared our technique with Normalized Cross Likelihood ratio Clustering (NCLR). Results show that the proposed technique reduces the speaker error rate in comparison with NCLR. Finally, we have experimented the effect of speaker clustering on Speaker Adaptive Training (SAT) in a speech recognition system implemented to test the performance of the proposed technique. It was noted that the proposed technique reduced the WER over clustering speakers with NCLR.",
"title": ""
},
{
"docid": "d57555ce6b3fdd12052ea667bff915ed",
"text": "This paper presents a novel structure for ultra broadband 4:1 broadside-coupled PCB impedance transformer. Analysis, simulations and measurements of the developed transformer are introduced and discussed. Three prototypes of the proposed structure are implemented at center frequencies 5.65 GHz, 4.35 GHz and 3.65 GHz, respectively with fractional bandwidth of greater than 180 %. The implemented transformers show an ultra broadband performance with a transmission loss less than 1 dB and return loss at least 10 dB across the desired bandwidth. During comparison, simulations and measurements are found very close to each other. To the author's best knowledge the achieved performance of the designed transformer is better than so far published state of the art results.",
"title": ""
},
{
"docid": "3b27f02b96f079e57714ef7c2f688b48",
"text": "Polycystic ovary syndrome (PCOS) affects 5-10% of women in reproductive age and is characterized by oligo/amenorrhea, androgen excess, insulin resistance, and typical polycystic ovarian morphology. It is the most common cause of infertility secondary to ovulatory dysfunction. The underlying etiology is still unknown but is believed to be multifactorial. Insulin-sensitizing compounds such as inositol, a B-complex vitamin, and its stereoisomers (myo-inositol and D-chiro-inositol) have been studied as an effective treatment of PCOS. Administration of inositol in PCOS has been shown to improve not only the metabolic and hormonal parameters but also ovarian function and the response to assisted-reproductive technology (ART). Accumulating evidence suggests that it is also capable of improving folliculogenesis and embryo quality and increasing the mature oocyte yield following ovarian stimulation for ART in women with PCOS. In the current review, we collate the evidence and summarize our current knowledge on ovarian stimulation and ART outcomes following inositol treatment in women with PCOS undergoing in vitro fertilization (IVF) and/or intracytoplasmic sperm injection (ICSI).",
"title": ""
},
{
"docid": "a7f29c88c2fb7423cffb153eec105b50",
"text": "Collective cell migration is fundamental to gaining insights into various important biological processes such as wound healing and cancer metastasis. In particular, recent in vitro studies and in silico simulations suggest that mechanics can explain the social behavior of multicellular clusters to a large extent with minimal knowledge of various cellular signaling pathways. These results suggest that a mechanistic perspective is necessary for a comprehensive and holistic understanding of collective cell migration, and this review aims to provide a broad overview of such a perspective.",
"title": ""
},
{
"docid": "8ab80b9f51166e7b5cc1b60da443bc6b",
"text": "How to represent a map of the environment is a key question of robotics. In this paper, we focus on suggesting a representation well-suited for online map building from vision-based data and online planning in 3D. We propose to combine a commonly-used representation in computer graphics and surface reconstruction, projective Truncated Signed Distance Field (TSDF), with a representation frequently used for collision checking and collision costs in planning, Euclidean Signed Distance Field (ESDF), and validate this combined approach in simulation. We argue that this type of map is better-suited for robotic applications than existing representations.",
"title": ""
},
{
"docid": "e42a1faf3d983bac59c0bfdd79212093",
"text": "L eadership matters, according to prominent leadership scholars (see also Bennis, 2007). But what is leadership? That turns out to be a challenging question to answer. Leadership is a complex and diverse topic, and trying to make sense of leadership research can be an intimidating endeavor. One comprehensive handbook of leadership (Bass, 2008), covering more than a century of scientific study, comprises more than 1,200 pages of text and more than 200 additional pages of references! There is clearly a substantial scholarly body of leadership theory and research that continues to grow each year. Given the sheer volume of leadership scholarship that is available, our purpose is not to try to review it all. That is why our focus is on the nature or essence of leadership as we and our chapter authors see it. But to fully understand and appreciate the nature of leadership, it is essential that readers have some background knowledge of the history of leadership research, the various theoretical streams that have evolved over the years, and emerging issues that are pushing the boundaries of the leadership frontier. Further complicating our task is that more than one hundred years of leadership research have led to several paradigm shifts and a voluminous body of knowledge. On several occasions, scholars of leadership became quite frustrated by the large amount of false starts, incremental theoretical advances, and contradictory findings. As stated more than five decades ago by Warren Bennis (1959, pp. 259–260), “Of all the hazy and confounding areas in social psychology, leadership theory undoubtedly contends for Leadership: Past, Present, and Future",
"title": ""
},
{
"docid": "0952701dd63326f8a78eb5bc9a62223f",
"text": "The self-organizing map (SOM) is an automatic data-analysis method. It is widely applied to clustering problems and data exploration in industry, finance, natural sciences, and linguistics. The most extensive applications, exemplified in this paper, can be found in the management of massive textual databases and in bioinformatics. The SOM is related to the classical vector quantization (VQ), which is used extensively in digital signal processing and transmission. Like in VQ, the SOM represents a distribution of input data items using a finite set of models. In the SOM, however, these models are automatically associated with the nodes of a regular (usually two-dimensional) grid in an orderly fashion such that more similar models become automatically associated with nodes that are adjacent in the grid, whereas less similar models are situated farther away from each other in the grid. This organization, a kind of similarity diagram of the models, makes it possible to obtain an insight into the topographic relationships of data, especially of high-dimensional data items. If the data items belong to certain predetermined classes, the models (and the nodes) can be calibrated according to these classes. An unknown input item is then classified according to that node, the model of which is most similar with it in some metric used in the construction of the SOM. A new finding introduced in this paper is that an input item can even more accurately be represented by a linear mixture of a few best-matching models. This becomes possible by a least-squares fitting procedure where the coefficients in the linear mixture of models are constrained to nonnegative values.",
"title": ""
},
{
"docid": "dfba2b7750fc705f6fb0f87e4ff3a51a",
"text": "The Internet is a technological development that has the potential to change not only the way society retains and accesses knowledge but also to transform and restructure traditional models of higher education, particularly the delivery and interaction in and with course materials and associated resources. Utilising the Internet to deliver eLearning initiatives has created expectations both in the business market and in higher education institutions. Indeed, eLearning has enabled universities to expand on their current geographical reach, to capitalise on new prospective students and to establish themselves as global educational providers. This paper examines the issues surrounding the implementation of eLearning into higher education, including the structure and delivery of higher education, the implications to both students and lecturers and the global impact on society. This journal article is available in Journal of University Teaching & Learning Practice: http://ro.uow.edu.au/jutlp/vol2/iss1/3 Journa l o f Un ivers i t y Teach ing and Learn ing Prac t i ce A Study Into The Effects Of eLearning On Higher Education",
"title": ""
},
{
"docid": "2410a4b40b833d1729fac37020ec13be",
"text": "Understanding how ecological conditions influence physiological responses is fundamental to forensic entomology. When determining the minimum postmortem interval with blow fly evidence in forensic investigations, using a reliable and accurate model of development is integral. Many published studies vary in results, source populations, and experimental designs. Accordingly, disentangling genetic causes of developmental variation from environmental causes is difficult. This study determined the minimum time of development and pupal sizes of three populations of Lucilia sericata Meigen (Diptera: Calliphoridae; from California, Michigan, and West Virginia) at two temperatures (20 degrees C and 33.5 degrees C). Development times differed significantly between strain and temperature. In addition, California pupae were the largest and fastest developing at 20 degrees C, but at 33.5 degrees C, though they still maintained their rank in size among the three populations, they were the slowest to develop. These results indicate a need to account for genetic differences in development, and genetic variation in environmental responses, when estimating a postmortem interval with entomological data.",
"title": ""
},
{
"docid": "0678581b45854e8903c0812a25fd9ad1",
"text": "In this study we explored the relationship between narcissism and the individual's use of personal pronouns during extemporaneous monologues. The subjects, 24 males and 24 females, were asked to talk for approximately 5 minutes on any topic they chose. Following the monologues the subjects were administered the Narcissistic Personality Inventory, the Eysenck Personality Questionnaire, and the Rotter Internal-External Locus of Control Scale. The monologues were tape-recorded and later transcribed and analyzed for the subjects' use of personal pronouns. As hypothesized, individuals who scored higher on narcissism tended to use more first person singular pronouns and fewer first person plural pronouns. Discriminant validity for the relationship between narcissism and first person pronoun usage was exhibited in that narcissism did not show a relationship with subjects' use of second and third person pronouns, nor did the personality variables of extraversion, neuroticism, or locus of control exhibit any relationship with the subjects' personal pronoun usage.",
"title": ""
},
{
"docid": "31ab58f42f5f34f765d28aead4ae7fe3",
"text": "Machine learning (ML) has become a core component of many real-world applications and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack has shown that extraction of information on the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks have many assumptions on the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model’s training data. We relax all these key assumptions, thereby showing that such attacks are very broadly applicable at low cost and thereby pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat using eight diverse datasets which show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against such broader class of membership inference attacks that maintain a high level of utility of the ML model.",
"title": ""
}
] |
scidocsrr
|
729df00e045685cb04422a95c729b106
|
Data Security and Privacy-Preserving in Edge Computing Paradigm: Survey and Open Issues
|
[
{
"docid": "8f40ff9cc3fb3c69bb9df657045ca892",
"text": "This article presents an architecture vision to address the challenges placed on 5G mobile networks. A two-layer architecture is proposed, consisting of a radio network and a network cloud, integrating various enablers such as small cells, massive MIMO, control/user plane split, NFV, and SDN. Three main concepts are integrated: ultra-dense small cell deployments on licensed and unlicensed spectrum, under control/user plane split architecture, to address capacity and data rate challenges; NFV and SDN to provide flexible network deployment and operation; and intelligent use of network data to facilitate optimal use of network resources for QoE provisioning and planning. An initial proof of concept evaluation is presented to demonstrate the potential of the proposal. Finally, other issues that must be addressed to realize a complete 5G architecture vision are discussed.",
"title": ""
},
{
"docid": "738a69ad1006c94a257a25c1210f6542",
"text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.",
"title": ""
}
] |
[
{
"docid": "0a4a124589dffca733fa9fa87dc94b35",
"text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.",
"title": ""
},
{
"docid": "7bb079fd51771a9dc45a73bc53a797ee",
"text": "This paper analyzes a recently published algorithm for page replacement in hierarchical paged memory systems [O'Neil et al. 1993]. The algorithm is called the LRU-<italic>K</italic> method, and reduces to the well-known LRU (Least Recently Used) method for <italic>K</italic> = 1. Previous work [O'Neil et al. 1993; Weikum et al. 1994; Johnson and Shasha 1994] has shown the effectiveness for <italic>K</italic> > 1 by simulation, especially in the most common case of <italic>K</italic> = 2. The basic idea in LRU-<italic>K</italic> is to keep track of the times of the last <italic>K</italic> references to memory pages, and to use this statistical information to rank-order the pages as to their expected future behavior. Based on this the page replacement policy decision is made: which memory-resident page to replace when a newly accessed page must be read into memory. In the current paper, we prove, under the assumptions of the independent reference model, that LRU-<italic>K</italic> is optimal. Specifically we show: given the times of the (up to) <italic>K</italic> most recent references to each disk page, no other algorithm <italic>A</italic> making decisions to keep pages in a memory buffer holding <italic>n</italic> - 1 pages based on this infomation can improve on the expected number of I/Os to access pages over the LRU-<italic>K</italic> algorithm using a memory buffer holding <italic>n</italic> pages. The proof uses the Bayesian formula to relate the space of actual page probabilities of the model to the space of observable page numbers on which the replacement decision is acutally made.",
"title": ""
},
{
"docid": "82f38828416d08bbb6ee195c3ca071eb",
"text": "Real-time ride-sharing applications (e.g., Uber and Lyft) are very popular in recent years. Motivated by the ride-sharing application, we propose a new type of query in road networks, called the optimal multi-meeting-point route (OMMPR) query. Given a road network G, a source nodes, a target node t, and a set of query nodes U, the OMMPR query aims at finding the best route starting from s and ending at t such that the weighted average cost between the cost of the route and the total cost of the shortest paths from every query node to the route is minimized. We show that the problem of computing the OMMPR query is NP-hard. To answer the OMMPR query efficiently, we propose two novel parameterized solutions based on dynamic programming (DP), with the number of query nodes l (i.e., l = |U|) as a parameter, which is typically very small in practice. The two proposed parameterized algorithms run in O(3l · m + 2l · n · (l + log (n))) and O(2l · (m + n · (l + log (n)))) time, respectively, where n and m denote the number of nodes and edges in graph G, thus they are tractable in practice. To reduce the search space of the DP-based algorithms, we propose two novel optimized algorithms based on bidirectional DP and a carefully-designed lower bounding technique. We conduct extensive experimental studies on four large real-world road networks, and the results demonstrate the efficiency of the proposed algorithms.",
"title": ""
},
{
"docid": "b917ec2f16939a819625b6750597c40c",
"text": "In an increasing number of scientific disciplines, large data collections are emerging as important community resources. In domains as diverse as global climate change, high energy physics, and computational genomics, the volume of interesting data is already measured in terabytes and will soon total petabytes. The communities of researchers that need to access and analyze this data (often using sophisticated and computationally expensive techniques) are often large and are almost always geographically distributed, as are the computing and storage resources that these communities rely upon to store and analyze their data [17]. This combination of large dataset size, geographic distribution of users and resources, and computationally intensive analysis results in complex and stringent performance demands that are not satisfied by any existing data management infrastructure. A large scientific collaboration may generate many queries, each involving access to—or supercomputer-class computations on—gigabytes or terabytes of data. Efficient and reliable execution of these queries may require careful management of terabyte caches, gigabit/s data transfer over wide area networks, coscheduling of data transfers and supercomputer computation, accurate performance estimations to guide the selection of dataset replicas, and other advanced techniques that collectively maximize use of scarce storage, networking, and computing resources. The literature offers numerous point solutions that address these issues (e.g., see [17, 14, 19, 3]). But no integrating architecture exists that allows us to identify requirements and components common to different systems and hence apply different technologies in a coordinated fashion to a range of dataintensive petabyte-scale application domains. Motivated by these considerations, we have launched a collaborative effort to design and produce such an integrating architecture. We call this architecture the data grid, to emphasize its role as a specialization and extension of the “Grid” that has emerged recently as an integrating infrastructure for distributed computation [10, 20, 15]. Our goal in this effort is to define the requirements that a data grid must satisfy and the components and APIs that will be required in its implementation. We hope that the definition of such an architecture will accelerate progress on petascale data-intensive computing by enabling the integration of currently disjoint approaches, encouraging the deployment of basic enabling technologies, and revealing technology gaps that require further research and development. In addition, we plan to construct a reference implementation for this architecture so as to enable large-scale experimentation.",
"title": ""
},
{
"docid": "82a1285063aadcebd386fac6cb5214f0",
"text": "Programs that take highly-structured files as inputs normally process inputs in stages: syntax parsing, semantic checking, and application execution. Deep bugs are often hidden in the application execution stage, and it is non-trivial to automatically generate test inputs to trigger them. Mutation-based fuzzing generates test inputs by modifying well-formed seed inputs randomly or heuristically. Most inputs are rejected at the early syntax parsing stage. Differently, generation-based fuzzing generates inputs from a specification (e.g., grammar). They can quickly carry the fuzzing beyond the syntax parsing stage. However, most inputs fail to pass the semantic checking (e.g., violating semantic rules), which restricts their capability of discovering deep bugs. In this paper, we propose a novel data-driven seed generation approach, named Skyfire, which leverages the knowledge in the vast amount of existing samples to generate well-distributed seed inputs for fuzzing programs that process highly-structured inputs. Skyfire takes as inputs a corpus and a grammar, and consists of two steps. The first step of Skyfire learns a probabilistic context-sensitive grammar (PCSG) to specify both syntax features and semantic rules, and then the second step leverages the learned PCSG to generate seed inputs. We fed the collected samples and the inputs generated by Skyfire as seeds of AFL to fuzz several open-source XSLT and XML engines (i.e., Sablotron, libxslt, and libxml2). The results have demonstrated that Skyfire can generate well-distributed inputs and thus significantly improve the code coverage (i.e., 20% for line coverage and 15% for function coverage on average) and the bug-finding capability of fuzzers. We also used the inputs generated by Skyfire to fuzz the closed-source JavaScript and rendering engine of Internet Explorer 11. Altogether, we discovered 19 new memory corruption bugs (among which there are 16 new vulnerabilities and received 33.5k USD bug bounty rewards) and 32 denial-of-service bugs.",
"title": ""
},
{
"docid": "4f3d2b869322125a8fad8a39726c99f8",
"text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).",
"title": ""
},
{
"docid": "54ab143dc18413c58c20612dbae142eb",
"text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.",
"title": ""
},
{
"docid": "f25eff52ed3c862f4e18cbcd4a1f1c5b",
"text": "BACKGROUND\nAdolescent girls dwelling in slums are vulnerable to poor reproductive health due to lack of awareness about reproductive health and low life skills. These girls are in a crucial stage of their life cycle and their health can impact the health of future generations. Despite adolescents comprising almost one-quarter of the Indian population they are ill served in terms of reproductive health.\n\n\nMETHODS\nThis cross-sectional study was done among 130 slum-dwelling adolescent girls, aged 13-19 years, using multistage sampling method from five slums in Chennai, southern India. The reproductive and menstrual morbidity profile, personal and environmental menstrual hygiene was assessed to determine their reproductive health-seeking behaviour and life skills.\n\n\nRESULTS\nNinety-five (73%) girls (95% CI 66.23-81.36) reported menstrual morbidity and 66 (51%; 95% CI 50.74-52.25) had symptoms suggestive of reproductive/urinary tract infection. Of the girls surveyed, 55 (42%) were married. Nearly 25% (95% CI 23.07-26.92) of the married girls had a history of abortion and 18% (95% CI 11.32-25.07) had self-treated with medications for the same. Contraceptive use among ever-married girls was 22.7% (95% CI 20.83-24.56). Even though 75% of respondents knew about HIV/AIDS, their knowledge of modes of transmission and prevention were low (39% and 19%, respectively). Almost 39% of respondents felt shame or insecurity as the key barrier for not seeking reproductive healthcare. About 52% had low life skill levels. On logistic regression, menstrual morbidity was high among those with low life skills, symptoms suggestive of reproductive/urinary tract infection were high among those who were married before 14 years of age and life skills were high among those who belonged to the scheduled caste community.\n\n\nCONCLUSION\nThere is a high prevalence of menstrual/reproductive morbidity, self-treated abortion and low knowledge about modes of HIV transmission/prevention and use of contraceptives among adolescent girls in slums in Chennai. There is a need to initiate community-level life skill education, sex education and behaviour change communication.",
"title": ""
},
{
"docid": "6a72b09ce61635254acb0affb1d5496e",
"text": "We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but, these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide solid basis for large-scale evaluation. Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead.",
"title": ""
},
{
"docid": "09f2f2184cb064851238a10d1d661b9e",
"text": "The rapid proliferation of information technologies especially the web 2.0 techniques have changed the fundamental ways how things can be done in many areas, including how researchers could communicate and collaborate with each other. The presence of the sheer volume of researcher and topical research information on the Web has led to the problem of information overload. There is a pressing need to develop researcher recommender systems such that users can be provided with personalized recommendations of the researchers they can potentially collaborate with for mutual research benefits. In an academic context, recommending suitable research partners to researchers can facilitate knowledge discovery and exchange, and ultimately improve the research productivity of both sides. Existing expertise recommendation research usually investigates into the expert finding problem from two independent dimensions, namely, the social relations and the common expertise. The main contribution of this paper is that we propose a novel researcher recommendation approach which combines the two dimensions of social relations and common expertise in a unified framework to improve the effectiveness of personalized researcher recommendation. Moreover, how our proposed framework can be applied to the real-world academic contexts is explained based on two case studies.",
"title": ""
},
{
"docid": "9eee8f4ab2f5dd466ebe541718f445ba",
"text": "PURPOSE\nThis study investigated the effect of muscle stretching during warm-up on the risk of exercise-related injury.\n\n\nMETHODS\n1538 male army recruits were randomly allocated to stretch or control groups. During the ensuing 12 wk of training, both groups performed active warm-up exercises before physical training sessions. In addition, the stretch group performed one 20-s static stretch under supervision for each of six major leg muscle groups during every warm-up. The control group did not stretch.\n\n\nRESULTS\n333 lower-limb injuries were recorded during the training period, including 214 soft-tissue injuries. There were 158 injuries in the stretch group and 175 in the control group. There was no significant effect of preexercise stretching on all-injuries risk (hazard ratio [HR] = 0.95, 95% CI 0.77-1.18), soft-tissue injury risk (HR = 0.83, 95% CI 0.63-1.09), or bone injury risk (HR = 1.22, 95% CI 0.86-1.76). Fitness (20-m progressive shuttle run test score), age, and enlistment date all significantly predicted injury risk (P < 0.01 for each), but height, weight, and body mass index did not.\n\n\nCONCLUSION\nA typical muscle stretching protocol performed during preexercise warm-ups does not produce clinically meaningful reductions in risk of exercise-related injury in army recruits. Fitness may be an important, modifiable risk factor.",
"title": ""
},
{
"docid": "b4719bacbbbce62af85fd5dec1f3fab2",
"text": "The retina, like many other central nervous system structures, contains a huge diversity of neuronal types. Mammalian retinas contain approximately 55 distinct cell types, each with a different function. The census of cell types is nearing completion, as the development of quantitative methods makes it possible to be reasonably confident that few additional types exist. Although much remains to be learned, the fundamental structural principles are now becoming clear. They give a bottom-up view of the strategies used in the retina's processing of visual information and suggest new questions for physiological experiments and modeling.",
"title": ""
},
{
"docid": "b55a0ae61e2b0c36b5143ef2b7b2dbf0",
"text": "This study reports a comparison of screening tests for dyslexia, dyspraxia and Meares-Irlen (M-I) syndrome in a Higher Education setting, the University of Worcester. Using a sample of 74 volunteer students, we compared the current tutor-delivered battery of 15 subtests with a computerized test, the Lucid Adult Dyslexia Screening test (LADS), and both of these with data on assessment outcomes. The sensitivity of this tutor battery was higher than LADS in predicting dyslexia, dyspraxia or M-I syndrome (91% compared with 66%) and its specificity was lower (79% compared with 90%). Stepwise logistic regression on these tests was used to identify a better performing subset of tests, when combined with a change in practice for M-I syndrome screening. This syndrome itself proved to be a powerful discriminator for dyslexia and/or dyspraxia, and we therefore recommend it as the first stage in a two-stage screening process. The specificity and sensitivity of the new battery, the second part of which comprises LADS plus four of the original tutor delivered subtests, provided the best overall performance: 94% sensitivity and 92% specificity. We anticipate that the new two-part screening process would not take longer to complete.",
"title": ""
},
{
"docid": "6d76c28d29438d87a3815bd4029df63f",
"text": "We use the full query set of the TPC-H Benchmark as a case study for the efficient implementation of decision support queries on main memory column-store databases. Instead of splitting a query into separate independent operators, we consider the query as a whole and translate the execution plan into a single function performing the query. This allows highly efficient CPU utilization, minimal materialization, and execution in a single pass over the data for most queries. The single pass is performed in parallel and scales near-linearly with the number of cores. The resulting query plans for most of the 22 queries are remarkably simple and are suited for automatic generation and fast compilation. Using a data-parallel, NUMA-aware many-core implementation with block summaries, inverted index data structures, and efficient aggregation algorithms, we achieve one to two orders of magnitude better performance than the current record holders of the TPC-H Benchmark.",
"title": ""
},
{
"docid": "03e82d63b105a4ffd9af8a5fc473b5ed",
"text": "This paper describes a lumped-element 5-way Wilkinson power divider with broadband characteristics. The circuit contains multi-section LC-ladder circuits between input and output ports, and each output port is connected through series RLC circuits. By designing the divider based on multi-section matching transformer and L-section matching network techniques, the proposed 5-way divider can achieve broadband characteristics. In order to verify the design procedure, the proposed divider was designed and fabricated at a center frequency of 300MHz. The fabricated divider exhibited broadband characteristics with a relative bandwidth of about 75%.",
"title": ""
},
{
"docid": "322dcd68d7467c477c241bedc28fce11",
"text": "The automobile mathematical model is established on the analysis to the automobile electric power steering system (EPS) structural style and the performance. In order to solve the problem that the most automobile power steering is difficult to determine the PID controller parameter, the article uses the fuzzy neural network PID control in EPS. Through the simulation of PID control and the fuzzy neural network PID control computation, the test result indicated that, fuzzy neural network PID the control EPS system has a better robustness compared to traditional PID the control EPS, can improve EPS effectively the steering characteristic and the automobile changes characteristic well.",
"title": ""
},
{
"docid": "e75df6ff31c9840712cf1a4d7f6582cd",
"text": "Endotoxin, a constituent of Gram-negative bacteria, stimulates macrophages to release large quantities of tumor necrosis factor (TNF) and interleukin-1 (IL-1), which can precipitate tissue injury and lethal shock (endotoxemia). Antagonists of TNF and IL-1 have shown limited efficacy in clinical trials, possibly because these cytokines are early mediators in pathogenesis. Here a potential late mediator of lethality is identified and characterized in a mouse model. High mobility group-1 (HMG-1) protein was found to be released by cultured macrophages more than 8 hours after stimulation with endotoxin, TNF, or IL-1. Mice showed increased serum levels of HMG-1 from 8 to 32 hours after endotoxin exposure. Delayed administration of antibodies to HMG-1 attenuated endotoxin lethality in mice, and administration of HMG-1 itself was lethal. Septic patients who succumbed to infection had increased serum HMG-1 levels, suggesting that this protein warrants investigation as a therapeutic target.",
"title": ""
},
{
"docid": "d577fa400a0f15ae7effbf0776d2dc3a",
"text": "In this work the authors propose a Butler Matrix (BM) based beamforming network (BFN) that feeds a linear antenna array. A BM 8×8 is integrated with a specific Switching Network (SN) and a group of Switched-Line Phase Shifters (SLPS), in order to properly drive the excitation of the antenna elements. The resulting configuration provides enhanced beamforming flexibility compared to a Switched Beam System (SBS) fed by a typical BM. Thus, a smart beamforming structure can be designed that combines the simplicity of a SBS with some of the advantages of an Adaptive Array System (AAS).",
"title": ""
},
{
"docid": "b49e21ca1d2cb26c670466a5da421854",
"text": "Topic modelling is a well-studied field that aims to identify topics from traditional documents such as news articles and reports. More recently, Latent Dirichlet Allocation (LDA) and its variants, have been applied on social media platforms to model and study topics relating to sports, politics and companies. While these applications were able to successfully identify the general topics, we posit that standard LDA can be augmented with spatial and temporal considerations based on the geo-coordinates and timestamps of social media posts. Towards this effort, we propose a spatial and temporal variant of LDA to better detect more specific topics, such as a particular art exhibit held at a museum or a security incident happening on a particular day. We validate our approach on a Twitter dataset and find that the detected topics are well-aligned to real-life events happening on the specific days and locations.",
"title": ""
},
{
"docid": "49a54c57984c3feaef32b708ae328109",
"text": "While it has a long history, the last 30 years have brought considerable advances to the discipline of forensic anthropology worldwide. Every so often it is essential that these advances are noticed and trends assessed. It is also important to identify those research areas that are needed for the forthcoming years. The purpose of this special issue is to examine some of the examples of research that might identify the trends in the 21st century. Of the 14 papers 5 dealt with facial features and identification such as facial profile determination and skull-photo superimposition. Age (fetus and cranial thickness), sex (supranasal region, arm and leg bones) and stature (from the arm bones) estimation were represented by five articles. Others discussed the estimation of time since death, skull color and diabetes, and a case study dealing with a mummy and skeletal analysis in comparison with DNA identification. These papers show that age, sex, and stature are still important issues of the discipline. Research on the human face is moving from hit and miss case studies to a more scientifically sound direction. A lack of studies on trauma and taphonomy is very clear. Anthropologists with other scientists can develop research areas to make the identification process more reliable. Research should include the assessment of animal attacks on human remains, factors affecting decomposition rates, and aging of the human face. Lastly anthropologists should be involved in the education of forensic pathologists about osteological techniques and investigators regarding archaeology of crime scenes.",
"title": ""
}
] |
scidocsrr
|
f8ed8843d4d16aa0340e02b9d12559b9
|
Titian: Data Provenance Support in Spark
|
[
{
"docid": "36c4b2ab451c24d2d0d6abcbec491116",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "c5ca8f5d78b001f05b214566f5586193",
"text": "As architecture, systems, and data management communities pay greater attention to innovative big data systems and architecture, the pressure of benchmarking and evaluating these systems rises. However, the complexity, diversity, frequently changed workloads, and rapid evolution of big data systems raise great challenges in big data benchmarking. Considering the broad use of big data systems, for the sake of fairness, big data benchmarks must include diversity of data and workloads, which is the prerequisite for evaluating big data systems and architecture. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purposes mentioned above. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite-BigDataBench not only covers broad application scenarios, but also includes diverse and representative data sets. Currently, we choose 19 big data benchmarks from dimensions of application scenarios, operations/ algorithms, data types, data sources, software stacks, and application types, and they are comprehensive for fairly measuring and evaluating big data systems and architecture. BigDataBench is publicly available from the project home page http://prof.ict.ac.cn/BigDataBench. Also, we comprehensively characterize 19 big data workloads included in BigDataBench with varying data inputs. On a typical state-of-practice processor, Intel Xeon E5645, we have the following observations: First, in comparison with the traditional benchmarks: including PARSEC, HPCC, and SPECCPU, big data applications have very low operation intensity, which measures the ratio of the total number of instructions divided by the total byte number of memory accesses; Second, the volume of data input has non-negligible impact on micro-architecture characteristics, which may impose challenges for simulation-based big data architecture research; Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the numbers of L1 instruction cache (L1I) misses per 1000 instructions (in short, MPKI) of the big data applications are higher than in the traditional benchmarks; also, we find that L3 caches are effective for the big data applications, corroborating the observation in DCBench.",
"title": ""
}
] |
[
{
"docid": "45356e33e51d8d2e2bfb6365d8269a69",
"text": "We survey research on self-driving cars published in the literature focusing on autonomous cars developed since the DARPA challenges, which are equipped with an autonomy system that can be categorized as SAE level 3 or higher. The architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacles mapping, moving obstacles detection and tracking, road mapping, traffic signalization detection and recognition, among others. The decision-making system is commonly partitioned as well into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of the UFES's car, IARA. Finally, we list prominent autonomous research cars developed by technology companies and reported in the media.",
"title": ""
},
{
"docid": "c8be0e643c72c7abea1ad758ac2b49a8",
"text": "Visual attention plays an important role to understand images and demonstrates its effectiveness in generating natural language descriptions of images. On the other hand, recent studies show that language associated with an image can steer visual attention in the scene during our cognitive process. Inspired by this, we introduce a text-guided attention model for image captioning, which learns to drive visual attention using associated captions. For this model, we propose an exemplarbased learning approach that retrieves from training data associated captions with each image, and use them to learn attention on visual features. Our attention model enables to describe a detailed state of scenes by distinguishing small or confusable objects effectively. We validate our model on MSCOCO Captioning benchmark and achieve the state-of-theart performance in standard metrics.",
"title": ""
},
{
"docid": "3c3f79268772b593908825f0ec6b0363",
"text": "A new prototype of an efficiency-improved zero voltage soft-switching (ZVS) high-frequency resonant (HF-R) inverter for induction heating (IH) applications is presented in this paper. By adopting the dual pulse modulation mode (DPMM) that incorporates a submode power regulation scheme such as pulse density modulation, pulse frequency modulation, and asymmetrical pulse width modulation into main one of the resonant current phase angle difference ( θ) control, the IH load power can be widely regulated under the condition of ZVS, while significantly improving the efficiency in the low output power setting. The essential performances on the output power regulations and ZVS operations with the DPMM schemes are demonstrated in an experiment based on a 1 kW-60 kHz laboratory prototype of the ZVS HF-R inverter. The validity of each DPMM scheme is originally compared and evaluated from a practical point of view.",
"title": ""
},
{
"docid": "713f5cb9fad4b4ede3f577350ef69be8",
"text": "Representation learning for networks provides the new way to mine graphs, unfortunately most current researches are limited to homogeneous networks. In reality, most of the graphs we are facing are heterogeneous. Therefore, to be able to represent nodes by considering the semantics of edges and nodes is critical for us to solve real world problems. In this paper, we develop the edge2vec model, which can represent nodes by considering edge semantics. An edge-type transition matrix is initiated from the Expectation-Maximization (EM) framework , and a stochastic gradient descent (SGD) model is leveraged to learn node embedding in a heterogeneous graph incorporating the learned transition matrix afterwards. Edge2vec is verified and evaluated on three medical domain tasks, which are medical entity classification, compound-gene binding prediction, and medical information retrieval. Experimental result shows that by considering edge-types into node embedding learning in heterogeneous graph, edge2vec significantly outperforms the other state-of-art models on all three tasks.",
"title": ""
},
{
"docid": "174cc0eae96aeb79841b1acfb4813dd4",
"text": "In this paper, we study the problem of learning a shallow artificial neural network that best fits a training data set. We study this problem in the over-parameterized regime where the numbers of observations are fewer than the number of parameters in the model. We show that with the quadratic activations, the optimization landscape of training, such shallow neural networks, has certain favorable characteristics that allow globally optimal models to be found efficiently using a variety of local search heuristics. This result holds for an arbitrary training data of input/output pairs. For differentiable activation functions, we also show that gradient descent, when suitably initialized, converges at a linear rate to a globally optimal model. This result focuses on a realizable model where the inputs are chosen i.i.d. from a Gaussian distribution and the labels are generated according to planted weight coefficients.",
"title": ""
},
{
"docid": "253072dcfdf4c417819ce8eee6af886f",
"text": "The majority of theoretical work in machine learning is done under the assumption of exchangeability: essentially, it is assumed that the examples are generated from the same probability distribution independently. This paper is concerned with the problem of testing the exchangeability assumption in the on-line mode: examples are observed one by one and the goal is to monitor on-line the strength of evidence against the hypothesis of exchangeability. We introduce the notion of exchangeability martingales, which are on-line procedures for detecting deviations from exchangeability; in essence, they are betting schemes that never risk bankruptcy and are fair under the hypothesis of exchangeability. Some specific exchangeability martingales are constructed using Transductive Confidence Machine. We report experimental results showing their performance on the USPS benchmark data set of hand-written digits (known to be somewhat heterogeneous); one of them multiplies the initial capital by more than 10; this means that the hypothesis of exchangeability is rejected at the significance level 10−18.",
"title": ""
},
{
"docid": "bdc1d214884770b979161ba709454486",
"text": "The traditional two-stage stochastic programming approach is to minimize the total expected cost with the assumption that the distribution of the random parameters is known. However, in most practices, the actual distribution of the random parameters is not known, and instead, only a series of historical data are available. Thus, the solution obtained from the traditional twostage stochastic program can be biased and suboptimal for the true problem, if the estimated distribution of the random parameters is not accurate, which is usually true when only a limited amount of historical data are available. In this paper, we propose a data-driven risk-averse stochastic optimization approach. Based on the observed historical data, we construct the confidence set of the ambiguous distribution of the random parameters, and develop a riskaverse stochastic optimization framework to minimize the total expected cost under the worstcase distribution within the constructed confidence set. We introduce the Wasserstein metric to construct the confidence set and by using this metric, we can successfully reformulate the risk-averse two-stage stochastic program to its tractable counterpart. In addition, we derive the worst-case distribution and develop efficient algorithms to solve the reformulated problem. Moreover, we perform convergence analysis to show that the risk averseness of the proposed formulation vanishes as the amount of historical data grows to infinity, and accordingly, the corresponding optimal objective value converges to that of the traditional risk-neutral twostage stochastic program. We further precisely derive the convergence rate, which indicates the value of data. Finally, the numerical experiments on risk-averse stochastic facility location and stochastic unit commitment problems verify the effectiveness of our proposed framework.",
"title": ""
},
{
"docid": "69db7c856f0b8754d20d61e909cab337",
"text": "In this paper, we introduce a Matlab-based toolbox called OPTIPLAN, which is intended to formulate, solve and simulate problems of obstacle avoidance based on model predictive control (MPC). The main goal of the toolbox is that it allows the users to simply set up even complex control problems without loss in efficiency only in few lines of code. Slow mathematical and technical details are fully automated allowing researchers to focus on problem formulation. It can easily perform MPC based closed-loop simulations followed by fetching visualizations of the results. From the theoretical point of view, non-convex obstacle avoidance constraints are tackled in two ways in OPTIPLAN: either by solving mixed-integer program using binary variables, or using time-varying constraints, which leads to a suboptimal solution, but the problem remains convex.",
"title": ""
},
{
"docid": "2a811ac141a9c5fb0cea4b644b406234",
"text": "Leadership is a process influence between leaders and subordinates where a leader attempts to influence the behaviour of subordinates to achieve the organizational goals. Organizational success in achieving its goals and objectives depends on the leaders of the organization and their leadership styles. By adopting the appropriate leadership styles, leaders can affect employee job satisfaction, commitment and productivity. Two hundred Malaysian executives working in public sectors voluntarily participated in this study. Two types of leadership styles, namely, transactional and transformational were found to have direct relationships with employees’ job satisfaction. The results showed that transformational leadership style has a stronger relationship with job satisfaction. This implies that transformational leadership is deemed suitable for managing government organizations. Implications of the findings were discussed further.",
"title": ""
},
{
"docid": "90b913e3857625f3237ff7a47f675fbb",
"text": "A new approach for the design of UWB hairpin-comb filters is presented. The filters can be designed to possess broad upper stopband characteristics by controlling the overall size of their resonators. The measured frequency characteristics of implemented UWB filters show potential first spurious passbands centered at about six times the fundamental passband center frequencies.",
"title": ""
},
{
"docid": "a286f9f594ef563ba082fb454eddc8bc",
"text": "The visual inspection of Mura defects is still a challenging task in the quality control of panel displays because of the intrinsically nonuniform brightness and blurry contours of these defects. The current methods cannot detect all Mura defect types simultaneously, especially small defects. In this paper, we introduce an accurate Mura defect visual inspection (AMVI) method for the fast simultaneous inspection of various Mura defect types. The method consists of two parts: an outlier-prejudging-based image background construction (OPBC) algorithm is proposed to quickly reduce the influence of image backgrounds with uneven brightness and to coarsely estimate the candidate regions of Mura defects. Then, a novel region-gradient-based level set (RGLS) algorithm is applied only to these candidate regions to quickly and accurately segment the contours of the Mura defects. To demonstrate the performance of AMVI, several experiments are conducted to compare AMVI with other popular visual inspection methods are conducted. The experimental results show that AMVI tends to achieve better inspection performance and can quickly and accurately inspect a greater number of Mura defect types, especially for small and large Mura defects with uneven backlight. Note to Practitioners—The traditional Mura visual inspection method can address only medium-sized Mura defects, such as region Mura, cluster Mura, and vertical-band Mura, and is not suitable for small Mura defects, for example, spot Mura. The proposed accurate Mura defect visual inspection (AMVI) method can accurately and simultaneously inspect not only medium-sized Mura defects but also small and large Mura defects. The proposed outlier-prejudging-based image background construction (OPBC) algorithm of the AMVI method is employed to improve the Mura true detection rate, while the proposed region-gradient-based level set (RGLS) algorithm is used to reduce the Mura false detection rate. Moreover, this method can be applied to online vision inspection: OPBC can be implemented in parallel processing units, while RGLS is applied only to the candidate regions of the inspected image. In addition, AMVI can be extended to other low-contrast defect vision inspection tasks, such as the inspection of glass, steel strips, and ceramic tiles.",
"title": ""
},
{
"docid": "caea7a535cd5994aeea15293d1bae90a",
"text": "In this paper, the design and development of a novel interleaved tri-state boost converter (ITBC), which produces lower ripple and exhibits better dynamic response, is discussed. Boost converters are frequently connected in parallel and operate in an interleaving mode for the reduction of ripple content in source current and in output voltage. In this way, interleaved boost converter (IBC) is conceived, which improves the power handling capabilities and increases the overall system rating. It also has the advantage of reduction of the ripple content in source current and output voltage, but when control-to-output transfer function of IBC is derived under continuous conduction mode of operation, then a right-half-plane (RHP) zero appears in the transfer function. Due to the presence of RHP zero, IBC has nonminimum phase problem, which deteriorates the dynamic performance. The tri-state boost converter (TBC) is the best choice for RHP zero elimination, but due to the extra freewheeling mode, ripple content will also be increased. The proposed converter is a parallel combination of two TBC and operates in an interleaving mode. Therefore, the proposed converter has both of the advantages of TBC and IBC. The performance analyses of ITBC, TBC, and IBC have been studied based on simulation and experimental results. From the comparative analysis, it is observed that ITBC is performed better than other two converters. The ripple comparisons between three converters have also been done. It is found that the ripple content in ITBC is slightly greater than IBC but is less than TBC.",
"title": ""
},
{
"docid": "ba461f1698bc2b2e5aee756c45d5dd4e",
"text": "Context-sensitive guidance (CSG) can help users make better security decisions. Applications with CSG ask the user to provide relevant context information. Based on such information, these applications then decide or suggest an appropriate course of action. However, users often deem security dialogs irrelevant to the tasks they are performing and try to evade them. This paper contributes two new techniques for hardening CSG against automatic and false user answers. Polymorphic dialogs continuously change the form of required user inputs and intentionally delay the latter, forcing users to pay attention to security decisions. Audited dialogs thwart false user answers by (1) warning users that their answers will be forwarded to auditors, and (2) allowing auditors to quarantine users who provide unjustified answers. We implemented CSG against email-borne viruses on the Thunderbird email agent. One version, CSG-PD, includes CSG and polymorphic dialogs. Another version, CSG-PAD, includes CSG and both polymorphic and audited dialogs. In user studies, we found that untrained users accept significantly less unjustified risks with CSG-PD than with conventional dialogs. Moreover, they accept significantly less unjustified risks with CSG-PAD than with CSG-PD. CSG-PD and CSG-PAD have insignificant effect on acceptance of justified risks.",
"title": ""
},
{
"docid": "828f0751e1a49ac95ed0305ba310f5e4",
"text": "In recent years, the advancements in Information and Communication Technology (ICT) are mainly focused on the Internet of Things (IoT). In a real-world scenario, IoT based services improve the domestic environment and are used in various applications. Home automation based IoT is versatile and popular applications. In home automation, all home appliances are networked together and able to operate without human involvement. Home automation gives a significant change in humans life which gives smart operating of home appliances. This motivated us to develop a new solution which controls some home appliances like light, fan, door cartons, energy consumption, and level of the Gas cylinder using various sensors like LM35, IR sensors, LDR module, Node MCU ESP8266, and Arduino UNO. The proposed solution uses the sensor and detects the presence or absence of a human object in the housework accordingly. Our solution also provides information about the energy consumed by the house owner regularly in the form of message. Also, it checks, the level of gas in the gas cylinder if it reaches lesser than the threshold, it automatically books the gas and sends a reference number as a message to the house owner. The proposed solution is deployed and tested for various conditions. Finally, in this paper, the working model of our proposed solution is developed as a prototype and explained as a working model.",
"title": ""
},
{
"docid": "9eaab923986bf74bdd073f6766ca45b2",
"text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"title": ""
},
{
"docid": "24e980e722f2ef10206fa8bd5bee6ef9",
"text": "A growing body of literature suggests that virtual reality is a successful tool for exposure therapy in the treatment of anxiety disorders. Virtual reality (VR) researchers posit the construct of presence, defined as the interpretation of an artificial stimulus as if it were real, to be a presumed factor that enables anxiety to be felt during virtual reality exposure therapy (VRE). However, a handful of empirical studies on the relation between presence and anxiety in VRE have yielded mixed findings. The current study tested the following hypotheses about the relation between presence and anxiety in VRE with a clinical sample of fearful flyers: (1) presence is related to in-session anxiety; (2) presence mediates the extent that pre-existing (pre-treatment) anxiety is experienced during exposure with VR; (3) presence is positively related to the amount of phobic elements included within the virtual environment; (4) presence is related to treatment outcome. Results supported presence as a factor that contributes to the experience of anxiety in the virtual environment as well as a relation between presence and the phobic elements, but did not support a relation between presence and treatment outcome. The study suggests that presence may be a necessary but insufficient requirement for successful VRE.",
"title": ""
},
{
"docid": "ddcf9180119dfa0b26d7b6d4c0ed958e",
"text": "BACKGROUND\nHandling of upper lateral cartilages (ULCs) is of prime importance in rhinoplasty. This study presents the experiences among 2500 cases of rhinoplasty in the past 10 years for managing of ULCs to minimize unwilling results of the shape and functional problems of the nose.\n\n\nMETHODS\nAll cases of rhinoplasties were done by the same surgeon from 2002 to 2013. Management of ULCs changed from resection to preserving the ULCs and to enhance their structural and functional roles. The techniques were spreader grafts, suturing of ULC together at the level or above the septum, using ULCs as auto-spreader flaps and very rarely trimming of ULCs unilaterally or bilaterally for making symmetric dorsal aesthetic lines. Fifty cases were operated based on this classification. Most cases were in type II and III. There were 7 cases in type I and 8 cases in type IV.\n\n\nRESULTS\nAmong most cases, the results were satisfactory although there were 8 cases for revision and among them, 2 cases had some fullness on dorsum and supra-tip because of inappropriate judgment on keeping the relationship between dorsum and tip. The problems in the shape and airways role of the nose reduced dramatically and a useful algorithm was presented.\n\n\nCONCLUSION\nULCs have great important roles in shape and function of nose. Preserving methods to keep these structures are of importance in surgical treatments of primary rhinoplasties. The presented algorithm helps to manage the ULCs in different anatomic types of the noses especially for surgeons who are in learning curve period.",
"title": ""
},
{
"docid": "2675b10d79ab7831550cd901ac81eec9",
"text": "This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.",
"title": ""
},
{
"docid": "9a6da0c3e93683eb36e3e489d7e3f4ce",
"text": "Speech and language processing technology has the potential of playing an important role in future deep space missions. To be able to replicate the success of speech technologies from ground to space, it is important to understand how astronaut’s speech production mechanism changes when they are in space. In this study, we investigate the variations of astronaut’s voice characteristic during NASA Apollo 11 mission. While the focus is constrained to analysis of the three astronauts voices who participated in the Apollo 11 mission, it is the first step towards our long term objective of automating large components of space missions with speech and language technology. The result of this study is also significant from an historical point of view as it provides a new perspective of understanding the key moment of human history landing a man on the moon, as well as employed for future advancement in speech and language technology in “non-neutral”conditions.",
"title": ""
},
{
"docid": "a208f2a2720313479773c00a74b1cbc6",
"text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.",
"title": ""
}
] |
scidocsrr
|
c5043a221506c8045022f1208afa67dc
|
Probabilistic Model Checking for Complex Cognitive Tasks - A case study in human-robot interaction
|
[
{
"docid": "5545b3b7d24b9f5b9298f5779166ca01",
"text": "In a large variety of situations one would like to have an expressive and accurate model of observed animal or human behavior. While general purpose mathematical models may capture successfully properties of observed behavior, it is desirable to root models in biological facts. Because of ample empirical evidence for reward-based learning in visuomotor tasks, we use a computational model based on the assumption that the observed agent is balancing the costs and benefits of its behavior to meet its goals. This leads to using the framework of reinforcement learning, which additionally provides well-established algorithms for learning of visuomotor task solutions. To quantify the agent’s goals as rewards implicit in the observed behavior, we propose to use inverse reinforcement learning, which quantifies the agent’s goals as rewards implicit in the observed behavior. Based on the assumption of a modular cognitive architecture, we introduce a modular inverse reinforcement learning algorithm that estimates the relative reward contributions of the component tasks in navigation, consisting of following a path while avoiding obstacles and approaching targets. It is shown how to recover the component reward weights for individual tasks and that variability in observed trajectories can be explained succinctly through behavioral goals. It is demonstrated through simulations that good estimates can be obtained already with modest amounts of observation data, which in turn allows the prediction of behavior in novel configurations.",
"title": ""
},
{
"docid": "4d857311f86baca70700bb78c8771f22",
"text": "Randomization is a key element in sequential and distributed computing. Reasoning about randomized algorithms is highly non-trivial. In the 1980s, this initiated first proof methods, logics, and model-checking algorithms. The field of probabilistic verification has developed considerably since then. This paper surveys the algorithmic verification of probabilistic models, in particular probabilistic model checking. We provide an informal account of the main models, the underlying algorithms, applications from reliability and dependability analysis---and beyond---and describe recent developments towards automated parameter synthesis.",
"title": ""
}
] |
[
{
"docid": "b2bfcd7d72bd9d774add0008dcab86c4",
"text": "Titanium dioxide nanoparticles, obtained using the sol-gel method and modified with organic solvents, such as acetone, acetonitrile, benzene, diethyl ether, dimethyl sulfoxide, toluene, and chloroform, were used as the filler of polydimethylsiloxane-based electrorheological fluids. The effect of electric field strength on the shear stress and yield stress of electrorheological fluids was investigated, as well as the spectra of their dielectric relaxation in the frequency range from 25 to 106 Hz. Modification of titanium dioxide by polar molecules was found to enhance the electrorheological effect, as compared with unmodified TiO2, in accordance with the widely accepted concept of polar molecule dominated electrorheological effect (PM-ER). The most unexpected result of this study was an increase in the electrorheological effect during the application of nonpolar solvents with zero or near-zero dipole moments as the modifiers. It is suggested that nonpolar solvents, besides providing additional polarization effects at the filler particles interface, alter the internal pressure in the gaps between the particles. As a result, the filler particles are attracted to one another, leading to an increase in their aggregation and the formation of a network of bonds between the particles through liquid bridge contacts. Such changes in the electrorheological fluid structure result in a significant increase in the mechanical strength of the structures that arise when an electric field is applied, and an increase in the observed electrorheological effect in comparison with the unmodified titanium dioxide.",
"title": ""
},
{
"docid": "1a2b8e09251e6b041d40da157051e61c",
"text": "Abstract. Unmanned ground vehicles have important applications in high speed, rough terrain scenarios. In these scenarios unexpected and dangerous situations can occur that require rapid hazard avoidance maneuvers. At high speeds, there is limited time to perform navigation and hazard avoidance calculations based on detailed vehicle and terrain models. This paper presents a method for high speed hazard avoidance based on the “trajectory space,” which is a compact model-based representation of a robot’s dynamic performance limits in rough, natural terrain. Simulation and experimental results on a small gasoline-powered unmanned ground vehicle demonstrate the method’s effectiveness on sloped and rough terrain.",
"title": ""
},
{
"docid": "6021388395ddd784422a22d30dac8797",
"text": "Introduction: The European Directive 2013/59/EURATOM requires patient radiation dose information to be included in the medical report of radiological procedures. To provide effective communication to the patient, it is necessary to first assess the patient's level of knowledge regarding medical exposure. The goal of this work is to survey patients’ current knowledge level of both medical exposure to ionizing radiation and professional disciplines and communication means used by patients to garner information. Material and Methods: A questionnaire was designed comprised of thirteen questions: 737 patients participated in the survey. The data were analysed based on population age, education, and number of radiological procedures received in the three years prior to survey. Results: A majority of respondents (56.4%) did not know which modality uses ionizing radiation. 74.7% had never discussed with healthcare professionals the risk concerning their medical radiological procedures. 70.1% were not aware of the professionals that have expertise to discuss the use of ionizing radiation for medical purposes, and 84.7% believe it is important to have the radiation dose information stated in the medical report. Conclusion: Patients agree with new regulations that it is important to know the radiation level related to the medical exposure, but there is little awareness in terms of which modalities use X-Rays and the professionals and channels that can help them to better understand the exposure information. To plan effective communication, it is essential to devise methods and adequate resources for key professionals (medical physicists, radiologists, referring physicians) to convey correct and effective information.",
"title": ""
},
{
"docid": "bd8788c3d4adc5f3671f741e884c7f34",
"text": "We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.",
"title": ""
},
{
"docid": "e16f1b1d4b583f5d198eac8d01d12c48",
"text": "Mathematical models have been widely used in the studies of biological signaling pathways. Among these studies, two systems biology approaches have been applied: top-down and bottom-up systems biology. The former approach focuses on X-omics researches involving the measurement of experimental data in a large scale, for example proteomics, metabolomics, or fluxomics and transcriptomics. In contrast, the bottom-up approach studies the interaction of the network components and employs mathematical models to gain some insights about the mechanisms and dynamics of biological systems. This chapter introduces how to use the bottom-up approach to establish mathematical models for cell signaling studies.",
"title": ""
},
{
"docid": "0cdbc09691a2c4fc87ab10487d1627df",
"text": "<italic>Objective:</italic> Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of –omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of healthcare. <italic>Methods:</italic> In this paper, we present –omic and EHR data characteristics, associated challenges, and data analytics including data preprocessing, mining, and modeling. <italic>Results:</italic> To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating –omic information into EHR. <italic>Conclusion: </italic> Big data analytics is able to address –omic and EHR data challenges for paradigm shift toward precision medicine. <italic>Significance:</italic> Big data analytics makes sense of –omic and EHR data to improve healthcare outcome. It has long lasting societal impact.",
"title": ""
},
{
"docid": "885bf946dbbfc462cd066794fe486da3",
"text": "Efficient implementation of block cipher is important on the way to achieving high efficiency with good understand ability. Numerous number of block cipher including Advance Encryption Standard have been implemented using different platform. However the understanding of the AES algorithm step by step is very complicated. This paper presents the implementation of AES algorithm and explains Avalanche effect with the help of Avalanche test result. For this purpose we use Xilinx ISE 9.1i platform in Algorithm development and ModelSim SE 6.3f platform for results confirmation and computation.",
"title": ""
},
{
"docid": "c15492fea3db1af99bc8a04bdff71fdc",
"text": "The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for faults. Empirical studies that compare these techniques have reported the relative effectiveness of four existing techniques on a set of subjects. These studies compare the rankings that the techniques compute for statements in the subject programs and the effectiveness of these rankings in locating the faults. However, it is unknown how these four techniques compare with Tarantula, another existing fault-localization technique, although this technique also provides a way to rank statements in terms of their suspiciousness. Thus, we performed a study to compare the Tarantula technique with the four techniques previously compared. This paper presents our study---it overviews the Tarantula technique along with the four other techniques studied, describes our experiment, and reports and discusses the results. Our studies show that, on the same set of subjects, the Tarantula technique consistently outperforms the other four techniques in terms of effectiveness in fault localization, and is comparable in efficiency to the least expensive of the other four techniques.",
"title": ""
},
{
"docid": "e83e6284d3c9cf8fddf972a25d869a1b",
"text": "Internet-based learning systems are being used in many universities and firms but their adoption requires a solid understanding of the user acceptance processes. Our effort used an extended version of the technology acceptance model (TAM), including cognitive absorption, in a formal empirical study to explain the acceptance of such systems. It was intended to provide insight for improving the assessment of on-line learning systems and for enhancing the underlying system itself. The work involved the examination of the proposed model variables for Internet-based learning systems acceptance. Using an on-line learning system as the target technology, assessment of the psychometric properties of the scales proved acceptable and confirmatory factor analysis supported the proposed model structure. A partial-least-squares structural modeling approach was used to evaluate the explanatory power and causal links of the model. Overall, the results provided support for the model as explaining acceptance of an on-line learning system and for cognitive absorption as a variable that influences TAM variables. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "558abc8028d1d5b6956d2cf046efb983",
"text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.",
"title": ""
},
{
"docid": "199df544c19711fbee2dd49e60956243",
"text": "Languages vary strikingly in how they encode motion events. In some languages (e.g. English), manner of motion is typically encoded within the verb, while direction of motion information appears in modifiers. In other languages (e.g. Greek), the verb usually encodes the direction of motion, while the manner information is often omitted, or encoded in modifiers. We designed two studies to investigate whether these language-specific patterns affect speakers' reasoning about motion. We compared the performance of English and Greek children and adults (a) in nonlinguistic (memory and categorization) tasks involving motion events, and (b) in their linguistic descriptions of these same motion events. Even though the two linguistic groups differed significantly in terms of their linguistic preferences, their performance in the nonlinguistic tasks was identical. More surprisingly, the linguistic descriptions given by subjects within language also failed to correlate consistently with their memory and categorization performance in the relevant regards. For the domain studied, these results are consistent with the view that conceptual development and organization are largely independent of language-specific labeling practices. The discussion emphasizes that the necessarily sketchy nature of language use assures that it will be at best a crude index of thought.",
"title": ""
},
{
"docid": "cde1419d6b4912b414a3c83139dc3f06",
"text": "This book results from a decade of presenting the user-centered design (UCD) methodology for hundreds of companies (p. xxiii) and appears to be the book complement to the professional development short course. Its purpose is to encourage software developers to focus on the total user experience of software products during the whole of the development cycle. The notion of the “total user experience” is valuable because it focuses attention on the whole product-use cycle, from initial awareness through productive use.",
"title": ""
},
{
"docid": "334e29faadafff9a0d6e0017ea1d2fef",
"text": "OBJECTIVES\nTo provide typical examples of biomedical ontologies in action, emphasizing the role played by biomedical ontologies in knowledge management, data integration and decision support.\n\n\nMETHODS\nBiomedical ontologies selected for their practical impact are examined from a functional perspective. Examples of applications are taken from operational systems and the biomedical literature, with a bias towards recent journal articles.\n\n\nRESULTS\nThe ontologies under investigation in this survey include SNOMED CT, the Logical Observation Identifiers, Names, and Codes (LOINC), the Foundational Model of Anatomy, the Gene Ontology, RxNorm, the National Cancer Institute Thesaurus, the International Classification of Diseases, the Medical Subject Headings (MeSH) and the Unified Medical Language System (UMLS). The roles played by biomedical ontologies are classified into three major categories: knowledge management (indexing and retrieval of data and information, access to information, mapping among ontologies); data integration, exchange and semantic interoperability; and decision support and reasoning (data selection and aggregation, decision support, natural language processing applications, knowledge discovery).\n\n\nCONCLUSIONS\nOntologies play an important role in biomedical research through a variety of applications. While ontologies are used primarily as a source of vocabulary for standardization and integration purposes, many applications also use them as a source of computable knowledge. Barriers to the use of ontologies in biomedical applications are discussed.",
"title": ""
},
{
"docid": "f60f04f117c835b6b074fc9bed5d9226",
"text": "Personal photographs are being captured in digital form at an accelerating rate, and our computational tools for searching, browsing, and sharing these photos are struggling to keep pace. One promising approach is automatic face recognition, which would allow photos to be organized by the identities of the individuals they contain. However, achieving accurate recognition at the scale of the Web requires discriminating among hundreds of millions of individuals and would seem to be a daunting task. This paper argues that social network context may be the key for large-scale face recognition to succeed. Many personal photographs are shared on the Web through online social network sites, and we can leverage the resources and structure of such social networks to improve face recognition rates on the images shared. Drawing upon real photo collections from volunteers who are members of a popular online social network, we asses the availability of resources to improve face recognition and discuss techniques for applying these resources.",
"title": ""
},
{
"docid": "5b9d26fc8b5c45a26377885f75c0f509",
"text": "Background: The objective of this study is to assess the feasibility of aprimary transfistula anorectoplasty (TFARP) in congenital recto-vestibular fistula without a covering colostomy in the north of Iraq. Patients and Methods: Female patients having imperforate anus with congenital rectovestibular fistula presenting to pediatric surgical centres in the north of Iraq (Mosul & Erbil) between 1995 to 2011 were reviewed in a nonrandomized manner, after excluding those with pouch colon, rectovaginal fistula and patients with colostomy. All cases underwent one stage primary (TFARP) anorectoplasty at age between 1-30 months, after on table rectal irrigation with normal saline & povidoneIodine. They were kept nil by mouth until 24 hours postoperatively. Postoperative regular anal dilatation were commenced after 2 weeks of operation when needed. The results were evaluated for need of bowel preparation, duration of surgery,, cosmetic appearance, commencement of feed and hospital stay,postoperative results. Patients were also followed up for assessment of continence and anal dilatation.",
"title": ""
},
{
"docid": "6d43a207fa5483dc47d4c52467c3d159",
"text": "Cardiovascular disease (CVD) is now the leading cause of death globally and is a growing health concern. Dietary factors are important in the pathogenesis of CVD and may to a large degree determine CVD risk, but have been less extensively investigated. Functional foods are those that are thought to have physiological benefits and/or reduce the risk of chronic disease beyond their basic nutritional functions. The food industry has started to market products labelled as \"functional foods.\" Although many review articles have focused on individual dietary variables as determinants of CVD that can be modified to reduce the risk of CVD, the aim of this current paper was to examine the impact of functional foods in relation to the development and progression of CVD. Epidemiologic studies have demonstrated the association between certain dietary patterns and cardiovascular health. Research into the cardio-protective potential of their dietary components might support the development of functional foods and nutraceuticals. This paper will also compare the effect of individual bioactive dietary compounds with the effect of some dietary patterns in terms of their cardiovascular protection.",
"title": ""
},
{
"docid": "c83db87d7ac59e1faf75b408953e1324",
"text": "PURPOSE\nThis project was conducted to obtain information about reading problems of adults with traumatic brain injury (TBI) with mild-to-moderate cognitive impairments and to investigate how these readers respond to reading comprehension strategy prompts integrated into digital versions of text.\n\n\nMETHOD\nParticipants from 2 groups, adults with TBI (n = 15) and matched controls (n = 15), read 4 different 500-word expository science passages linked to either a strategy prompt condition or a no-strategy prompt condition. The participants' reading comprehension was evaluated using sentence verification and free recall tasks.\n\n\nRESULTS\nThe TBI and control groups exhibited significant differences on 2 of the 5 reading comprehension measures: paraphrase statements on a sentence verification task and communication units on a free recall task. Unexpected group differences were noted on the participants' prerequisite reading skills. For the within-group comparison, participants showed significantly higher reading comprehension scores on 2 free recall measures: words per communication unit and type-token ratio. There were no significant interactions.\n\n\nCONCLUSION\nThe results help to elucidate the nature of reading comprehension in adults with TBI with mild-to-moderate cognitive impairments and endorse further evaluation of reading comprehension strategies as a potential intervention option for these individuals. Future research is needed to better understand how individual differences influence a person's reading and response to intervention.",
"title": ""
},
{
"docid": "c2756af71724249b458ffdf7a49c4060",
"text": "Objectives. Cooccurring psychiatric disorders influence the outcome and prognosis of gender dysphoria. The aim of this study is to assess psychiatric comorbidities in a group of patients. Methods. Eighty-three patients requesting sex reassignment surgery (SRS) were recruited and assessed through the Persian Structured Clinical Interview for DSM-IV Axis I disorders (SCID-I). Results. Fifty-seven (62.7%) patients had at least one psychiatric comorbidity. Major depressive disorder (33.7%), specific phobia (20.5%), and adjustment disorder (15.7%) were the three most prevalent disorders. Conclusion. Consistent with most earlier researches, the majority of patients with gender dysphoria had psychiatric Axis I comorbidity.",
"title": ""
},
{
"docid": "c6ab3d07e068637082b88160ca2f4988",
"text": "This paper focuses on the design of a real-time particle-swarm-optimization-based proportional-integral-differential (PSO-PID) control scheme for the levitated balancing and propulsive positioning of a magnetic-levitation (maglev) transportation system. The dynamic model of a maglev transportation system, including levitated electromagnets and a propulsive linear induction motor based on the concepts of mechanical geometry and motion dynamics, is first constructed. The control objective is to design a real-time PID control methodology via PSO gain selections and to directly ensure the stability of the controlled system without the requirement of strict constraints, detailed system information, and auxiliary compensated controllers despite the existence of uncertainties. The effectiveness of the proposed PSO-PID control scheme for the maglev transportation system is verified by numerical simulations and experimental results, and its superiority is indicated in comparison with PSO-PID in previous literature and conventional sliding-mode (SM) control strategies. With the proposed PSO-PID control scheme, the controlled maglev transportation system possesses the advantages of favorable control performance without chattering phenomena in SM control and robustness to uncertainties superior to fixed-gain PSO-PID control.",
"title": ""
},
{
"docid": "b19aab238e0eafef52974a87300750a3",
"text": "This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in the operation. A generic model based on fatigue analysis is provided and its parameters adaptation is discussed in detail. The scheme provides the probability of abnormal condition and the presence of a fault is confirmed for a given confidence level. The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.",
"title": ""
}
] |
scidocsrr
|
df2d95312d5e1f11d73da89f9e4bdfe9
|
Aggressive language in an online hacking forum
|
[
{
"docid": "0034edb604e5196b18c550353ffe9ea9",
"text": "As the body of research on abusive language detection and analysis grows, there is a need for critical consideration of the relationships between different subtasks that have been grouped under this label. Based on work on hate speech, cyberbullying, and online abuse we propose a typology that captures central similarities and differences between subtasks and we discuss its implications for data annotation and feature construction. We emphasize the practical actions that can be taken by researchers to best approach their abusive language detection subtask of interest.",
"title": ""
},
{
"docid": "8f29a231b801a018a6d18befc0d06d0b",
"text": "The paper introduces a deep learningbased Twitter hate-speech text classification system. The classifier assigns each tweet to one of four predefined categories: racism, sexism, both (racism and sexism) and non-hate-speech. Four Convolutional Neural Network models were trained on resp. character 4-grams, word vectors based on semantic information built using word2vec, randomly generated word vectors, and word vectors combined with character n-grams. The feature set was down-sized in the networks by maxpooling, and a softmax function used to classify tweets. Tested by 10-fold crossvalidation, the model based on word2vec embeddings performed best, with higher precision than recall, and a 78.3% F-score.",
"title": ""
},
{
"docid": "79ece5e02742de09b01908668383e8f2",
"text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.",
"title": ""
},
{
"docid": "2aade03834c6db2ecc2912996fd97501",
"text": "User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers.",
"title": ""
},
{
"docid": "a89761358ab819ff110458948a6af44d",
"text": "Automatic abusive language detection is a difficult but important task for online social media. Our research explores a twostep approach of performing classification on abusive language and then classifying into specific types and compares it with one-step approach of doing one multi-class classification for detecting sexist and racist languages. With a public English Twitter corpus of 20 thousand tweets in the type of sexism and racism, our approach shows a promising performance of 0.827 Fmeasure by using HybridCNN in one-step and 0.824 F-measure by using logistic regression in two-steps.",
"title": ""
},
{
"docid": "11f8f76c8bf3dc28ce685e9cf3e92c8e",
"text": "The damage personal attacks make to online discourse motivates many platforms to try to curb the phenomenon. However, understanding the prevalence and impact of personal attacks in online platforms at scale remains surprisingly difficult. The contribution of this paper is to develop and illustrate a method that combines crowdsourcing and machine learning to analyze personal attacks at scale. We show an evaluation method for a classifier in terms of the aggregated number of crowd-workers it can approximate. We apply our methodology to English Wikipedia, generating a corpus of over 100k high quality human-labeled comments and 63M machine-labeled ones from a classifier that is as good as the aggregate of 3 crowd-workers. Using the corpus of machine-labeled scores, our methodology allows us to explore some of the open questions about the nature of online personal attacks. This reveals that the majority of personal attacks on Wikipedia are not the result of a few malicious users, nor primarily the consequence of allowing anonymous contributions.",
"title": ""
}
] |
[
{
"docid": "9e79f1d03dd8d14f22b966fa83d8d5c5",
"text": "Hashing has become an increasingly popular technique for fast nearest neighbor search. Despite its successful progress in classic pointto-point search, there are few studies regarding point-to-hyperplane search, which has strong practical capabilities of scaling up applications like active learning with SVMs. Existing hyperplane hashing methods enable the fast search based on randomly generated hash codes, but still suffer from a low collision probability and thus usually require long codes for a satisfying performance. To overcome this problem, this paper proposes a multilinear hyperplane hashing that generates a hash bit using multiple linear projections. Our theoretical analysis shows that with an even number of random linear projections, the multilinear hash function possesses strong locality sensitivity to hyperplane queries. To leverage its sensitivity to the angle distance, we further introduce an angular quantization based learning framework for compact multilinear hashing, which considerably boosts the search performance with less hash bits. Experiments with applications to large-scale (up to one million) active learning on two datasets demonstrate the overall superiority of the proposed approach.",
"title": ""
},
{
"docid": "db1d5903d2d49d995f5d3b6dd0681323",
"text": "Diffusion tensor imaging (DTI) is an exciting new MRI modality that can reveal detailed anatomy of the white matter. DTI also allows us to approximate the 3D trajectories of major white matter bundles. By combining the identified tract coordinates with various types of MR parameter maps, such as T2 and diffusion properties, we can perform tract-specific analysis of these parameters. Unfortunately, 3D tract reconstruction is marred by noise, partial volume effects, and complicated axonal structures. Furthermore, changes in diffusion anisotropy under pathological conditions could alter the results of 3D tract reconstruction. In this study, we created a white matter parcellation atlas based on probabilistic maps of 11 major white matter tracts derived from the DTI data from 28 normal subjects. Using these probabilistic maps, automated tract-specific quantification of fractional anisotropy and mean diffusivity were performed. Excellent correlation was found between the automated and the individual tractography-based results. This tool allows efficient initial screening of the status of multiple white matter tracts.",
"title": ""
},
{
"docid": "9e1cefe8c58774ea54b507a3702f825f",
"text": "Organizations and individuals are increasingly impacted by misuses of information that result from security lapses. Most of the cumulative research on information security has investigated the technical side of this critical issue, but securing organizational systems has its grounding in personal behavior. The fact remains that even with implementing mandatory controls, the application of computing defenses has not kept pace with abusers’ attempts to undermine them. Studies of information security contravention behaviors have focused on some aspects of security lapses and have provided some behavioral recommendations such as punishment of offenders or ethics training. While this research has provided some insight on information security contravention, they leave incomplete our understanding of the omission of information security measures among people who know how to protect their systems but fail to do so. Yet carelessness with information and failure to take available precautions contributes to significant civil losses and even to crimes. Explanatory theory to guide research that might help to answer important questions about how to treat this omission problem lacks empirical testing. This empirical study uses protection motivation theory to articulate and test a threat control model to validate assumptions and better understand the ‘‘knowing-doing” gap, so that more effective interventions can be developed. 2008 Elsevier Ltd. All rights reserved. d. All rights reserved. Workman), [email protected] (W.H. Bommer), [email protected] 2800 M. Workman et al. / Computers in Human Behavior 24 (2008) 2799–2816",
"title": ""
},
{
"docid": "58984ddb8d4c28dc63caa29bc245e259",
"text": "OpenCL is an open standard to write parallel applications for heterogeneous computing systems. Since its usage is restricted to a single operating system instance, programmers need to use a mix of OpenCL and MPI to program a heterogeneous cluster. In this paper, we introduce an MPI-OpenCL implementation of the LINPACK benchmark for a cluster with multi-GPU nodes. The LINPACK benchmark is one of the most widely used benchmark applications for evaluating high performance computing systems. Our implementation is based on High Performance LINPACK (HPL) and uses the blocked LU decomposition algorithm. We address that optimizations aimed at reducing the overhead of CPUs are necessary to overcome the performance gap between the CPUs and the multiple GPUs. Our LINPACK implementation achieves 93.69 Tflops (46 percent of the theoretical peak) on the target cluster with 49 nodes, each node containing two eight-core CPUs and four GPUs.",
"title": ""
},
{
"docid": "80d0cfb5a0ac803061869fc56a4beb16",
"text": "The digital-sharing economy presents opportunities for individuals to find temporary employment, generate extra income, increase reciprocity, enhance social interaction, and access resources not otherwise attainable. Although the sharing economy is profitable, little is known about its use among the unemployed or those struggling financially. This paper describes the results of a participatory-design based workshop to investigate the perception and feasibility of finding temporary employment and sharing spare resources using sharing-economy applications. Specifically, this study included 20 individuals seeking employment in a U.S. city suffering economic decline. We identify success factors of the digital-sharing economy to these populations, identify shortcomings and propose mitigation strategies based on prior research related to trust, social capital and theories of collective efficacy. Finally, we contribute new principles that may foster collaborative consumption within this population and identify new concepts for practical employment applications among these populations.",
"title": ""
},
{
"docid": "800337ef10a4245db4e45a1a5931e578",
"text": "This paper describes a method for generating sense-tagged data using Wikipedia as a source of sense annotations. Through word sense disambiguation experiments, we show that the Wikipedia-based sense annotations are reliable and can be used to construct accurate sense classifiers.",
"title": ""
},
{
"docid": "93b7c1d83593f05f5a44bb30b8c6c3cf",
"text": "We describe how a question-answering system can learn about its domain from conversational dialogs. Our system learns to relate concepts in science questions to propositions in a fact corpus, stores new concepts and relations in a knowledge graph (KG), and uses the graph to solve questions. We are the first to acquire knowledge for question-answering from open, natural language dialogs without a fixed ontology or domain model that predetermines what users can say. Our relation-based strategies complete more successful dialogs than a query expansion baseline, our taskdriven relations are more effective for solving science questions than relations from general knowledge sources, and our method is practical enough to generalize to other domains.",
"title": ""
},
{
"docid": "d022a755229f5799e0811601e35e562c",
"text": "The use of orthopedic implants in joints has revolutionized the treatment of patients with many debilitating chronic musculoskeletal diseases such as osteoarthritis. However, the introduction of foreign material into the human body predisposes the body to infection. The treatment of these infections has become very complicated since the orthopedic implants serve as a surface for multiple species of bacteria to grow at a time into a resistant biofilm layer. This biofilm layer serves as a protectant for the bacterial colonies on the implant making them more resistant and difficult to eradicate when using standard antibiotic treatment. In some cases, the use of antibiotics alone has even made the bacteria more resistant to treatment. Thus, there has been surge in the creation of non-antibiotic anti-biofilm agents to help disrupt the biofilms on the orthopedic implants to help eliminate the infections. In this study, we discuss infections of orthopedic implants in the shoulder then we review the main categories of anti-biofilm agents that have been used for the treatment of infections on orthopedic implants. Then, we introduce some of the newer biofilm disrupting technology that has been studied in the past few years that may advance the treatment options for orthopedic implants in the future.",
"title": ""
},
{
"docid": "3c5e3f2fe99cb8f5b26a880abfe388f8",
"text": "Facial point detection is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since facial shapes vary significantly with facial expressions, poses or occlusion. In this paper, we address this problem by proposing a discriminative deep face shape model that is constructed based on an augmented factorized three-way Restricted Boltzmann Machines model. Specifically, the discriminative deep model combines the top-down information from the embedded face shape patterns and the bottom up measurements from local point detectors in a unified framework. In addition, along with the model, effective algorithms are proposed to perform model learning and to infer the true facial point locations from their measurements. Based on the discriminative deep face shape model, 68 facial points are detected on facial images in both controlled and “in-the-wild” conditions. Experiments on benchmark data sets show the effectiveness of the proposed facial point detection algorithm against state-of-the-art methods.",
"title": ""
},
{
"docid": "94f364c7b1f4254db525c3c6108a9e4c",
"text": "A planar radar sensor for automotive application is presented. The design comprises a fully integrated transceiver multi-chip module (MCM) and an electronically steerable microstrip patch array. The antenna feed network is based on a modified Rotman-lens. An extended angular coverage together with an adapted resolution allows for the integration of automatic cruise control (ACC), precrash sensing and cut-in detection within a single 77 GHz frontend. For ease of manufacturing the interconnects between antenna and MCM rely on a mixed wire bond and flip-chip approach. The concept is validated by laboratory radar measurements.",
"title": ""
},
{
"docid": "7875910ad044232b4631ecacfec65656",
"text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1574abcbcff64f1c6fd725e0b5cf3df0",
"text": "Model compression is essential for serving large deep neural nets on devices with limited resources or applications that require real-time responses. As a case study, a neural language model usually consists of one or more recurrent layers sandwiched between an embedding layer used for representing input tokens and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves great performance on the One-Billion-Word (OBW) dataset with around 800k vocabulary, and its word embedding and softmax matrices use more than 6GBytes space, and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieved 6.6 times compression rate for the embedding and softmax matrices, and when combined with quantization, our method can achieve 26 times compression rate, which translates to a factor of 12.8 times compression for the entire model with very little degradation in perplexity.",
"title": ""
},
{
"docid": "20238a257954a4a0d02549250b082dce",
"text": "Wearable, flexible healthcare devices, which can monitor health data to predict and diagnose disease in advance, benefit society. Toward this future, various flexible and stretchable sensors as well as other components are demonstrated by arranging materials, structures, and processes. Although there are many sensor demonstrations, the fundamental characteristics such as the dependence of a temperature sensor on film thickness and the impact of adhesive for an electrocardiogram (ECG) sensor are yet to be explored in detail. In this study, the effect of film thickness for skin temperature measurements, adhesive force, and reliability of gel-less ECG sensors as well as an integrated real-time demonstration is reported. Depending on the ambient conditions, film thickness strongly affects the precision of skin temperature measurements, resulting in a thin flexible film suitable for a temperature sensor in wearable device applications. Furthermore, by arranging the material composition, stable gel-less sticky ECG electrodes are realized. Finally, real-time simultaneous skin temperature and ECG signal recordings are demonstrated by attaching an optimized device onto a volunteer's chest.",
"title": ""
},
{
"docid": "227e7dd9797e50c00c5e0b3e0933f5f4",
"text": "The ABSTRACT is to be in fully-justified italicized text, at the top of the left-hand column, below the author and affiliation information. Use the word \" Abstract \" as the title, in 12-point Times, boldface type, centered relative to the column, initially capitalized. The abstract is to be in 10-point, single-spaced type. Leave two blank lines after the Abstract, then begin the main text. Look at previous CVPR abstracts to get a feel for style and length.",
"title": ""
},
{
"docid": "b6bbf7affff4c6a29e964141302daf56",
"text": "Existing natural media painting simulations have produced high-quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of its own. Our stroke representation is vector based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists. Finally, we present a detailed analysis of the different vector-rendering technologies available.",
"title": ""
},
{
"docid": "68d3c7108c195222d1f5d4b75fdc8399",
"text": "The opinions expressed in this paper do not necessarily reflect the position of Fondazione Eni Enrico Mattei Corso Magenta, 63, 20123 Milano (I), web site: www.feem.it, e-mail: [email protected] Effects of Tourism Upon the Economy of Small and MediumSized European Cities. Cultural Tourists and “The Others” Barbara Del Corpo, Ugo Gasparino, Elena Bellini and William Malizia NOTA DI LAVORO 44.2008",
"title": ""
},
{
"docid": "a63db4f5e588e23e4832eae581fc1c4b",
"text": "Driver drowsiness is a major cause of mortality in traffic accidents worldwide. Electroencephalographic (EEG) signal, which reflects the brain activities, is more directly related to drowsiness. Thus, many Brain-Machine-Interface (BMI) systems have been proposed to detect driver drowsiness. However, detecting driver drowsiness at its early stage poses a major practical hurdle when using existing BMI systems. This study proposes a context-aware BMI system aimed to detect driver drowsiness at its early stage by enriching the EEG data with the intensity of head-movements. The proposed system is carefully designed for low-power consumption with on-chip feature extraction and low energy Bluetooth connection. Also, the proposed system is implemented using JAVA programming language as a mobile application for on-line analysis. In total, 266 datasets obtained from six subjects who participated in a one-hour monotonous driving simulation experiment were used to evaluate this system. According to a video-based reference, the proposed system obtained an overall detection accuracy of 82.71% for classifying alert and slightly drowsy events by using EEG data alone and 96.24% by using the hybrid data of head-movement and EEG. These results indicate that the combination of EEG data and head-movement contextual information constitutes a robust solution for the early detection of driver drowsiness.",
"title": ""
},
{
"docid": "05540e05370b632f8b8cd165ae7d1d29",
"text": "We describe FreeCam a system capable of generating live free-viewpoint video by simulating the output of a virtual camera moving through a dynamic scene. The FreeCam sensing hardware consists of a small number of static color video cameras and state-of-the-art Kinect depth sensors, and the FreeCam software uses a number of advanced GPU processing and rendering techniques to seamlessly merge the input streams, providing a pleasant user experience. A system such as FreeCam is critical for applications such as telepresence, 3D video-conferencing and interactive 3D TV. FreeCam may also be used to produce multi-view video, which is critical to drive newgeneration autostereoscopic lenticular 3D displays.",
"title": ""
},
{
"docid": "4b0b7dfa79556970e900a129d06e3b0c",
"text": "We present the science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems, targeting an evolution in technology, that might lead to impacts and benefits reaching into most areas of society. This roadmap was developed within the framework of the European Graphene Flagship and outlines the main targets and research areas as best understood at the start of this ambitious project. We provide an overview of the key aspects of graphene and related materials (GRMs), ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries. We also define an extensive list of acronyms in an effort to standardize the nomenclature in this emerging field.",
"title": ""
},
{
"docid": "42979dd6ad989896111ef4de8d26b2fb",
"text": "Online dating services let users expand their dating pool beyond their social network and specify important characteristics of potential partners. To assess compatibility, users share personal information — e.g., identifying details or sensitive opinions about sexual preferences or worldviews — in profiles or in one-on-one communication. Thus, participating in online dating poses inherent privacy risks. How people reason about these privacy risks in modern online dating ecosystems has not been extensively studied. We present the results of a survey we designed to examine privacy-related risks, practices, and expectations of people who use or have used online dating, then delve deeper using semi-structured interviews. We additionally analyzed 400 Tinder profiles to explore how these issues manifest in practice. Our results reveal tensions between privacy and competing user values and goals, and we demonstrate how these results can inform future designs.",
"title": ""
}
] |
scidocsrr
|
af6d04859e8295b9cd615b0dcbcb6b30
|
Job recommender systems: A survey
|
[
{
"docid": "9a79af1c226073cc129087695295a4e5",
"text": "This paper presents an effective approach for resume information extraction to support automatic resume management and routing. A cascaded information extraction (IE) framework is designed. In the first pass, a resume is segmented into a consecutive blocks attached with labels indicating the information types. Then in the second pass, the detailed information, such as Name and Address, are identified in certain blocks (e.g. blocks labelled with Personal Information), instead of searching globally in the entire resume. The most appropriate model is selected through experiments for each IE task in different passes. The experimental results show that this cascaded hybrid model achieves better F-score than flat models that do not apply the hierarchical structure of resumes. It also shows that applying different IE models in different passes according to the contextual structure is effective.",
"title": ""
}
] |
[
{
"docid": "6d405b0f6b1381cec5e1d001e1102404",
"text": "Consensus is an important building block for building replicated systems, and many consensus protocols have been proposed. In this paper, we investigate the building blocks of consensus protocols and use these building blocks to assemble a skeleton that can be configured to produce, among others, three well-known consensus protocols: Paxos, Chandra-Toueg, and Ben-Or. Although each of these protocols specifies only one quorum system explicitly, all also employ a second quorum system. We use the skeleton to implement a replicated service, allowing us to compare the performance of these consensus protocols under various workloads and failure scenarios.",
"title": ""
},
{
"docid": "44ba90b77cb6bc324fbeebe096b93cd0",
"text": "With the growth of fandom population, a considerable amount of broadcast sports videos have been recorded, and a lot of research has focused on automatically detecting semantic events in the recorded video to develop an efficient video browsing tool for a general viewer. However, a professional sportsman or coach wonders about high level semantics in a different perspective, such as the offensive or defensive strategy performed by the players. Analyzing tactics is much more challenging in a broadcast basketball video than in other kinds of sports videos due to its complicated scenes and varied camera movements. In this paper, by developing a quadrangle candidate generation algorithm and refining the model fitting score, we ameliorate the court-based camera calibration technique to be applicable to broadcast basketball videos. Player trajectories are extracted from the video by a CamShift-based tracking method and mapped to the real-world court coordinates according to the calibrated results. The player position/trajectory information in the court coordinates can be further analyzed for professional-oriented applications such as detecting wide open event, retrieving target video clips based on trajectories, and inferring implicit/explicit tactics. Experimental results show the robustness of the proposed calibration and tracking algorithms, and three practicable applications are introduced to address the applicability of our system.",
"title": ""
},
{
"docid": "4381dfbb321feaca3299605b76836e93",
"text": "This paper deals with the design of a Model Predictive Control (MPC) approach for the altitude and attitude stabilization and tracking of a Quad Tilt Wing (QTW) type of Unmanned Aerial Vehicles (UAVs). This Vertical Take-Off and Landing (VTOL) aircraft can take-off and landing vertically such as helicopters and is convertible to the fixed-wing configuration for horizontal flight using a tilting mechanism for its rotors/wings. A nonlinear dynamical model, relating to the vertical flight mode of this QTW, is firstly developed using the Newton-Euler formalism, in describing the aerodynamic forces and moments acting on the aircraft. This established model, linearized around an equilibrium operating point, is then used to design a MPC approach for the stabilization and tracking of the QTW attitude and altitude. In order to show the performance superiority of the proposed MPC technique, a comparison with the known Linear Quadratic (LQ) strategy is carried out. All simulation results, obtained for both MPC and LQ approaches, are presented and discussed.",
"title": ""
},
{
"docid": "a679d37b88485cf71569f9aeefefbac5",
"text": "Incrementality is ubiquitous in human-human interaction and beneficial for human-computer interaction. It has been a topic of research in different parts of the NLP community, mostly with focus on the specific topic at hand even though incremental systems have to deal with similar challenges regardless of domain. In this survey, I consolidate and categorize the approaches, identifying similarities and differences in the computation and data, and show trade-offs that have to be considered. A focus lies on evaluating incremental systems because the standard metrics often fail to capture the incremental properties of a system and coming up with a suitable evaluation scheme is non-trivial. Title and Abstract in German Inkrementelle Sprachverarbeitung: Herausforderungen, Strategien und Evaluation Inkrementalität ist allgegenwärtig in Mensch-Mensch-Interaktiton und hilfreich für MenschComputer-Interaktion. In verschiedenen Teilen der NLP-Community wird an Inkrementalität geforscht, zumeist fokussiert auf eine konkrete Aufgabe, obwohl sich inkrementellen Systemen domänenübergreifend ähnliche Herausforderungen stellen. In diesem Überblick trage ich Ansätze zusammen, kategorisiere sie und stelle Ähnlichkeiten und Unterschiede in Berechnung und Daten sowie nötige Abwägungen vor. Ein Fokus liegt auf der Evaluierung inkrementeller Systeme, da Standardmetriken of nicht in der Lage sind, die inkrementellen Eigenschaften eines Systems einzufangen und passende Evaluationsschemata zu entwickeln nicht einfach ist.",
"title": ""
},
{
"docid": "efd6856e774b258858c43d7746639317",
"text": "In this paper, we propose a vision-based robust vehicle distance estimation algorithm that supports motorists to rapidly perceive relative distance of oncoming and passing vehicles thereby minimizing the risk of hazardous circumstances. And, as it is expected, the silhouettes of background stationary objects may appear in the motion scene, which pop-up due to motion of the camera, which is mounted on dashboard of the host vehicle. To avoid the effect of false positive detection of stationary objects and to determine the ego motion a new Morphological Strip Matching Algorithm and Recursive Stencil Mapping Algorithm(MSM-RSMA)is proposed. A new series of stencils are created where non-stationary objects are taken off after detecting stationary objects by applying a shape matching technique to each image strip pair. Then the vertical shift is estimated recursively with new stencils with identified stationary background objects. Finally, relative comparison of known templates are used to estimate the distance, which is further certified by value obtained for vertical shift. We apply analysis of relative dimensions of bounding box of the detected vehicle with relevant templates to calculate the relative distance. We prove that our method is capable of providing a comparatively fast distance estimation while keeping its robustness in different environments changes.",
"title": ""
},
{
"docid": "2d4cb6980cf8716699bdffca6cfed274",
"text": "Advances in laser technology have progressed so rapidly during the past decade that successful treatment of many cutaneous concerns and congenital defects, including vascular and pigmented lesions, tattoos, scars and unwanted haircan be achieved. The demand for laser surgery has increased as a result of the relative ease with low incidence of adverse postoperative sequelae. In this review, the currently available laser systems with cutaneous applications are outlined to identify the various types of dermatologic lasers available, to list their clinical indications and to understand the possible side effects.",
"title": ""
},
{
"docid": "9229a48b8df014b896abb60548759e36",
"text": "Given that a user interface interacts with users, a critical factor to be considered in improving the usability of an e-learning user interface is user-friendliness. Affordances enable users to more easily approach and engage in learning tasks because they strengthen positive, activating emotions. However, most studies on affordances limit themselves to an examination of the affordance attributes of e-learning tools rather than determining how to increase such attributes. A design approach is needed to improve affordances for e-learning user interfaces. Using Maier and Fadel’s Affordance-Based Design methodology as a framework, the researchers in this study identified affordance factors, suggested affordance design strategies for the user interface, and redesigned an affordable user interface prototype. The identified affordance factors and strategies were reviewed and validated in Delphi meetings whose members were teachers, e-learning specialists, and educational researchers. The effects of the redesigned user interface on usability were evaluated by fifth-grade participating in the experimental study. The results show that affordances led users to experience positive emotions, and as a result, use the interface effectively, efficiently, and satisfactorily. Implications were discussed for designing strategies to enhance the affordances of the user interfaces of e-learning and other learning technology tools.",
"title": ""
},
{
"docid": "f1220465c3ac6da5a2edc96b5979d4be",
"text": "We consider Complexity Leadership Theory [Uhl-Bien, M., Marion, R., & McKelvey, B. (2007). Complexity Leadership Theory: Shifting leadership from the industrial age to the knowledge era. The Leadership Quarterly.] in contexts of bureaucratic forms of organizing to describe how adaptive dynamics can work in combination with administrative functions to generate emergence and change in organizations. Complexity leadership approaches are consistent with the central assertion of the meso argument that leadership is multi-level, processual, contextual, and interactive. In this paper we focus on the adaptive function, an interactive process between adaptive leadership (an agentic behavior) and complexity dynamics (nonagentic social dynamics) that generates emergent outcomes (e.g., innovation, learning, adaptability) for the firm. Propositions regarding the actions of complexity leadership in bureaucratic forms of organizing are offered. © 2009 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "807a94db483f0ca72d3096e4897d2c76",
"text": "A typical scene contains many different objects that, because of the limited processing capacity of the visual system, compete for neural representation. The competition among multiple objects in visual cortex can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that, both in the absence and in the presence of visual stimulation, biasing signals due to selective attention can modulate neural activity in visual cortex in several ways. Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals derives from a network of areas in frontal and parietal cortex.",
"title": ""
},
{
"docid": "43cd3b5ac6e2e2f240f4feb44be65b99",
"text": "Executive Overview Toyota’s Production System (TPS) is based on “lean” principles including a focus on the customer, continual improvement and quality through waste reduction, and tightly integrated upstream and downstream processes as part of a lean value chain. Most manufacturing companies have adopted some type of “lean initiative,” and the lean movement recently has gone beyond the shop floor to white-collar offices and is even spreading to service industries. Unfortunately, most of these efforts represent limited, piecemeal approaches—quick fixes to reduce lead time and costs and to increase quality—that almost never create a true learning culture. We outline and illustrate the management principles of TPS that can be applied beyond manufacturing to any technical or service process. It is a true systems approach that effectively integrates people, processes, and technology—one that must be adopted as a continual, comprehensive, and coordinated effort for change and learning across the organization.",
"title": ""
},
{
"docid": "7e3cdead80a1d17b064b67ddacd5d8c1",
"text": "BACKGROUND\nThe aim of the study was to evaluate the relationship between depression and Internet addiction among adolescents.\n\n\nSAMPLING AND METHOD\nA total of 452 Korean adolescents were studied. First, they were evaluated for their severity of Internet addiction with consideration of their behavioral characteristics and their primary purpose for computer use. Second, we investigated correlations between Internet addiction and depression, alcohol dependence and obsessive-compulsive symptoms. Third, the relationship between Internet addiction and biogenetic temperament as assessed by the Temperament and Character Inventory was evaluated.\n\n\nRESULTS\nInternet addiction was significantly associated with depressive symptoms and obsessive-compulsive symptoms. Regarding biogenetic temperament and character patterns, high harm avoidance, low self-directedness, low cooperativeness and high self-transcendence were correlated with Internet addiction. In multivariate analysis, among clinical symptoms depression was most closely related to Internet addiction, even after controlling for differences in biogenetic temperament.\n\n\nCONCLUSIONS\nThis study reveals a significant association between Internet addiction and depressive symptoms in adolescents. This association is supported by temperament profiles of the Internet addiction group. The data suggest the necessity of the evaluation of the potential underlying depression in the treatment of Internet-addicted adolescents.",
"title": ""
},
{
"docid": "a361214a42392cbd0ba3e0775d32c839",
"text": "We propose a design methodology to exploit adaptive nanodevices (memristors), virtually immune to their variability. Memristors are used as synapses in a spiking neural network performing unsupervised learning. The memristors learn through an adaptation of spike timing dependent plasticity. Neurons' threshold is adjusted following a homeostasis-type rule. System level simulations on a textbook case show that performance can compare with traditional supervised networks of similar complexity. They also show the system can retain functionality with extreme variations of various memristors' parameters, thanks to the robustness of the scheme, its unsupervised nature, and the power of homeostasis. Additionally the network can adjust to stimuli presented with different coding schemes.",
"title": ""
},
{
"docid": "22f633957b40d9027aceff93a68964b5",
"text": "Most of previous image denoising methods focus on additive white Gaussian noise (AWGN). However,the real-world noisy image denoising problem with the advancing of the computer vision techiniques. In order to promote the study on this problem while implementing the concurrent real-world image denoising datasets, we construct a new benchmark dataset which contains comprehensive real-world noisy images of different natural scenes. These images are captured by different cameras under different camera settings. We evaluate the different denoising methods on our new dataset as well as previous datasets. Extensive experimental results demonstrate that the recently proposed methods designed specifically for realistic noise removal based on sparse or low rank theories achieve better denoising performance and are more robust than other competing methods, and the newly proposed dataset is more challenging. The constructed dataset of real photographs is publicly available at https://github.com/csjunxu/PolyUDataset for researchers to investigate new real-world image denoising methods. We will add more analysis on the noise statistics in the real photographs of our new dataset in the next version of this article.",
"title": ""
},
{
"docid": "da0b5fc36cd36b1a3aa7ebb9441e3e15",
"text": "In Steganography, the total message will be invisible into a cover media such as text, audio, video, and image in which attackers don't have any idea about the original message that the media contain and which algorithm use to embed or extract it. In this paper, the proposed technique has focused on Bitmap image as it is uncompressed and convenient than any other image format to implement LSB Steganography method. For better security AES cryptography technique has also been used in the proposed method. Before applying the Steganography technique, AES cryptography will change the secret message into cipher text to ensure two layer security of the message. In the proposed technique, a new Steganography technique is being developed to hide large data in Bitmap image using filtering based algorithm, which uses MSB bits for filtering purpose. This method uses the concept of status checking for insertion and retrieval of message. This method is an improvement of Least Significant Bit (LSB) method for hiding information in images. It is being predicted that the proposed method will able to hide large data in a single image retaining the advantages and discarding the disadvantages of the traditional LSB method. Various sizes of data are stored inside the images and the PSNR are also calculated for each of the images tested. Based on the PSNR value, the Stego image has higher PSNR value as compared to other method. Hence the proposed Steganography technique is very efficient to hide the secret information inside an image.",
"title": ""
},
{
"docid": "4dffb7bcd82bcc2fbb7291233e4f8f88",
"text": "In the following paper, we present a framework for quickly training 2D object detectors for robotic perception. Our method can be used by robotics practitioners to quickly (under 30 seconds per object) build a large-scale real-time perception system. In particular, we show how to create new detectors on the fly using large-scale internet image databases, thus allowing a user to choose among thousands of available categories to build a detection system suitable for the particular robotic application. Furthermore, we show how to adapt these models to the current environment with just a few in-situ images. Experiments on existing 2D benchmarks evaluate the speed, accuracy, and flexibility of our system.",
"title": ""
},
{
"docid": "1ab0308539bc6508b924316b39a963ca",
"text": "Daily wafer fabrication in semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. The metrology operations required a lot of metrology tools, which increase FAB's investment. Also, these metrology operations will increase cycle time of wafer process. Metrology operations do not bring any value added to wafer but only quality assurance. This article provides a new method denoted virtual metrology (VM) to utilize sensor data collected from 300 mm FAB's tools to forecast quality data of wafers and tools. This proposed method designs key steps to establish a VM control model based on neural networks and to develop and deploy applications following SEMI EDA (equipment data acquisition) standards.",
"title": ""
},
{
"docid": "36f37bdf7da56a57f29d026dca77e494",
"text": "Fifth generation (5G) systems are expected to introduce a revolution in the ICT domain with innovative networking features, such as device-to-device (D2D) communications. Accordingly, in-proximity devices directly communicate with each other, thus avoiding routing the data across the network infrastructure. This innovative technology is deemed to be also of high relevance to support effective heterogeneous objects interconnection within future IoT ecosystems. However, several open challenges shall be solved to achieve a seamless and reliable deployment of proximity-based communications. In this paper, we give a contribution to trust and security enhancements for opportunistic hop-by-hop forwarding schemes that rely on cellular D2D communications. To tackle the presence of malicious nodes in the network, reliability and reputation notions are introduced to model the level of trust among involved devices. To this aim, social-awareness of devices is accounted for, to better support D2D-based multihop content uploading. Our simulative results in small-scale IoT environments, demonstrate that data loss due to malicious nodes can be drastically reduced and gains in uploading time be reached with the proposed solution.",
"title": ""
},
{
"docid": "ea525c15c1cbb4a4a716e897287fd770",
"text": "This study explored student teachers’ cognitive presence and learning achievements by integrating the SOP Model in which self-study (S), online group discussion (O) and double-stage presentations (P) were implemented in the flipped classroom. The research was conducted at a university in Taiwan with 31 student teachers. Preand post-worksheets measuring knowledge of educational issues were administered before and after group discussion. Quantitative content analysis and behavior sequential analysis were used to evaluate cognitive presence, while a paired-samples t-test analyzed learning achievement. The results showed that the participants had the highest proportion of “Exploration,” the second largest rate of “Integration,” but rarely reached “Resolution.” The participants’ achievements were greatly enhanced using the SOP Model in terms of the scores of the preand post-worksheets. Moreover, the groups with a higher proportion of “Integration” (I) and “Resolution” (R) performed best in the post-worksheets and were also the most progressive groups. Both highand low-rated groups had significant correlations between the “I” and “R” phases, with “I” “R” in the low-rated groups but “R” “I” in the high-rated groups. The instructional design of the SOP Model can be a reference for future pedagogical implementations in the higher educational context.",
"title": ""
},
{
"docid": "69f6b21da3fa48f485fc612d385e7869",
"text": "Recurrent neural networks (RNN) have been successfully applied for recognition of cursive handwritten documents, both in English and Arabic scripts. Ability of RNNs to model context in sequence data like speech and text makes them a suitable candidate to develop OCR systems for printed Nabataean scripts (including Nastaleeq for which no OCR system is available to date). In this work, we have presented the results of applying RNN to printed Urdu text in Nastaleeq script. Bidirectional Long Short Term Memory (BLSTM) architecture with Connectionist Temporal Classification (CTC) output layer was employed to recognize printed Urdu text. We evaluated BLSTM networks for two cases: one ignoring the character's shape variations and the second is considering them. The recognition error rate at character level for first case is 5.15% and for the second is 13.6%. These results were obtained on synthetically generated UPTI dataset containing artificially degraded images to reflect some real-world scanning artifacts along with clean images. Comparison with shape-matching based method is also presented.",
"title": ""
},
{
"docid": "11a140232485cb8bcc4914b8538ab5ea",
"text": "We explain why we feel that the comparison betwen Common Lisp and Fortran in a recent article by Fateman et al. in this journal is not entirely fair.",
"title": ""
}
] |
scidocsrr
|
ecb58e300529674c908d818463bb08e9
|
Some Like it Hoax: Automated Fake News Detection in Social Networks
|
[
{
"docid": "facc1845ddde1957b2c1b74a62d74261",
"text": "The large availability of user provided contents on online social media facilitates people aggregation around shared beliefs, interests, worldviews and narratives. In spite of the enthusiastic rhetoric about the so called collective intelligence unsubstantiated rumors and conspiracy theories-e.g., chemtrails, reptilians or the Illuminati-are pervasive in online social networks (OSN). In this work we study, on a sample of 1.2 million of individuals, how information related to very distinct narratives-i.e. main stream scientific and conspiracy news-are consumed and shape communities on Facebook. Our results show that polarized communities emerge around distinct types of contents and usual consumers of conspiracy news result to be more focused and self-contained on their specific contents. To test potential biases induced by the continued exposure to unsubstantiated rumors on users' content selection, we conclude our analysis measuring how users respond to 4,709 troll information-i.e. parodistic and sarcastic imitation of conspiracy theories. We find that 77.92% of likes and 80.86% of comments are from users usually interacting with conspiracy stories.",
"title": ""
},
{
"docid": "3e178af724c907f1a5e02998dc311ff4",
"text": "We present results of a new approach to detect destructive article revisions, so-called vandalism, in Wikipedia. Vandalism detection is a one-class classification problem, where vandalism edits are the target to be identified among all revisions. Interestingly, vandalism detection has not been addressed in the Information Retrieval literature by now. In this paper we discuss the characteristics of vandalism as humans recognize it and develop features to render vandalism detection as a machine learning task. We compiled a large number of vandalism edits in a corpus, which allows for the comparison of existing and new detection approaches. Using logistic regression we achieve 83% precision at 77% recall with our model. Compared to the rule-based methods that are currently applied in Wikipedia, our approach increases the F -Measure performance by 49% while being faster at the same time. Introduction. The content of the well-known Web encyclopedia Wikipedia is created collaboratively by volunteers. Every visitor of a Wikipedia Web site can participate immediately in the authoring process: articles are created, edited, or deleted without need for authentication. In practice, an article is developed incrementally since, ideally, authors review and revise the work of others. Till this day about 8 million articles in 253 languages have been authored in this way. However, all times the Wikipedia and its freedom of editing has been misused by some editors. We distinguish them into three groups: (i) lobbyists, who try to push their own agenda, (ii) spammers, who solicit products or services, and (iii) vandals, who deliberately destroy the work of others. The Wikipedia community has developed policies for a manual recognition and handling of such cases, but enforcing them requires the manpower of many. With the rapid growth of Wikipedia a shift from article contributors to editors working on article maintenance is observed. Hence it is surprising that there is little research to support editors from the latter group or to automatize their tasks. As part of our research Table 1 surveys the existing tools for the prevention of editing misuse. Related Work. The first attempt to aid lobbying detection was the WikiScanner tool which maps IP numbers recorded from anonymous editors to their domain name. This way editors can be found who are biased with respect to the topic in question. Since there are diverse ways for lobbyists to disguise their identity a manual check of all edits for hints of lobbying is still necessary. There has been much research concerning spam detection in e-mails, among Web pages, or in blogs. In general, machine learning approaches, possibly combined with C. Macdonald et al. (Eds.): ECIR 2008, LNCS 4956, pp. 663–668, 2008. c © Springer-Verlag Berlin Heidelberg 2008 664 M. Potthast, B. Stein, and R. Gerling Table 1. Tools for the prevention of editing misuse with respect to the target group, and the type of automation (aid, full). Tools shown gray use the same or a very similar rule set as the tool listed in the line above. 
Tool Target Type Status URL (October 2007) WikiScanner lobbyists aid active http://wikiscanner.virgil.gr AntiVandalBot (AVB) vandals full inactive http://en.wikipedia.org/wiki/WP:AVB MartinBot vandals full inactive http://en.wikipedia.org/wiki/User:MartinBot T-850 Robotic Assistant vandals full active http://en.wikipedia.org/wiki/User:T-850_Robotic_Assistant WerdnaAntiVandalBot vandals full active http://en.wikipedia.org/wiki/User:WerdnaAntiVandalBot Xenophon vandals full active http://en.wikipedia.org/wiki/User:Xenophon_(bot) ClueBot vandals full active http://en.wikipedia.org/wiki/User:ClueBot CounterVandalismBot vandals full active http://en.wikipedia.org/wiki/User:CounterVandalismBot PkgBot vandals aid active http://meta.wikimedia.org/wiki/CVN/Bots MiszaBot vandals aid active http://en.wikipedia.org/wiki/User:MiszaBot manually developed rules, do an excellent spam detection job [1]. The respective technology may also be adequate for a misuse analysis in Wikipedia, but the applicability has not been investigated yet. Vandalism was recognized as an open problem by researchers studying online collaboration [2,4,5,6,7,8], and, of course, by the Wikipedia community.1 The former provide statistical or empirical analyses concerning vandalism, but neglect its detection. The latter developed four small sets of detection rules but did not evaluate the performance. Misuses such as trolling and flame wars in discussion boards are related to vandalism, but so far no research exists to detect either of them. In this paper we develop foundations for an automatic vandalism detection in Wikipedia: (i) we define vandalism detection as a classification task, (ii) discuss the characteristics by which humans recognize vandalism, and (iii) develop tailored features to quantify them. (iv) A machine-readable corpus of vandalism edits is provided as a common baseline for future research. (v) Finally, we report on experiments related to vandalism detection based on this corpus. Vandalism Detection Task. Let E = {e1, . . . , en} denote a set of edits, where each edit e comprises two consecutive revisions of the same document d from Wikipedia, say, e = (dt, dt+1). Let F = {f1, . . . , fp} denote a set of vandalism indicating features where each feature fi is a function that maps edits onto real numbers, fi : E → R. Using F an edit e is represented as a vector e = (f1(e), . . . , fp(e)); E is the set of edit representations for the edits in E. Given a vandalism corpus E which has a realistic ratio of edits classified as vandalism and well-intentioned edits, a classifier c, c : E → {0, 1}, is trained with examples from E. c serves as an approximation of c∗, the true predictor of the fact whether or not an edit forms a vandalism case. Using F and c one can classify an edit e as vandalism by computing c(e). 1 http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Vandalism_studies (October 2007) Automatic Vandalism Detection in Wikipedia 665 Table 2. Organization of vandalism edits along the dimensions “Edited content” and “Editing category”: the matrix shows for each combination the portion of specific vandalism edits at all vandalism edits. For vandalized structure insertion edits and content insertion edits also a list of their typical characteristics is given. It includes both the characteristics described in the previous research and the Wikipedia policies. 
Editing Edited content category Text Structure Link Media Insertion 43.9% Characteristics: point of view, off topic, nonsense, vulgarism, duplication, gobbledegook 14.6% Characteristics: formatting, highlighting 6.9% 0.7% Replacement 45.8% 15.5% 4.7% 2.0% Deletion 31.6% 20.3% 22.9% 19.4% Vandalism Indicating Features. We have manually analyzed 301 cases of vandalism to learn about their characteristics and, based on these insights, to develop a feature set F . Table 2 organizes our findings as a matrix of vandalism edits along the dimensions “Edited content” and “Editing category”; Table 3 summarizes our features. Table 3. Features which quantify the characteristics of vandalism in Wikipedia Feature f Description char distribution deviation of the edit’s character distribution from the expectation char sequence longest consecutive sequence of the same character in an edit compressibility compression rate of an edit’s text upper case ratio ratio of upper case letters to all letters of an edit’s text term frequency average relative frequency of an edit’s words in the new revision longest word length of the longest word pronoun frequency number of pronouns relative to the number of an edit’s words (only first-person and second-person pronouns are considered) pronoun impact percentage by which an edit’s pronouns increase the number of pronouns in the new revision vulgarism frequency number of vulgar words relative to the number of an edit’s words vulgarism impact percentage by which an edit’s vulgar words increase the number of vulgar words in the new revision size ratio the size of the new version compared to the size of the old one replacement similarity similarity of deleted text to the text inserted in exchange context relation similarity of the new version to Wikipedia articles found for keywords extracted from the inserted text anonymity whether an edit was submitted anonymously, or not comment length the character length of the comment supplied with an edit edits per user number of previously submitted edits from the same editor or IP 666 M. Potthast, B. Stein, and R. Gerling For two vandalism categories the matrix shows particular characteristics by which an edit is recognized as vandalism: a vandalism edit has the “point of view” characteristic if the vandal expresses personal opinion, which often entails the use of personal pronouns. Many vandalism edits introduce off-topic text with respect to the surrounding text, are nonsense in that they contradict common sense, or do not form a correct sentence from their language. The first three characteristics are very difficult to be quantified, and research in this direction will be necessary to develop reliable analysis methods. Vulgar vandalism can be detected with a dictionary of vulgar words; however, one has to consider the context of a vulgar word since several Wikipedia articles contain vulgar words in a correct sense. Hence we quantify the impact of a vulgar word based on the point of time it has been inserted into an article rather than simply checking its occurrence. If an inserted text duplicates other text within the article or within Wikipedia, one may also speak of vandalism, but this is presumably the least offending case. Very often vandalism consists only of gobbledygook: a string of characters which has no meaning whatsoever, for instance if the keyboard is hit randomly. Another common characteristic of vandalism is that it is often highlighted by capital letters or by the repetition of cha",
"title": ""
}
] |
[
{
"docid": "108c03b1e2934e4b5ac2476a29eb58fd",
"text": "The idea behind ego depletion is that willpower draws on a limited mental resource, so that engaging in an act of self-control impairs self-control in subsequent tasks. To present ego depletion as more than a convenient metaphor, some researchers have proposed that glucose is the limited resource that becomes depleted with self-control. However, there have been theoretical challenges to the proposed glucose mechanism, and the experiments that have tested it have found mixed results. We used a new meta-analytic tool, p-curve analysis, to examine the reliability of the evidence from these experiments. We found that the effect sizes reported in this literature are possibly influenced by publication or reporting bias and that, even within studies yielding significant results, the evidential value of this research is weak. In light of these results, and pending further evidence, researchers and policymakers should refrain from drawing any conclusions about the role of glucose in self-control.",
"title": ""
},
{
"docid": "bf9e828c9e3ee8d64d387cd518fb6b2d",
"text": "As smartphone penetration saturates, we are witnessing a new trend in personal mobile devices—wearable mobile devices or simply wearables as it is often called. Wearables come in many different forms and flavors targeting different accessories and clothing that people wear. Although small in size, they are often expected to continuously sense, collect, and upload various physiological data to improve quality of life. These requirements put significant demand on improving communication security and reducing power consumption of the system, fueling new research in these areas. In this paper, we first provide a comprehensive survey and classification of commercially available wearables and research prototypes. We then examine the communication security issues facing the popular wearables followed by a survey of solutions studied in the literature. We also categorize and explain the techniques for improving the power efficiency of wearables. Next, we survey the research literature in wearable computing. We conclude with future directions in wearable market and research.",
"title": ""
},
{
"docid": "cefabe1b4193483d258739674b53f773",
"text": "This paper describes design and development of omnidirectional magnetic climbing robots with high maneuverability for inspection of ferromagnetic 3D human made structures. The main focus of this article is design, analysis and implementation of magnetic omnidirectional wheels for climbing robots. We discuss the effect of the associated problems of such wheels, e.g. vibration, on climbing robots. This paper also describes the evolution of magnetic omnidirectional wheels throughout the design and development of several solutions, resulting in lighter and smaller wheels which have less vibration and adapt better to smaller radius structures. These wheels are installed on a chassis which adapts passively to flat and curved structures, enabling the robot to climb and navigate on such structures.",
"title": ""
},
{
"docid": "f102cc8d3ba32f9a16f522db25143e2d",
"text": "As technology advances man-machine interaction is becoming an unavoidable activity. So an effective method of communication with machines enhances the quality of life. If it is able to operate a system by simply commanding, then it will be a great blessing to the users. Speech is the most effective mode of communication used by humans. So by introducing voice user interfaces the interaction with the machines can be made more user friendly. This paper implements a speaker independent speech recognition system for limited vocabulary Malayalam Words in Raspberry Pi. Mel Frequency Cepstral Coefficients (MFCC) are the features for classification and this paper proposes Radial Basis Function (RBF) kernel in Support Vector Machine (SVM) classifier gives better accuracy in speech recognition than linear kernel. An overall accuracy of 91.8% is obtained with this work.",
"title": ""
},
{
"docid": "0eb3d3c33b62c04ed5d34fc3a38b5182",
"text": "We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
{
"docid": "69c223a3732005111abecd116e0ea390",
"text": "The present study examines age-related changes in skeletal muscle size and function after 12 yr. Twelve healthy sedentary men were studied in 1985-86 (T1) and nine (initial mean age 65.4 +/- 4.2 yr) were reevaluated in 1997-98 (T2). Isokinetic muscle strength of the knee and elbow extensors and flexors showed losses (P < 0.05) ranging from 20 to 30% at slow and fast angular velocities. Computerized tomography (n = 7) showed reductions (P < 0.05) in the cross-sectional area (CSA) of the thigh (12.5%), all thigh muscles (14.7%), quadriceps femoris muscle (16.1%), and flexor muscles (14. 9%). Analysis of covariance showed that strength at T1 and changes in CSA were independent predictors of strength at T2. Muscle biopsies taken from vastus lateralis muscles (n = 6) showed a reduction in percentage of type I fibers (T1 = 60% vs. T2 = 42%) with no change in mean area in either fiber type. The capillary-to-fiber ratio was significantly lower at T2 (1.39 vs. 1. 08; P = 0.043). Our observations suggest that a quantitative loss in muscle CSA is a major contributor to the decrease in muscle strength seen with advancing age and, together with muscle strength at T1, accounts for 90% of the variability in strength at T2.",
"title": ""
},
{
"docid": "6be4ab6ce54ad9d7396d4546e2c825f1",
"text": "T paper is motivated by the success of YouTube, which is attractive to content creators as well as corporations for its potential to rapidly disseminate digital content. The networked structure of interactions on YouTube and the tremendous variation in the success of videos posted online lends itself to an inquiry of the role of social influence. Using a unique data set of video information and user information collected from YouTube, we find that social interactions are influential not only in determining which videos become successful but also on the magnitude of that impact. We also find evidence for a number of mechanisms by which social influence is transmitted, such as (i) a preference for conformity and homophily and (ii) the role of social networks in guiding opinion formation and directing product search and discovery. Econometrically, the problem in identifying social influence is that individuals’ choices depend in great part upon the choices of other individuals, referred to as the reflection problem. Another problem in identification is to distinguish between social contagion and user heterogeneity in the diffusion process. Our results are in sharp contrast to earlier models of diffusion, such as the Bass model, that do not distinguish between different social processes that are responsible for the process of diffusion. Our results are robust to potential self-selection according to user tastes, temporal heterogeneity and the reflection problem. Implications for researchers and managers are discussed.",
"title": ""
},
{
"docid": "926db14af35f9682c28a64e855fb76e5",
"text": "This paper reports about the development of a Named Entity Recognition (NER) system for Bengali using the statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available in the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with a NE tagset of seventeen tags. Experimental results of the 10-fold cross validation test show the effectiveness of the proposed CRF based NER system with an overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively.",
"title": ""
},
{
"docid": "8fc0d896dfb5411079068f11800aac93",
"text": "This paper is concerned with estimating a probability density function of human skin color using a nite Gaussian mixture model whose parameters are estimated through the EM algorithm Hawkins statistical test on the normality and homoscedasticity common covariance matrix of the estimated Gaussian mixture models is performed and McLachlan s bootstrap method is used to test the number of components in a mixture Experimental results show that the estimated Gaussian mixture model ts skin images from a large database Applications of the estimated density function in image and video databases are presented",
"title": ""
},
{
"docid": "cd2fd948b08fd8b187cc9615d9bee8f1",
"text": "The spacing effect in list learning occurs because identical massed items suffer encoding deficits and because spaced items benefit from retrieval and increased time in working memory. Requiring the retrieval of identical items produced a spacing effect for recall and recognition, both for intentional and incidental learning. Not requiring retrieval produced spacing only for intentional learning because intentional learning encourages retrieval. Once-presented words provided baselines for these effects. Next, massed and spaced word pairs were judged for matches on their first three letters, forcing retrieval. The words were not identical, so there was no encoding deficit. Retrieval could and did cause spacing only for the first word of each pair; time in working memory, only for the second.",
"title": ""
},
{
"docid": "68c068b17a66cf7ac75ae02a25138adb",
"text": "In the last 10 or 15 years, exciting new developments have occurred in the field of filters and multiplexers. The two words that best explain this phenomenon are “topology” and “technology.” Thus, this paper will cover new connections, new materials, and new processes. This paper summarizes what is new, describes the new devices, and avoids simply surveying the entire field. It is not possible to cover everything, and regrettably, some new and possibly exciting areas are omitted (such as filters fabricated using liquid crystal polymers, active element inclusion, extreme power devices used in accelerators), and others, ..., because something has to be left for the next paper! The paper does cover bandstop filters, intrinsically switched film bulk acoustic resonator, and barium-strontium-titanate filters, brand new derivations of lossy filter synthesis and dual-band filter synthesis, multimode networks, new work on optimal multiplexer (multiport network) configurations, and substrate integrated waveguide filters. In each of these areas significant new work has been recently performed and is presented in a summary format suitable for those thinking that the area of filters and multiplexers has become mature (stagnant) with nothing interesting to learn. Nothing could be further from the truth!",
"title": ""
},
{
"docid": "da7b01d888bde1984088f190e08af77e",
"text": "One of the most frequently cited sarcasm realizations is the use of positive sentiment within negative context. We propose a novel approach towards modeling a sentiment context of a document via the sequence of sentiment labels assigned to its sentences. We demonstrate that the sentiment flow shifts (from negative to positive and from positive to negative) can be used as reliable classification features for the task of sarcasm detection. Our classifier achieves the F1-measure of 0.7 for all reviews, going up to 0.9 for the reviews with high star ratings (positive reviews), which are the reviews that are materially affected by the presence of sarcasm in the text. Introduction Verbal irony or sarcasm has been studied by psychologists, linguists, and computer scientists for different types of text: speech, fiction, Twitter messages, Internet dialog, product reviews, etc. Sentiment is widely used as a classification feature for the detection of whether a text snippet or a document is sarcastic or not. The popularity of this feature can be explained by the fact that it is agreed that in many cases sarcasm is manifested in a document via a text snippet with positive sentiment applied to a negative situation. Given that the notion of sarcasm (or verbal irony, or irony for that matter) does not have a formal definition except that in the case of sarcasm/irony a nonsalient interpretation has the priority over a salient one, positive utterance within a negative context is a reliable feature to use (Riloff et al. 2013). Other features (textual and non-textual) used for the task of identifying sarcastic text are: emoticons (GonzalezIbáñez, Muresan, and Wacholder 2011), heavy punctuation (Carvalho et al. 2009), hashtags (Wang et al. 2015), quotation marks (Carvalho et al. 2009), positive interjections (Gonzalez-Ibáñez, Muresan, and Wacholder 2011), lexical N-gram cues associated with sarcasm (Davidov, Tsur, and Rappoport 2010), lists of positive and negative words (Gonzalez-Ibáñez, Muresan, and Wacholder 2011), etc. It must be noted that the above features are designed to predict sarcasm in short messages. In this work we demonstrate that these features do not work well for long documents. This means that other features should be devised for detecting sarcasm on a document level. Copyright c © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Recently the necessity of looking beyond the text snippets and into the context that surrounds the possibly sarcastic text utterance got a lot of attention. Researchers investigate the effect of context on sarcasm and design features to capture the global context within which sarcasm appears. Wallace et al. (2015) work on comments from Reddit threads about politics. Wang et al. (2015) work with Twitter messages and analyze these messages as a part of a larger Twitter thread. In both cases, the context is derived using lexical and nonlexical features of the surrounding messages and the information about the overall polarity of the thread (e.g., whether the Reddit thread is a part of the conversation among conservatives or not). The generated context has a certain sentiment that is used for the task of sarcasm detection. In our work we rely on the importance of context for sarcasm detection. 
Our approach to contextualization is based on the common belief that a sarcastic document contains a passage which, when taken out of context and analyzed as a stand-alone sentence with the priority of the salient meaning over the non-salient one, can be classified as positive but within a given (typically negative) context becomes the holder of sarcasm. For example, the following sentence marked with a positive sentiment label while being a part of an overall negative (1-star) review of a Bill Clinton biography documentary signals the presence of sarcasm in the review. This dvd is great if you think that Gennifer Flowers, Paula Jones and Monica Lewinsky were the highlights of the Clinton administration. However, sarcasm can be observed in overall positive (5-star) reviews as well. For example, in a positive (5-star) review about a movie, the following sentence marked as negative is a good signal of sarcasm being present in the review. I believe this film was secretly banned from Oscar consideration due to the fact the committee felt it would be unfair to the other nominees. All sentiment labels presented in this paper are obtained using the Stanford Sentiment Analysis tool (Socher et al. 2013) with the 5-point sentiment scale: very negative (-2), negative (-1), neutral (0), positive (+1), very positive (+2). The Stanford Sentiment Analysis tool sentence sentiment prediction accuracy is 85.4%. All examples presented in this paper are from existing Amazon product reviews. We preserve the original orthography, punctuation, and capitalization.",
"title": ""
},
{
"docid": "7b205b171481afeb46d7347428b223cf",
"text": "The power–voltage characteristic of photovoltaic (PV) arrays displays multiple local maximum power points when all the modules do not receive uniform solar irradiance, i.e., under partial shading conditions (PSCs). Conventional maximum power point tracking (MPPT) methods are shown to be effective under uniform solar irradiance conditions. However, they may fail to track the global peak under PSCs. This paper proposes a new method for MPPT of PV arrays under both PSCs and uniform conditions. By analyzing the solar irradiance pattern and using the popular Hill Climbing method, the proposed method tracks all local maximum power points. The performance of the proposed method is evaluated through simulations in MATLAB/SIMULINK environment. Besides, the accuracy of the proposed method is proved using experimental results.",
"title": ""
},
{
"docid": "839c5d8d1c78b6d303898a062d04d825",
"text": "The paper proposes the novel design of a 3T XOR gate combining complementary CMOS with pass transistor logic. The design has been compared with earlier proposed 4T and 6T XOR gates and a significant improvement in silicon area and power-delay product has been obtained. An eight transistor full adder has been designed using the proposed three-transistor XOR gate and its performance has been investigated using 0.15 m and 0.35 m technologies. Compared to the earlier designed 10 transistor full adder, the proposed adder shows a significant improvement in silicon area and power delay product. The whole simulation has been carried out using HSPICE. Keywords—XOR gate, full adder, improvement in speed, area minimization, transistor count minimization.",
"title": ""
},
{
"docid": "c5e29d6477aa183ad340448d5e3df193",
"text": "The shift to cloud technologies is a paradigm change that offers considerable financial and administrative gains. However governmental and business institutions wanting to tap into these gains are concerned with security issues. The cloud presents new vulnerabilities and is dominated by new kinds of applications, which calls for new security solutions. Intuitively, Byzantine fault tolerant (BFT) replication has many benefits to enforce integrity and availability in clouds. Existing BFT systems, however, are not suited for typical “data-flow processing” cloud applications which analyze large amounts of data in a parallelizable manner: indeed, existing BFT solutions focus on replicating single monolithic servers, whilst data-flow applications consist in several different stages, each of which may give rise to multiple components at runtime to exploit cheap hardware parallelism; similarly, BFT replication hinges on comparison of redundant outputs generated, which in the case of data-flow processing can represent huge amounts of data. In fact, current limits of data processing directly depend on the amount of data that can be processed per time unit. In this paper we present ClusterBFT, a system that secures computations being run in the cloud by leveraging BFT replication coupled with fault isolation. In short, ClusterBFT leverages a combination of variable-degree clustering, approximated and offline output comparison, smart deployment, and separation of duty, to achieve a parameterized tradeoff between fault tolerance and overhead in practice. We demonstrate the low overhead achieved with ClusterBFT when securing dataflow computations expressed in Apache Pig, and Hadoop. Our solution allows assured computation with less than 10 percent latency overhead as shown by our evaluation.",
"title": ""
},
{
"docid": "70574bc8ad9fece3328ca77f17eec90f",
"text": "Five different proposed measures of similarity or semantic distance in WordNet were experimentally compared by examining their performance in a real-word spelling correction system. It was found that Jiang and Conrath’s measure gave the best results overall. That of Hirst and St-Onge seriously over-related, that of Resnik seriously under-related, and those of Lin and of Leacock and Chodorow fell in between.",
"title": ""
},
{
"docid": "34d8b9fa5159e161ee0050900be4fa62",
"text": "Singular value decomposition (SVD), together with the expectation-maximization (EM) procedure, can be used to find a low-dimension model that maximizes the log-likelihood of observed ratings in recommendation systems. However, the computational cost of this approach is a major concern, since each iteration of the EM algorithm requires a new SVD computation. We present a novel algorithm that incorporates SVD approximation into the EM procedure to reduce the overall computational cost while maintaining accurate predictions. Furthermore, we propose a new framework for collaborating filtering in distributed recommendation systems that allows users to maintain their own rating profiles for privacy. A server periodically collects aggregate information from those users that are online to provide predictions for all users. Both theoretical analysis and experimental results show that this framework is effective and achieves almost the same prediction performance as that of centralized systems.",
"title": ""
},
{
"docid": "e4f3a9f89235ed11c4186d9c937a9620",
"text": "The human hand is an exceptionally significant part of the human body which has a very complex biological system with bones, joints, and muscles. Among all hand functions, power grasping plays a crucial role in the activities of daily living. In this research a prosthetic terminal device is designed to assist the power grasping activities of amputees subjected to wrist disarticulation. The designed terminal device contains four identical fingers made of a novel linkage mechanism, which can accomplish flexion and extension. With the intention of verifying the effectiveness of the mechanism, kinematic analysis has been carried out. Furthermore, the motion simulation has demonstrated that the mechanism is capable of generating the appropriate finger movements to accomplish cylindrical and spherical power grasps. In addition, the work envelop of the proposed prosthetic finger has been determined. The 3D printed prototype of the finger was experimentally tested. The experimental results validate the effectiveness of the proposed mechanism to gain the expected motion patterns.",
"title": ""
}
] |
scidocsrr
|
327249ce15e79809078754be1049bc4a
|
Age classification with deep learning face representation
|
[
{
"docid": "38350c305f2ba731dd5a56bc78337aec",
"text": "Recently, head pose estimation (HPE) from low-resolution surveillance data has gained in importance. However, monocular and multi-view HPE approaches still work poorly under target motion, as facial appearance distorts owing to camera perspective and scale changes when a person moves around. To this end, we propose FEGA-MTL, a novel framework based on Multi-Task Learning (MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. Upon partitioning the monitored scene into a dense uniform spatial grid, FEGA-MTL simultaneously clusters grid partitions into regions with similar facial appearance, while learning region-specific head pose classifiers. In the learning phase, guided by two graphs which a-priori model the similarity among (1) grid partitions based on camera geometry and (2) head pose classes, FEGA-MTL derives the optimal scene partitioning and associated pose classifiers. Upon determining the target's position using a person tracker at test time, the corresponding region-specific classifier is invoked for HPE. The FEGA-MTL framework naturally extends to a weakly supervised setting where the target's walking direction is employed as a proxy in lieu of head orientation. Experiments confirm that FEGA-MTL significantly outperforms competing single-task and multi-task learning methods in multi-view settings.",
"title": ""
}
] |
[
{
"docid": "b21c6ab3b97fd23f8fe1f8645608b29f",
"text": "Daily activity recognition can help people to maintain a healthy lifestyle and robot to better interact with users. Robots could therefore use the information coming from the activities performed by users to give them some custom hints to improve lifestyle and daily routine. The pervasiveness of smart things together with advances in cloud robotics can help the robot to perceive and collect more information about the users and the environment. In particular thanks to the miniaturization and low cost of Inertial Measurement Units, in the last years, body-worn activity recognition has gained popularity. In this work, we investigated the performances with an unsupervised approach to recognize eight different gestures performed in daily living wearing a system composed of two inertial sensors placed on the hand and on the wrist. In this context our aim is to evaluate whether the system is able to recognize the gestures in more realistic applications, where is not possible to have a training set. The classification problem was analyzed using two unsupervised approaches (K-Mean and Gaussian Mixture Model), with an intra-subject and an inter-subject analysis, and two supervised approaches (Support Vector Machine and Random Forest), with a 10-fold cross validation analysis and with a Leave-One-Subject-Out analysis to compare the results. The outcomes show that even in an unsupervised context the system is able to recognize the gestures with an averaged accuracy of 0.917 in the K-Mean inter-subject approach and 0.796 in the Gaussian Mixture Model inter-subject one.",
"title": ""
},
{
"docid": "8614f8d645036f1d22189cf0bdae6c7a",
"text": "We present a fully convolutional neural network for segmenting ischemic stroke lesions in CT perfusion images for the ISLES 2018 challenge. Treatment of stroke is time sensitive and current standards for lesion identification require manual segmentation, a time consuming and challenging process. Automatic segmentation methods present the possibility of accurately identifying lesions and improving treatment planning. Our model is based on the PSPNet, a network architecture that makes use of pyramid pooling to provide global and local contextual information. To learn the varying shapes of the lesions, we train our network using focal loss, a loss function designed for the network to focus on learning the more difficult samples. We compare our model to networks trained using the U-Net and V-Net architectures. Our approach demonstrates effective performance in lesion segmentation and ranked among the top performers at the challenge conclusion.",
"title": ""
},
{
"docid": "d05a179a28cab9cb47be0638ae7b525c",
"text": "Ionizing radiation effects on CMOS image sensors (CIS) manufactured using a 0.18 mum imaging technology are presented through the behavior analysis of elementary structures, such as field oxide FET, gated diodes, photodiodes and MOSFETs. Oxide characterizations appear necessary to understand ionizing dose effects on devices and then on image sensors. The main degradations observed are photodiode dark current increases (caused by a generation current enhancement), minimum size NMOSFET off-state current rises and minimum size PMOSFET radiation induced narrow channel effects. All these effects are attributed to the shallow trench isolation degradation which appears much more sensitive to ionizing radiation than inter layer dielectrics. Unusual post annealing effects are reported in these thick oxides. Finally, the consequences on sensor design are discussed thanks to an irradiated pixel array and a comparison with previous work is discussed.",
"title": ""
},
{
"docid": "2e3319cf6daead166c94345c52a8389a",
"text": "Due to their high energy density and low material cost, lithium-sulfur batteries represent a promising energy storage system for a multitude of emerging applications, ranging from stationary grid storage to mobile electric vehicles. This review aims to summarize major developments in the field of lithium-sulfur batteries, starting from an overview of their electrochemistry, technical challenges and potential solutions, along with some theoretical calculation results to advance our understanding of the material interactions involved. Next, we examine the most extensively-used design strategy: encapsulation of sulfur cathodes in carbon host materials. Other emerging host materials, such as polymeric and inorganic materials, are discussed as well. This is followed by a survey of novel battery configurations, including the use of lithium sulfide cathodes and lithium polysulfide catholytes, as well as recent burgeoning efforts in the modification of separators and protection of lithium metal anodes. Finally, we conclude with an outlook section to offer some insight on the future directions and prospects of lithium-sulfur batteries.",
"title": ""
},
{
"docid": "2a78ef9f2d3fb35e1595a6ffca20851b",
"text": "Is AI antithetical to good user interface design? From the earliest times in the development of computers, activities in human-computer interaction (HCI) and AI have been intertwined. But as subfields of computer science, HCI and AI have always had a love-hate relationship. The goal of HCI is to make computers easier to use and more helpful to their users. The goal of artificial intelligence is to model human thinking and to embody those mechanisms in computers. How are these goals related? Some in HCI have seen these goals sometimes in opposition. They worry that the heuristic nature of many AI algorithms will lead to unreliability in the interface. They worry that AI’s emphasis on mimicking human decision-making functions might usurp the decision-making prerogative of the human user. These concerns are not completely without merit. There are certainly many examples of failed attempts to prematurely foist AI on the public. These attempts gave AI a bad name, at least at the time. But so too have there been failed attempts to popularize new HCI approaches. The first commercial versions of window systems, such as the Xerox Star and early versions of Microsoft Windows, weren’t well accepted at the time of their introduction. Later design iterations of window systems, such as the Macintosh and Windows 3.0, finally achieved success. Key was that these early failures did not lead their developers to conclude window systems were a bad idea. Researchers shouldn’t construe these (perceived) AI failures as a refutation of the idea of AI in interfaces. Modern PDA, smartphone, and tablet computers are now beginning to have quite usable handwriting recognition. Voice recognition is being increasingly employed on phones, and even in the noisy environment of cars. Animated agents, more polite, less intrusive, and better thought out, might also make a",
"title": ""
},
{
"docid": "568e132003eb78311d897993004f9a38",
"text": "This study compared immediate (overnight) and progressive switching to oxcarbazepine monotherapy in patients with partial seizures unsatisfactorily treated with carbamazepine monotherapy. Patients were randomised to either an overnight (n = 140) or a progressive switch (n = 146) from carbamazepine to oxcarbazepine monotherapy at a dose ratio of 1:1.5. The difference between the two switch groups in the mean monthly seizure frequency supported the equivalence of overnight and progressive switching (difference of 0.02 excluding outliers; 95% confidence interval (CI) -0.74, 0.78). Following the switch from carbamazepine to oxcarbazepine, there was a reduction in median monthly seizure frequency in both the overnight group (from 1.5 to 0; P = 0.0005) and the progressive group (from 1.0 to 0.4; P = 0.003). The proportion of seizure-free patients increased from 38 to 51% (P = 0.002) and 39 to 49% (P = -0.01) in the overnight and progressive groups, respectively. In addition, the proportion of patients experiencing no clinically significant adverse events did not differ between the two switch methods (difference of 2.5; 95% CI -4.1, 9.0). For patients who are unsatisfactorily treated with carbamazepine monotherapy, overnight switch to oxcarbazepine monotherapy is as effective and well tolerated as a progressive switch, therefore allowing simple and flexible individualised treatment. Switching to oxcarbazepine monotherapy appears to be beneficial for patients who are unsatisfactorily treated with carbamazepine monotherapy, independently of the switch method used.",
"title": ""
},
{
"docid": "6647d87d300404bece0c90f684c580a0",
"text": "The paper presents a new control strategy to enhance the ability of reactive power support of a doubly fed induction generator (DFIG) based wind turbine during serious voltage dips. The proposed strategy is an advanced low voltage ride through (LVRT) control scheme, with which a part of the captured wind energy during grid faults is stored temporarily in the rotor's inertia energy and the remaining energy is available to the grid while the DC-link voltage and rotor current are kept below the dangerous levels. After grid fault clearance, the control strategy ensures smooth release of the rotor's excessive inertia energy into the grid. Based on these designs, the DFIG's reactive power capacity on the stator and the grid side converter is handled carefully to satisfy the new grid code requirements strictly. Simulation studies are presented and discussed.",
"title": ""
},
{
"docid": "0b41c2e8be4b9880a834b44375eb6c75",
"text": "We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models. AliMe Chat uses an attentive Seq2Seq based rerank model to optimize the joint results. Extensive experiments show our engine outperforms both IR and generation based models. We launch AliMe Chat for a real-world industrial application and observe better results than another public chatbot.",
"title": ""
},
{
"docid": "7bba98f32af32b04f8f35ac963a20d27",
"text": "The promise of affordable, automatic approaches to real-time captioning imagines a future in which deaf and hard of hearing (DHH) users have immediate access to speech in the world around them my simply picking up their phone or other mobile device. While the challenges of processing highly variable natural language has prevented automated approaches from completing this task reliably enough for use in settings such as classrooms or workplaces [4], recent work in crowd-powered approaches have allowed groups of non-expert captionists to provide a similarly-flexible source of captions for DHH users. This is in contrast to current human-powered approaches, which use highly-trained professional captionists who can type up to 250 words per minute (WPM), but also can cost over $100/hr. In this paper, we describe a real-time demo of Legion:Scribe (or just \"Scribe\"), a crowd-powered captioning system that allows untrained participants and volunteers to provide reliable captions with less than 5 seconds of latency by computationally merging their input into a single collective answer that is more accurate and more complete than any one worker could have generated alone.",
"title": ""
},
{
"docid": "4100a10b2a03f3a1ba712901cee406d2",
"text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.",
"title": ""
},
{
"docid": "4a487825a05b10d94b1837cbe1d7c171",
"text": "INTRODUCTION Time of Flight (TOF) range cameras, besides being used in industrial metrology applications, have also a potential interest in consumer application such as ambient assisted living and gaming. In these fields, the information offered by the sensor can be used to efficiently track the position of objects and people in the camera field of view, thus overcoming many of the problems, which are present when analyzing conventional intensity images. The need of lowering the overall system cost and power consumption, while increasing the sensor resolution, has triggered the exploration of more advanced CMOS technologies to make sensors suitable for these applications. However, migration to new technologies is not straightforward, since the most mature commercial 3D sensors employ dedicated CCD-CMOS technologies, which cannot be translated to new processes without any process modification. In this contribution a comparative overview of three different pixel architectures aimed at TOF 3D imaging, and implemented in the same 0.18-μm CMOS technology, is given and the main advantages and drawbacks of each solution are analyzed.",
"title": ""
},
{
"docid": "91f3268092606d2bd1698096e32c824f",
"text": "Classic pipeline models for task-oriented dialogue system require explicit modeling the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. Conversely, sequence-to-sequence models learn to map dialogue history to the response in current turn without explicit knowledge base querying. In this work, we propose a novel framework that leverages the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and use this representation to query a knowledge base via an attention mechanism. Experiment on Stanford Multi-turn Multi-domain Taskoriented Dialogue Dataset shows that our framework significantly outperforms other sequenceto-sequence based baseline models on both automatic and human evaluation. Title and Abstract in Chinese 面向任务型对话中基于对话状态表示的序列到序列学习 面向任务型对话中,传统流水线模型要求对对话状态进行显式建模。这需要人工定义对 领域相关的知识库进行检索的动作空间。相反地,序列到序列模型可以直接学习从对话 历史到当前轮回复的一个映射,但其没有显式地进行知识库的检索。在本文中,我们提 出了一个结合传统流水线与序列到序列二者优点的模型。我们的模型将对话历史建模为 一组固定大小的分布式表示。基于这组表示,我们利用注意力机制对知识库进行检索。 在斯坦福多轮多领域对话数据集上的实验证明,我们的模型在自动评价与人工评价上优 于其他基于序列到序列的模型。",
"title": ""
},
{
"docid": "7252372bdacaa69b93e52a7741c8f4c2",
"text": "This paper introduces a novel type of actuator that is investigated by ESA for force-reflection to a wearable exoskeleton. The actuator consists of a DC motor that is relocated from the joint by means of Bowden cable transmissions. The actuator shall support the development of truly ergonomic and compact wearable man-machine interfaces. Important Bowden cable transmission characteristics are discussed, which dictate a specific hardware design for such an actuator. A first prototype is shown, which was used to analyze these basic characteristics of the transmissions and to proof the overall actuation concept. A second, improved prototype is introduced, which is currently used to investigate the achievable performance as a master actuator in a master-slave control with force-feedback. Initial experimental results are presented, which show good actuator performance in a 4 channel control scheme with a slave joint. The actuator features low movement resistance in free motion and can reflect high torques during hard contact situations. High contact stability can be achieved. The actuator seems therefore well suited to be implemented into the ESA exoskeleton for space-robotic telemanipulation",
"title": ""
},
{
"docid": "877bc8fb07b60f61bcd3b98e925a7aa0",
"text": "Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68 % in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights and speed signs by using image-level labels only, as well as smooth car-following, resulting in a significant reduction of traffic accidents in simulation.",
"title": ""
},
{
"docid": "c24550119d4251d6d7ce1219b8aa0ee4",
"text": "This article considers the delivery of efficient and effective dental services for patients whose disability and/or medical condition may not be obvious and which consequently can present a hidden challenge in the dental setting. Knowing that the patient has a particular condition, what its features are and how it impacts on dental treatment and oral health, and modifying treatment accordingly can minimise the risk of complications. The taking of a careful medical history that asks the right questions in a manner that encourages disclosure is key to highlighting hidden hazards and this article offers guidance for treating those patients who have epilepsy, latex sensitivity, acquired or inherited bleeding disorders and patients taking oral or intravenous bisphosphonates.",
"title": ""
},
{
"docid": "5c5e9a93b4838cbebd1d031a6d1038c4",
"text": "Live migration of virtual machines (VMs) is key feature of virtualization that is extensively leveraged in IaaS cloud environments: it is the basic building block of several important features, such as load balancing, pro-active fault tolerance, power management, online maintenance, etc. While most live migration efforts concentrate on how to transfer the memory from source to destination during the migration process, comparatively little attention has been devoted to the transfer of storage. This problem is gaining increasing importance: due to performance reasons, virtual machines that run large-scale, data-intensive applications tend to rely on local storage, which poses a difficult challenge on live migration: it needs to handle storage transfer in addition to memory transfer. This paper proposes a memory migration independent approach that addresses this challenge. It relies on a hybrid active push / prioritized prefetch strategy, which makes it highly resilient to rapid changes of disk state exhibited by I/O intensive workloads. At the same time, it is minimally intrusive in order to ensure a maximum of portability with a wide range of hypervisors. Large scale experiments that involve multiple simultaneous migrations of both synthetic benchmarks and a real scientific application show improvements of up to 10x faster migration time, 10x less bandwidth consumption and 8x less performance degradation over state-of-art.",
"title": ""
},
{
"docid": "a7742016ea7da3d33a3a557d593ae149",
"text": "The present work overviews the application of recommender systems in various financial domains. The relevant literature is investigated based on two directions. First, a domain-based categorization is discussed focusing on those recommendation problems, where the existing literature is significant. Second, the application of various recommendation algorithms and data mining techniques is summarized. The purpose of this paper is providing a basis for further scientific research and product development in this field.",
"title": ""
},
{
"docid": "b92484f67bf2d3f71d51aee9fb7abc86",
"text": "This research addresses the kinds of matching elements that determine analogical relatedness and literal similarity. Despite theoretical agreement on the importance of relational match, the empirical evidence is neither systematic nor definitive. In 3 studies, participants performed online evaluations of relatedness of sentence pairs that varied in either the object or relational match. Results show a consistent focus on relational matches as the main determinant of analogical acceptance. In addition, analogy does not require strict overall identity of relational concepts. Semantically overlapping but nonsynonymous relations were commonly accepted, but required more processing time. Finally, performance in a similarity rating task partly paralleled analogical acceptance; however, relatively more weight was given to object matches. Implications for psychological theories of analogy and similarity are addressed.",
"title": ""
},
{
"docid": "43de53a8c215d7b3ecf6252253abe3ed",
"text": "Semantic mapping is a very active and growing research area, with important applications in indoor and outdoor robotic applications. However, most of the research on semantic mapping has focused on indoor mapping and there is a need for developing semantic mapping methodologies for large-scale outdoor scenarios. In this work, a novel semantic mapping methodology for large-scale outdoor scenes in autonomous off-road driving applications is proposed. The semantic map representation consists of a large-scale topological map built using semantic image information. Thus, the proposed representation aims to solve the large-scale outdoors semantic mapping problem by using a graph based topological map, where relevant information for autonomous driving is added using semantic information from the image description. As a proof of concept, the proposed methodology is applied to the semantic map building of a real outdoor scenario.",
"title": ""
},
{
"docid": "bae6a214381859ac955f1651c7df0c0f",
"text": "The fastcluster package is a C++ library for hierarchical, agglomerative clustering. It provides a fast implementation of the most efficient, current algorithms when the input is a dissimilarity index. Moreover, it features memory-saving routines for hierarchical clustering of vector data. It improves both asymptotic time complexity (in most cases) and practical performance (in all cases) compared to the existing implementations in standard software: several R packages, MATLAB, Mathematica, Python with SciPy. The fastcluster package presently has interfaces to R and Python. Part of the functionality is designed as a drop-in replacement for the methods hclust and flashClust in R and scipy.cluster.hierarchy.linkage in Python, so that existing programs can be effortlessly adapted for improved performance.",
"title": ""
}
] |
scidocsrr
|
937765e465ed05decbcf71da3c584d90
|
Generalized parallel CRC computation on FPGA
|
[
{
"docid": "b60555d52e5a8772ba128b184ec6de73",
"text": "Standardized 32-bit Cyclic Redundancy Codes provide fewer bits of guaranteed error detection than they could, achieving a Hamming Distance (HD) of only 4 for maximum-length Ethernet messages, whereas HD=6 is possible. Although research has revealed improved codes, exploring the entire design space has previously been computationally intractable, even for special-purpose hardware. Moreover, no CRC polynomial has yet been found that satisfies an emerging need to attain both HD=6 for 12K bit messages and HD=4 for message lengths beyond 64K bits. This paper presents results from the first exhaustive search of the 32-bit CRC design space. Results from previous research are validated and extended to include identifying all polynomials achieving a better HD than the IEEE 802.3 CRC-32 polynomial. A new class of polynomials is identified that provides HD=6 up to nearly 16K bit and HD=4 up to 114K bit message lengths, providing the best achievable design point that maximizes error detection for both legacy and new applications, including potentially iSCSI and application-implemented error checks.",
"title": ""
},
{
"docid": "0cb490aacaf237bdade71479151ab8d2",
"text": "This brief presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. A comparison on commonly used generator polynomials between the proposed design and previously proposed parallel CRC algorithms shows that the proposed design can increase the speed by up to 25% and control or even reduce hardware cost",
"title": ""
}
] |
[
{
"docid": "1d32c84e539e10f99b92b54f2f71970b",
"text": "Stories are the most natural ways for people to deal with information about the changing world. They provide an efficient schematic structure to order and relate events according to some explanation. We describe (1) a formal model for representing storylines to handle streams of news and (2) a first implementation of a system that automatically extracts the ingredients of a storyline from news articles according to the model. Our model mimics the basic notions from narratology by adding bridging relations to timelines of events in relation to a climax point. We provide a method for defining the climax score of each event and the bridging relations between them. We generate a JSON structure for any set of news articles to represent the different stories they contain and visualize these stories on a timeline with climax and bridging relations. This visualization helps inspecting the validity of the generated structures.",
"title": ""
},
{
"docid": "d157d7b6e1c5796b6d7e8fedf66e81d8",
"text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.",
"title": ""
},
{
"docid": "97c8806b425bc7448baf904ae01b16e1",
"text": "Consumers or the Customers are valuable assets for any organisation as they are the ultimate destination of any products or services. Since, they are the ultimate end users of any product or services, thus, the success of any organisation depends upon the satisfaction of the consumers, if not they will switch to other brands. Due to this reason, the satisfaction of the consumers becomes priority for any organisations. For satisfying the consumers, one has to know about what consumer buy, why they buy it, when they buy it, how and how often they buy it and what made them to switch to other brands. The present paper is an attempt to study the shampoo buying patterns among the individuals. The study also examines the various factors which influence the consumers to buy a shampoo of particular brand and reasons for their switching to other brands.",
"title": ""
},
{
"docid": "a5a7e3fe9d6eaf8fc25e7fd91b74219e",
"text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.",
"title": ""
},
{
"docid": "aba674bc0b1d66f901ece0617dee115c",
"text": "An appropriate special case of a transform developed by J. Radon in 1917 is shown to have the major properties of the Hough transform which is useful for finding line segments in digital pictures. Such an observation may be useful in further efforts to generalize the Hough transform. Techniques for applying the Radon transform to lines and pixels are developed through examples, and the appropriate generalization to arbitrary curves is discussed.",
"title": ""
},
{
"docid": "b813635e27731d5ca25597d7a5984fc0",
"text": "Glioblastoma multiforme (GBM) represents an aggressive tumor type with poor prognosis. The majority of GBM patients cannot be cured. There is high willingness among patients for the compassionate use of non-approved medications, which might occasionally lead to profound toxicity. A 65-year-old patient with glioblastoma multiforme (GBM) has been treated with radiochemotherapy including temozolomide (TMZ) after surgery. The treatment outcome was evaluated as stable disease with a tendency to slow tumor progression. In addition to standard medication (ondansetron, valproic acid, levetiracetam, lorazepam, clobazam), the patient took the antimalarial drug artesunate (ART) and a decoction of Chinese herbs (Coptis chinensis, Siegesbeckia orientalis, Artemisia scoparia, Dictamnus dasycarpus). In consequence, the clinical status deteriorated. Elevated liver enzymes were noted with peak values of 238 U/L (GPT/ALAT), 226 U/L (GOT/ASAT), and 347 U/L (γ-GT), respectively. After cessation of ART and Chinese herbs, the values returned back to normal and the patient felt well again. In the literature, hepatotoxicity is well documented for TMZ, but is very rare for ART. Among the Chinese herbs used, Dictamnus dasycarpus has been reported to induce liver injury. Additional medication included valproic acid and levetiracetam, which are also reported to exert hepatotoxicity. While all drugs alone may bear a minor risk for hepatotoxicity, the combination treatment might have caused increased liver enzyme activities. It can be speculated that the combination of these drugs caused liver injury. We conclude that the compassionate use of ART and Chinese herbs is not recommended during standard radiochemotherapy with TMZ for GBM.",
"title": ""
},
{
"docid": "945553f360d7f569f15d249dbc5fa8cd",
"text": "One of the main issues in service collaborations among business partners is the possible lack of trust among them. A promising approach to cope with this issue is leveraging on blockchain technology by encoding with smart contracts the business process workflow. This brings the benefits of trust decentralization, transparency, and accountability of the service composition process. However, data in the blockchain are public, implying thus serious consequences on confidentiality and privacy. Moreover, smart contracts can access data outside the blockchain only through Oracles, which might pose new confidentiality risks if no assumptions are made on their trustworthiness. For these reasons, in this paper, we are interested in investigating how to ensure data confidentiality during business process execution on blockchain even in the presence of an untrusted Oracle.",
"title": ""
},
{
"docid": "1d3192e66e042e67dabeae96ca345def",
"text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.",
"title": ""
},
{
"docid": "a8164a657a247761147c6012fd5442c9",
"text": "Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a meaningful order that facilitates learning. The order of the samples is determined by how easy they are. The main challenge is that typically we are not provided with a readily computable measure of the easiness of samples. We address this issue by proposing a novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector. The number of samples selected is governed by a weight that is annealed until the entire training data has been considered. We empirically demonstrate that the self-paced learning algorithm outperforms the state of the art method for learning a latent structural SVM on four applications: object localization, noun phrase coreference, motif finding and handwritten digit recognition.",
"title": ""
},
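The self-paced learning abstract above admits a compact sketch: alternate between selecting "easy" samples (loss below 1/K) and refitting, while annealing K until all data is used. The ridge-regression model is an illustrative stand-in for the latent structural SVM in the paper.

```python
# Minimal self-paced learning sketch: pick easy samples, refit, anneal K.
import numpy as np
from sklearn.linear_model import Ridge

def self_paced_fit(X, y, K=1.0, anneal=0.5, iters=10):
    model = Ridge(alpha=1.0).fit(X, y)               # warm start on everything
    for _ in range(iters):
        losses = (model.predict(X) - y) ** 2
        easy = losses < 1.0 / K                       # v_i = 1 iff loss_i < 1/K
        if easy.sum() == 0:
            easy = losses <= losses.min()             # keep at least one sample
        model = Ridge(alpha=1.0).fit(X[easy], y[easy])
        K *= anneal                                   # smaller K => looser threshold, more data
    return model
```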
{
"docid": "1a0a299c53924e08eb767512de230f44",
"text": "Binary code reutilization is the process of automatically identifying the interface and extracting the instructions and data dependencies of a code fragment from an executable program, so that it is selfcontained and can be reused by external code. Binary code reutilization is useful for a number of security applications, including reusing the proprietary cryptographic or unpacking functions from a malware sample and for rewriting a network dialog. In this paper we conduct the first systematic study of automated binary code reutilization and its security applications. The main challenge in binary code reutilization is understanding the code fragment’s interface. We propose a novel technique to identify the prototype of an undocumented code fragment directly from the program’s binary, without access to source code or symbol information. Further, we must also extract the code itself from the binary so that it is self-contained and can be easily reused in another program. We design and implement a tool that uses a combination of dynamic and static analysis to automatically identify the prototype and extract the instructions of an assembly function into a form that can be reused by other C code. The extracted function can be run independently of the rest of the program’s functionality and shared with other users. We apply our approach to scenarios that include extracting the encryption and decryption routines from malware samples, and show that these routines can be reused by a network proxy to decrypt encrypted traffic on the network. This allows the network proxy to rewrite the malware’s encrypted traffic by combining the extracted encryption and decryption functions with the session keys and the protocol grammar. We also show that we can reuse a code fragment from an unpacking function for the unpacking routine for a different sample of the same family, even if the code fragment is not a complete function.",
"title": ""
},
{
"docid": "6e690c5aa54b28ba23d9ac63db4cc73a",
"text": "The Topic Detection and Tracking (TDT) evaluation program has included a \"cluster detection\" task since its inception in 1996. Systems were required to process a stream of broadcast news stories and partition them into non-overlapping clusters. A system's effectiveness was measured by comparing the generated clusters to \"truth\" clusters created by human annotators. Starting in 2003, TDT is moving to a more realistic model that permits overlapping clusters (stories may be on more than one topic) and encourages the creation of a hierarchy to structure the relationships between clusters (topics). We explore a range of possible evaluation models for this modified TDT clustering task to understand the best approach for mapping between the human-generated \"truth\" clusters and a much richer hierarchical structure. We demonstrate that some obvious evaluation techniques fail for degenerate cases. For a few others we attempt to develop an intuitive sense of what the evaluation numbers mean. We settle on some approaches that incorporate a strong balance between cluster errors (misses and false alarms) and the distance it takes to travel between stories within the hierarchy.",
"title": ""
},
{
"docid": "5b41a7c287b54b16e9d791cb62d7aa5a",
"text": "Recent evidence demonstrates that children are selective in their social learning, preferring to learn from a previously accurate speaker than from a previously inaccurate one. We examined whether children assessing speakers' reliability take into account how speakers achieved their prior accuracy. In Study 1, when faced with two accurate informants, 4- and 5-year-olds (but not 3-year-olds) were more likely to seek novel information from an informant who had previously given the answers unaided than from an informant who had always relied on help from a third party. Similarly, in Study 2, 4-year-olds were more likely to trust the testimony of an unaided informant over the testimony provided by an assisted informant. Our results indicate that when children reach around 4 years of age, their selective trust extends beyond simple generalizations based on informants' past accuracy to a more sophisticated selectivity that distinguishes between truly knowledgeable informants and merely accurate informants who may not be reliable in the long term.",
"title": ""
},
{
"docid": "f44ad33cfe612c99d5b9ac52e3bb4c70",
"text": "Kongetira, Poonacha. MSEE., Purdue University, August 1994. Modelling of Selective Epitaxial Growth(SEG) and Epitaxial Lateral Overgrowth( ELO) of Silicon in SiH2C12-HC1-H2 system. Major Professor: Gerold W. Neudeck. A semi-empirical model for the growth rate of selective epitaxial silicon(SEG) in the Dichlorosilane-HC1-Hz system that represents the experimenltal data has been presented. All epitaxy runs were done using a Gemini-I LPCVD pancake reactor. Dichlorosilane was used as the source gas and hydrogen as the carrier gas. Hydrogen Cllloride(HC1) was used to ensure that no nucleation took place on the oxide. The growth rate expression was considered to be the sum of a growth term dependent on the partial pressures of Dichlorosilane and hydrogen, and an etch berm that varies as the partial pressure of HC1. The growth and etch terms were found to have an Arrhenius relation with temperature, with activation energies of 52kcal/mol and 36kcal/mol respectively. Good agreement was obtained with experimental data. The variation of the selectivity threshold was correctly predicted, which had been a problem with earlier models for SEG growth rates. SEG/ELO Silicon was grown from 920-970°C at 40 and 150 torr pressures for a variety of HCI concentrations. In addition previous data collected by our research group at 820-1020°C and 40-150torr were used in the model.",
"title": ""
},
{
"docid": "560a19017dcc240d48bb879c3165b3e1",
"text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
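The EKF-based cell-modeling abstract above can be illustrated with a toy state-of-charge filter for a first-order RC equivalent circuit. All numeric parameters and the linear OCV curve below are placeholders, not the identified model from the paper.

```python
# Minimal EKF sketch for state-of-charge (SOC) estimation with a 1-RC cell model.
# State x = [soc, v_rc]; measurement = terminal voltage. Capacity, resistances,
# and the OCV curve are illustrative placeholders.
import numpy as np

Q_Ah, R0, R1, C1, dt = 2.0, 0.01, 0.015, 2400.0, 1.0
def ocv(soc):       return 3.2 + 0.9 * soc   # placeholder open-circuit-voltage curve
def docv_dsoc(soc): return 0.9

A = np.array([[1.0, 0.0],
              [0.0, np.exp(-dt / (R1 * C1))]])
B = np.array([-dt / (3600.0 * Q_Ah),
              R1 * (1.0 - np.exp(-dt / (R1 * C1)))])

def ekf_step(x, P, i_k, v_meas, Qn=np.diag([1e-7, 1e-5]), Rn=1e-3):
    # Predict with the linear state model (discharge current i_k > 0 lowers SOC)
    x = A @ x + B * i_k
    P = A @ P @ A.T + Qn
    # Update with the linearized measurement v = ocv(soc) - v_rc - R0 * i
    H = np.array([docv_dsoc(x[0]), -1.0])
    v_pred = ocv(x[0]) - x[1] - R0 * i_k
    K = P @ H / (H @ P @ H + Rn)
    x = x + K * (v_meas - v_pred)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P
```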
{
"docid": "fea34b4a4b0b2dcdacdc57dce66f31de",
"text": "Deep neural networks have become the state-ofart methods in many fields of machine learning recently. Still, there is no easy way how to choose a network architecture which can significantly influence the network performance. This work is a step towards an automatic architecture design. We propose an algorithm for an optimization of a network architecture based on evolution strategies. The al gorithm is inspired by and designed directly for the Keras library [3] which is one of the most common implementations of deep neural networks. The proposed algorithm is tested on MNIST data set and the prediction of air pollution based on sensor measurements, and it is compared to several fixed architectures and support vector regression.",
"title": ""
},
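A minimal sketch of the evolution-strategy idea from the abstract above: mutate a list of hidden-layer widths and keep the fittest candidate. The fitness callable is assumed to train and score a network (for example, a Keras model); the mutation operators here are illustrative, not the paper's.

```python
# Minimal (1 + lambda) evolution strategy over hidden-layer sizes.
import random

def mutate(layers):
    layers = list(layers)
    op = random.choice(["grow", "shrink", "resize"])
    if op == "grow" and len(layers) < 5:
        layers.insert(random.randrange(len(layers) + 1), random.choice([16, 32, 64]))
    elif op == "shrink" and len(layers) > 1:
        layers.pop(random.randrange(len(layers)))
    else:
        i = random.randrange(len(layers))
        layers[i] = max(4, layers[i] + random.choice([-16, -8, 8, 16]))
    return layers

def evolve(fitness, parent=(32, 32), generations=20, offspring=4):
    """fitness(layers) should train a model with those layer sizes and return validation accuracy."""
    parent = list(parent)
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(offspring):
            child = mutate(parent)
            f = fitness(child)
            if f > best:
                parent, best = child, f
    return parent, best
```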
{
"docid": "2dee247b24afc7ddba44b312c0832bc1",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For an effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step toward this end by characterizing the operational performance of a tier-1 cellular network in the U.S. during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 s shorter radio resource control timeouts as compared with routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events, and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "959a8602cb7292a7daf341d2b7614492",
"text": "This paper presents a calibration method for eye-in-hand systems in order to estimate the hand-eye and the robot-world transformations. The estimation takes place in terms of a parametrization of a stochastic model. In order to perform optimally, a metric on the group of the rigid transformations SE(3) and the corresponding error model are proposed for nonlinear optimization. This novel metric works well with both common formulations AX=XB and AX=ZB, and makes use of them in accordance with the nature of the problem. The metric also adapts itself to the system precision characteristics. The method is compared in performance to earlier approaches",
"title": ""
},
{
"docid": "a238ba310374a78d9c0e09bee5aaf123",
"text": "Automatically constructed knowledge bases (KB’s) are a powerful asset for search, analytics, recommendations and data integration, with intensive use at big industrial stakeholders. Examples are the knowledge graphs for search engines (e.g., Google, Bing, Baidu) and social networks (e.g., Facebook), as well as domain-specific KB’s (e.g., Bloomberg, Walmart). These achievements are rooted in academic research and community projects. The largest general-purpose KB’s with publicly accessible contents are BabelNet, DBpedia, Wikidata, and Yago. They contain millions of entities, organized in hundreds to hundred thousands of semantic classes, and billions of relational facts on entities. These and other knowledge and data resources are interlinked at the entity level, forming the Web of Linked Open Data.",
"title": ""
},
{
"docid": "8ee5a9dde6f919637618787f6ffcc777",
"text": "Microbial infection initiates complex interactions between the pathogen and the host. Pathogens express several signature molecules, known as pathogen-associated molecular patterns (PAMPs), which are essential for survival and pathogenicity. PAMPs are sensed by evolutionarily conserved, germline-encoded host sensors known as pathogen recognition receptors (PRRs). Recognition of PAMPs by PRRs rapidly triggers an array of anti-microbial immune responses through the induction of various inflammatory cytokines, chemokines and type I interferons. These responses also initiate the development of pathogen-specific, long-lasting adaptive immunity through B and T lymphocytes. Several families of PRRs, including Toll-like receptors (TLRs), RIG-I-like receptors (RLRs), NOD-like receptors (NLRs), and DNA receptors (cytosolic sensors for DNA), are known to play a crucial role in host defense. In this review, we comprehensively review the recent progress in the field of PAMP recognition by PRRs and the signaling pathways activated by PRRs.",
"title": ""
}
] |
scidocsrr
|
9174af16f0d2360cf68fcc8308434213
|
Text normalization in mandarin text-to-speech system
|
[
{
"docid": "275a5302219385f22706b483ecc77a74",
"text": "This paper describes a bilingual text-to-speech (TTS) system, Microsoft Mulan, which switches between Mandarin and English smoothly and which maintains the sentence level intonation even for mixed-lingual texts. Mulan is constructed on the basis of the Soft Prediction Only prosodic strategy and the Prosodic-Constraint Orient unit-selection strategy. The unitselection module of Mulan is shared across languages. It is insensitive to language identity, even though the syllable is used as the smallest unit in Mandarin, and the phoneme in English. Mulan has a unique module, the language-dispatching module, which dispatches texts to the language-specific front-ends and merges the outputs of the two front-ends together. The mixed texts are “uttered” out with the same voice. According to our informal listening test, the speech synthesized with Mulan sounds quite natural. Sample waves can be heard at: http://research.microsoft.com/~echang/projects/tts/mulan.htm.",
"title": ""
},
{
"docid": "758e19c8e39ad9e85d17d1ab67c9ef14",
"text": "In addition to ordinary words and names, real text contains non-standard “words” (NSWs), including numbers, abbreviations, dates, currency amounts and acronyms. Typically, one cannot find NSWs in a dictionary, nor can one find their pronunciation by an application of ordinary “letter-to-sound” rules. Non-standard words also have a greater propensity than ordinary words to be ambiguous with respect to their interpretation or pronunciation. In many applications, it is desirable to “normalize” text by replacing the NSWs with the contextually appropriate ordinary word or sequence of words. Typical technology for text normalization involves sets of ad hoc rules tuned to handle one or two genres of text (often newspaper-style text) with the expected result that the techniques do not usually generalize well to new domains. The purpose of the work reported here is to take some initial steps towards addressing deficiencies in previous approaches to text normalization. We developed a taxonomy of NSWs on the basis of four rather distinct text types—news text, a recipes newsgroup, a hardware-product-specific newsgroup, and real-estate classified ads. We then investigated the application of several general techniques including n-gram language models, decision trees and weighted finite-state transducers to the range of NSW types, and demonstrated that a systematic treatment can lead to better results than have been obtained by the ad hoc treatments that have typically been used in the past. For abbreviation expansion in particular, we investigated both supervised and unsupervised approaches. We report results in terms of word-error rate, which is standard in speech recognition evaluations, but which has only occasionally been used as an overall measure in evaluating text normalization systems. c © 2001 Academic Press Author for correspondence: AT&T Labs–Research, Shannon Laboratory, Room B207, 180 Park Avenue, PO Box 971, Florham Park, NJ 07932-0000, U.S.A. E-mail: [email protected] 0885–2308/01/030287 + 47 $35.00/0 c © 2001 Academic Press",
"title": ""
}
] |
[
{
"docid": "1274e55cc173f64fcc9a191d859c2e41",
"text": "We present an O*(n3) randomized algorithm for estimating the volume of a well-rounded convex body given by a membership oracle, improving on the previous best complexity of O*(n4). The new algorithmic ingredient is an accelerated cooling schedule where the rate of cooling increases with the temperature. Previously, the known approach for potentially achieving such complexity relied on a positive resolution of the KLS hyperplane conjecture, a central open problem in convex geometry.",
"title": ""
},
{
"docid": "54d993c0765c334e4213e9f435675ed1",
"text": "This article identifies four factors for consideration in norms-based research to enhance the predictive ability of theoretical models. First, it makes the distinction between perceived and collective norms and between descriptive and injunctive norms. Second, the article addresses the role of important moderators in the relationship between descriptive norms and behaviors, including outcome expectations, group identity, and ego involvement. Third, it discusses the role of both interpersonal and mass communication in normative influences. Lastly, it outlines behavioral attributes that determine susceptibility to normative influences, including behavioral ambiguity and the public or private nature of the behavior.",
"title": ""
},
{
"docid": "90a1fc43ee44634bce3658463503994e",
"text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",
"title": ""
},
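The Deep Gradient Compression abstract above centers on sending only the largest gradient entries while accumulating the rest locally. The sketch below shows just that sparsification step; momentum correction, gradient clipping, momentum factor masking, and warm-up training from the paper are omitted.

```python
# Minimal top-k gradient sparsification with local residual accumulation.
import numpy as np

class SparseGradient:
    def __init__(self, shape, keep_ratio=0.001):
        self.residual = np.zeros(shape)
        self.keep_ratio = keep_ratio

    def compress(self, grad):
        acc = self.residual + grad                        # accumulate locally
        k = max(1, int(self.keep_ratio * acc.size))
        thresh = np.partition(np.abs(acc).ravel(), -k)[-k]
        mask = np.abs(acc) >= thresh
        sparse = np.where(mask, acc, 0.0)                 # send only the top-k entries
        self.residual = np.where(mask, 0.0, acc)          # keep the rest for later rounds
        return sparse
```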
{
"docid": "2515a3ace56b101d03f8c9fed515b7d3",
"text": "Characteristics of knowledge, people engaged in knowledge transfer, and knowledge stickiness: evidence from Chinese R & D team Huang Huan, Ma Yongyuan, Zhang Sheng, Dou Qinchao, Article information: To cite this document: Huang Huan, Ma Yongyuan, Zhang Sheng, Dou Qinchao, \"Characteristics of knowledge, people engaged in knowledge transfer, and knowledge stickiness: evidence from Chinese R & D team\", Journal of Knowledge Management, https:// doi.org/10.1108/JKM-02-2017-0054 Permanent link to this document: https://doi.org/10.1108/JKM-02-2017-0054",
"title": ""
},
{
"docid": "0c9a76222f885b95f965211e555e16cd",
"text": "In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in.",
"title": ""
},
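The abstract above builds on SGLD, whose core update is a stochastic-gradient step plus Gaussian noise scaled to the step size. A bare-bones version is sketched below; the preconditioned, Fisher-scoring-like extension the paper proposes is not shown. grad_log_prior and grad_log_lik are assumed user-supplied callables.

```python
# Minimal SGLD update on a mini-batch of size n drawn from N data items.
import numpy as np

def sgld_step(theta, batch, grad_log_prior, grad_log_lik, N, eps, rng=np.random):
    n = len(batch)
    grad = grad_log_prior(theta) + (N / n) * sum(grad_log_lik(theta, x) for x in batch)
    noise = rng.normal(0.0, np.sqrt(eps), size=theta.shape)   # injected Gaussian noise
    return theta + 0.5 * eps * grad + noise
```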
{
"docid": "5e6c24f5f3a2a3c3b0aff67e747757cb",
"text": "Traps have been used extensively to provide early warning of hidden pest infestations. To date, however, there is only one type of trap on the market in the U.K. for storage mites, namely the BT mite trap, or monitor. Laboratory studies have shown that under the test conditions (20 °C, 65% RH) the BT trap is effective at detecting mites for at least 10 days for all three species tested: Lepidoglyphus destructor, Tyrophagus longior and Acarus siro. Further tests showed that all three species reached a trap at a distance of approximately 80 cm in a 24 h period. In experiments using 100 mites of each species, and regardless of either temperature (15 or 20 °C) or relative humidity (65 or 80% RH), the most abundant species in the traps was T. longior, followed by A. siro then L. destructor. Trap catches were highest at 20 °C and 65% RH. Temperature had a greater effect on mite numbers than humidity. Tests using different densities of each mite species showed that the number of L. destructor found in/on the trap was significantly reduced when either of the other two species was dominant. It would appear that there is an interaction between L. destructor and the other two mite species which affects relative numbers found within the trap.",
"title": ""
},
{
"docid": "2ab6bc212e45c3d5775e760e5a01c0ef",
"text": "The face recognition systems are used to recognize the person by using merely a person’s image. The face detection scheme is the primary method which is used to extract the region of interest (ROI). The ROI is further processed under the face recognition scheme. In the proposed model, we are going to use the cross-correlation algorithm along with the viola jones for the purpose of face recognition to recognize the person. The proposed model is proposed using the Cross-correlation algorithm along with cross correlation scheme in order to recognize the person by evaluating the facial features.",
"title": ""
},
{
"docid": "a333e0e08d7c5b52e08c2e88bdeb1cd1",
"text": "Money laundering (ML) involves moving illicit funds, which may be linked to drug trafficking or organized crime, through a series of transactions or accounts to disguise origin or ownership. China is facing severe challenge on money laundering with an estimated 200 billion RMB laundered annually. Decision tree method is used in this paper to create the determination rules of the money laundering risk by customer profiles of a commercial bank in China. A sample of twenty-eight customers with four attributes is used to induced and validate a decision tree method. The result indicates the effectiveness of decision tree in generating AML rules from companies' customer profiles. The anti-money laundering system in small and middle commerical bank in China is highly needed.",
"title": ""
},
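The AML abstract above uses a decision tree to derive risk rules from customer profiles. A minimal illustration with scikit-learn follows; the four features and the toy records are invented for the example, not the bank data from the paper.

```python
# Induce screening rules from toy customer profiles and print them as if/then rules.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [  # [account_age_months, monthly_turnover_k_rmb, cross_border_ratio, cash_ratio]
    [36, 120, 0.05, 0.10],
    [2, 950, 0.70, 0.65],
    [60, 80, 0.02, 0.05],
    [1, 400, 0.55, 0.80],
]
y = [0, 1, 0, 1]  # 1 = flagged as high money-laundering risk

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "account_age_months", "monthly_turnover_k_rmb",
    "cross_border_ratio", "cash_ratio"]))
```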
{
"docid": "c3ee2beee84cd32e543c4b634062eeac",
"text": "In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "421ab26a36eb4f9d97dfb323e394fa38",
"text": "Dual-system approaches to psychology explain the fundamental properties of human judgment, decision making, and behavior across diverse domains. Yet, the appropriate characterization of each system is a source of debate. For instance, a large body of research on moral psychology makes use of the contrast between \"emotional\" and \"rational/cognitive\" processes, yet even the chief proponents of this division recognize its shortcomings. Largely independently, research in the computational neurosciences has identified a broad division between two algorithms for learning and choice derived from formal models of reinforcement learning. One assigns value to actions intrinsically based on past experience, while another derives representations of value from an internally represented causal model of the world. This division between action- and outcome-based value representation provides an ideal framework for a dual-system theory in the moral domain.",
"title": ""
},
{
"docid": "fa7682dc85d868e57527fdb3124b309c",
"text": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"title": ""
},
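A small sketch of the Analysis and Mitigation phases described above: a Wilcoxon signed-rank test on paired before/after ratings, and polynomial fits of before-to-after ratings selected by BIC. The array shapes and the single-predictor regression are simplifying assumptions, not the paper's exact formulation.

```python
# Test for Social Influence Bias, then fit a BIC-selected polynomial corrector.
import numpy as np
from scipy.stats import wilcoxon

def test_sib(before, after):
    """A significant systematic shift after seeing the average suggests SIB."""
    stat, p = wilcoxon(before, after)
    return p

def fit_by_bic(before, after, max_degree=4):
    best = None
    n = len(before)
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(before, after, d)
        resid = np.asarray(after) - np.polyval(coeffs, before)
        rss = float(np.sum(resid ** 2))
        bic = n * np.log(rss / n) + (d + 1) * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, coeffs)
    return best[1]   # coefficients of the BIC-preferred polynomial
```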
{
"docid": "a7f046dcc5e15ccfbe748fa2af400c98",
"text": "INTRODUCTION\nSmoking and alcohol use (beyond social norms) by health sciences students are behaviors contradictory to the social function they will perform as health promoters in their eventual professions.\n\n\nOBJECTIVES\nIdentify prevalence of tobacco and alcohol use in health sciences students in Mexico and Cuba, in order to support educational interventions to promote healthy lifestyles and development of professional competencies to help reduce the harmful impact of these legal drugs in both countries.\n\n\nMETHODS\nA descriptive cross-sectional study was conducted using quantitative and qualitative techniques. Data were collected from health sciences students on a voluntary basis in both countries using the same anonymous self-administered questionnaire, followed by an in-depth interview.\n\n\nRESULTS\nPrevalence of tobacco use was 56.4% among Mexican students and 37% among Cuban. It was higher among men in both cases, but substantial levels were observed in women as well. The majority of both groups were regularly exposed to environmental tobacco smoke. Prevalence of alcohol use was 76.9% in Mexican students, among whom 44.4% were classified as at-risk users. Prevalence of alcohol use in Cuban students was 74.1%, with 3.7% classified as at risk.\n\n\nCONCLUSIONS\nThe high prevalence of tobacco and alcohol use in these health sciences students is cause for concern, with consequences not only for their individual health, but also for their professional effectiveness in helping reduce these drugs' impact in both countries.",
"title": ""
},
{
"docid": "f7252ab3871dfae3860f575515867db6",
"text": "This review paper deals with IoT that can be used to improve cultivation of food crops, as lots of research work is going on to monitor the effective food crop cycle, since from the start to till harvesting the famers are facing very difficult for better yielding of food crops. Although few initiatives have also been taken by the Indian Government for providing online and mobile messaging services to farmers related to agricultural queries and agro vendor’s information to farmers even such information’s are not enough for farmer so still lot of research work need to be carried out on current agricultural approaches so that continuous sensing and monitoring of crops by convergence of sensors with IoT and making farmers to aware about crops growth, harvest time periodically and in turn making high productivity of crops and also ensuring correct delivery of products to end consumers at right place and right time.",
"title": ""
},
{
"docid": "cf2e54d22fbf261a51a226f7f5adc4f5",
"text": "We propose a new fast, robust and controllable method to simulate the dynamic destruction of large and complex objects in real time. The common method for fracture simulation in computer games is to pre-fracture models and replace objects by their pre-computed parts at run-time. This popular method is computationally cheap but has the disadvantages that the fracture pattern does not align with the impact location and that the number of hierarchical fracture levels is fixed. Our method allows dynamic fracturing of large objects into an unlimited number of pieces fast enough to be used in computer games. We represent visual meshes by volumetric approximate convex decompositions (VACD) and apply user-defined fracture patterns dependent on the impact location. The method supports partial fracturing meaning that fracture patterns can be applied locally at multiple locations of an object. We propose new methods for computing a VACD, for approximate convex hull construction and for detecting islands in the convex decomposition after partial destruction in order to determine support structures.",
"title": ""
},
{
"docid": "9a0707d2ccf6ede92960f3162f8ef5d6",
"text": "Using data from magnetic resonance imaging (MRI), autopsy, endocranial measurements, and other techniques, we show that (1) brain size is correlated with cognitive ability about .44 using MRI; (2) brain size varies by age, sex, social class, and race; and (3) cognitive ability varies by age, sex, social class, and race. Brain size and cognitive ability show a curvilinear relation with age, increasing to young adulthood and then decreasing; increasing from women to men; increasing with socioeconomic status; and increasing from Africans to Europeans to Asians. Although only further research can determine if such correlations represent cause and effect, it is clear that the direction of the brain-size/cognitive-ability relationships described by Paul Broca (1824-1880), Francis Galton (1822-1911), and other nineteenth-century visionaries is true, and that the null hypothesis of no relation, strongly advocated over the last half century, is false.",
"title": ""
},
{
"docid": "a2514f994292481d0fe6b37afe619cb5",
"text": "The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking. 1. History of Information Hiding The idea of communicating secretly is as old as communication itself. In this section, we briefly discuss the historical development of information hiding techniques such as steganography/ watermarking. Early steganography was messy. Before phones, before mail, before horses, messages were sent on foot. If you wanted to hide a message, you had two choices: have the messenger memorize it, or hide it on the messenger. While information hiding techniques have received a tremendous attention recently, its application goes back to Greek times. According to Greek historian Herodotus, the famous Greek tyrant Histiaeus, while in prison, used unusual method to send message to his son-in-law. He shaved the head of a slave to tattoo a message on his scalp. Histiaeus then waited until the hair grew back on slave’s head prior to sending him off to his son-inlaw. The second story also came from Herodotus, which claims that a soldier named Demeratus needed to send a message to Sparta that Xerxes intended to invade Greece. Back then, the writing medium was written on wax-covered tablet. Demeratus removed the wax from the tablet, wrote the secret message on the underlying wood, recovered the tablet with wax to make it appear as a blank tablet and finally sent the document without being detected. Invisible inks have always been a popular method of steganography. Ancient Romans used to write between lines using invisible inks based on readily available substances such as fruit juices, urine and milk. When heated, the invisible inks would darken, and become legible. Ovid in his “Art of Love” suggests using milk to write invisibly. Later chemically affected sympathetic inks were developed. Invisible inks were used as recently as World War II. Modern invisible inks fluoresce under ultraviolet light and are used as anti-counterfeit devices. For example, \"VOID\" is printed on checks and other official documents in an ink that appears under the strong ultraviolet light used for photocopies. The monk Johannes Trithemius, considered one of the founders of modern cryptography, had ingenuity in spades. His three volume work Steganographia, written around 1500, describes an extensive system for concealing secret messages within innocuous texts. On its surface, the book seems to be a magical text, and the initial reaction in the 16th century was so strong that Steganographia was only circulated privately until publication in 1606. But less than five years ago, Jim Reeds of AT&T Labs deciphered mysterious codes in the third volume, showing that Trithemius' work is more a treatise on cryptology than demonology. Reeds' fascinating account of the code breaking process is quite readable. One of Trithemius' schemes was to conceal messages in long invocations of the names of angels, with the secret message appearing as a pattern of letters within the words. For example, as every other letter in every other word: padiel aporsy mesarpon omeuas peludyn malpreaxo which reveals \"prymus apex.\" Another clever invention in Steganographia was the \"Ave Maria\" cipher. 
The book contains a series of tables, each of which has a list of words, one per letter. To code a message, the message letters are replaced by the corresponding words. If the tables are used in order, one table per letter, then the coded message will appear to be an innocent prayer. The earliest actual book on steganography was a four hundred page work written by Gaspari Schott in 1665 and called Steganographica. Although most of the ideas came from Trithemius, it was a start. Further development in the field occurred in 1883, with the publication of Auguste Kerckhoffs' Cryptographie militaire. Although this work was mostly about cryptography, it describes some principles that are worth keeping in mind when designing a new steganographic system.",
"title": ""
},
{
"docid": "72e6d897e8852fca481d39237cf04e36",
"text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.",
"title": ""
},
{
"docid": "718433393201b5521a003df6503fe18b",
"text": "The issue of potential data misuse rises whenever it is collected from several sources. In a common setting, a large database is either horizontally or vertically partitioned between multiple entities who want to find global trends from the data. Such tasks can be solved with secure multi-party computation (MPC) techniques. However, practitioners tend to consider such solutions inefficient. Furthermore, there are no established tools for applying secure multi-party computation in real-world applications. In this paper, we describe Sharemind—a toolkit, which allows data mining specialist with no cryptographic expertise to develop data mining algorithms with good security guarantees. We list the building blocks needed to deploy a privacy-preserving data mining application and explain the design decisions that make Sharemind applications efficient in practice. To validate the practical feasibility of our approach, we implemented and benchmarked four algorithms for frequent itemset mining.",
"title": ""
},
{
"docid": "3669d58dc1bed1d83e5d0d6747771f0e",
"text": "To cite: He A, Kwatra SG, Kim N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016214761 DESCRIPTION A 26-year-old woman with a reported history of tinea versicolour presented for persistent hypopigmentation on her bilateral forearms. Detailed examination revealed multiple small (5–10 mm), irregularly shaped white macules on the extensor surfaces of the bilateral forearms overlying slightly erythaematous skin. The surrounding erythaematous skin blanched with pressure and with elevation of the upper extremities the white macules were no longer visible (figures 1 and 2). A clinical diagnosis of Bier spots was made based on the patient’s characteristic clinical features. Bier spots are completely asymptomatic and are often found on the extensor surfaces of the upper and lower extremities, although they are sometimes generalised. They are a benign physiological vascular anomaly, arising either from cutaneous vessels responding to venous hypertension or from small vessel vasoconstriction leading to tissue hypoxia. 3 Our patient had neither personal nor family history of vascular disease. Bier spots are easily diagnosed by a classic sign on physical examination: the pale macules disappear with pressure applied on the surrounding skin or by elevating the affected limbs (figure 2). However, Bier spots can be easily confused with a variety of other disorders associated with hypopigmented macules. The differential diagnosis includes vitiligo, postinflammatory hypopigmentation and tinea versicolour, which was a prior diagnosis in this case. Bier spots are often idiopathic and regress spontaneously, although there are reports of Bier spots heralding systemic diseases, such as scleroderma renal crisis, mixed cryoglobulinaemia or lymphoma. Since most Bier spots are idiopathic and transient, no treatment is required.",
"title": ""
},
{
"docid": "968c0de61cbd45e04155ecfc6eaf6891",
"text": "An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-ofthe-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model’s learned saliency and entailment skills.",
"title": ""
}
] |
scidocsrr
|
b28ae287b0d0572600fe03c9cca7c696
|
Vein Detection System using Infrared Light
|
[
{
"docid": "1f9940ff3e31267cfeb62b2a7915aba9",
"text": "Infrared vein detection is one of the newest biomedical techniques researched today. Basic principal behind this is, when IR light transmitted on palm it passes through tissue and veins absorbs that light and the vein appears darker than surrounding tissue. This paper presents vein detection system using strong IR light source, webcam, Matlab based image processing algorithm. Using the Strong IR light source consisting of high intensity led and webcam camera we captured transillumination image of palm. Image processing algorithm is used to separate out the veins from palm.",
"title": ""
}
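To make the processing chain in the passage above concrete, here is one plausible vein-extraction step on a transillumination palm image: contrast enhancement with CLAHE followed by adaptive thresholding. OpenCV is an assumed stand-in for the Matlab pipeline, and the parameter values are illustrative.

```python
# Minimal vein-extraction sketch: denoise, enhance contrast, adaptively threshold.
import cv2

def extract_veins(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)                          # suppress sensor noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    veins = cv2.adaptiveThreshold(enhanced, 255,
                                  cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY_INV, 25, 7)  # dark veins -> white mask
    return veins
```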
] |
[
{
"docid": "d60fa38df9a4e5692d5c36ea3aa88772",
"text": "Many human activities require precise judgments about the physical properties and dynamics of multiple objects. Classic work suggests that people’s intuitive models of physics are relatively poor and error-prone, based on highly simplified heuristics that apply only in special cases or incorrect general principles (e.g., impetus instead of momentum). These conclusions seem at odds with the breadth and sophistication of naive physical reasoning in real-world situations. Our work measures the boundaries of people’s physical reasoning and tests the richness of intuitive physics knowledge in more complex scenes. We asked participants to make quantitative judgments about stability and other physical properties of virtual 3D towers. We found their judgments correlated highly with a model observer that uses simulations based on realistic physical dynamics and sampling-based approximate probabilistic inference to efficiently and accurately estimate these properties. Several alternative heuristic accounts provide substantially worse fits.",
"title": ""
},
{
"docid": "1134469ecc1d3c47981b2bdffecc9296",
"text": "Automated valet parking services provide great potential to increase the attractiveness of electric vehicles by mitigating their two main current deficiencies: reduced driving ranges and prolonged refueling times. The European research project V-Charge aims at providing this service on designated parking lots using close-to-market sensors only. For this purpose the project developed a prototype capable of performing fully automated navigation in mixed traffic on designated parking lots and GPS-denied parking garages with cameras and ultrasonic sensors only. This paper summarizes the work of the project, comprising advances in network communication and parking space scheduling, multi-camera calibration, semantic mapping concepts, visual localization and motion planning. The project pushed visual localization, environment perception and automated parking to centimetre precision. The developed infrastructure-based camera calibration and semi-supervised semantic mapping concepts greatly reduce maintenance efforts. Results are presented from extensive month-long field tests.",
"title": ""
},
{
"docid": "2b3851ac0d4202a90896d160523bedc3",
"text": "Crying is a communication method used by infants given the limitations of language. Parents or nannies who have never had the experience to take care of the baby will experience anxiety when the infant is crying. Therefore, we need a way to understand about infant's cry and apply the formula. This research develops a system to classify the infant's cry sound using MACF (Mel-Frequency Cepstrum Coefficients) feature extraction and BNN (Backpropagation Neural Network) based on voice type. It is classified into 3 classes: hungry, discomfort, and tired. A voice input must be ascertained as infant's cry sound which using 3 features extraction (pitch with 2 approaches: Modified Autocorrelation Function and Cepstrum Pitch Determination, Energy, and Harmonic Ratio). The features coefficients of MFCC are furthermore classified by Backpropagation Neural Network. The experiment shows that the system can classify the infant's cry sound quite well, with 30 coefficients and 10 neurons in the hidden layer.",
"title": ""
},
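A minimal sketch of the classification stage from the infant-cry abstract above: averaged MFCCs per recording fed to a small backpropagation-trained network. librosa and scikit-learn stand in for the original implementation; the 30-coefficient / 10-hidden-neuron setting mirrors the abstract, and everything else is an assumption.

```python
# MFCC feature extraction plus a small MLP classifier for cry types.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def cry_features(path, n_mfcc=30):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                 # one 30-dim vector per recording

def train(paths, labels):                    # labels: "hungry" / "discomfort" / "tired"
    X = np.stack([cry_features(p) for p in paths])
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    return clf.fit(X, labels)
```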
{
"docid": "b6d2cd611d34c2c5f28f7348aa56f1b1",
"text": "This paper tackles the problem of semi-supervised video object segmentation, that is, segmenting an object in a sequence given its mask in the first frame. One of the main challenges in this scenario is the change of appearance of the objects of interest. Their semantics, on the other hand, do not vary. This paper investigates how to take advantage of such invariance via the introduction of a semantic prior that guides the appearance model. Specifically, given the segmentation mask of the first frame of a sequence, we estimate the semantics of the object of interest, and propagate that knowledge throughout the sequence to improve the results based on an appearance model. We present Semantically-Guided Video Object Segmentation (SGV), which improves results over previous state of the art on two different datasets using a variety of evaluation metrics, while running in half a second per frame.",
"title": ""
},
{
"docid": "c43f26b8f58bb93b6dbb1034a77163ec",
"text": "Protecting software copyright has been an issue since the late 1970’s, and software license validation has been a primary method employed in an attempt to minimise software piracy and protect software copyright. This paper presents a novel method for decentralised peer-topeer software license validation using cryptocurrency blockchain technology to ameliorate software piracy, and to provide a mechanism for all software developers to protect their copyrighted works.",
"title": ""
},
{
"docid": "392d37f9d7a52a1f9cbcd97e1311de74",
"text": "Fuzzy Logic Controllers are a specific model of Fuzzy Rule Based Systems suitable for engineering applications for which classic control strategies do not achieve good results or for when it is too difficult to obtain a mathematical model. Recently, the International Electrotechnical Commission has published a standard for fuzzy control programming in part 7 of the IEC 61131 norm in order to offer a well defined common understanding of the basic means with which to integrate fuzzy control applications in control systems. In this paper, we introduce an open source Java library called jFuzzyLogic which offers a fully functional and complete implementation of a fuzzy inference system according to this standard, providing a programming interface and Eclipse plugin to easily write and test code for fuzzy control applications. A case study is given to illustrate the use of jFuzzyLogic.",
"title": ""
},
{
"docid": "3d332b3ae4487a7272ae1e2204965f98",
"text": "Robots are increasingly present in modern industry and also in everyday life. Their applications range from health-related situations, for assistance to elderly people or in surgical operations, to automatic and driver-less vehicles (on wheels or flying) or for driving assistance. Recently, an interest towards robotics applied in agriculture and gardening has arisen, with applications to automatic seeding and cropping or to plant disease control, etc. Autonomous lawn mowers are succesful market applications of gardening robotics. In this paper, we present a novel robot that is developed within the TrimBot2020 project, funded by the EU H2020 program. The project aims at prototyping the first outdoor robot for automatic bush trimming and rose pruning.",
"title": ""
},
{
"docid": "de2bbd675430ffcb490f090f8baec98d",
"text": "In this letter, we analyze the electromagnetic characteristic of a frequency selective surface (FSS) radome using the physical optics (PO) method and ray tracing technique. We consider the cross-loop slot FSS and the tangent-ogive radome. Radiation pattern of the FSS radome is computed to illustrate the electromagnetic transmission characteristic.",
"title": ""
},
{
"docid": "9a66f3a0c7c5e625e26909f04f43f5f4",
"text": "El propósito de este estudio fue examinar el impacto relativo de los diferentes tipos de liderazgo en los resultados académicos y no académicos de los estudiantes. La metodología consistió en el análisis de los resultados de 27 estudios publicados sobre la relación entre liderazgo y resultados de los estudiantes. El primer metaanálisis, que incluyó 22 de los 27 estudios, implicó una comparación de los efectos de la transformación y liderazgo instructivo en los resultados de los estudiantes. Con el segundo meta-análisis se realizó una comparación de los efectos de cinco conjuntos derivados inductivamente de prácticas de liderazgo en los resultados de los estudiantes. Doce de los estudios contribuyeron a este segundo análisis. El primer meta-análisis indicó que el efecto promedio de liderazgo instructivo en los resultados de los estudiantes fue de tres a cuatro veces la de liderazgo transformacional. La inspección de los elementos de la encuesta que se utilizaron para medir el liderazgo escolar reveló cinco conjuntos de prácticas de liderazgo o dimensiones: el establecimiento de metas y expectativas; dotación de recursos estratégicos, la planificación, coordinación y evaluación de la enseñanza y el currículo; promoción y participan en el aprendizaje y desarrollo de los profesores, y la garantía de un ambiente ordenado y de apoyo. El segundo metaanálisis reveló fuertes efectos promedio para la dimensión de liderazgo que implica promover y participar en el aprendizaje docente, el desarrollo y efectos moderados de las dimensiones relacionadas con la fijación de objetivos y la planificación, coordinación y evaluación de la enseñanza y el currículo. Las comparaciones entre el liderazgo transformacional y el instructivo y entre las cinco dimensiones de liderazgo sugirieron que los líderes que focalizan sus relaciones, su trabajo y su aprendizaje en el asunto clave de la enseñanza y el aprendizaje, tendrán una mayor influencia en los resultados de los estudiantiles. El artículo concluye con una discusión sobre la necesidad de que liderazgo, investigación y práctica estén más estrechamente vinculados a la evidencia sobre la enseñanza eficaz y el aprendizaje efectivo del profesorado. Dicha alineación podría aumentar aún más el impacto del liderazgo escolar en los resultados de los estudiantes.",
"title": ""
},
{
"docid": "8b38fd43c9d418b356ef009e9612e564",
"text": "English. This work aims at evaluating and comparing two different frameworks for the unsupervised topic modelling of the CompWHoB Corpus, namely our political-linguistic dataset. The first approach is represented by the application of the latent DirichLet Allocation (henceforth LDA), defining the evaluation of this model as baseline of comparison. The second framework employs Word2Vec technique to learn the word vector representations to be later used to topic-model our data. Compared to the previously defined LDA baseline, results show that the use of Word2Vec word embeddings significantly improves topic modelling performance but only when an accurate and taskoriented linguistic pre-processing step is carried out. Italiano. L’obiettivo di questo contributo è di valutare e confrontare due differenti framework per l’apprendimento automatico del topic sul CompWHoB Corpus, la nostra risorsa testuale. Dopo aver implementato il modello della latent DirichLet Allocation, abbiamo definito come standard di riferimento la valutazione di questo stesso approccio. Come secondo framework, abbiamo utilizzato il modello Word2Vec per apprendere le rappresentazioni vettoriali dei termini successivamente impiegati come input per la fase di apprendimento automatico del topic. I risulati mostrano che utilizzando i ‘word embeddings’ generati da Word2Vec, le prestazioni del modello aumentano significativamente ma solo se supportati da una accurata fase di ‘pre-processing’ linguisti-",
"title": ""
},
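The record above compares an LDA baseline with Word2Vec-based topic modelling. As a rough, hypothetical illustration of that comparison (the toy corpus, the pre-processing and every parameter below are assumptions, not the CompWHoB setup), a minimal Python sketch might look like this:

```python
# Minimal sketch: topic-model the same toy corpus with an LDA baseline
# and with Word2Vec embeddings clustered into "topics".
# All corpus and parameter choices here are illustrative assumptions.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec
from sklearn.cluster import KMeans
import numpy as np

docs = [["president", "press", "briefing", "economy"],
        ["budget", "economy", "tax", "policy"],
        ["war", "troops", "security", "briefing"]]  # pre-processed tokens (assumed)

# 1) LDA baseline on a bag-of-words representation
dictionary = Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=10, random_state=0)
print(lda.print_topics())

# 2) Word2Vec embeddings, then cluster the word vectors into "topics"
w2v = Word2Vec(sentences=docs, vector_size=50, min_count=1, window=2, seed=0)
words = list(w2v.wv.index_to_key)
vectors = np.stack([w2v.wv[w] for w in words])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for k in range(2):
    print("topic", k, [w for w, c in zip(words, clusters) if c == k])
```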
{
"docid": "01a4abce1498d3e2b334bd81ecf12ef0",
"text": "Online Social Networking has gained huge popularity amongst the masses. It is common for the users of Online Social Networks (OSNs) to share information with digital friends but in the bargain they loose privacy. Users are unaware of the privacy risks involved when they share their sensitive information in the network. The users should be aware of their privacy quotient and should know where they stand in the privacy measuring scale. In this paper we have described and calculated the Privacy Quotient i.e a privacy metric to measure the privacy of the user's profile using the naive approach. In the starting of the paper we have given the detailed analysis of the survey that we have carried out to know how well do people understand privacy in online social networks. At last we have proposed a model that will ensure privacy in the unstructured data. It will make use of the Item Response Theory model to measure the privacy leaks in the messages and text that is being posted by the users of the online social networking sites.",
"title": ""
},
{
"docid": "6a4844bf755830d14fb24caff1aa8442",
"text": "We present a stochastic first-order optimization algorithm, named BCSC, that adds a cyclic constraint to stochastic block-coordinate descent. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in benchmark datasets show that our algorithm outperforms state-of-the-art optimization methods in both accuracy as well as convergence speed. The improvements are consistent across different architectures, and can be combined with other training techniques and regularization methods.",
"title": ""
},
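The BCSC abstract above describes pairing subsets of the data with subsets of the parameters under a cyclic constraint. The sketch below is only a generic illustration of that idea on a least-squares toy problem; it is not the authors' algorithm, and the block sizes, learning rate, and rotation rule are assumptions:

```python
# Minimal sketch of cyclic stochastic block-coordinate descent: each data subset
# updates only its paired block of parameters, and the pairing rotates cyclically.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=200)

n_blocks = 5
param_blocks = np.array_split(np.arange(10), n_blocks)   # parameter subsets
data_blocks = np.array_split(np.arange(200), n_blocks)   # data subsets
w = np.zeros(10)
lr = 0.05

for epoch in range(100):
    for b in range(n_blocks):
        # cyclic constraint: data block b updates parameter block (b + epoch) % n_blocks
        rows = data_blocks[b]
        cols = param_blocks[(b + epoch) % n_blocks]
        grad = X[rows][:, cols].T @ (X[rows] @ w - y[rows]) / len(rows)
        w[cols] -= lr * grad

print("parameter error:", np.linalg.norm(w - true_w))
```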
{
"docid": "f53885bda1368b5d7b9d14848d3002d2",
"text": "This paper presents a method for a reconfigurable magnetic resonance-coupled wireless power transfer (R-MRC-WPT) system in order to achieve higher transmission efficiency under various transmission distance and/or misalignment conditions. Higher efficiency, longer transmission distance, and larger misalignment tolerance can be achieved with the presented R-MRC-WPT system when compared to the conventional four-coil MRC-WPT (C-MRC-WPT) system. The reconfigurability in the R-MRC-WPT system is achieved by adaptively switching between different sizes of drive loops and load loops. All drive loops are in the same plane and all load loops are also in the same plane; this method does not require mechanical movements of the drive loop and load loop and does not result in the system volume increase. Theoretical basis of the method for the R-MRC-WPT system is derived based on a circuit model and an analytical model. Results from a proof-of-concept experimental prototype, with transmitter and receiver coil diameter of 60 cm each, show that the transmission efficiency of the R-MRC-WPT system is higher than the transmission efficiency of the C-MRC-WPT system and the capacitor tuning system for all distances up to 200 cm (~3.3 times the coil diameter) and for all lateral misalignment values within 60 cm (one coil diameter).",
"title": ""
},
{
"docid": "a1147a7b8bc6777ebb2ab7b4f308cc80",
"text": "We present a new graph-theoretic approach to the problem of image segmentation. Our method uses local criteria and yet produces results that reflect global properties of the image. We develop a framework that provides specific definitions of what it means for an image to be underor over-segmented. We then present an efficient algorithm for computing a segmentation that is neither undernor over-segmented according to these definitions. Our segmentation criterion is based on intensity differences between neighboring pixels. An important characteristic of the approach is that it is able to preserve detail in low-variability regions while ignoring detail in high-variability regions, which we illustrate with several examples on both real and sythetic images.",
"title": ""
},
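The graph-based segmentation described above is available as a ready-made implementation in scikit-image (the Felzenszwalb-Huttenlocher method). A minimal usage sketch, with illustrative parameter values, might be:

```python
# Minimal sketch: graph-based segmentation in the spirit of the method above,
# using scikit-image's implementation of the Felzenszwalb-Huttenlocher criterion.
# The scale/sigma/min_size values are illustrative assumptions.
from skimage import data
from skimage.segmentation import felzenszwalb

image = data.astronaut()                                  # any RGB image
labels = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
print("number of segments:", labels.max() + 1)
```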
{
"docid": "812e228e35a985d17f008506ddda94b4",
"text": "Chaotic signals have been considered potentially attractive in many signal processing applications ranging from wideband communication systems to cryptography and watermarking. Besides, some devices as nonlinear adaptive filters and phase-locked loops can present chaotic behavior. In this paper, we derive analytical expressions for the autocorrelation sequence, power spectral density and essential bandwidth of chaotic signals generated by the skew tent map. From these results, we suggest possible applications in communication systems. & 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
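The paper above derives the autocorrelation and spectral properties of skew tent map signals analytically. The following sketch only estimates the autocorrelation numerically from a simulated orbit; the map's break-point parameter and the initial condition are arbitrary assumptions, not values from the paper:

```python
# Minimal sketch: generate a skew tent map orbit and estimate its autocorrelation.
import numpy as np

def skew_tent(x, a=0.3):
    # piecewise-linear skew tent map on [0, 1] with break point a
    return x / a if x < a else (1.0 - x) / (1.0 - a)

N = 20000
x = np.empty(N)
x[0] = 0.123456
for n in range(1, N):
    x[n] = skew_tent(x[n - 1])

s = x - x.mean()
acf = np.correlate(s, s, mode="full")[N - 1:] / (s.var() * N)
print("autocorrelation at lags 0..5:", np.round(acf[:6], 4))
```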
{
"docid": "605a078c74d37007654094b4b426ece8",
"text": "Currently, blockchain technology, which is decentralized and may provide tamper-resistance to recorded data, is experiencing exponential growth in industry and research. In this paper, we propose the MIStore, a blockchain-based medical insurance storage system. Due to blockchain’s the property of tamper-resistance, MIStore may provide a high-credibility to users. In a basic instance of the system, there are a hospital, patient, insurance company and n servers. Specifically, the hospital performs a (t, n)-threshold MIStore protocol among the n servers. For the protocol, any node of the blockchain may join the protocol to be a server if the node and the hospital wish. Patient’s spending data is stored by the hospital in the blockchain and is protected by the n servers. Any t servers may help the insurance company to obtain a sum of a part of the patient’s spending data, which servers can perform homomorphic computations on. However, the n servers cannot learn anything from the patient’s spending data, which recorded in the blockchain, forever as long as more than n − t servers are honest. Besides, because most of verifications are performed by record-nodes and all related data is stored at the blockchain, thus the insurance company, servers and the hospital only need small memory and CPU. Finally, we deploy the MIStore on the Ethererum blockchain and give the corresponding performance evaluation.",
"title": ""
},
{
"docid": "4c4bfcadd71890ccce9e58d88091f6b3",
"text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games",
"title": ""
},
{
"docid": "c6a649a1eed332be8fc39bfa238f4214",
"text": "The Internet of things (IoT), which integrates a variety of devices into networks to provide advanced and intelligent services, has to protect user privacy and address attacks such as spoofing attacks, denial of service (DoS) attacks, jamming, and eavesdropping. We investigate the attack model for IoT systems and review the IoT security solutions based on machine-learning (ML) techniques including supervised learning, unsupervised learning, and reinforcement learning (RL). ML-based IoT authentication, access control, secure offloading, and malware detection schemes to protect data privacy are the focus of this article. We also discuss the challenges that need to be addressed to implement these ML-based security schemes in practical IoT systems.",
"title": ""
},
{
"docid": "c61c111c5b5d1c4663905371b638e703",
"text": "Many standard computer vision datasets exhibit biases due to a variety of sources including illumination condition, imaging system, and preference of dataset collectors. Biases like these can have downstream effects in the use of vision datasets in the construction of generalizable techniques, especially for the goal of the creation of a classification system capable of generalizing to unseen and novel datasets. In this work we propose Unbiased Metric Learning (UML), a metric learning approach, to achieve this goal. UML operates in the following two steps: (1) By varying hyper parameters, it learns a set of less biased candidate distance metrics on training examples from multiple biased datasets. The key idea is to learn a neighborhood for each example, which consists of not only examples of the same category from the same dataset, but those from other datasets. The learning framework is based on structural SVM. (2) We do model validation on a set of weakly-labeled web images retrieved by issuing class labels as keywords to search engine. The metric with best validation performance is selected. Although the web images sometimes have noisy labels, they often tend to be less biased, which makes them suitable for the validation set in our task. Cross-dataset image classification experiments are carried out. Results show significant performance improvement on four well-known computer vision datasets.",
"title": ""
},
{
"docid": "712140b99a4765908ca26018b61f270f",
"text": "Accelerated by electric mobility and new requirements regarding quantities, quality and innovative motor designs, production technologies for windings of electrical drives gain in importance. Especially the demand for increasing slot fill ratios and a product design allowing manufacturers of electric drives to produce big quantities in a good quality impels innovations in the design of windings. The hairpin winding is a result of this development combining high slot fill ratios with new potentials for an economical production of the winding for electric drives. This is achieved by a new method for the production of the winding: The winding is assembled by mounting preformed elements of insulated copper wire to the stator simplifying the elaborate winding process on the one hand. On the other hand it becomes necessary to join these elements mechanically and electrically to manufacture the winding. Due to this, contacting technologies gain in importance regarding the production of hairpin windings. The new challenge consists of the high number of contact points that have to be produced in a small amount of space. On account of its process stability, high process velocities and the possibility of realizing a remote joining process, the laser welding shows big potentials for the realization of a contacting process for hairpin windings that is capable of series production. This paper describes challenges and possibilities for the application of infrared lasers in the field of hairpin winding production.",
"title": ""
}
] |
scidocsrr
|
e4576293e117c91ae83d78eae5309015
|
Towards Personalized Medicine: Leveraging Patient Similarity and Drug Similarity Analytics
|
[
{
"docid": "544333c99f2b28e37702306bfe6521d4",
"text": "Faced with unsustainable costs and enormous amounts of under-utilized data, health care needs more efficient practices, research, and tools to harness the full benefits of personal health and healthcare-related data. Imagine visiting your physician’s office with a list of concerns and questions. What if you could walk out the office with a personalized assessment of your health? What if you could have personalized disease management and wellness plan? These are the goals and vision of the work discussed in this paper. The timing is right for such a research direction—given the changes in health care, reimbursement, reform, meaningful use of electronic health care data, and patient-centered outcome mandate. We present the foundations of work that takes a Big Data driven approach towards personalized healthcare, and demonstrate its applicability to patient-centered outcomes, meaningful use, and reducing re-admission rates.",
"title": ""
}
] |
[
{
"docid": "d44b351cb1263cbd28cc7fc8c5ebb811",
"text": "Online distributed applications are becoming more and more important for users nowadays. There are an increasing number of individuals and companies developing applications and selling them online. In the past couple of years, Apple Inc. has successfully built an online application distribution platform -- iTunes App Store, which is facilitated by their fashionable hardware such like iPad or iPhone. Unlike other traditional selling networks, iTunes has some unique features to advertise their application, for example, daily application ranking, application recommendation, free trial application usage, application update, and user comments. All of these make us wonder what makes an application popular in the iTunes store and why users are interested in some specific type of applications. We plan to answer these questions by using machine learning techniques.",
"title": ""
},
{
"docid": "f132d1e91058ebc9484464e006a16da0",
"text": "We propose drl-RPN, a deep reinforcement learning-based visual recognition model consisting of a sequential region proposal network (RPN) and an object detector. In contrast to typical RPNs, where candidate object regions (RoIs) are selected greedily via class-agnostic NMS, drl-RPN optimizes an objective closer to the final detection task. This is achieved by replacing the greedy RoI selection process with a sequential attention mechanism which is trained via deep reinforcement learning (RL). Our model is capable of accumulating class-specific evidence over time, potentially affecting subsequent proposals and classification scores, and we show that such context integration significantly boosts detection accuracy. Moreover, drl-RPN automatically decides when to stop the search process and has the benefit of being able to jointly learn the parameters of the policy and the detector, both represented as deep networks. Our model can further learn to search over a wide range of exploration-accuracy trade-offs making it possible to specify or adapt the exploration extent at test time. The resulting search trajectories are image- and category-dependent, yet rely only on a single policy over all object categories. Results on the MS COCO and PASCAL VOC challenges show that our approach outperforms established, typical state-of-the-art object detection pipelines.",
"title": ""
},
{
"docid": "3023637fd498bb183dae72135812c304",
"text": "computational method for its solution. A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or con-space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and SchUtze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, Z p log p over all its contexts. 
Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986). It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in their queries from that text (see Bookstein & Swanson, 1974). (Footnote: Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test.) Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990) has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes. The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes. 
However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modern discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, if a particular stimulus, X (e.g., a word), has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ. This is because the relation between X and Z also depends, in a well-specified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common",
"title": ""
},
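The passage above describes LSA's two processing steps: a log-frequency/entropy weighting of the word-by-context count matrix, followed by dimensionality reduction via SVD. A minimal numerical sketch of those steps is given below; the toy matrix and the number of retained dimensions are assumptions, not values from the paper:

```python
# Minimal sketch of the two LSA steps described above: (1) transform each count
# to log frequency and divide by the word's entropy over contexts, (2) reduce
# dimensionality with a truncated SVD and compare words in the reduced space.
import numpy as np

counts = np.array([[2, 0, 1, 0],
                   [1, 1, 0, 0],
                   [0, 2, 3, 1],
                   [0, 0, 1, 2]], dtype=float)    # rows: word types, cols: contexts

log_freq = np.log(counts + 1.0)                    # compressive "learning curve" step
p = counts / counts.sum(axis=1, keepdims=True)     # distribution over contexts per word
logp = np.log(p, out=np.zeros_like(p), where=p > 0)
entropy = -(p * logp).sum(axis=1)                  # -sum p log p over contexts
weighted = log_freq / np.maximum(entropy[:, None], 1e-12)

U, S, Vt = np.linalg.svd(weighted, full_matrices=False)
k = 2                                              # retained dimensions
word_vectors = U[:, :k] * S[:k]                    # condensed word representation

# cosine similarity between two word types in the reduced space
a, b = word_vectors[0], word_vectors[2]
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```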
{
"docid": "7170110b2520fb37e282d08ed8774d0f",
"text": "OBJECTIVE\nTo examine the performance of the 11-13 weeks scan in detecting non-chromosomal abnormalities.\n\n\nMETHODS\nProspective first-trimester screening study for aneuploidies, including basic examination of the fetal anatomy, in 45 191 pregnancies. Findings were compared to those at 20-23 weeks and postnatal examination.\n\n\nRESULTS\nAneuploidies (n = 332) were excluded from the analysis. Fetal abnormalities were observed in 488 (1.1%) of the remaining 44 859 cases; 213 (43.6%) of these were detected at 11-13 weeks. The early scan detected all cases of acrania, alobar holoprosencephaly, exomphalos, gastroschisis, megacystis and body stalk anomaly, 77% of absent hand or foot, 50% of diaphragmatic hernia, 50% of lethal skeletal dysplasias, 60% of polydactyly, 34% of major cardiac defects, 5% of facial clefts and 14% of open spina bifida, but none of agenesis of the corpus callosum, cerebellar or vermian hypoplasia, echogenic lung lesions, bowel obstruction, most renal defects or talipes. Nuchal translucency (NT) was above the 95th percentile in 34% of fetuses with major cardiac defects.\n\n\nCONCLUSION\nAt 11-13 weeks some abnormalities are always detectable, some can never be and others are potentially detectable depending on their association with increased NT, the phenotypic expression of the abnormality with gestation and the objectives set for such a scan.",
"title": ""
},
{
"docid": "4c6efebdf08a3c1c4cefc9cdd8950bab",
"text": "Four patients are presented with the Goldenhar syndrome (GS) and cranial defects consisting of plagiocephaly, microcephaly, skull defects, or intracranial dermoid cysts. Twelve cases from the literature add hydrocephalus, encephalocele, and arhinencephaly to a growing list of brain anomalies in GS. As a group, these patients emphasize the variability of GS and the increased risk for developmental retardation with multiple, severe, or unusual manifestations. The temporal relation of proposed teratogenic events in GS provides an opportunity to reconstruct biological relationships within the 3-5-week human embryo.",
"title": ""
},
{
"docid": "20bee45f6e4c1adcf912d1ca4e451046",
"text": "BACKGROUND\nThe Cancer Genome Atlas Project (TCGA) is a National Cancer Institute effort to profile at least 500 cases of 20 different tumor types using genomic platforms and to make these data, both raw and processed, available to all researchers. TCGA data are currently over 1.2 Petabyte in size and include whole genome sequence (WGS), whole exome sequence, methylation, RNA expression, proteomic, and clinical datasets. Publicly accessible TCGA data are released through public portals, but many challenges exist in navigating and using data obtained from these sites. We developed TCGA Expedition to support the research community focused on computational methods for cancer research. Data obtained, versioned, and archived using TCGA Expedition supports command line access at high-performance computing facilities as well as some functionality with third party tools. For a subset of TCGA data collected at University of Pittsburgh, we also re-associate TCGA data with de-identified data from the electronic health records. Here we describe the software as well as the architecture of our repository, methods for loading of TCGA data to multiple platforms, and security and regulatory controls that conform to federal best practices.\n\n\nRESULTS\nTCGA Expedition software consists of a set of scripts written in Bash, Python and Java that download, extract, harmonize, version and store all TCGA data and metadata. The software generates a versioned, participant- and sample-centered, local TCGA data directory with metadata structures that directly reference the local data files as well as the original data files. The software supports flexible searches of the data via a web portal, user-centric data tracking tools, and data provenance tools. Using this software, we created a collaborative repository, the Pittsburgh Genome Resource Repository (PGRR) that enabled investigators at our institution to work with all TCGA data formats, and to interrogate these data with analysis pipelines, and associated tools. WGS data are especially challenging for individual investigators to use, due to issues with downloading, storage, and processing; having locally accessible WGS BAM files has proven invaluable.\n\n\nCONCLUSION\nOur open-source, freely available TCGA Expedition software can be used to create a local collaborative infrastructure for acquiring, managing, and analyzing TCGA data and other large public datasets.",
"title": ""
},
{
"docid": "de54b31c852912f40de046968ae28772",
"text": "Woven fabrics have a wide range of appearance determined by their small-scale 3D structure. Accurately modeling this structural detail can produce highly realistic renderings of fabrics and is critical for predictive rendering of fabric appearance. But building these yarn-level volumetric models is challenging. Procedural techniques are manually intensive, and fail to capture the naturally arising irregularities which contribute significantly to the overall appearance of cloth. Techniques that acquire the detailed 3D structure of real fabric samples are constrained only to model the scanned samples and cannot represent different fabric designs.\n This paper presents a new approach to creating volumetric models of woven cloth, which starts with user-specified fabric designs and produces models that correctly capture the yarn-level structural details of cloth. We create a small database of volumetric exemplars by scanning fabric samples with simple weave structures. To build an output model, our method synthesizes a new volume by copying data from the exemplars at each yarn crossing to match a weave pattern that specifies the desired output structure. Our results demonstrate that our approach generalizes well to complex designs and can produce highly realistic results at both large and small scales.",
"title": ""
},
{
"docid": "440b68739eccc51906a323f9b98644d6",
"text": "This paper presents a review of cloud application architectures and its evolution. It reports 1 observations being made during the course of a research project that tackled the problem to transfer 2 cloud applications between different cloud infrastructures. As a side effect we learned a lot about 3 commonalities and differences from plenty of different cloud applications which might be of value for 4 cloud software engineers and architects. Throughout the course of the research project we analyzed 5 industrial cloud standards, performed systematic mapping studies of cloud-native application related 6 research papers, performed action research activities in cloud engineering projects, modeled a cloud 7 application reference model, and performed software and domain specific language engineering 8 activities. Two major (and sometimes overlooked) trends can be identified. First, cloud computing 9 and its related application architecture evolution can be seen as a steady process to optimize 10 resource utilization in cloud computing. Second, this resource utilization improvements resulted 11 over time in an architectural evolution how cloud applications are being build and deployed. A shift 12 from monolithic servce-oriented architectures (SOA), via independently deployable microservices 13 towards so called serverless architectures is observable. Especially serverless architectures are more 14 decentralized and distributed, and make more intentional use of independently provided services. In 15 other words, a decentralizing trend in cloud application architectures is observable that emphasizes 16 decentralized architectures known from former peer-to-peer based approaches. That is astonishing 17 because with the rise of cloud computing (and its centralized service provisioning concept) the 18 research interest in peer-to-peer based approaches (and its decentralizing philosophy) decreased. 19 But this seems to change. Cloud computing could head into future of more decentralized and more 20 meshed services. 21",
"title": ""
},
{
"docid": "6e26ec8dc5024b2b64da355c9f30d478",
"text": "With each eye fixation, we experience a richly detailed visual world. Yet recent work on visual integration and change direction reveals that we are surprisingly unaware of the details of our environment from one view to the next: we often do not detect large changes to objects and scenes ('change blindness'). Furthermore, without attention, we may not even perceive objects ('inattentional blindness'). Taken together, these findings suggest that we perceive and remember only those objects and details that receive focused attention. In this paper, we briefly review and discuss evidence for these cognitive forms of 'blindness'. We then present a new study that builds on classic studies of divided visual attention to examine inattentional blindness for complex objects and events in dynamic scenes. Our results suggest that the likelihood of noticing an unexpected object depends on the similarity of that object to other objects in the display and on how difficult the priming monitoring task is. Interestingly, spatial proximity of the critical unattended object to attended locations does not appear to affect detection, suggesting that observers attend to objects and events, not spatial positions. We discuss the implications of these results for visual representations and awareness of our visual environment.",
"title": ""
},
{
"docid": "885d5fba6de05107a49225fadb209ff0",
"text": "Electroencephalography (EEG) signals reect activities on certain brain areas. Eective classication of time-varying EEG signals is still challenging. First, EEG signal processing and feature engineering are time-consuming and highly rely on expert knowledge. In addition, most existing studies focus on domain-specic classication algorithms which may not be applicable to other domains. Moreover, the EEG signal usually has a low signal-to-noise ratio and can be easily corrupted. In this regard, we propose a generic EEG signal classication framework that accommodates a wide range of applications to address the aforementioned issues. e proposed framework develops a reinforced selective aention model to automatically choose the distinctive information among the raw EEG signals. A convolutional mapping operation is employed to dynamically transform the selected information to an over-complete feature space, wherein implicit spatial dependency of EEG samples distribution is able to be uncovered. We demonstrate the eectiveness of the proposed framework using three representative scenarios: intention recognition with motor imagery EEG, person identication, and neurological diagnosis. ree widely used public datasets and a local dataset are used for our evaluation. e experiments show that our framework outperforms the state-of-the-art baselines and achieves the accuracy of more than 97% on all the datasets with low latency and good resilience of handling complex EEG signals across various domains. ese results conrm the suitability of the proposed generic approach for a range of problems in the realm of Brain-Computer Interface applications.",
"title": ""
},
{
"docid": "cd68f1e50052709d85cabf55bb1764df",
"text": "Multi-label classification is one of the most challenging tasks in the computer vision community, owing to different composition and interaction (e.g. partial visibility or occlusion) between objects in multi-label images. Intuitively, some objects usually co-occur with some specific scenes, e.g. the sofa often appears in a living room. Therefore, the scene of a given image may provides informative cues for identifying those embedded objects. In this paper, we propose a novel scene-aware deep framework for addressing the challenging multi-label classification task. In particular, we incorporate two sub-networks that are pre-trained for different tasks (i.e. object classification and scene classification) into a unified framework, so that informative scene-aware cues can be leveraged for benefiting multi-label object classification. In addition, we also present a novel one vs. all multiple-cross-entropy (MCE) loss for optimizing the proposed scene-aware deep framework by independently penalizing the classification error for each label. The proposed method can be learned in an end-to-end manner and extensive experimental results on Pascal VOC 2007 and MS COCO demonstrate that our approach is able to make a noticeable improvement for the multi-label classification task.",
"title": ""
},
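The abstract above mentions a one-vs-all multiple-cross-entropy (MCE) loss that penalizes the classification error for each label independently. One generic reading of that idea, sketched below with PyTorch's binary cross-entropy, is an interpretation, not the authors' exact formulation:

```python
# Minimal sketch: a one-vs-all "multiple cross-entropy" style loss that penalizes
# each label independently for multi-label classification.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 20, requires_grad=True)   # batch of 4, 20 candidate labels
targets = torch.randint(0, 2, (4, 20)).float()    # multi-hot ground-truth labels

# independent binary cross-entropy per label, averaged over labels and batch
per_label_loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
loss = per_label_loss.mean()
loss.backward()
print(float(loss))
```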
{
"docid": "0368fdfe05918134e62e0f7b106130ee",
"text": "Scientific charts are an effective tool to visualize numerical data trends. They appear in a wide range of contexts, from experimental results in scientific papers to statistical analyses in business reports. The abundance of scientific charts in the web has made it inevitable for search engines to include them as indexed content. However, the queries based on only the textual data used to tag the images can limit query results. Many studies exist to address the extraction of data from scientific diagrams in order to improve search results. In our approach to achieving this goal, we attempt to enhance the semantic labeling of the charts by using the original data values that these charts were designed to represent. In this paper, we describe a method to extract data values from a specific class of charts, bar charts. The extraction process is fully automated using image processing and text recognition techniques combined with various heuristics derived from the graphical properties of bar charts. The extracted information can be used to enrich the indexing content for bar charts and improve search results. We evaluate the effectiveness of our method on bar charts drawn from the web as well as charts embedded in digital documents.",
"title": ""
},
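The bar-chart extraction method above combines image processing with chart-specific heuristics. The sketch below only illustrates the pixel-level step of measuring bar heights from a synthetic chart with OpenCV; axis calibration and text recognition, which the paper also relies on, are omitted, and all parameter values are assumptions:

```python
# Minimal sketch: recover bar heights (in pixels) from a simple synthetic bar chart
# by thresholding and measuring connected components.
import numpy as np
import cv2

img = np.full((200, 300), 255, np.uint8)           # white canvas
for i, h in enumerate([50, 120, 80]):               # draw three bars of known height
    cv2.rectangle(img, (40 + i * 80, 180 - h), (80 + i * 80, 180), 0, -1)

_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
heights = sorted(cv2.boundingRect(c)[3] for c in contours)
print("recovered bar heights (pixels):", heights)
```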
{
"docid": "f7ed4fb9015dad13d47dec677c469c4b",
"text": "In this paper, a low-cost, power efficient and fast Differential Cascode Voltage-Switch-Logic (DCVSL) based delay cell (named DCVSL-R) is proposed. We use the DCVSL-R cell to implement high frequency and power-critical delay cells and flip-flops of ring oscillators and frequency dividers. When compared to TSPC, DCVSL circuits offer small input and clock capacitance and a symmetric differential loading for previous RF stages. When compared to CML, they offer low transistor count, no headroom limitation, rail-to-rail swing and no static current consumption. However, DCVSL circuits suffer from a large low-to-high propagation delay, which limits their speed and results in asymmetrical output waveforms. The proposed DCVSL-R circuit embodies the benefits of DCVSL while reducing the total propagation delay, achieving faster operation. DCVSL-R also generates symmetrical output waveforms which are critical for differential circuits. Another contribution of this work is a closed-form delay model that predicts the speed of DCVSL circuits with 8% worst case accuracy. We implement two ring-oscillator-based VCOs in 0.13 μm technology with DCVSL and DCVSL-R delay cells. Measurements show that the proposed DCVSL-R based VCO consumes 30% less power than the DCVSL VCO for the same oscillation frequency (2.4 GHz) and same phase noise (-113 dBc/Hz at 10 MHz). DCVSL-R circuits are also used to implement the high frequency dual modulus prescaler (DMP) of a 2.4 GHz frequency synthesizer in 0.18 μm technology. The DMP consumes only 0.8 mW at 2.48 GHz, a 40% reduction in power when compared to other reported DMPs with similar division ratios and operating frequencies. The RF buffer that drives the DMP consumes only 0.27 mW, demonstrating the lowest combined DMP and buffer power consumption among similar synthesizers in literature.",
"title": ""
},
{
"docid": "797ab17a7621f4eaa870a8eb24f8b94d",
"text": "A single-photon avalanche diode (SPAD) with enhanced near-infrared (NIR) sensitivity has been developed, based on 0.18 μm CMOS technology, for use in future automotive light detection and ranging (LIDAR) systems. The newly proposed SPAD operating in Geiger mode achieves a high NIR photon detection efficiency (PDE) without compromising the fill factor (FF) and a low breakdown voltage of approximately 20.5 V. These properties are obtained by employing two custom layers that are designed to provide a full-depletion layer with a high electric field profile. Experimental evaluation of the proposed SPAD reveals an FF of 33.1% and a PDE of 19.4% at 870 nm, which is the laser wavelength of our LIDAR system. The dark count rate (DCR) measurements shows that DCR levels of the proposed SPAD have a small effect on the ranging performance, even if the worst DCR (12.7 kcps) SPAD among the test samples is used. Furthermore, with an eye toward vehicle installations, the DCR is measured over a wide temperature range of 25-132 °C. The ranging experiment demonstrates that target distances are successfully measured in the distance range of 50-180 cm.",
"title": ""
},
{
"docid": "cb1e6d11d372e72f7675a55c8f2c429d",
"text": "We evaluate the performance of a hardware/software architecture designed to perform a wide range of fast image processing tasks. The system ar chitecture is based on hardware featuring a Field Programmable Gate Array (FPGA) co-processor and a h ost computer. A LabVIEW TM host application controlling a frame grabber and an industrial camer a is used to capture and exchange video data with t he hardware co-processor via a high speed USB2.0 chann el, implemented with a standard macrocell. The FPGA accelerator is based on a Altera Cyclone II ch ip and is designed as a system-on-a-programmablechip (SOPC) with the help of an embedded Nios II so ftware processor. The SOPC system integrates the CPU, external and on chip memory, the communication channel and typical image filters appropriate for the evaluation of the system performance. Measured tran sfer rates over the communication channel and processing times for the implemented hardware/softw are logic are presented for various frame sizes. A comparison with other solutions is given and a rang e of applications is also discussed.",
"title": ""
},
{
"docid": "193c60c3a14fe3d6a46b2624d45b70aa",
"text": "*Corresponding author: Shirin Sadat Ghiasi. Faculty of Medicine, Mashhad University of Medical Sciences, Mahshhad, Iran. E-mail: [email protected] Tel:+989156511388 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons. org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A Review Study on the Prenatal Diagnosis of Congenital Heart Disease Using Fetal Echocardiography",
"title": ""
},
{
"docid": "16932e01fdea801f28ec6c4194f70352",
"text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.",
"title": ""
},
{
"docid": "34641057a037740ec28581a798c96f05",
"text": "Vehicles are becoming complex software systems with many components and services that need to be coordinated. Service oriented architectures can be used in this domain to support intra-vehicle, inter-vehicles, and vehicle-environment services. Such architectures can be deployed on different platforms, using different communication and coordination paradigms. We argue that practical solutions should be hybrid: they should integrate and support interoperability of different paradigms. We demonstrate the concept by integrating Jini, the service-oriented technology we used within the vehicle, and JXTA, the peer to peer infrastructure we used to support interaction with the environment through a gateway service, called J2J. Initial experience with J2J is illustrated.",
"title": ""
},
{
"docid": "2802db74e062103d45143e8e9ad71890",
"text": "Maritime traffic monitoring is an important aspect of safety and security, particularly in close to port operations. While there is a large amount of data with variable quality, decision makers need reliable information about possible situations or threats. To address this requirement, we propose extraction of normal ship trajectory patterns that builds clusters using, besides ship tracing data, the publicly available International Maritime Organization (IMO) rules. The main result of clustering is a set of generated lanes that can be mapped to those defined in the IMO directives. Since the model also takes non-spatial attributes (speed and direction) into account, the results allow decision makers to detect abnormal patterns - vessels that do not obey the normal lanes or sail with higher or lower speeds.",
"title": ""
},
{
"docid": "ce9487df62f75872d7111a26972feca7",
"text": "In this chapter we provide an overview of the concept of blockchain technology and its potential to disrupt the world of banking through facilitating global money remittance, smart contracts, automated banking ledgers and digital assets. In this regard, we first provide a brief overview of the core aspects of this technology, as well as the second-generation contract-based developments. From there we discuss key issues that must be considered in developing such ledger based technologies in a banking context.",
"title": ""
}
] |
scidocsrr
|