query_id (stringlengths 32–32) | query (stringlengths 6–3.9k) | positive_passages (listlengths 1–21) | negative_passages (listlengths 10–100) | subset (stringclasses, 7 values) |
---|---|---|---|---|
60bea15ac31e4357ba083c0dfbd26e9e
|
NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing
|
[
{
"docid": "635888c0a30cfd15df13431201b22469",
"text": "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space. Index Terms —Approximate Nearest Neighbor Search, Similarity Search, Hashing, Locality Sensitive Hashing, Learning to Hash, Quantization.",
"title": ""
},
{
"docid": "0ce4a0dfe5ea87fb87f5d39b13196e94",
"text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.",
"title": ""
},
{
"docid": "cff671af6a7a170fac2daf6acd9d1e3e",
"text": "We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and gi ve a much better representation of each document than Latent Sem antic Analysis. When the deepest layer is forced to use a small numb er of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at near by ddresses. Documents similar to a query document can then be fo und by simply accessing all the addresses that differ by only a fe w bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much fa ster than locality sensitive hashing, which is the fastest curre nt method. By using semantic hashing to filter the documents given to TFID , we achieve higher accuracy than applying TF-IDF to the entir document set.",
"title": ""
},
{
"docid": "9546f8a74577cc1119e48fae0921d3cf",
"text": "Learning latent representations from long text sequences is an important first step in many natural language processing applications. Recurrent Neural Networks (RNNs) have become a cornerstone for this challenging task. However, the quality of sentences during RNN-based decoding (reconstruction) decreases with the length of the text. We propose a sequence-to-sequence, purely convolutional and deconvolutional autoencoding framework that is free of the above issue, while also being computationally efficient. The proposed method is simple, easy to implement and can be leveraged as a building block for many applications. We show empirically that compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrate the potential for better utilization of long unlabeled text data.",
"title": ""
}
] |
[
{
"docid": "3ce61e8c33143c446ddc9aaa01274bc6",
"text": "Research on adolescent self-esteem indicates that adolescence is a time in which individuals experience important changes in their physical, cognitive, and social identities. Prior research suggests that there is a positive relationship between an adolescent's participation in structured extracurricular activities and well-being in a variety of domains, and some research indicates that these relationships may be dependent on the type of activities in which adolescents participate. Building on previous research, a growth-curve analysis was utilized to examine self-esteem trajectories from adolescence (age 14) to young adulthood (age 26). Using 3 waves of data from National Longitudinal Study of Adolescent Health (n = 5,399; 47.8% male), the analysis estimated a hierarchical growth-curve model emphasizing the effects of age and type of school-based extracurricular activity portfolio, including sports and school clubs, on self-esteem. The results indicated that age had a linear relationship with self-esteem over time. Changes in both the initial level of self-esteem and the growth of self-esteem over time were significantly influenced by the type of extracurricular activity portfolio. The findings were consistent across race and sex. The results support the utility of examining the longitudinal impact of portfolio type on well-being outcomes.",
"title": ""
},
{
"docid": "188ab32548b91fd1bf1edf34ff3d39d9",
"text": "With the marvelous development of wireless techniques and ubiquitous deployment of wireless systems indoors, myriad indoor location-based services (ILBSs) have permeated into numerous aspects of modern life. The most fundamental functionality is to pinpoint the location of the target via wireless devices. According to how wireless devices interact with the target, wireless indoor localization schemes roughly fall into two categories: device based and device free. In device-based localization, a wireless device (e.g., a smartphone) is attached to the target and computes its location through cooperation with other deployed wireless devices. In device-free localization, the target carries no wireless devices, while the wireless infrastructure deployed in the environment determines the target’s location by analyzing its impact on wireless signals.\n This article is intended to offer a comprehensive state-of-the-art survey on wireless indoor localization from the device perspective. In this survey, we review the recent advances in both modes by elaborating on the underlying wireless modalities, basic localization principles, and data fusion techniques, with special emphasis on emerging trends in (1) leveraging smartphones to integrate wireless and sensor capabilities and extend to the social context for device-based localization, and (2) extracting specific wireless features to trigger novel human-centric device-free localization. We comprehensively compare each scheme in terms of accuracy, cost, scalability, and energy efficiency. Furthermore, we take a first look at intrinsic technical challenges in both categories and identify several open research issues associated with these new challenges.",
"title": ""
},
{
"docid": "32ae89cf9f73fbc92de63cadba484cdb",
"text": "INTRODUCTION\nThe aim of this study was to evaluate and compare several physicochemical properties including working and setting times, flow, solubility, and water absorption of a recent calcium silicate-based sealer (MTA Fillapex; Angelus, Londrina, Brazil) and an epoxy resin-based sealer (AH Plus; Dentsply, Konstanz, Germany).\n\n\nMETHODS\nThe materials were handled following the manufacturer's instructions. The working time and flow were tested according to ISO 6876:2001 and the setting time according to American Society for Testing and Materials C266. For solubility and water absorption tests, the materials were placed into polyvinyl chloride molds (8 × 1.6 mm). The samples (n = 10 for each material and test) were placed in a cylindrical polystyrene-sealed container with 20 mL deionized water at 37°C. At 1, 7, 14, and 28 days, the samples were removed from the solutions and blotted dry for solubility and water absorption tests. The data were analyzed using 1-way analysis of variance with the Tukey test (P < .05).\n\n\nRESULTS\nMTA Fillapex showed the lowest values of flow, working and setting times, solubility, and water absorption (P < .05). The solubility and water absorption increased significantly over time for both materials in a 1- to 28-day period (P < .05).\n\n\nCONCLUSIONS\nMTA Fillapex showed suitable physical properties to be used as an endodontic sealer.",
"title": ""
},
{
"docid": "1fcaa9ebde2922c13ce42f8f90c9c6ba",
"text": "Despite advances in HIV treatment, there continues to be great variability in the progression of this disease. This paper reviews the evidence that depression, stressful life events, and trauma account for some of the variation in HIV disease course. Longitudinal studies both before and after the advent of highly active antiretroviral therapies (HAART) are reviewed. To ensure a complete review, PubMed was searched for all English language articles from January 1990 to July 2007. We found substantial and consistent evidence that chronic depression, stressful events, and trauma may negatively affect HIV disease progression in terms of decreases in CD4 T lymphocytes, increases in viral load, and greater risk for clinical decline and mortality. More research is warranted to investigate biological and behavioral mediators of these psychoimmune relationships, and the types of interventions that might mitigate the negative health impact of chronic depression and trauma. Given the high rates of depression and past trauma in persons living with HIV/AIDS, it is important for healthcare providers to address these problems as part of standard HIV care.",
"title": ""
},
{
"docid": "1c34abb0e212034a5fb96771499f1ee3",
"text": "Facial expression recognition is a useful feature in modern human computer interaction (HCI). In order to build efficient and reliable recognition systems, face detection, feature extraction and classification have to be robustly realised. Addressing the latter two issues, this work proposes a new method based on geometric and transient optical flow features and illustrates their comparison and integration for facial expression recognition. In the authors’ method, photogrammetric techniques are used to extract three-dimensional (3-D) features from every image frame, which is regarded as a geometric feature vector. Additionally, optical flow-based motion detection is carried out between consecutive images, what leads to the transient features. Artificial neural network and support vector machine classification results demonstrate the high performance of the proposed method. In particular, through the use of 3-D normalisation and colour information, the proposed method achieves an advanced feature representation for the accurate and robust classification of facial expressions.",
"title": ""
},
{
"docid": "6886be779c97b916f59f63e64a782ce1",
"text": "WikiTalk is an open-domain knowledge access system that talks about topics using Wikipedia articles as its knowledge source. Based on Constructive Dialogue Modelling theory, WikiTalk exploits the concepts of Topic and NewInfo to manage topic-tracking and topic-shifting. As the currently talked-about Topic can be any Wikipedia topic, the system is truly open-domain. NewInfos, the pieces of new information to be conveyed to the partner, are associated with hyperlinks extracted from the Wikipedia texts. Using these hyperlinks the system can change topics smoothly according to the user’s changing interests. As well as user-initiated topics, the system can suggest new topics using for example the daily \"Did you know?\" items in Wikipedia. WikiTalk can be employed in different environments. It has been demonstrated on Windows, with an open-source robotics simulator, and with the Aldebaran Nao humanoid robot.",
"title": ""
},
{
"docid": "1c8a3500d9fbd7e6c10dfffc06157d74",
"text": "The issue of privacy protection in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we put forward a framework to assess the capacity of privacy protection solutions to hide distinguishing facial information and to conceal identity. We then conduct rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by privacy protection techniques. Results show the ineffectiveness of naïve privacy protection techniques such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
},
{
"docid": "54bdabea83e86d21213801c990c60f4d",
"text": "A method of depicting crew climate using a group diagram based on behavioral ratings is described. Behavioral ratings were made of twelve three-person professional airline cockpit crews in full-mission simulations. These crews had been part of an earlier study in which captains had been had been grouped into three personality types, based on pencil and paper pre-tests. We found that low error rates were related to group climate variables as well as positive captain behaviors.",
"title": ""
},
{
"docid": "86d4296be61308ec93920d2d84f0694f",
"text": "by Jian Xu Our world produces massive data every day; they exist in diverse forms, from pairwise data and matrix to time series and trajectories. Meanwhile, we have access to the versatile toolkit of network analysis. Networks also have different forms; from simple networks to higher-order network, each representation has different capabilities in carrying information. For researchers who want to leverage the power of the network toolkit, and apply it beyond networks data to sequential data, diffusion data, and many more, the question is: how to represent big data and networks? This dissertation makes a first step to answering the question. It proposes the higherorder network, which is a critical piece for representing higher-order interaction data; it introduces a scalable algorithm for building the network, and visualization tools for interactive exploration. Finally, it presents broad applications of the higher-order network in the real-world. Dedicated to those who strive to be better persons.",
"title": ""
},
{
"docid": "ba2a9451fa1f794c7a819acaa9bc5d82",
"text": "In this paper we briefly address DLR’s (German Aerospace Center) background in space robotics by hand of corresponding milestone projects including systems on the International Space Station. We then discuss the key technologies needed for the development of an artificial “robonaut” generation with mechatronic ultra-lightweight arms and multifingered hands. The third arm generation is nearly finished now, approaching the limits of what is technologically achievable today with respect to light weight and power losses. In a similar way DLR’s second generation of artificial four-fingered hands was a big step towards higher reliability, manipulability and overall",
"title": ""
},
{
"docid": "f2d8ee741a61b1f950508ac57b2aa379",
"text": "The concentrations of cellulose chemical markers, in oil, are influenced by various parameters due to the partition between the oil and the cellulose insulation. One major parameter is the oil temperature which is a function of the transformer load, ambient temperature and the type of cooling. To accurately follow the chemical markers concentration trends during all the transformer life, it is crucial to normalize the concentrations at a specific temperature. In this paper, we propose equations for the normalization of methanol, ethanol and 2-furfural at 20 °C. The proposed equations have been validated on some real power transformers.",
"title": ""
},
{
"docid": "878617f145544f66e79f7d2d3404cbdf",
"text": "In this paper we address the problem of classifying cited work into important and non-important to the developments presented in a research publication. This task is vital for the algorithmic techniques that detect and follow emerging research topics and to qualitatively measure the impact of publications in increasingly growing scholarly big data. We consider cited work as important to a publication if that work is used or extended in some way. If a reference is cited as background work or for the purpose of comparing results, the cited work is considered to be non-important. By employing five classification techniques (Support Vector Machine, Naïve Bayes, Decision Tree, K-Nearest Neighbors and Random Forest) on an annotated dataset of 465 citations, we explore the effectiveness of eight previously published features and six novel features (including context based, cue words based and textual based). Within this set, our new features are among the best performing. Using the Random Forest classifier we achieve an overall classification accuracy of 0.91 AUC.",
"title": ""
},
{
"docid": "14ba4e49e1f773c8f7bfadf8f08a967e",
"text": "Mounting evidence suggests that acute and chronic stress, especially the stress-induced release of glucocorticoids, induces changes in glutamate neurotransmission in the prefrontal cortex and the hippocampus, thereby influencing some aspects of cognitive processing. In addition, dysfunction of glutamatergic neurotransmission is increasingly considered to be a core feature of stress-related mental illnesses. Recent studies have shed light on the mechanisms by which stress and glucocorticoids affect glutamate transmission, including effects on glutamate release, glutamate receptors and glutamate clearance and metabolism. This new understanding provides insights into normal brain functioning, as well as the pathophysiology and potential new treatments of stress-related neuropsychiatric disorders.",
"title": ""
},
{
"docid": "9050bdd4d35ba3c213ab50a2f29274a5",
"text": "OBJECTIVES\nWe explored the clinical application of goal-directed therapy in community-based rehabilitation from the perspective of clients with traumatic brain injury (TBI), their significant others, and their treating occupational therapists.\n\n\nMETHOD\nTwelve people with TBI and their significant others completed an outpatient, goal-directed, 12-week occupational therapy program. Semistructured interviews with 12 participants, 10 significant others, and 3 occupational therapists involved in delivering the therapy programs explored their experiences of goal-directed therapy.\n\n\nRESULTS\nParticipants, their significant others, and therapists described goal-directed therapy positively, expressing satisfaction with progress made.\n\n\nCONCLUSION\nGoals provide structure, which facilitates participation in rehabilitation despite the presence of barriers, including reduced motivation and impaired self-awareness. A therapist-facilitated, structured, goal-setting process in which the client, therapist, and significant others work in partnership can enhance the process of goal setting and goal-directed rehabilitation in a community rehabilitation context.",
"title": ""
},
{
"docid": "2139c4ffeb8b20e333c1e06b462760ff",
"text": "BACKGROUND\nDental esthetics has become a popular topic among all disciplines in dentistry. When a makeover is planned for the esthetic appearance of a patient's teeth, the clinician must have a logical diagnostic approach that results in the appropriate treatment plan. With some patients, the restorative dentist cannot accomplish the correction alone but may require the assistance of other dental disciplines.\n\n\nAPPROACH\nThis article describes an interdisciplinary approach to the diagnosis and management of anterior dental esthetics. The authors practice different disciplines in dentistry: restorative care, orthodontics and periodontics. However, for more than 20 years, this team has participated in an interdisciplinary dental study group that focuses on a wide variety of dental problems. One such area has been the analysis of anterior dental esthetic problems requiring interdisciplinary correction. This article will describe a unique approach to interdisciplinary dental diagnosis, beginning with esthetics but encompassing structure, function and biology to achieve an optimal result.\n\n\nCLINICAL IMPLICATIONS\nIf a clinician uses an esthetically based approach to the diagnosis of anterior dental problems, then the outcome of the esthetic treatment plan will be enhanced without sacrificing the structural, functional and biological aspects of the patient's dentition.",
"title": ""
},
{
"docid": "21d91a145e734d8d3731fc1f8cc7a056",
"text": "Efficient and accurate planning of fingertip grasps is essential for dexterous in-hand manipulation. In this work, we present a system for fingertip grasp planning that incrementally learns a heuristic for hand reachability and multi-fingered inverse kinematics. The system consists of an online execution module and an offline optimization module. During execution the system plans and executes fingertip grasps using Canny's grasp quality metric and a learned random forest based hand reachability heuristic. In the offline module, this heuristic is improved based on a grasping manifold that is incrementally learned from the experiences collected during execution. The system is evaluated both in simulation and on a Schunk-SDH dexterous hand mounted on a KUKA-KR5 arm. We show that, as the grasping manifold is adapted to the system's experiences, the heuristic becomes more accurate, which results in an improved performance of the execution module. The improvement is not only observed for experienced objects, but also for previously unknown objects of similar sizes.",
"title": ""
},
{
"docid": "0a35370e6c99e122b8051a977029d77a",
"text": "To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"title": ""
},
{
"docid": "36f2470ce215647bf92cbb9d8316c51c",
"text": "BACKGROUND\nIn schizophrenia and major depressive disorder, anhedonia (a loss of capacity to feel pleasure) had differently been considered as a premorbid personological trait or as a main symptom of their clinical picture. The aims of this study were to examine the pathological features of anhedonia in schizophrenic and depressed patients, and to investigate its clinical relations with general psychopathology (negative, positive, and depressive dimensions).\n\n\nMETHODS\nA total of 145 patients (80 schizophrenics and 65 depressed subjects) were assessed using the Physical Anhedonia Scale and the Social Anhedonia Scale (PAS and SAS, respectively), the Scales for the Assessment of Positive and Negative Symptoms (SAPS and SANS, respectively), the Calgary Depression Scale for Schizophrenics (CDSS), and the Hamilton Depression Rating Scale (HDRS). The statistical analysis was performed in two steps. First, the schizophrenic and depressed samples were dichotomised into 'anhedonic' and 'normal hedonic' subgroups (according to the 'double (PAS/SAS) cut-off') and were compared on the general psychopathology scores using the Mann-Whitney Z test. Subsequently, for the total schizophrenic and depressed samples, Spearman correlations were calculated to examine the relation between anhedonia ratings and the other psychopathological parameters.\n\n\nRESULTS\nIn the schizophrenic sample, anhedonia reached high significant levels only in 45% of patients (n = 36). This 'anhedonic' subgroup was distinguished by high scores in the disorganisation and negative dimensions. Positive correlations of anhedonia with disorganised and negative symptoms were also been detected. In the depressed sample, anhedonia reached high significant levels in only 36.9% of subjects (n = 24). This 'anhedonic' subgroup as distinguished by high scores in the depression severity and negative dimensions. Positive correlations of anhedonia with depressive and negative symptoms were also been detected.\n\n\nCONCLUSION\nIn the schizophrenic sample, anhedonia seems to be a specific subjective psychopathological experience of the negative and disorganised forms of schizophrenia. In the depressed sample, anhedonia seems to be a specific subjective psychopathological experience of those major depressive disorder forms with a marked clinical depression severity.",
"title": ""
},
{
"docid": "dd05688335b4240bbc40919870e30f39",
"text": "In this tool report, we present an overview of the Watson system, a Semantic Web search engine providing various functionalities not only to find and locate ontologies and semantic data online, but also to explore the content of these semantic documents. Beyond the simple facade of a search engine for the Semantic Web, we show that the availability of such a component brings new possibilities in terms of developing semantic applications that exploit the content of the Semantic Web. Indeed, Watson provides a set of APIs containing high level functions for finding, exploring and querying semantic data and ontologies that have been published online. Thanks to these APIs, new applications have emerged that connect activities such as ontology construction, matching, sense disambiguation and question answering to the Semantic Web, developed by our group and others. In addition, we also describe Watson as a unprecedented research platform for the study the Semantic Web, and of formalised knowledge in general.",
"title": ""
}
] |
scidocsrr
|
51f614fab58c41ece13a72e8ae589f18
|
Head Detection in Stereo Data for People Counting and Segmentation
|
[
{
"docid": "5300e9938a545895c8b97fe6c9d06aa5",
"text": "Background subtraction is a common computer vision task. We analyze the usual pixel-level approach. We develop an efficient adaptive algorithm using Gaussian mixture probability density. Recursive equations are used to constantly update the parameters and but also to simultaneously select the appropriate number of components for each pixel.",
"title": ""
}
] |
[
{
"docid": "23bc60e282fa6d459f564f162cd8166f",
"text": "The AUTOSAR consortium was founded to manage the growing electronics complexity and improve cost-efficiency without any compromises with quality as well as reusability. It is expected that AUTOSAR, open and standardized automotive software architecture, is widespread in automotive industry worldwide. In general, automotive embedded software has been closely coupled with the hardware, and the boundary between application software and hardware related software is not clear. The AUTOSAR approach requires a more different approaching way to develop automotive embedded software than today. In AUTOSAR, the application software and infrastructural software are clearly separated through the concept of RTE (Run Time Environment), and the paradigm of design is shifted from coding to configuration. By introducing AUTOSAR, existing system need to be translated into AUTOSAR and, several migration concepts are necessary for the successful transition. ECU developments typically start with legacy software, and it is unavoidable to rearrange existing software according to the AUTOSAR concept. This paper shows how to construct AUTOSAR application software components and basic software modules for already developed ECU software.",
"title": ""
},
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "eb3fad94acaf1f36783fdb22f3932ec7",
"text": "This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.",
"title": ""
},
{
"docid": "0a902cb846d08a932e14ee5d5820d3ac",
"text": "Providing assurances for self-adaptive systems is challenging. A primary underlying problem is uncertainty that may stem from a variety of different sources, ranging from incomplete knowledge to sensor noise and uncertain behavior of humans in the loop. Providing assurances that the self-adaptive system complies with its requirements calls for an enduring process spanning the whole lifetime of the system. In this process, humans and the system jointly derive and integrate new evidence and arguments, which we coined perpetual assurances for self-adaptive systems. In this paper, we provide a background framework and the foundation for perpetual assurances for self-adaptive systems. We elaborate on the concrete challenges of offering perpetual assurances, requirements for solutions, realization techniques and mechanisms to make solutions suitable. We also present benchmark criteria to compare solutions. We then present a concrete exemplar that researchers can use to assess and compare approaches for perpetual assurances for self-adaptation.",
"title": ""
},
{
"docid": "571c7cb6e0670539a3effbdd65858d2a",
"text": "When writing software, developers often employ abbreviations in identifier names. In fact, some abbreviations may never occur with the expanded word, or occur more often in the code. However, most existing program comprehension and search tools do little to address the problem of abbreviations, and therefore may miss meaningful pieces of code or relationships between software artifacts. In this paper, we present an automated approach to mining abbreviation expansions from source code to enhance software maintenance tools that utilize natural language information. Our scoped approach uses contextual information at the method, program, and general software level to automatically select the most appropriate expansion for a given abbreviation. We evaluated our approach on a set of 250 potential abbreviations and found that our scoped approach provides a 57% improvement in accuracy over the current state of the art.",
"title": ""
},
{
"docid": "d8ddb086b2bd881e68d14488025007f3",
"text": "This paper presents a compact model of SiC insulated-gate bipolar transistors (IGBTs) for power electronic circuit simulation. Here, we focus on the modeling of important specific features in the turn-off characteristics of the 4H-SiC IGBT, which are investigated with a 2-D device simulator, at supply voltages higher than 5 kV. These features are found to originate from the punch-through effect of the SiC IGBT. Thus, they are modeled based on the carrier distribution change caused by punch through and implemented into the silicon IGBT model named “HiSIM-IGBT” to obtain a practically useful SiC-IGBT model. The developed compact SiC-IGBT model for circuit simulation is verified with the 2-D device simulation data.",
"title": ""
},
{
"docid": "4e6bcefa6a3ac86e260c16d4a2a72cab",
"text": "This paper discusses the power of emotions in our health, happiness and wholeness, and the emotional impact of movies. It presents iFelt, an interactive video application to classify, access, explore and visualize movies based on their emotional properties and impact.",
"title": ""
},
{
"docid": "f493cd17fd5d7a1fc61ca9c07bc42faf",
"text": "A novel unified control of the dc–ac interlinking converters (ICs) for autonomous operation of hybrid ac/dc microgrids (MGs) has been proposed in this paper. When the slack terminals in the ac and dc MGs are available, the ICs will operate in autonomous control of interlinking power between the ac and dc subgrids, with the total load demand proportionally shared among the existing ac and dc slack terminals. With a flexible control variable added in power control loop, design of the interlinking power control, and droop features of ac and dc MGs can be decoupled. Moreover, this control variable can be tuned flexibly according to different power control objectives, such as proportional power sharing in terms of capacity (which is considered in this paper), interlinking power dispatch, and other optimal power dispatch algorithms, ensuring a well-designed flexibility and compatibility. Furthermore, if the dc MG or the ac MG loses dc voltage control or ac voltage and frequency control capability due to failures of operation of its slack terminals, the ICs can automatically and seamlessly transfer to dc MG support or ac MG support control modes without operation mode detection, communication, control scheme switching, and control saturation. In order to enhance the stability of the proposed unified control in different modes with different control plants, a phase compensation transfer function has been added in the power control loop. After thorough theoretical analysis and discussions, detailed simulation verifications based on PSCAD/EMTDC and experimental results based on a hardware experimental MG platform have been presented.",
"title": ""
},
{
"docid": "4b1f3a34a3f2acdfebcc311c507a97f7",
"text": "Planning the path of an autonomous, agile vehicle ina dynamic environment is a very complex problem, especially when the vehicle is required to use its full maneuvering capabilities. Recent efforts aimed at using randomized algorithms for planning the path of kinematic and dynamic vehicles have demonstrated considerable potential for implementation on future autonomous platforms. This paper builds upon these efforts by proposing a randomized path planning architecture for dynamical systems in the presence of xed and moving obstacles. This architecture addresses the dynamic constraints on the vehicle’s motion, and it provides at the same time a consistent decoupling between low-level control and motion planning. The path planning algorithm retains the convergence properties of its kinematic counterparts. System safety is also addressed in the face of nite computation times by analyzing the behavior of the algorithm when the available onboard computation resources are limited, and the planning must be performed in real time. The proposed algorithm can be applied to vehicles whose dynamics are described either by ordinary differential equations or by higher-level, hybrid representations. Simulation examples involving a ground robot and a small autonomous helicopter are presented and discussed.",
"title": ""
},
{
"docid": "61def8d760de928a8cae89f2699c51cf",
"text": "OBJECTIVES\nTo describe the development and validation of a cancer awareness questionnaire (CAQ) based on a literature review of previous studies, focusing on cancer awareness and prevention.\n\n\nMATERIALS AND METHODS\nA total of 388 Chinese undergraduate students in a private university in Kuala Lumpur, Malaysia, were recruited to evaluate the developed self-administered questionnaire. The CAQ consisted of four sections: awareness of cancer warning signs and screening tests; knowledge of cancer risk factors; barriers in seeking medical advice; and attitudes towards cancer and cancer prevention. The questionnaire was evaluated for construct validity using principal component analysis and internal consistency using Cronbach's alpha (α) coefficient. Test-retest reliability was assessed with a 10-14 days interval and measured using Pearson product-moment correlation.\n\n\nRESULTS\nThe initial 77-item CAQ was reduced to 63 items, with satisfactory construct validity, and a high total internal consistency (Cronbach's α=0.77). A total of 143 students completed the questionnaire for the test-retest reliability obtaining a correlation of 0.72 (p<0.001) overall.\n\n\nCONCLUSIONS\nThe CAQ could provide a reliable and valid measure that can be used to assess cancer awareness among local Chinese undergraduate students. However, further studies among students from different backgrounds (e.g. ethnicity) are required in order to facilitate the use of the cancer awareness questionnaire among all university students.",
"title": ""
},
{
"docid": "db1cdc2a4e3fe26146a1f9c8b0926f9e",
"text": "Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and form linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, with significant annotation inconsistency and noise. In this paper, we for the first time explore to automatically predict lexical sememes based on semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. In experiments, we take a real-world sememe knowledge base HowNet for training and evaluation, and the results reveal the effectiveness of our method for lexical sememe prediction. Our method will be of great use for annotation verification of existing noisy sememe knowledge bases and annotation suggestion of new words and phrases.",
"title": ""
},
{
"docid": "1bf7687bbc4aef6caa9f0fe6484b8945",
"text": "The role-based access control (RBAC) framework is a mechanism that describes the access control principle. As a common interaction, an organization provides a service to a user who owns a certain role that was issued by a different organization. Such trans-organizational RBAC is common in face-to-face communication but not in a computer network, because it is difficult to establish both the security that prohibits the malicious impersonation of roles and the flexibility that allows small organizations to participate and users to fully control their own roles. In this paper, we present an RBAC using smart contract (RBAC-SC), a platform that makes use of Ethereum’s smart contract technology to realize a trans-organizational utilization of roles. Ethereum is an open blockchain platform that is designed to be secure, adaptable, and flexible. It pioneered smart contracts, which are decentralized applications that serve as “autonomous agents” running exactly as programmed and are deployed on a blockchain. The RBAC-SC uses smart contracts and blockchain technology as versatile infrastructures to represent the trust and endorsement relationship that are essential in the RBAC and to realize a challenge-response authentication protocol that verifies a user’s ownership of roles. We describe the RBAC-SC framework, which is composed of two main parts, namely, the smart contract and the challenge-response protocol, and present a performance analysis. A prototype of the smart contract is created and deployed on Ethereum’s Testnet blockchain, and the source code is publicly available.",
"title": ""
},
{
"docid": "71cc535dcae1b50f9fe3314f4140d916",
"text": "Information and communications technology has fostered the rise of the sharing economy, enabling individuals to share excess capacity. In this paper, we focus on Airbnb.com, which is among the most prominent examples of the sharing economy. We take the perspective of an accommodation provider and investigate the concept of trust, which facilitates complete strangers to form temporal C2C relationships on Airbnb.com. In fact, the implications of trust in the sharing economy fundamentally differ to related online industries. In our research model, we investigate the formation of trust by incorporating two antecedents – ‘Disposition to trust’ and ‘Familiarity with Airbnb.com’. Furthermore, we differentiate between ‘Trust in Airbnb.com’ and ‘Trust in renters’ and examine their implications on two provider intentions. To seek support for our research model, we conducted a survey with 189 participants. The results show that both trust constructs are decisive to successfully initiate a sharing deal between two parties.",
"title": ""
},
{
"docid": "6e53c13c4da3f985f85d56d2c9b037e6",
"text": "Simulating human mobility is important in mobile networks because many mobile devices are either attached to or controlled by humans and it is very hard to deploy real mobile networks whose size is controllably scalable for performance evaluation. Lately various measurement studies of human walk traces have discovered several significant statistical patterns of human mobility. Namely these include truncated power-law distributions of flights, pause-times and inter-contact times, fractal way-points, and heterogeneously defined areas of individual mobility. Unfortunately, none of existing mobility models effectively captures all of these features. This paper presents a new mobility model called SLAW (Self-similar Least Action Walk) that can produce synthetic walk traces containing all these features. This is by far the first such model. Our performance study using using SLAW generated traces indicates that SLAW is effective in representing social contexts present among people sharing common interests or those in a single community such as university campus, companies and theme parks. The social contexts are typically common gathering places where most people visit during their daily lives such as student unions, dormitory, street malls and restaurants. SLAW expresses the mobility patterns involving these contexts by fractal waypoints and heavy-tail flights on top of the waypoints. We verify through simulation that SLAW brings out the unique performance features of various mobile network routing protocols.",
"title": ""
},
{
"docid": "8e3f8fca93ca3106b83cf85d20c061ca",
"text": "KeeLoq is a 528-round lightweight block cipher which has a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a new type of attacking method. In this paper, we investigate the security of KeeLoq against iterative side-channel cube attack which is an enhanced attack scheme. Based on structure of typical block ciphers, we give the model of iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can exactly possess the information of one bit leakage after round 23. The new attack model costs a data complexity of 211.00 chosen plaintexts to recover the 23-bit key of KeeLoq. Our attack will reduce the key searching space to 241 by considering an error-free bit from internal states.",
"title": ""
},
{
"docid": "aff140cf2ffe8acef72317f4cc55e1b8",
"text": "We address the problem of deploying groups of tens or hundreds of unmanned ground vehicles (UGVs) in urban environments where a group of aerial vehicles (UAVs) can be used to coordinate the ground vehicles. We envision a hierarchy in which UAVs with aerial cameras can be used to monitor and command a swarm of UGVs, controlling the splitting and merging of the swarm into groups and the shape (distribution) and motion of each group. We call these UAVs Aerial Shepherds. We show a probabilistic approach using the EM algorithm for the initial assignment of shepherds to groups and present behaviors that allow an efficient hierarchical decomposition. We illustrate the framework through simulation examples, with applications to deployment in an urban environment.",
"title": ""
},
{
"docid": "343ad5204ee034972654aba86439730f",
"text": "This paper presents a Doppler radar vital sign detection system with random body movement cancellation (RBMC) technique based on adaptive phase compensation. An ordinary camera was integrated with the system to measure the subject's random body movement (RBM) that is fed back as phase information to the radar system for RBMC. The linearity of the radar system, which is strictly related to the circuit saturation problem in noncontact vital sign detection, has been thoroughly analyzed and discussed. It shows that larger body movement does not necessarily mean larger radar baseband output. High gain configuration at baseband is required for acceptable SNR in noncontact vital sign detection. The phase compensation at radar RF front-end helps to relieve the high-gain baseband from potential saturation in the presence of large body movement. A simple video processing algorithm was presented to extract the RBM without using any marker. Both theoretical analysis and simulation have been carried out to validate the linearity analysis and the proposed RBMC technique. Two experiments were carried out in the lab environment. One is the phase compensation at RF front end to extract a phantom motion in the presence of another large shaker motion, and the other one is to measure the subject person breathing normally but randomly moving his body back and forth. The experimental results show that the proposed radar system is effective to relieve the linearity burden of the baseband circuit and help compensate the RBM.",
"title": ""
},
{
"docid": "c2bc140c0203ebd1b5c0d378ef77763b",
"text": "Community discovery is central to social network analysis as it provides a natural way for decomposing a social graph to smaller ones based on the interactions among individuals. Communities do not need to be disjoint and often exhibit recursive structure. The latter has been established as a distinctive characteristic of large social graphs, indicating a modularity in the way humans build societies. This paper presents the implementation of four established community discovery algorithms in the form of Neo4j higher order analytics with the Twitter4j Java API and their application to two real Twitter graphs with diverse structural properties. In order to evaluate the results obtained from each algorithm a regularization-like metric, balancing the global and local graph self-similarity akin to the way it is done in signal processing, is proposed.",
"title": ""
},
{
"docid": "1e8acf321f7ff3a1a496e4820364e2a8",
"text": "The liver is a central regulator of metabolism, and liver failure thus constitutes a major health burden. Understanding how this complex organ develops during embryogenesis will yield insights into how liver regeneration can be promoted and how functional liver replacement tissue can be engineered. Recent studies of animal models have identified key signaling pathways and complex tissue interactions that progressively generate liver progenitor cells, differentiated lineages and functional tissues. In addition, progress in understanding how these cells interact, and how transcriptional and signaling programs precisely coordinate liver development, has begun to elucidate the molecular mechanisms underlying this complexity. Here, we review the lineage relationships, signaling pathways and transcriptional programs that orchestrate hepatogenesis.",
"title": ""
},
{
"docid": "425eea5a508dcdd63e0e1ea8e6527a3d",
"text": "This technical report describes the multi-label classification (MLC) search space in the MEKA software, including the traditional/meta MLC algorithms, and the traditional/meta/preprocessing single-label classification (SLC) algorithms. The SLC search space is also studied because is part of MLC search space as several methods use problem transformation methods to create a solution (i.e., a classifier) for a MLC problem. This was done in order to understand better the MLC algorithms. Finally, we propose a grammar that formally expresses this understatement.",
"title": ""
}
] |
scidocsrr
|
e0960c2ca95331553edf97960c9c1cac
|
An Automatic Knowledge Graph Creation Framework from Natural Language Text
|
[
{
"docid": "c02fb121399e1ed82458fb62179d2560",
"text": "Most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. This approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. To overcome this problem, we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. Each tier builds on the previous tier’s entity cluster output. Further, our model propagates global information by sharing attributes (e.g., gender and number) across mentions in the same cluster. This cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. The framework is highly modular: new coreference modules can be plugged in without any change to the other modules. In spite of its simplicity, our approach outperforms many state-of-the-art supervised and unsupervised models on several standard corpora. This suggests that sievebased approaches could be applied to other NLP tasks.",
"title": ""
},
{
"docid": "40ec8caea52ba75a6ad1e100fb08e89a",
"text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.",
"title": ""
}
] |
[
{
"docid": "04d94b476a40466117af236870f22035",
"text": "With advances in deep learning, neural network variants are becoming the dominant architecture for many NLP tasks. In this project, we apply several deep learning approaches to question answering, with a focus on the bAbI dataset.",
"title": ""
},
{
"docid": "7cfdad39cebb90cac18a8f9ae6a46238",
"text": "A malware macro (also called \"macro virus\") is the code that exploits the macro functionality of office documents (especially Microsoft Office’s Excel and Word) to carry out malicious action against the systems of the victims that open the file. This type of malware was very popular during the late 90s and early 2000s. After its rise when it was created as a propagation method of other malware in 2014, macro viruses continue posing a threat to the user that is far from being controlled. This paper studies the possibility of improving macro malware detection via machine learning techniques applied to the properties of the code.",
"title": ""
},
{
"docid": "6b1e67c1768f9ec7a6ab95a9369b92d1",
"text": "Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models.",
"title": ""
},
{
"docid": "9e4044150b05752693e11627e7f8cd2b",
"text": "Snarr RL, Esco MR, Witte EV, Jenkins CT, Brannan RM. Electromyographic Activity of Rectus Abdominis During a Suspension Push-up Compared to Traditional Exercises. JEPonline 2013;16(3):1-8. The purpose of this study was to compare the electromyographic (EMG) activity of the rectus abdominis (RA) across three different exercises [i.e., suspension pushup (SPU), standard pushup (PU) and abdominal supine crunch (C)]. Fifteen apparently healthy men (n = 12, age = 25.75 ± 3.91 yrs) and women (n = 3, age = 22.33 ± 1.15) volunteered to participate in this study. The subjects performed four repetitions of SPU, PU, and C. The order of the exercises was randomized. Mean peak EMG activity of the RA was recorded across the 4 repetitions of each exercise. Raw (mV) and normalized (%MVC) values were analyzed. The results of this study showed that SPU and C elicited a significantly greater (P<0.05) activation of the RA reported as raw (2.2063 ± 1.00198 mV and 1.9796 ± 1.36190 mV, respectively) and normalized values (68.0 ± 16.5% and 52 ± 28.7%, respectively) compared to PU (i.e., 0.8448 ± 0.76548 mV and 21 ± 16.6%). The SPU and C were not significantly different (P>0.05). This investigation indicated that SPU and C provided similar activation levels of the RA that were significantly greater than PU.",
"title": ""
},
{
"docid": "99e3a2d4dbb1423be73adaa4e9288a94",
"text": "Playing a musical instrument is an intense, multisensory, and motor experience that usually commences at an early age and requires the acquisition and maintenance of a range of skills over the course of a musician's lifetime. Thus, musicians offer an excellent human model for studying the brain effects of acquiring specialized sensorimotor skills. For example, musicians learn and repeatedly practice the association of motor actions with specific sound and visual patterns (musical notation) while receiving continuous multisensory feedback. This association learning can strengthen connections between auditory and motor regions (e.g., arcuate fasciculus) while activating multimodal integration regions (e.g., around the intraparietal sulcus). We argue that training of this neural network may produce cross-modal effects on other behavioral or cognitive operations that draw on this network. Plasticity in this network may explain some of the sensorimotor and cognitive enhancements that have been associated with music training. These enhancements suggest the potential for music making as an interactive treatment or intervention for neurological and developmental disorders, as well as those associated with normal aging.",
"title": ""
},
{
"docid": "cf219b9093dc55f09d067954d8049aeb",
"text": "In this work we explore a straightforward variational Bayes scheme for Recurrent Neural Networks. Firstly, we show that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, also reducing the amount of parameters by 80%. Secondly, we demonstrate how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs. We incorporate local gradient information into the approximate posterior to sharpen it around the current batch statistics. We show how this technique is not exclusive to recurrent neural networks and can be applied more widely to train Bayesian neural networks. We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improve our model over a variety of other schemes for training them. We also introduce a new benchmark for studying uncertainty for language models so future methods can be easily compared.",
"title": ""
},
{
"docid": "78d1a0f7a66d3533b1a00d865eeb6abd",
"text": "Motivated by a real-life problem of sharing social network data that contain sensitive personal information, we propose a novel approach to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network while maintaining the validity of statistical results. A case study using a version of the Enron e-mail corpus dataset demonstrates the application and usefulness of the proposed techniques in solving the challenging problem of maintaining privacy and supporting open access to network data to ensure reproducibility of existing studies and discovering new scientific insights that can be obtained by analyzing such data. We use a simple yet effective randomized response mechanism to generate synthetic networks under -edge differential privacy, and then use likelihood based inference for missing data and Markov chain Monte Carlo techniques to fit exponential-family random graph models to the generated synthetic networks.",
"title": ""
},
{
"docid": "9043a5aae40471cb9f671a33725b0072",
"text": "In a software development group of IBM Retail Store Solutions, we built a non-trivial software system based on a stable standard specification using a disciplined, rigorous unit testing and build approach based on the test- driven development (TDD) practice. Using this practice, we reduced our defect rate by about 50 percent compared to a similar system that was built using an ad-hoc unit testing approach. The project completed on time with minimal development productivity impact. Additionally, the suite of automated unit test cases created via TDD is a reusable and extendable asset that will continue to improve quality over the lifetime of the software system. The test suite will be the basis for quality checks and will serve as a quality contract between all members of the team.",
"title": ""
},
{
"docid": "9d97803a016e24fc9a742d45adf1cc3a",
"text": "Biochemical compositional analysis of microbial biomass is a useful tool that can provide insight into the behaviour of an organism and its adaptational response to changes in its environment. To some extent, it reflects the physiological and metabolic status of the organism. Conventional methods to estimate biochemical composition often employ different sample pretreatment strategies and analytical steps for analysing each major component, such as total proteins, carbohydrates, and lipids, making it labour-, time- and sample-intensive. Such analyses when carried out individually can also result in uncertainties of estimates as different pre-treatment or extraction conditions are employed for each of the component estimations and these are not necessarily standardised for the organism, resulting in observations that are not easy to compare within the experimental set-up or between laboratories. We recently reported a method to estimate total lipids in microalgae (Chen, Vaidyanathan, Anal. Chim. Acta, 724, 67-72). Here, we propose a unified method for the simultaneous estimation of the principal biological components, proteins, carbohydrates, lipids, chlorophyll and carotenoids, in a single microalgae culture sample that incorporates the earlier published lipid assay. The proposed methodology adopts an alternative strategy for pigment assay that has a high sensitivity. The unified assay is shown to conserve sample (by 79%), time (67%), chemicals (34%) and energy (58%) when compared to the corresponding assay for each component, carried out individually on different samples. The method can also be applied to other microorganisms, especially those with recalcitrant cell walls.",
"title": ""
},
{
"docid": "62b24fad8ab9d1c426ed3ff7c3c5fb49",
"text": "In the present paper we have reported a wavelet based time-frequency multiresolution analysis of an ECG signal. The ECG (electrocardiogram), which records hearts electrical activity, is able to provide with useful information about the type of Cardiac disorders suffered by the patient depending upon the deviations from normal ECG signal pattern. We have plotted the coefficients of continuous wavelet transform using Morlet wavelet. We used different ECG signal available at MIT-BIH database and performed a comparative study. We demonstrated that the coefficient at a particular scale represents the presence of QRS signal very efficiently irrespective of the type or intensity of noise, presence of unusually high amplitude of peaks other than QRS peaks and Base line drift errors. We believe that the current studies can enlighten the path towards development of very lucid and time efficient algorithms for identifying and representing the QRS complexes that can be done with normal computers and processors. KeywordsECG signal, Continuous Wavelet Transform, Morlet Wavelet, Scalogram, QRS Detector.",
"title": ""
},
{
"docid": "3d3a4cd96a349a7ebbaf168a1685e0d8",
"text": "We consider influence maximization (IM) in social networks, which is the problem of maximizing the number of users that become aware of a product by selecting a set of “seed” users to expose the product to. While prior work assumes a known model of information diffusion, we propose a parametrization in terms of pairwise reachability which makes our framework agnostic to the underlying diffusion model. We give a corresponding monotone, submodular surrogate function, and show that it is a good approximation to the original IM objective. We also consider the case of a new marketer looking to exploit an existing social network, while simultaneously learning the factors governing information propagation. For this, we propose a pairwise-influence semi-bandit feedback model and develop a LinUCB-based bandit algorithm. Our model-independent regret analysis shows that our bound on the cumulative regret has a better (as compared to previous work) dependence on the size of the network. By using the graph Laplacian eigenbasis to construct features, we describe a practical LinUCB implementation. Experimental evaluation suggests that our framework is robust to the underlying diffusion model and can efficiently learn a near-optimal solution.",
"title": ""
},
{
"docid": "6c6afdefc918e6dfdb6bc5f5bb96cf45",
"text": "Due to the complexity and uncertainty of socioeconomic environments and cognitive diversity of group members, the cognitive information over alternatives provided by a decision organization consisting of several experts is usually uncertain and hesitant. Hesitant fuzzy preference relations provide a useful means to represent the hesitant cognitions of the decision organization over alternatives, which describe the possible degrees that one alternative is preferred to another by using a set of discrete values. However, in order to depict the cognitions over alternatives more comprehensively, besides the degrees that one alternative is preferred to another, the decision organization would give the degrees that the alternative is non-preferred to another, which may be a set of possible values. To effectively handle such common cases, in this paper, the dual hesitant fuzzy preference relation (DHFPR) is introduced and the methods for group decision making (GDM) with DHFPRs are investigated. Firstly, a new operator to aggregate dual hesitant fuzzy cognitive information is developed, which treats the membership and non-membership information fairly, and can generate more neutral results than the existing dual hesitant fuzzy aggregation operators. Since compatibility is a very effective tool to measure the consensus in GDM with preference relations, then two compatibility measures for DHFPRs are proposed. After that, the developed aggregation operator and compatibility measures are applied to GDM with DHFPRs and two GDM methods are designed, which can be applied to different decision making situations. Each GDM method involves a consensus improving model with respect to DHFPRs. The model in the first method reaches the desired consensus level by adjusting the group members’ preference values, and the model in the second method improves the group consensus level by modifying the weights of group members according to their contributions to the group decision, which maintains the group members’ original opinions and allows the group members not to compromise for reaching the desired consensus level. In actual applications, we may choose a proper method to solve the GDM problems with DHFPRs in light of the actual situation. Compared with the GDM methods with IVIFPRs, the proposed methods directly apply the original DHFPRs to decision making and do not need to transform them into the IVIFPRs, which can avoid the loss and distortion of original information, and thus can generate more precise decision results.",
"title": ""
},
{
"docid": "09ac80ede8822e3e71642b8bd57ff262",
"text": "Auditory displays are described for several application domains: transportation, industrial processes, health care, operation theaters, and service sectors. Several types of auditory displays are compared, such as warning, state, and intent displays. Also, the importance for blind people in a visualized world is considered with suitable approaches. The service robot domain has been chosen as an example for the future use of auditory displays within multimedia process supervision and control applications in industrial, transportation, and medical systems. The design of directional sounds and of additional sounds for robot states, as well as the design of more complicated robot sound tracks, are explained. Basic musical elements and robot movement sounds have been combined. Two exploratory experimental studies, one on the understandability of the directional sounds and the robot state sounds as well as another on the auditory perception of intended robot trajectories in a simulated supermarket scenario, are described. Subjective evaluations of sound characteristics such as urgency, expressiveness, and annoyance have been carried out by nonmusicians and musicians. These experimental results are briefly compared with time-frequency analyses.",
"title": ""
},
{
"docid": "20238a257954a4a0d02549250b082dce",
"text": "Wearable, flexible healthcare devices, which can monitor health data to predict and diagnose disease in advance, benefit society. Toward this future, various flexible and stretchable sensors as well as other components are demonstrated by arranging materials, structures, and processes. Although there are many sensor demonstrations, the fundamental characteristics such as the dependence of a temperature sensor on film thickness and the impact of adhesive for an electrocardiogram (ECG) sensor are yet to be explored in detail. In this study, the effect of film thickness for skin temperature measurements, adhesive force, and reliability of gel-less ECG sensors as well as an integrated real-time demonstration is reported. Depending on the ambient conditions, film thickness strongly affects the precision of skin temperature measurements, resulting in a thin flexible film suitable for a temperature sensor in wearable device applications. Furthermore, by arranging the material composition, stable gel-less sticky ECG electrodes are realized. Finally, real-time simultaneous skin temperature and ECG signal recordings are demonstrated by attaching an optimized device onto a volunteer's chest.",
"title": ""
},
{
"docid": "1d8917f5faaed1531fdcd4df06ff0920",
"text": "4G cellular standards are targeting aggressive spectrum reuse (frequency reuse 1) to achieve high system capacity and simplify radio network planning. The increase in system capacity comes at the expense of SINR degradation due to increased intercell interference, which severely impacts cell-edge user capacity and overall system throughput. Advanced interference management schemes are critical for achieving the required cell edge spectral efficiency targets and to provide ubiquity of user experience throughout the network. In this article we compare interference management solutions across the two main 4G standards: IEEE 802.16m (WiMAX) and 3GPP-LTE. Specifically, we address radio resource management schemes for interference mitigation, which include power control and adaptive fractional frequency reuse. Additional topics, such as interference management for multitier cellular deployments, heterogeneous architectures, and smart antenna schemes will be addressed in follow-up papers.",
"title": ""
},
{
"docid": "8d73ecdcbebed67393d31095d8a72ee0",
"text": "This paper presents a method for autonomous recharging of a mobile robot, a necessity for achieving long-term robotic activity without human intervention. A recharging station is designed consisting of a stationary docking station and a docking mechanism mounted to an ER-1 Evolution Robotics robot. The docking station and docking mechanism serve as a dual-power source, providing a mechanical and electrical connection between the recharging system of the robot and a laptop placed on it. Docking strategy algorithms use vision based navigation. The result is a significantly low-cost, high-entrance angle tolerant system. Iterative improvements to the system, to resist environmental perturbations and implement obstacle avoidance, ultimately resulted in a docking success rate of 100 percent over 50 trials.",
"title": ""
},
{
"docid": "72e0824602462a21781e9a881041e726",
"text": "In an effort to develop a genomics-based approach to the prediction of drug response, we have developed an algorithm for classification of cell line chemosensitivity based on gene expression profiles alone. Using oligonucleotide microarrays, the expression levels of 6,817 genes were measured in a panel of 60 human cancer cell lines (the NCI-60) for which the chemosensitivity profiles of thousands of chemical compounds have been determined. We sought to determine whether the gene expression signatures of untreated cells were sufficient for the prediction of chemosensitivity. Gene expression-based classifiers of sensitivity or resistance for 232 compounds were generated and then evaluated on independent sets of data. The classifiers were designed to be independent of the cells' tissue of origin. The accuracy of chemosensitivity prediction was considerably better than would be expected by chance. Eighty-eight of 232 expression-based classifiers performed accurately (with P < 0.05) on an independent test set, whereas only 12 of the 232 would be expected to do so by chance. These results suggest that at least for a subset of compounds genomic approaches to chemosensitivity prediction are feasible.",
"title": ""
},
{
"docid": "0ca588e42d16733bc8eef4e7957e01ab",
"text": "Three-dimensional (3D) finite element (FE) models are commonly used to analyze the mechanical behavior of the bone under different conditions (i.e., before and after arthroplasty). They can provide detailed information but they are numerically expensive and this limits their use in cases where large or numerous simulations are required. On the other hand, 2D models show less computational cost, but the precision of results depends on the approach used for the simplification. Two main questions arise: Are the 3D results adequately represented by a 2D section of the model? Which approach should be used to build a 2D model that provides reliable results compared to the 3D model? In this paper, we first evaluate if the stem symmetry plane used for generating the 2D models of bone-implant systems adequately represents the results of the full 3D model for stair climbing activity. Then, we explore three different approaches that have been used in the past for creating 2D models: (1) without side-plate (WOSP), (2) with variable thickness side-plate and constant cortical thickness (SPCT), and (3) with variable thickness side-plate and variable cortical thickness (SPVT). From the different approaches investigated, a 2D model including a side-plate best represents the results obtained with the full 3D model with much less computational cost. The side-plate needs to have variable thickness, while the cortical bone thickness can be kept constant.",
"title": ""
},
{
"docid": "a1348a9823fc85d22bc73f3fe177e0ba",
"text": "Ultrasound imaging makes use of backscattering of waves during their interaction with scatterers present in biological tissues. Simulation of synthetic ultrasound images is a challenging problem on account of inability to accurately model various factors of which some include intra-/inter scanline interference, transducer to surface coupling, artifacts on transducer elements, inhomogeneous shadowing and nonlinear attenuation. Current approaches typically solve wave space equations making them computationally expensive and slow to operate. We propose a generative adversarial network (GAN) inspired approach for fast simulation of patho-realistic ultrasound images. We apply the framework to intravascular ultrasound (IVUS) simulation. A stage 0 simulation performed using pseudo B-mode ultrasound image simulator yields speckle mapping of a digitally defined phantom. The stage I GAN subsequently refines them to preserve tissue specific speckle intensities. The stage II GAN further refines them to generate high resolution images with patho-realistic speckle profiles. We evaluate patho-realism of simulated images with a visual Turing test indicating an equivocal confusion in discriminating simulated from real. We also quantify the shift in tissue specific intensity distributions of the real and simulated images to prove their similarity.",
"title": ""
},
{
"docid": "396ce5ec8ef03a55ed022c4b580531bb",
"text": "BACKGROUND\nThe aim of this study was to evaluate if the presence of a bovine aortic arch (BAA)- the most common aortic arch anomaly-influences the location of the primary entry tear, the surgical procedure, and the outcome of patients undergoing operation for type A acute aortic dissection (AAD).\n\n\nMETHODS\nA total of 157 patients underwent emergency operations because of AAD (71% men, mean age 59.5 ± 13 years). Preoperative computed tomographic scans were screened for the presence of BAA. Patients were separated into 2 groups: presenting with BAA (BAA+, n = 22) or not (BAA-, n = 135). Location of the primary tear, surgical treatment, outcome, and risk factors for postoperative neurologic injury and in-hospital mortality were analyzed.\n\n\nRESULTS\nFourteen percent (22 of 157) of all patients operated on for AAD had a concomitant BAA. Location of the primary entry tear was predominantly in the aortic arch in patients with BAA (BAA+, 59.1% versus BAA-, 13.3%; p < 0.001). Multivariate analysis revealed the presence of a BAA to be an independent risk factor for having the primary tear in the aortic arch (odds ratio [OR], 14.79; 95% confidence interval [CI] 4.54-48.13; p < 0.001) but not for in-hospital mortality. Patients with BAA had a higher rate of postoperative neurologic injury (BAA+, 35% versus BAA-, 7.9%; p = 0.004). Multivariate analysis identified the presence of BAA as an independent risk factor for postoperative neurologic injury (OR, 4.9; 95% CI, 1.635-14.734; p = 0.005).\n\n\nCONCLUSIONS\nIn type A AAD, the presence of a BAA predicts the location of the primary entry site in the aortic arch and is an independent risk factor for a poor neurologic outcome.",
"title": ""
}
] |
scidocsrr
|
e05a4449a6424b71e6edd332eb402c49
|
An Incentive-Compatible Routing Protocol for Two-Hop Delay-Tolerant Networks
|
[
{
"docid": "135513fa93b5fade93db11fdf942fe7a",
"text": "This paper describes two techniques that improve throughput in an ad hoc network in the presence of nodes that agree to forward packets but fail to do so. To mitigate this problem, we propose categorizing nodes based upon their dynamically measured behavior. We use a watchdog that identifies misbehaving nodes and a pathrater that helps routing protocols avoid these nodes. Through simulation we evaluate watchdog and pathrater using packet throughput, percentage of overhead (routing) transmissions, and the accuracy of misbehaving node detection. When used together in a network with moderate mobility, the two techniques increase throughput by 17% in the presence of 40% misbehaving nodes, while increasing the percentage of overhead transmissions from the standard routing protocol's 9% to 17%. During extreme mobility, watchdog and pathrater can increase network throughput by 27%, while increasing the overhead transmissions from the standard routing protocol's 12% to 24%.",
"title": ""
},
{
"docid": "2af231da02dbfb4db5c44c386870142c",
"text": "Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"title": ""
}
] |
[
{
"docid": "06ad885fdd05799306bae3d7d0ff1b10",
"text": "We present a data driven approach to holistic scene understanding. From a single image of an indoor scene, our approach estimates its detailed 3D geometry, i.e. The location of its walls and floor, and the 3D appearance of its containing objects, as well as its semantic meaning, i.e. A prediction of what objects it contains. This is made possible by using large datasets of detailed 3D models alongside appearance based detectors. We first estimate the 3D layout of a room, and extrapolate 2D object detection hypotheses to three dimensions to form bounding cuboids. Cuboids are converted to detailed 3D models of the predicted semantic category. Combinations of 3D models are used to create a large list of layout hypotheses for each image -- where each layout hypothesis is semantically meaningful and geometrically plausible. The likelihood of each layout hypothesis is ranked using a learned linear model -- and the hypothesis with the highest predicted likelihood is the final predicted 3D layout. Our approach is able to recover the detailed geometry of scenes, provide precise segmentation of objects in the image plane, and estimate objects' pose in 3D.",
"title": ""
},
{
"docid": "3ce61e8c33143c446ddc9aaa01274bc6",
"text": "Research on adolescent self-esteem indicates that adolescence is a time in which individuals experience important changes in their physical, cognitive, and social identities. Prior research suggests that there is a positive relationship between an adolescent's participation in structured extracurricular activities and well-being in a variety of domains, and some research indicates that these relationships may be dependent on the type of activities in which adolescents participate. Building on previous research, a growth-curve analysis was utilized to examine self-esteem trajectories from adolescence (age 14) to young adulthood (age 26). Using 3 waves of data from National Longitudinal Study of Adolescent Health (n = 5,399; 47.8% male), the analysis estimated a hierarchical growth-curve model emphasizing the effects of age and type of school-based extracurricular activity portfolio, including sports and school clubs, on self-esteem. The results indicated that age had a linear relationship with self-esteem over time. Changes in both the initial level of self-esteem and the growth of self-esteem over time were significantly influenced by the type of extracurricular activity portfolio. The findings were consistent across race and sex. The results support the utility of examining the longitudinal impact of portfolio type on well-being outcomes.",
"title": ""
},
{
"docid": "29f8f508808b9c602abc776eefeac77c",
"text": "Phase shifters based on double dielectric slab-loaded air-filled substrate-integrated waveguide (SIW) are proposed for high-performance applications at millimeter-wave frequencies. The three-layered air-filled SIW, made of a low-cost multilayer printed circuit board process, allows for substantial loss reduction and power handling capability enhancement compared with the conventional dielectric-filled counterpart. It is of particular interest for millimeter-wave applications that generally require low-loss transmission and high-density power handling. Its top and bottom layers may make use of a low-cost standard substrate, such as FR-4, on which baseband analog or digital circuits can be implemented so to obtain very compact, low cost, and self-packaged millimeter-wave integrated systems compared with the systems based on rectangular waveguide while achieving higher performance than the systems based on the conventional SIW. In this paper, it is demonstrated that transmission loss can be further improved at millimeter-wave frequencies with an additional polishing of the top and bottom conductor surfaces. Over Ka-band, an improvement of average 1.56 dB/m is experimentally demonstrated. Using the air-filled SIW fabrication process, dielectric slabs can be implemented along conductive via rows without any additional process. Based on the propagation properties of the obtained double dielectric slab-loaded air-filled SIW, phase shifters are proposed. To obtain a broadband response, an equal-length compensated phase shifter made of two air-filled SIW structures, offering a reverse varying propagation constant difference against frequency, is proposed and demonstrated at Ka-band. Finally, a single dielectric slab phase shifter is investigated for comparison and its bandwidth limitation is highlighted.",
"title": ""
},
{
"docid": "eb83222ce7180fe3039c00eeb8600d2f",
"text": "Cloud-assisted video streaming has emerged as a new paradigm to optimize multimedia content distribution over the Internet. This article investigates the problem of streaming cloud-assisted real-time video to multiple destinations (e.g., cloud video conferencing, multi-player cloud gaming, etc.) over lossy communication networks. The user diversity and network dynamics result in the delay differences among multiple destinations. This research proposes <underline>D</underline>ifferentiated cloud-<underline>A</underline>ssisted <underline>VI</underline>deo <underline>S</underline>treaming (DAVIS) framework, which proactively leverages such delay differences in video coding and transmission optimization. First, we analytically formulate the optimization problem of joint coding and transmission to maximize received video quality. Second, we develop a quality optimization framework that integrates the video representation selection and FEC (Forward Error Correction) packet interleaving. The proposed DAVIS is able to effectively perform differentiated quality optimization for multiple destinations by taking advantage of the delay differences in cloud-assisted video streaming system. We conduct the performance evaluation through extensive experiments with the Amazon EC2 instances and Exata emulation platform. Evaluation results show that DAVIS outperforms the reference cloud-assisted streaming solutions in video quality and delay performance.",
"title": ""
},
{
"docid": "20f05b48fa88283d649a3bcadf2ed818",
"text": "A great variety of native and introduced plant species were used as foods, medicines and raw materials by the Rumsen and Mutsun Costanoan peoples of central California. The information presented here has been abstracted from original unpublished field notes recorded during the 1920s and 1930s by John Peabody Harrington, who also directed the collection of some 500 plant specimens. The nature of Harrington’s data and their significance for California ethnobotany are described, followed by a summary of information on the ethnographic uses of each plant.",
"title": ""
},
{
"docid": "9f15297a7eab4084fa7d17b618d82a02",
"text": "Purpose – The purpose of this study is to update a global ranking of knowledge management and intellectual capital (KM/IC) academic journals. Design/methodology/approach – Two different approaches were utilized: a survey of 379 active KM/IC researchers; and the journal citation impact method. Scores produced by the application of these methods were combined to develop the final ranking. Findings – Twenty-five KM/IC-centric journals were identified and ranked. The top six journals are: Journal of Knowledge Management, Journal of Intellectual Capital, The Learning Organization, Knowledge Management Research & Practice, Knowledge and Process Management and International Journal of Knowledge Management. Knowledge Management Research & Practice has substantially improved its reputation. The Learning Organization and Journal of Intellectual Capital retained their previous positions due to their strong citation impact. The number of KM/IC-centric and KM/IC-relevant journals has been growing at the pace of one new journal launch per year. This demonstrates that KM/IC is not a scientific fad; instead, the discipline is progressing towards academic maturity and recognition. Practical implications – The developed ranking may be used by various stakeholders, including journal editors, publishers, reviewers, researchers, new scholars, students, policymakers, university administrators, librarians and practitioners. It is a useful tool to further promote the KM/IC discipline and develop its unique identity. It is important for all KM/IC journals to become included in Thomson Reuters’ Journal Citation Reports. Originality/value – This is the most up-to-date ranking of KM/IC journals.",
"title": ""
},
{
"docid": "f690ffa886d3ed44a6b2171226cd151f",
"text": "Gold nanoparticles are widely used in biomedical imaging and diagnostic tests. Based on their established use in the laboratory and the chemical stability of Au(0), gold nanoparticles were expected to be safe. The recent literature, however, contains conflicting data regarding the cytotoxicity of gold nanoparticles. Against this background a systematic study of water-soluble gold nanoparticles stabilized by triphenylphosphine derivatives ranging in size from 0.8 to 15 nm is made. The cytotoxicity of these particles in four cell lines representing major functional cell types with barrier and phagocyte function are tested. Connective tissue fibroblasts, epithelial cells, macrophages, and melanoma cells prove most sensitive to gold particles 1.4 nm in size, which results in IC(50) values ranging from 30 to 56 microM depending on the particular 1.4-nm Au compound-cell line combination. In contrast, gold particles 15 nm in size and Tauredon (gold thiomalate) are nontoxic at up to 60-fold and 100-fold higher concentrations, respectively. The cellular response is size dependent, in that 1.4-nm particles cause predominantly rapid cell death by necrosis within 12 h while closely related particles 1.2 nm in diameter effect predominantly programmed cell death by apoptosis.",
"title": ""
},
{
"docid": "a526a2254f4408048828a9112e475020",
"text": "Fast Fourier transform (FFT)-based restorations are fast, but at the expense of assuming that the blurring and deblurring are based on circular convolution. Unfortunately, when the opposite sides of the image do not match up well in intensity, this assumption can create significant artifacts across the image. If the pixels outside the measured image window are modeled as unknown values in the restored image, boundary artifacts are avoided. However, this approach destroys the structure that makes the use of the FFT directly applicable, since the unknown image is no longer the same size as the measured image. Thus, the restoration methods available for this problem no longer have the computational efficiency of the FFT. We propose a new restoration method for the unknown boundary approach that can be implemented in a fast and flexible manner. We decompose the restoration into a sum of two independent restorations. One restoration yields an image that comes directly from a modified FFT-based approach. The other restoration involves a set of unknowns whose number equals that of the unknown boundary values. By summing the two, the artifacts are canceled. Because the second restoration has a significantly reduced set of unknowns, it can be calculated very efficiently even though no circular convolution structure exists.",
"title": ""
},
{
"docid": "9874a50fa660c4423e08c42e52c675a5",
"text": "We are developing a technique to predict travel time of a vehicle for an objective road section, based on real time traffic data collected through a probe-car system. In the area of Intelligent Transport System (ITS), travel time prediction is an important subject. Probe-car system is an upcoming data collection method, in which a number of vehicles are used as moving sensors to detect actual traffic situation. It can collect data concerning much larger area, compared with traditional fixed detectors. Our prediction technique is based on statistical analysis using AR model with seasonal adjustment and MDL (Minimum Description Length) criterion. Seasonal adjustment is used to handle periodicities of 24 hours in traffic data. Alternatively, we employ state space model, which can handle time series with periodicities. It is important to select really effective data for prediction, among the data from widespread area, which are collected via probe-car system. We do this using MDL criterion. That is, we find the explanatory variables that really have influence on the future travel time. In this paper, we experimentally show effectiveness of our method using probe-car data collected in Nagoya Metropolitan Area in 2002.",
"title": ""
},
{
"docid": "c70abd8598ef360dc6e9a10f46622003",
"text": "Removal of baseline wander is a crucial step in the signal conditioning stage of photoplethysmography signals. Hence, a method for removing the baseline wander from photoplethysmography based on two-stages of median filtering is proposed in this paper. Recordings from Physionet database are used to validate the proposed method. In this paper, the two-stage moving average filtering is also applied to remove baseline wander in photoplethysmography signals for comparison with our novel two-stage median filtering method. Our experiment results show that the performance of two-stage median filtering method is more effective in removing baseline wander from photoplethysmography signals. This median filtering method effectively improves the cross correlation with minimal distortion of the signal of interest. Although the method is proposed for baseline wander in photoplethysmography signals, it can be applied to other biomedical signals as well.",
"title": ""
},
{
"docid": "c5441c3010dd0169f0b20e383c05e0c9",
"text": "The purpose of the present study was to elucidate how plyometric training improves stretch-shortening cycle (SSC) exercise performance in terms of muscle strength, tendon stiffness, and muscle-tendon behavior during SSC exercise. Eleven men were assigned to a training group and ten to a control group. Subjects in the training group performed depth jumps (DJ) using only the ankle joint for 12 weeks. Before and after the period, we observed reaction forces at foot, muscle-tendon behavior of the gastrocnemius, and electromyographic activities of the triceps surae and tibialis anterior during DJ. Maximal static plantar flexion strength and Achilles tendon stiffness were also determined. In the training group, maximal strength remained unchanged while tendon stiffness increased. The force impulse of DJ increased, with a shorter contact time and larger reaction force over the latter half of braking and initial half of propulsion phases. In the latter half of braking phase, the average electromyographic activity (mEMG) increased in the triceps surae and decreased in tibialis anterior, while fascicle behavior of the gastrocnemius remained unchanged. In the initial half of propulsion, mEMG of triceps surae and shortening velocity of gastrocnemius fascicle decreased, while shortening velocity of the tendon increased. These results suggest that the following mechanisms play an important role in improving SSC exercise performance through plyometric training: (1) optimization of muscle-tendon behavior of the agonists, associated with alteration in the neuromuscular activity during SSC exercise and increase in tendon stiffness and (2) decrease in the neuromuscular activity of antagonists during a counter movement.",
"title": ""
},
{
"docid": "527e70797ec7931687d17d26f1f64428",
"text": "We experimentally demonstrate the focusing of visible light with ultra-thin, planar metasurfaces made of concentrically perforated, 30-nm-thick gold films. The perforated nano-voids—Babinet-inverted (complementary) nano-antennas—create discrete phase shifts and form a desired wavefront of cross-polarized, scattered light. The signal-to-noise ratio in our complementary nano-antenna design is at least one order of magnitude higher than in previous metallic nano-antenna designs. We first study our proof-of-concept ‘metalens’ with extremely strong focusing ability: focusing at a distance of only 2.5 mm is achieved experimentally with a 4-mm-diameter lens for light at a wavelength of 676 nm. We then extend our work with one of these ‘metalenses’ and achieve a wavelength-controllable focal length. Optical characterization of the lens confirms that switching the incident wavelength from 676 to 476 nm changes the focal length from 7 to 10 mm, which opens up new opportunities for tuning and spatially separating light at different wavelengths within small, micrometer-scale areas. All the proposed designs can be embedded on-chip or at the end of an optical fiber. The designs also all work for two orthogonal, linear polarizations of incident light. Light: Science & Applications (2013) 2, e72; doi:10.1038/lsa.2013.28; published online 26 April 2013",
"title": ""
},
{
"docid": "1df4fad2d5448364834608f4bc9d10a0",
"text": "What causes adolescents to be materialistic? Prior research shows parents and peers are an important influence. Researchers have viewed parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents. We take a different approach, viewing parents and peers as important sources of emotional support and psychological well-being, which increase self-esteem in adolescents. Supportive parents and peers boost adolescents' self-esteem, which decreases their need to turn to material goods to develop positive selfperceptions. In a study with 12–18 year-olds, we find support for our view that self-esteem mediates the relationship between parent/peer influence and adolescent materialism. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. Rising levels of materialism among adolescents have raised concerns among parents, educators, and consumer advocates.More than half of 9–14 year-olds agree that, “when you grow up, the more money you have, the happier you are,” and over 60% agree that, “the only kind of job I want when I grow up is one that getsme a lot of money” (Goldberg, Gorn, Peracchio, & Bamossy, 2003). These trends have lead social scientists to conclude that adolescents today are “...the most brand-oriented, consumer-involved, and materialistic generation in history” (Schor, 2004, p. 13). What causes adolescents to bematerialistic? Themost consistent finding to date is that adolescent materialism is related to the interpersonal influences in their lives—notably, parents and peers. The vast majority of research is based on a social influence perspective, viewing parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents through modeling, reinforcement, and social interaction. In early research, Churchill and Moschis (1979) proposed that adolescents learn rational aspects of consumption from their parents and social aspects of consumption (materialism) from their peers. Moore and ⁎ Corresponding author. Villanova School of Business, 800 Lancaster Avenue, Villanova, PA 19085, USA. Fax: +1 520 621 7483. E-mail addresses: [email protected] (L.N. Chaplin), [email protected] (D.R. John). 1057-7408/$ see front matter © 2010 Society for Consumer Psychology. Publish doi:10.1016/j.jcps.2010.02.002 Moschis (1981) examined family communication styles, suggesting that certain styles (socio-oriented) promote conformity to others' views, setting the stage for materialism. In later work, Goldberg et al. (2003) posited that parents transmit materialistic values to their offspring by modeling these values. Researchers have also reported positive correlations betweenmaterialism and socio-oriented family communication (Moore & Moschis, 1981), parents' materialism (Flouri, 2004; Goldberg et al., 2003), peer communication about consumption (Churchill & Moschis, 1979; Moschis & Churchill, 1978), and susceptibility to peer influence (Achenreiner, 1997; Banerjee & Dittmar, 2008; Roberts, Manolis, & Tanner, 2008). We take a different approach. Instead of viewing parents and peers as socialization agents that transmit consumption attitudes and values, we consider parents and peers as important sources of emotional support and psychological well-being, which lay the foundation for self-esteem in adolescents. 
We argue that supportive parents and peers boost adolescents' self-esteem, which decreases their need to embrace material goods as a way to develop positive self-perceptions. Prior research is suggestive of our perspective. In studies with young adults, researchers have found a link between (1) lower parental support (cold and controlling mothers) and a focus on financial success aspirations (Kasser, Ryan, Zax, & Sameroff, 1995: 18 year-olds) and (2) lower parental support (less affection and supervision) in ed by Elsevier Inc. All rights reserved. 1 Support refers to warmth, affection, nurturance, and acceptance (Becker, 1981; Ellis, Thomas, and Rollins, 1976). Parental nurturance involves the development of caring relationships, in which parents reason with their children about moral conflicts, involve them in family decision making, and set high moral expectations (Maccoby, 1984; Staub, 1988). 177 L.N. Chaplin, D.R. John / Journal of Consumer Psychology 20 (2010) 176–184 divorced families and materialism (Rindfleisch, Burroughs, & Denton, 1997: 20–32 year-olds). These studies do not focus on adolescents, do not examine peer factors, nor do they include measures of self-esteem or self-worth. But, they do suggest that parents and peers can influence materialism in ways other than transmitting consumption attitudes and values, which has been the focus of prior research on adolescent materialism. In this article, we seek preliminary evidence for our view by testing whether self-esteem mediates the relationship between parent/peer influence and adolescent materialism. We include parent and peer factors that inhibit or encourage adolescent materialism, which allows us to test self-esteem as a mediator under both conditions. For parental influence, we include parental support (inhibits materialism) and parents' materialism (encourages materialism). Both factors have appeared in prior materialism studies, but our interest here is whether self-esteem is a mediator of their influence on materialism. For peer influence, we include peer support (inhibits materialism) and peers' materialism (encourages materialism), with our interest being whether self-esteem is a mediator of their influence on materialism. These peer factors are new to materialism research and offer potentially new insights. Contrary to prior materialism research, which views peers as encouraging materialism among adolescents, we also consider the possibility that peers may be a positive influence by providing emotional support in the same way that parents do. Our research offers several contributions to understanding materialism in adolescents. First, we provide a broader perspective on the role of parents and peers as influences on adolescent materialism. The social influence perspective, which views parents and peers as transmitting consumption attitudes and values, has dominated materialism research with children and adolescents since its early days. We provide a broader perspective by considering parents and peers as much more than socialization agents—they contribute heavily to the sense of self-esteem that adolescents possess, which influences materialism. Second, our perspective provides a process explanation for why parents and peers influence materialism that can be empirically tested. Prior research offers a valuable set of findings about what factors correlate with adolescent materialism, but the process responsible for the correlation is left untested. 
Finally, we provide a parsimonious explanation for why different factors related to parent and peer influence affect adolescent materialism. Although the number of potential parent and peer factors is large, it is possible that there is a common thread (self-esteem) for why these factors influence adolescent materialism. Isolating mediators, such as selfesteem, could provide the basis for developing a conceptual framework to tie together findings across prior studies with different factors, providing a more unified explanation for why certain adolescents are more vulnerable to materialism.",
"title": ""
},
{
"docid": "3d3110b19142e9a01bf4252742ce9586",
"text": "Detecting unsolicited content and the spammers who create it is a long-standing challenge that affects all of us on a daily basis. The recent growth of richly-structured social networks has provided new challenges and opportunities in the spam detection landscape. Motivated by the Tagged.com social network, we develop methods to identify spammers in evolving multi-relational social networks. We model a social network as a time-stamped multi-relational graph where vertices represent users, and edges represent different activities between them. To identify spammer accounts, our approach makes use of structural features, sequence modelling, and collective reasoning. We leverage relational sequence information using k-gram features and probabilistic modelling with a mixture of Markov models. Furthermore, in order to perform collective reasoning and improve the predictive power of a noisy abuse reporting system, we develop a statistical relational model using hinge-loss Markov random fields (HL-MRFs), a class of probabilistic graphical models which are highly scalable. We use Graphlab Create and Probabilistic Soft Logic (PSL) to prototype and experimentally evaluate our solutions on internet-scale data from Tagged.com. Our experiments demonstrate the effectiveness of our approach, and show that models which incorporate the multi-relational nature of the social network significantly gain predictive performance over those that do not.",
"title": ""
},
{
"docid": "5b07f0dbf40fb302d04cb7a880d9f67f",
"text": "The current study investigated whether long-term experience in music or a second language is associated with enhanced cognitive functioning. Early studies suggested the possibility of a cognitive advantage from musical training and bilingualism but have failed to be replicated by recent findings. Further, each form of expertise has been independently investigated leaving it unclear whether any benefits are specifically caused by each skill or are a result of skill learning in general. To assess whether cognitive benefits from training exist, and how unique they are to each training domain, the current study compared musicians and bilinguals to each other, plus to individuals who had expertise in both skills, or neither. Young adults (n = 153) were categorized into one of four groups: monolingual musician; bilingual musician; bilingual non-musician; and monolingual non-musician. Multiple tasks per cognitive ability were used to examine the coherency of any training effects. Results revealed that musically trained individuals, but not bilinguals, had enhanced working memory. Neither skill had enhanced inhibitory control. The findings confirm previous associations between musicians and improved cognition and extend existing evidence to show that benefits are narrower than expected but can be uniquely attributed to music compared to another specialized auditory skill domain. The null bilingual effect despite a music effect in the same group of individuals challenges the proposition that young adults are at a performance ceiling and adds to increasing evidence on the lack of a bilingual advantage on cognition.",
"title": ""
},
{
"docid": "8ad9d98ab60211f96f8076144dad3ad2",
"text": "Although firms have invested significant resources in implementing enterprise software systems (ESS) to modernize and integrate their business process infrastructure, customer satisfaction with ESS has remained an understudied phenomenon. In this exploratory research study, we investigate customer satisfaction for support services of ESS and focus on employee skills and customer heterogeneity. We analyze archival customer satisfaction data from 170 real-world customer service encounters of a leading ESS vendor. Our analysis indicates that the technical and behavioral skills of customer support representatives play a major role in influencing overall customer satisfaction with ESS support services. We find that the effect of technical skills on customer satisfaction is moderated by behavioral skills. We also find that the technical skills of the support personnel are valued more by repeat customers than by new customers. We discuss the implications of these findings for managing customer heterogeneity in ESS support services and for the allocation and training of ESS support personnel. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fb1092ee4fe5f29394148ae0b134dd08",
"text": "The landscape of online learning has evolved in a synchronous fashion with the development of the every-growing repertoire of technologies, especially with the recent addition of Massive Online Open Courses (MOOCs). Since MOOC platforms allow thousands of students to participate at the same time, MOOC participants can have fairly varied motivation. Meanwhile, a low course completion rate has been observed across different MOOC platforms. The first and initiated stage of the proposed research here is a preliminary attempt to study how different motivational aspects of MOOC learners correlate with course participation and completion, with motivation measured using a survey and participation measured using log analytics. The exploratory stage of the study has been conducted within the context of an educational data mining MOOC, within Coursera. In the long run, research results can be expected to inform future interventions, and the design of MOOCs, as well as increasing understanding of the emergent needs of MOOC learners as data collection extends beyond the current scope by incorporating wider disciplinary areas.",
"title": ""
},
{
"docid": "7ad46a50bb98f22760f07de82c6e2035",
"text": "Major theories for explaining the organization of semantic memory in the human brain are premised on the often-observed dichotomous dissociation between living and nonliving objects. Evidence from neuroimaging has been interpreted to suggest that this distinction is reflected in the functional topography of the ventral vision pathway as lateral-to-medial activation gradients. Recently, we observed that similar activation gradients also reflect differences among living stimuli consistent with the semantic dimension of graded animacy. Here, we address whether the salient dichotomous distinction between living and nonliving objects is actually reflected in observable measured brain activity or whether previous observations of a dichotomous dissociation were the illusory result of stimulus sampling biases. Using fMRI, we measured neural responses while participants viewed 10 animal species with high to low animacy and two inanimate categories. Representational similarity analysis of the activity in ventral vision cortex revealed a main axis of variation with high-animacy species maximally different from artifacts and with the least animate species closest to artifacts. Although the associated functional topography mirrored activation gradients observed for animate–inanimate contrasts, we found no evidence for a dichotomous dissociation. We conclude that a central organizing principle of human object vision corresponds to the graded psychological property of animacy with no clear distinction between living and nonliving stimuli. The lack of evidence for a dichotomous dissociation in the measured brain activity challenges theories based on this premise.",
"title": ""
},
{
"docid": "2738f51f986a6f6d4d4244e66bb6869a",
"text": "A frequency compensation technique for three- stage amplifiers is introduced. The proposed solution exploits only one Miller capacitor and a resistor in the compensation network. The straightness of the technique is used to design, using a standard CMOS 0.35-mum process, a 1.5-V OTA driving a 150-pF load capacitor. The dc current consumption is about 14 muA at DC and a 1.6-MHz gain-bandwidth product is obtained, providing significant improvement in both MHz-pF/mA and (V/mus)-pF/mA performance parameters.",
"title": ""
},
{
"docid": "2da44919966d841d4a1d6f3cc2a648e9",
"text": "A composite cavity-backed folded sectorial bowtie antenna (FSBA) is proposed and investigated in this paper, which is differentially fed by an SMA connector through a balun, i.e. a transition from a microstrip line to a parallel stripline. The composite cavity as a general case, consisting of a conical part and a cylindrical rim, can be tuned freely from a cylindrical to a cup-shaped one. Parametric studies are performed to optimize the antenna performance. Experimental results reveal that it can achieve an impedance bandwidth of 143% for SWR les 2, a broadside gain of 8-15.3 dBi, and stable radiation pattern over the whole operating band. The total electrical dimensions are 0.66lambdam in diameter and 0.16lambdam in height, where lambdam is the free-space wavelength at lower edge of the operating frequency band. The problem about the distorted patterns in the upper frequency band for wideband cavity-backed antennas is solved in our work.",
"title": ""
}
] |
scidocsrr
|
eabe54b6b35f626f0f6e023da2e047d6
|
Dual-polarized log.-periodic antenna on a conical MID substrate
|
[
{
"docid": "ecfd9b38cc68c4af9addb4915424d6d0",
"text": "The conditions for antenna diversity action are investigated. In terms of the fields, a condition is shown to be that the incident field and the far field of the diversity antenna should obey (or nearly obey) an orthogonality relationship. The role of mutual coupling is central, and it is different from that in a conventional array antenna. In terms of antenna parameters, a sufficient condition for diversity action for a certain class of high gain antennas at the mobile, which approximates most practical mobile antennas, is shown to be zero (or low) mutual resistance between elements. This is not the case at the base station, where the condition is necessary only. The mutual resistance condition offers a powerful design tool, and examples of new mobile diversity antennas are discussed along with some existing designs.",
"title": ""
}
] |
[
{
"docid": "020ee6cc73f38e738a27d51d8a832bc2",
"text": "The growing interest in natural alternatives to synthetic petroleum-based dyes for food applications necessitates looking at nontraditional sources of natural colors. Certain sorghum varieties accumulate large amounts of poorly characterized pigments in their nongrain tissue. We used High Performance Liquid Chromatography-Tandem Mass Spectroscopy to characterize sorghum leaf sheath pigments and measured the stability of isolated pigments in the presence of bisulfite at pH 1.0 to 7.0 over a 4-wk period. Two new 3-deoxyanthocyanidin compounds were identified: apigeninidin-flavene dimer and apigenin-7-O-methylflavene dimer. The dimeric molecules had near identical UV-Vis absorbance profiles at pH 1.0 to 7.0, with no obvious sign of chalcone or quinoidal base formation even at the neutral pH, indicating unusually strong resistance to hydrophilic attack. The dimeric 3-deoxyanthocyanidins were also highly resistant to nucleophilic attack by SO(2); for example, apigeninidin-flavene dimer lost less than 20% of absorbance, compared to apigeninidin monomer, which lost more than 80% of absorbance at λ(max) within 1 h in the presence of SO(2). The increased molecular complexity of the dimeric 3-deoxyanthocyanidins compared to their monomers may be responsible for their unusual stability in the presence of bisulfite; these compounds present new interesting opportunities for food applications.",
"title": ""
},
{
"docid": "0b56f9c9ec0ce1db8dcbfd2830b2536b",
"text": "In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently expressed as latent variable models. We propose latent Bayesian melding, which is motivated by averaging the distributions over populations statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework. In a case study on electricity disaggregation, which is a type of singlechannel blind source separation problem, we show that latent Bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching.",
"title": ""
},
{
"docid": "f4e6c48244730a244a5380972085db39",
"text": "The LifeCLEF plant identification challenge aims at evaluating plant identification methods and systems at a very large scale, close to the conditions of a real-world biodiversity monitoring scenario. The 2016-th edition was actually conducted on a set of more than 110K images illustrating 1000 plant species living in West Europe, built through a large-scale participatory sensing platform initiated in 2011 and which now involves tens of thousands of contributors. The main novelty over the previous years is that the identification task was evaluated as an open-set recognition problem, i.e. a problem in which the recognition system has to be robust to unknown and never seen categories. Beyond the brute-force classification across the known classes of the training set, the big challenge was thus to automatically reject the false positive classification hits that are caused by the unknown classes. This overview presents more precisely the resources and assessments of the challenge, summarizes the approaches and systems employed by the participating research groups, and provides an analysis of the main outcomes.",
"title": ""
},
{
"docid": "a6114f7353b2aa17f8cf4a31b57aac2c",
"text": "Rejection of Internet banking is one of the most important problems that faces banks in developing countries. So far, very few academic studies have been conducted on Internet banking adoption in Arab countries. Hence, this research aims to investigate factors that influence the intention to use Internet banking in Yemen. Cross-sectional data were collected from 1286 respondents through a survey. Structural equation modeling was employed to analyze data. The findings supported the research hypotheses and confirmed that perceived relative advantages, perceived ease of use, trust of the Internet banking all impact attitude toward the intention of adopting Internet banking. This paper makes a contribution to Internet banking literature. It sheds light on the factors that affect Internet banking adoption. The findings made a contribution in terms of understanding the factors that can contribute to the adoption of Internet banking by Yemeni consumers..",
"title": ""
},
{
"docid": "c04991d45762b4a3fcc247f18eca34c3",
"text": "We present a system for activity recognition from passive RFID data using a deep convolutional neural network. We directly feed the RFID data into a deep convolutional neural network for activity recognition instead of selecting features and using a cascade structure that first detects object use from RFID data followed by predicting the activity. Because our system treats activity recognition as a multi-class classification problem, it is scalable for applications with large number of activity classes. We tested our system using RFID data collected in a trauma room, including 14 hours of RFID data from 16 actual trauma resuscitations. Our system outperformed existing systems developed for activity recognition and achieved similar performance with process-phase detection as systems that require wearable sensors or manually-generated input. We also analyzed the strengths and limitations of our current deep learning architecture for activity recognition from RFID data.",
"title": ""
},
{
"docid": "af22932b48a2ea64ecf3e5ba1482564d",
"text": "Collaborative embedded systems (CES) heavily rely on information models to understand the contextual situations they are exposed to. These information models serve different purposes. First, during development time it is necessary to model the context for eliciting and documenting the requirements that a CES is supposed to achieve. Second, information models provide information to simulate different contextual situations and CES ́s behavior in these situations. Finally, CESs need information models about their context during runtime in order to react to different contextual situations and exchange context information with other CESs. Heavyweight ontologies, based on Ontology Web Language (OWL), have already proven suitable for representing knowledge about contextual situations during runtime. Furthermore, lightweight ontologies (e.g. class diagrams) have proven their practicality for creating domain specific languages for requirements documentation. However, building an ontology (lightor heavyweight) is a non-trivial task that needs to be integrated into development methods for CESs such that it serves the above stated purposes in a seamless way. This paper introduces the requirements for the building of ontologies and proposes a method that is integrated into the engineering of CESs.",
"title": ""
},
{
"docid": "15de948f800564c755b4dbc7d06ec83b",
"text": "In this study, kinematics design and workspace analysis of a novel spatial parallel mechanism is elaborated. The proposed mechanism is to be exploited in a hybrid serial-parallel mobile robot to fulfill stable motion of the robotic system when handling heavy objects by a serial manipulator mounted on the parallel mechanism. In fact, this parallel mechanism has been designed to obtain an appropriate maneuverability and fulfill the tipover stability of a hybrid serial-parallel mobile robotic system after grasping heavy objects. The proposed mechanism is made of three legs while each leg has one active degree of freedom (DOF) so that the mechanism contains 3-DOF. In order to investigate this novel parallel mechanism, inverse and forward kinematics model is developed and verified using two maneuvers. Next, the workspace of the manipulator is demonstrated symbolically, besides a numerical analysis will be studied. Finally, the advantages of exploiting such a parallel manipulator are discussed in order to be used in a hybrid serial-parallel mobile robotic system.",
"title": ""
},
{
"docid": "e35d00d5b7cedc937e34526b6c73ffc6",
"text": "Unintentional falls can cause severe injuries and even death, especially if no immediate assistance is given. The aim of Fall Detection Systems (FDSs) is to detect an occurring fall. This information can be used to trigger the necessary assistance in case of injury. This can be done by using either ambient-based sensors, e.g. cameras, or wearable devices. The aim of this work is to study the technical aspects of FDSs based on wearable devices and artificial intelligence techniques, in particular Deep Learning (DL), to implement an effective algorithm for on-line fall detection. The proposed classifier is based on a Recurrent Neural Network (RNN) model with underlying Long Short-Term Memory (LSTM) blocks. The method is tested on the publicly available SisFall dataset, with extended annotation, and compared with the results obtained by the SisFall authors.",
"title": ""
},
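The RNN/LSTM fall detector summarized in the preceding passage lends itself to a compact illustration. The snippet below is a minimal, hypothetical PyTorch classifier over fixed-length windows of tri-axial accelerometer samples; the layer sizes, window length, and two-class output are placeholder assumptions, not the configuration used on the SisFall dataset.

```python
import torch
import torch.nn as nn

class FallDetectorLSTM(nn.Module):
    """Minimal sketch: a window of accelerometer samples -> fall / no-fall logits."""
    def __init__(self, n_features=3, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)             # out: (batch, time, hidden_size)
        return self.head(out[:, -1, :])   # classify from the last time step

model = FallDetectorLSTM()
window = torch.randn(8, 128, 3)           # 8 windows of 128 tri-axial samples (placeholder sizes)
logits = model(window)                     # shape (8, 2)
```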
{
"docid": "8015f5668df95f83e353550d54eac4da",
"text": "Counterfeit currency is a burning question throughout the world. The counterfeiters are becoming harder to track down because of their rapid adoption of and adaptation with highly advanced technology. One of the most effective methods to stop counterfeiting can be the widespread use of counterfeit detection tools/software that are easily available and are efficient in terms of cost, reliability and accuracy. This paper presents a core software system to build a robust automated counterfeit currency detection tool for Bangladeshi bank notes. The software detects fake currency by extracting existing features of banknotes such as micro-printing, optically variable ink (OVI), water-mark, iridescent ink, security thread and ultraviolet lines using OCR (Optical Character recognition), Contour Analysis, Face Recognition, Speeded UP Robust Features (SURF) and Canny Edge & Hough transformation algorithm of OpenCV. The success rate of this software can be measured in terms of accuracy and speed. This paper also focuses on the pros and cons of implementation details that may degrade the performance of image processing based paper currency authentication systems.",
"title": ""
},
{
"docid": "f892d34ee0cf19744b52dab89d451239",
"text": "In Tables 2 and 3, we present the results of our eight models and three baselines compared to current state-of-the-art under the two metric of MedErr and Acc π 6 respectively. As we mentioned in the paper, we run all experiments three times and report the mean and standard deviation (in brackets) across these three trials. We also show figures of images where we obtain the least pose estimation error and the most pose estimation error for every object category using one run of model MG+. As can be seen from Figs. [1-12], we make the most error under three conditions: (i) when the objects are really blurry (very small in pixel size in the original image), (ii) the shape of the object is uncommon (possibly very few examples seen during training) and (iii) the pose of a test image is very different from common poses observed during training. The first condition is best observed in the bad cases for categories aeroplane and car where almost all the images shown are very blurry. The second condition is best observed in categories boat and chair where the bad cases contain uncommon boats and chairs. The third condition is best observed in categories bottle and tvmonitor where the bad images are in very different poses compared to the best images. We also present the performance of our models MG and MG+ across different object categories of the Pascal3D+ dataset during ablation experiments in Tables 4-11. These are detailed tables for the results shown in Tables 3 and 4 of the main paper and an overview of the experiments is shown below.",
"title": ""
},
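The MedErr and Acc_{π/6} metrics referenced in the passage above are standard for viewpoint estimation: the geodesic distance between predicted and ground-truth rotations, summarized by its median and by the fraction of samples with error below π/6. The sketch below shows only that computation; it assumes rotations are supplied as 3×3 matrices and is not tied to any particular model from the passage.

```python
import numpy as np

def geodesic_error(R_pred, R_gt):
    """Geodesic distance (radians) between two 3x3 rotation matrices."""
    R = R_pred.T @ R_gt
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle)

def pose_metrics(preds, gts):
    """MedErr (degrees) and Acc at pi/6 over lists of rotation matrices."""
    errs = np.array([geodesic_error(p, g) for p, g in zip(preds, gts)])
    med_err_deg = np.degrees(np.median(errs))
    acc_pi_6 = float(np.mean(errs < np.pi / 6.0))
    return med_err_deg, acc_pi_6

# Toy usage with identical rotations (zero error, accuracy 1.0)
I = np.eye(3)
print(pose_metrics([I, I], [I, I]))
```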
{
"docid": "06a3bf091404fc51bb3ee0a9f1d8a759",
"text": "A compact design of a circularly-polarized microstrip antenna in order to achieve dual-band behavior for Radio Frequency Identification (RFID) applications is presented, defected ground structure (DGS) technique is used to miniaturize and get a dual-band antenna, the entire size is 38×40×1.58 mm3. This antenna was designed to cover both ultra-height frequency (740MHz ~ 1GHz) and slow height frequency (2.35 GHz ~ 2.51GHz), return loss <; -10 dB, the 3-dB axial ratio bandwidths are about 110 MHz at the lower band (900 MHz).",
"title": ""
},
{
"docid": "de1fe89adbc6e4a8993eb90cae39d97e",
"text": "Decision trees have proved to be valuable tools for the description, classification and generalization of data. Work on constructing decision trees from data exists in multiple disciplines such as statistics, pattern recognition, decision theory, signal processing, machine learning and artificial neural networks. Researchers in these disciplines, sometimes working on quite different problems, identified similar issues and heuristics for decision tree construction. This paper surveys existing work on decision tree construction, attempting to identify the important issues involved, directions the work has taken and the current state of the art.",
"title": ""
},
{
"docid": "28953a02fed251fbf12f6977268d7806",
"text": "While attributes have been widely used for person re-identification (Re-ID) which aims at matching the same person images across disjoint camera views, they are used either as extra features or for performing multi-task learning to assist the image-image matching task. However, how to find a set of person images according to a given attribute description, which is very practical in many surveillance applications, remains a rarely investigated cross-modality matching problem in person Re-ID. In this work, we present this challenge and leverage adversarial learning to formulate the attribute-image cross-modality person Re-ID model. By imposing a semantic consistency constraint across modalities as a regularization, the adversarial learning enables to generate imageanalogous concepts of query attributes for matching the corresponding images at both global level and semantic ID level. We conducted extensive experiments on three attribute datasets and demonstrated that the regularized adversarial modelling is so far the most effective method for the attributeimage cross-modality person Re-ID problem.",
"title": ""
},
{
"docid": "c28dc261ddc770a6655eb1dbc528dd3b",
"text": "Software applications are no longer stand-alone systems. They are increasingly the result of integrating heterogeneous collections of components, both executable and data, possibly dispersed over a computer network. Different components can be provided by different producers and they can be part of different systems at the same time. Moreover, components can change rapidly and independently, making it difficult to manage the whole system in a consistent way. Under these circumstances, a crucial step of the software life cycle is deployment—that is, the activities related to the release, installation, activation, deactivation, update, and removal of components, as well as whole systems. This paper presents a framework for characterizing technologies that are intended to support software deployment. The framework highlights four primary factors concerning the technologies: process coverage; process changeability; interprocess coordination; and site, product, and deployment policy abstraction. A variety of existing technologies are surveyed and assessed against the framework. Finally, we discuss promising research directions in software deployment. This work was supported in part by the Air Force Material Command, Rome Laboratory, and the Defense Advanced Research Projects Agency under Contract Number F30602-94-C-0253. The content of the information does not necessarily reflect the position or the policy of the U.S. Government and no official endorsement should be inferred.",
"title": ""
},
{
"docid": "3bee9a2d5f9e328bb07c3c76c80612fa",
"text": "In this paper, we construct a complexity-based morphospace wherein one can study systems-level properties of conscious and intelligent systems based on information-theoretic measures. The axes of this space labels three distinct complexity types, necessary to classify conscious machines, namely, autonomous, cognitive and social complexity. In particular, we use this morphospace to compare biologically conscious agents ranging from bacteria, bees, C. elegans, primates and humans with artificially intelligence systems such as deep networks, multi-agent systems, social robots, AI applications such as Siri and computational systems as Watson. Given recent proposals to synthesize consciousness, a generic complexitybased conceptualization provides a useful framework for identifying defining features of distinct classes of conscious and synthetic systems. Based on current clinical scales of consciousness that measure cognitive awareness and wakefulness, this article takes a perspective on how contemporary artificially intelligent machines and synthetically engineered life forms would measure on these scales. It turns out that awareness and wakefulness can be associated to computational and autonomous complexity respectively. Subsequently, building on insights from cognitive robotics, we examine the function that consciousness serves, and argue the role of consciousness as an evolutionary game-theoretic strategy. This makes the case for a third type of complexity necessary for describing consciousness, namely, social complexity. Having identified these complexity types, allows for a representation of both, biological and synthetic systems in a common morphospace. A consequence of this classification is a taxonomy of possible conscious machines. In particular, we identify four types of consciousness, based on embodiment: (i) biological consciousness, (ii) synthetic consciousness, (iii) group consciousness (resulting from group interactions), and (iv) simulated consciousness (embodied by virtual agents within a simulated reality). This taxonomy helps in the investigation of comparative signatures of consciousness across domains, in order to highlight design principles necessary to engineer conscious machines. This is particularly relevant in the light of recent developments at the ar X iv :1 70 5. 11 19 0v 3 [ qbi o. N C ] 2 4 N ov 2 01 8 The Morphospace of Consciousness 2 crossroads of cognitive neuroscience, biomedical engineering, artificial intelligence and biomimetics.",
"title": ""
},
{
"docid": "70c82bb98d0e558280973d67429cea8a",
"text": "We present an algorithm for separating the local gradient information and Lambertian color by using 4-source color photometric stereo in the presence of highlights and shadows. We assume that the surface reflectance can be approximated by the sum of a Lambertian and a specular component. The conventional photometric method is generalized for color images. Shadows and highlights in the input images are detected using either spectral or directional cues and excluded from the recovery process, thus giving more reliable estimates of local surface parameters.",
"title": ""
},
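For the Lambertian component described in the passage above, per-pixel surface normals and albedo can be recovered from the non-shadowed, non-specular observations by least squares; the sketch below shows only that standard step, with the four light directions and intensities as placeholder inputs (the spectral/directional outlier rejection from the passage is not implemented here).

```python
import numpy as np

def lambertian_normals(I, L):
    """
    I: (4, n_pixels) intensities from the four light sources (one color channel).
    L: (4, 3) light directions (approximately unit length).
    Returns unit normals (n_pixels, 3) and albedo (n_pixels,).
    """
    G = np.linalg.pinv(L) @ I            # (3, n_pixels); G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-12)).T
    return normals, albedo

# Toy example: a single pixel with normal pointing at the camera, albedo 0.8
L = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.87], [0.0, 0.5, 0.87], [-0.5, 0.0, 0.87]])
I = (L @ np.array([0.0, 0.0, 0.8])).reshape(4, 1)
print(lambertian_normals(I, L))
```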
{
"docid": "494c46a56fa1c55b274f1b3c653a358a",
"text": "In this paper we integrate insights from diverse islands of research on electronic privacy to offer a holistic view of privacy engineering and a systematic structure for the discipline's topics. First we discuss privacy requirements grounded in both historic and contemporary perspectives on privacy. We use a three-layer model of user privacy concerns to relate them to system operations (data transfer, storage and processing) and examine their effects on user behavior. In the second part of the paper we develop guidelines for building privacy-friendly systems. We distinguish two approaches: \"privacy-by-policy\" and \"privacy-by-architecture.\" The privacy-by-policy approach focuses on the implementation of the notice and choice principles of fair information practices (FIPs), while the privacy-by-architecture approach minimizes the collection of identifiable personal data and emphasizes anonymization and client-side data storage and processing. We discuss both approaches with a view to their technical overlaps and boundaries as well as to economic feasibility. The paper aims to introduce engineers and computer scientists to the privacy research domain and provide concrete guidance on how to design privacy-friendly systems.",
"title": ""
},
{
"docid": "6800b7749dcc39020de70a25c167cab1",
"text": "Emotion recognition from speech has emerged as an important research area in the recent past. In this regard, review of existing work on emotional speech processing is useful for carrying out further research. In this paper, the recent literature on speech emotion recognition has been presented considering the issues related to emotional speech corpora, different types of speech features and models used for recognition of emotions from speech. Thirty two representative speech databases are reviewed in this work from point of view of their language, number of speakers, number of emotions, and purpose of collection. The issues related to emotional speech databases used in emotional speech recognition are also briefly discussed. Literature on different features used in the task of emotion recognition from speech is presented. The importance of choosing different classification models has been discussed along with the review. The important issues to be considered for further emotion recognition research in general and in specific to the Indian context have been highlighted where ever necessary.",
"title": ""
},
{
"docid": "addad4069782620549e7a357e2c73436",
"text": "Drivable region detection is challenging since various types of road, occlusion or poor illumination condition have to be considered in a outdoor environment, particularly at night. In the past decade, Many efforts have been made to solve these problems, however, most of the already existing methods are designed for visible light cameras, which are inherently inefficient under low light conditions. In this paper, we present a drivable region detection algorithm designed for thermal-infrared cameras in order to overcome the aforementioned problems. The novelty of the proposed method lies in the utilization of on-line road initialization with a highly scene-adaptive sampling mask. Furthermore, our prior road information extraction is tailored to enforce temporal consistency among a series of images. In this paper, we also propose a large number of experiments in various scenarios (on-road, off-road and cluttered road). A total of about 6000 manually annotated images are made available in our website for the research community. Using this dataset, we compared our method against multiple state-of-the-art approaches including convolutional neural network (CNN) based methods to emphasize the robustness of our approach under challenging situations.",
"title": ""
},
{
"docid": "ab156ab101063353a64bbcd51e47b88f",
"text": "Spontaneous lens absorption (SLA) is a rare complication of hypermature cataract. However, this condition has been reported in several cases of hypermature cataracts that were caused by trauma, senility, uveitic disorders such as Fuchs’ uveitis syndrome (FUS), and infectious disorders including leptospirosis and rubella. We report a case of spontaneous absorption of a hypermature cataract secondary to FUS. To our knowledge, this is the first report of SLA that was followed by dislocation of the capsular remnants into the vitreous and resulted in a misdiagnosis as crystalline lens luxation.",
"title": ""
}
] |
scidocsrr
|
a0b8aad2aa58819271d049f77893d0f8
|
Lejla Islami Assessing generational differences in susceptibility to Social Engineering attacks. A comparison between Millennial and Baby Boomer generations
|
[
{
"docid": "9e3ad07ca89501d37812ea02861f9466",
"text": "This study examines the evidence for the effectiveness of active learning. It defines the common forms of active learning most relevant for engineering faculty and critically examines the core element of each method. It is found that there is broad but uneven support for the core elements of active, collaborative, cooperative and problem-based learning.",
"title": ""
},
{
"docid": "b57b06d861b5c4666095e356ee7e010b",
"text": "Phishing is a form of electronic identity theft in which a combination of social engineering and Web site spoofing techniques is used to trick a user into revealing confidential information with economic value. The problem of social engineering attack is that there is no single solution to eliminate it completely, since it deals largely with the human factor. This is why implementing empirical experiments is very crucial in order to study and to analyze all malicious and deceiving phishing Web site attack techniques and strategies. In this paper, three different kinds of phishing experiment case studies have been conducted to shed some light into social engineering attacks, such as phone phishing and phishing Web site attacks for designing effective countermeasures and analyzing the efficiency of performing security awareness about phishing threats. Results and reactions to our experiments show the importance of conducting phishing training awareness for all users and doubling our efforts in developing phishing prevention techniques. Results also suggest that traditional standard security phishing factor indicators are not always effective for detecting phishing websites, and alternative intelligent phishing detection approaches are needed.",
"title": ""
}
] |
[
{
"docid": "3aaf9c81e8304bf540722d35c32d2046",
"text": "To reduce page load times and bandwidth usage for mobile web browsing, middleboxes that compress page content are commonly used today. Unfortunately, this can hurt performance in many cases; via an extensive measurement study, we show that using middleboxes to facilitate compression results in up to 28% degradation in page load times when the client enjoys excellent wireless link conditions. We find that benefits from compression are primarily realized under bad network conditions. Guided by our study, we design and implement FlexiWeb, a framework that determines both when to use a middlebox and how to use it, based on the client's network conditions. First, FlexiWeb selectively fetches objects on a web page either directly from the source or via a middlebox, rather than fetching all objects via the middlebox. Second, instead of simply performing lossless compression of all content, FlexiWeb performs network-aware compression of images by selecting from among a range of content transformations. We implement and evaluate a prototype of FlexiWeb using Google's open source Chromium mobile browser and our implementation of a modified version of Google's open source compression proxy. Our extensive experiments show that, across a range of scenarios, FlexiWeb reduces page load times for mobile clients by 35-42% compared to the status quo.",
"title": ""
},
{
"docid": "45eb2d7b74f485e9eeef584555e38316",
"text": "With the increasing demand of massive multimodal data storage and organization, cross-modal retrieval based on hashing technique has drawn much attention nowadays. It takes the binary codes of one modality as the query to retrieve the relevant hashing codes of another modality. However, the existing binary constraint makes it difficult to find the optimal cross-modal hashing function. Most approaches choose to relax the constraint and perform thresholding strategy on the real-value representation instead of directly solving the original objective. In this paper, we first provide a concrete analysis about the effectiveness of multimodal networks in preserving the inter- and intra-modal consistency. Based on the analysis, we provide a so-called Deep Binary Reconstruction (DBRC) network that can directly learn the binary hashing codes in an unsupervised fashion. The superiority comes from a proposed simple but efficient activation function, named as Adaptive Tanh (ATanh). The ATanh function can adaptively learn the binary codes and be trained via back-propagation. Extensive experiments on three benchmark datasets demonstrate that DBRC outperforms several state-of-the-art methods in both image2text and text2image retrieval task.",
"title": ""
},
{
"docid": "ffea50948eab00d47f603d24bcfc1bfd",
"text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.",
"title": ""
},
{
"docid": "96bddddd86976f4dff0b984ef062704b",
"text": "How do the structures of the medial temporal lobe contribute to memory? To address this question, we examine the neurophysiological correlates of both recognition and associative memory in the medial temporal lobe of humans, monkeys, and rats. These cross-species comparisons show that the patterns of mnemonic activity observed throughout the medial temporal lobe are largely conserved across species. Moreover, these findings show that neurons in each of the medial temporal lobe areas can perform both similar as well as distinctive mnemonic functions. In some cases, similar patterns of mnemonic activity are observed across all structures of the medial temporal lobe. In the majority of cases, however, the hippocampal formation and surrounding cortex signal mnemonic information in distinct, but complementary ways.",
"title": ""
},
{
"docid": "639bbe7b640c514ab405601c7c3cfa01",
"text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.",
"title": ""
},
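The page-count-based co-occurrence measures mentioned in the passage above can be illustrated with the familiar WebJaccard, WebDice, and WebPMI forms; the snippet below sketches them from raw hit counts. The count values and the total-page constant n are placeholders, and the lexical-pattern clustering and SVM combination described in the passage are not shown.

```python
import math

def web_jaccard(p, q, pq):
    # p, q: page counts for each word; pq: page count for the conjunctive query
    return 0.0 if pq == 0 else pq / (p + q - pq)

def web_dice(p, q, pq):
    return 0.0 if pq == 0 else 2.0 * pq / (p + q)

def web_pmi(p, q, pq, n=1e10):
    # n: assumed total number of indexed pages (placeholder constant)
    if pq == 0:
        return 0.0
    return math.log2((pq / n) / ((p / n) * (q / n)))

# Placeholder page counts for "car", "automobile", and the conjunctive query
p, q, pq = 4_200_000, 1_100_000, 350_000
print(web_jaccard(p, q, pq), web_dice(p, q, pq), web_pmi(p, q, pq))
```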
{
"docid": "3f09b82a9a9be064819c1d7b402b0031",
"text": "Academic dishonesty is widespread within secondary and higher education. It can include unethical academic behaviors such as cheating, plagiarism, or unauthorized help. Researchers have investigated a number of individual and contextual factors in an effort to understand the phenomenon. In the last decade, there has been increasing interest in the role personality plays in explaining unethical academic behaviors. We used meta-analysis to estimate the relationship between each of the Big Five personality factors and academic dishonesty. Previous reviews have highlighted the role of neuroticism and extraversion as potential predictors of cheating behavior. However, our results indicate that conscientiousness and agreeableness are the strongest Big Five predictors, with both factors negatively related to academic dishonesty. We discuss the implications of our findings for both research and practice. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ce91f2a7b4ec328e07d22caaf5e35b51",
"text": "http://www.jstor.org Interorganizational Collaboration and the Locus of Innovation: Networks of Learning in Biotechnology Author(s): Walter W. Powell, Kenneth W. Koput and Laurel Smith-Doerr Source: Administrative Science Quarterly, Vol. 41, No. 1 (Mar., 1996), pp. 116-145 Published by: on behalf of the Sage Publications, Inc. Johnson Graduate School of Management, Cornell University Stable URL: http://www.jstor.org/stable/2393988 Accessed: 18-12-2015 16:50 UTC",
"title": ""
},
{
"docid": "c4d0084aab61645fc26e099115e1995c",
"text": "Digital transformation often includes establishing big data analytics capabilities and poses considerable challenges for traditional manufacturing organizations, such as car companies. Successfully introducing big data analytics requires substantial organizational transformation and new organizational structures and business processes. Based on the three-stage evolution of big data analytics capabilities at AUDI, the full article provides recommendations for how traditional manufacturing organizations can successfully introduce big data analytics and master the related organizational transformations. Stage I: Advancing. In Stage I, AUDI’s sales and marketing department initiated data analytics projects. Commitment within the organization for data analytics grew slowly, and the strategic importance of the area was increasingly recognized. During this first stage, the IT department played a passive role, responding to the initiators of data analytics projects. The company’s digital innovation hub, however, laid the technology foundation for big data analytics during the Advancing stage. Stage II: Enabling. In Stage II, analytics competencies were built up not only in the digital innovation hub but also in the IT department. The IT department enabled big data analytics through isolated technology activities, sometimes taking on or insourcing tasks previously carried out by external consultancies or the digital innovation hub. Analytics services were developed through a more advanced technology infrastructure as well as analytics methods. Stage III: Leveraging. In the current Stage III, AUDI is leveraging the analytics competencies of the digital innovation hub and the IT department to centrally provide analytics-as-a-service. The IT department is now fully responsible for all technology tasks and is evolving to become a consulting partner for the other big data analytics stakeholders (sales and marketing department and digital innovation hub). In particular, digital services are enabled by leveraging the most valuable data source (i.e., operational car data).",
"title": ""
},
{
"docid": "dc94e340ceb76a0c9fda47bac4be9920",
"text": "Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security. In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.",
"title": ""
},
{
"docid": "98881e7174d495d42a0d68c0f0d7bf3b",
"text": "The design process is often characterized by and realized through the iterative steps of evaluation and refinement. When the process is based on a single creative domain such as visual art or audio production, designers primarily take inspiration from work within their domain and refine it based on their own intuitions or feedback from an audience of experts from within the same domain. What happens, however, when the creative process involves more than one creative domain such as in a digital game? How should the different domains influence each other so that the final outcome achieves a harmonized and fruitful communication across domains? How can a computational process orchestrate the various computational creators of the corresponding domains so that the final game has the desired functional and aesthetic characteristics? To address these questions, this paper identifies game facet orchestration as the central challenge for artificial-intelligence-based game generation, discusses its dimensions, and reviews research in automated game generation that has aimed to tackle it. In particular, we identify the different creative facets of games, propose how orchestration can be facilitated in a top-down or bottom-up fashion, review indicative preliminary examples of orchestration, and conclude by discussing the open questions and challenges ahead.",
"title": ""
},
{
"docid": "2b1858fc902102d06ea3fc0394b842bf",
"text": "Recently, deep learning approaches with various network architectures have achieved significant performance improvement over existing iterative reconstruction methods in various imaging problems. However, it is still unclear why these deep learning architectures work for specific inverse problems. Moreover, unlike the usual evolution of signal processing theory around the classical theories, the link between the deep learning and the classical signal processing approaches such as wavelet, non-local processing, compressed sensing, etc, is still not well understood, which often makes signal processors in deep troubles. To address these issues, here we show that the long-searched-for missing link is the convolutional framelet for representing a signal by convolving local and non-local bases. The convolutional framelets was originally developed to generalize the recent theory of low-rank Hankel matrix approaches, and this paper significantly extends the idea to derive a deep neural network using multi-layer convolutional framelets with perfect reconstruction (PR) under rectified linear unit (ReLU) nonlinearity. Our analysis also shows that the popular deep network components such as residual block, redundant filter channels, and concatenated ReLU (CReLU) indeed help to achieve the PR, while the pooling and unpooling layers should be augmented with multi-resolution convolutional framelets to achieve PR condition. This discovery reveals the limitations of many existing deep learning architectures for inverse problems, and leads us to propose a novel deep convolutional framelets neural network. Using numerical experiments with sparse view x-ray computed tomography (CT), we demonstrated that our deep convolution framelets network shows consistent improvement over existing deep architectures at all downsampling factors. This discovery suggests that the success of deep learning is not from a magical power of a black-box, but rather comes from the power of a novel signal representation using non-local basis combined with data-driven local basis, which is indeed a natural extension of classical signal processing theory. Index Terms Convolutional framelets, deep learning, inverse problems, ReLU, perfect reconstruction condition Correspondence to: Jong Chul Ye, Ph.D KAIST Endowed Chair Professor Department of Bio and Brain Engineering Department of Mathematical Sciences Korea Advanced Institute of Science and Technology (KAIST) 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea Tel: +82-42-350-4320 Email: [email protected] J.C. Ye and Y. S. Han are with the Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea (e-mail: {jong.ye,hanyoseob}@kaist.ac.kr). ar X iv :1 70 7. 00 37 2v 1 [ st at .M L ] 3 J ul 2 01 7",
"title": ""
},
{
"docid": "a9abef2213a7a24ec87aef11888d7854",
"text": "Mechanical ventilation (MV) remains the cornerstone of acute respiratory distress syndrome (ARDS) management. It guarantees sufficient alveolar ventilation, high FiO2 concentration, and high positive end-expiratory pressure levels. However, experimental and clinical studies have accumulated, demonstrating that MV also contributes to the high mortality observed in patients with ARDS by creating ventilator-induced lung injury. Under these circumstances, extracorporeal lung support (ECLS) may be beneficial in two distinct clinical settings: to rescue patients from the high risk for death associated with severe hypoxemia, hypercapnia, or both not responding to maximized conventional MV, and to replace MV and minimize/abolish the harmful effects of ventilator-induced lung injury. High extracorporeal blood flow venovenous extracorporeal membrane oxygenation (ECMO) may therefore rescue the sickest patients with ARDS from the high risk for death associated with severe hypoxemia, hypercapnia, or both not responding to maximized conventional MV. Successful venovenous ECMO treatment in patients with extremely severe H1N1-associated ARDS and positive results of the CESAR trial have led to an exponential use of the technology in recent years. Alternatively, lower-flow extracorporeal CO2 removal devices may be used to reduce the intensity of MV (by reducing Vt from 6 to 3-4 ml/kg) and to minimize or even abolish the harmful effects of ventilator-induced lung injury if used as an alternative to conventional MV in nonintubated, nonsedated, and spontaneously breathing patients. Although conceptually very attractive, the use of ECLS in patients with ARDS remains controversial, and high-quality research is needed to further advance our knowledge in the field.",
"title": ""
},
{
"docid": "21909d9d0a741061a65cf06e023f7aa2",
"text": "Integrated magnetics is applied to replace the three-discrete transformers by a single core transformer in a three-phase LLC resonant converter. The magnetic circuit of the integrated transformer is analyzed to derive coupling factors between the phases; these coupling factors are intentionally minimized to realize the magnetic behavior of the three-discrete transformers, with the benefit of eliminating the dead space between them. However, in a practical design, the transformer parameters in a multiphase LLC resonant converter are never exactly identical among the phases, leading to unbalanced current sharing between the paralleled modules. In this regard, a current balancing method is proposed in this paper. The proposed method can improve the current sharing between the paralleled phases relying on a single balancing transformer, and its theory is based on Ampere’s law, by forcing the sum of the three resonant currents to zero. Theoretically, if an ideal balancing transformer has been utilized, it would impose the same effect of connecting the integrated transformer in a solid star connection. However, as the core permeability of the balancing transformer is finite, the unbalanced current cannot be completely suppressed. Nonetheless, utilizing a single balancing transformer has an advantage over the star connection, as it keeps the interleaving structure simple which allows for traditional phase-shedding techniques, and it can be a solution for the other multiphase topologies where realizing a star connection is not feasible. Along with the theoretical discussion, simulation and experimental results are also presented to evaluate the proposed method considering various sources of the unbalance such as a mismatch in: 1) resonant and magnetizing inductances; 2) resonant capacitors; 3) transistor on-resistances of the MOSFETS; and 4) propagation delay of the gate drivers.",
"title": ""
},
{
"docid": "eb0ec729796a93f36d348e70e3fa9793",
"text": "This paper proposes a novel approach to measure the object size using a regular digital camera. Nowadays, the remote object-size measurement is very crucial to many multimedia applications. Our proposed computer-aided automatic object-size measurement technique is based on a new depth-information extraction (range finding) scheme using a regular digital camera. The conventional range finders are often carried out using the passive method such as stereo cameras or the active method such as ultrasonic and infrared equipment. They either require the cumbersome set-up or deal with point targets only. The proposed approach requires only a digital camera with certain image processing techniques and relies on the basic principles of visible light. Experiments are conducted to evaluate the performance of our proposed new object-size measurement mechanism. The average error-percentage of this method is below 2%. It demonstrates the striking effectiveness of our proposed new method.",
"title": ""
},
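The passage above does not spell out its size-estimation formula, but under the usual pinhole-camera model the real-world extent of an object follows directly from its pixel extent, the extracted depth, and the focal length expressed in pixels. The sketch below shows only that textbook relation; the numbers are placeholders and the paper's depth-information extraction scheme is not reproduced.

```python
def object_size_m(pixel_extent, depth_m, focal_length_px):
    """Pinhole-camera size estimate: size = depth * pixel_extent / focal_length."""
    return depth_m * pixel_extent / focal_length_px

# Placeholder values: object spanning 480 px, 2.5 m away, 1400-px focal length
print(object_size_m(480, 2.5, 1400))   # ~0.86 m
```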
{
"docid": "705694c36d36ca6950740d754160f4bd",
"text": "There is a growing concern that excessive and uncontrolled use of Facebook not only interferes with performance at school or work but also poses threats to physical and psychological well-being. The present research investigated how two individual difference variables--social anxiety and need for social assurance--affect problematic use of Facebook. Drawing on the basic premises of the social skill model of problematic Internet use, we hypothesized that social anxiety and need for social assurance would be positively correlated with problematic use of Facebook. Furthermore, it was predicted that need for social assurance would moderate the relationship between social anxiety and problematic use. A cross-sectional online survey was conducted with a college student sample in the United States (N=243) to test the proposed hypotheses. Results showed that both social anxiety and need for social assurance had a significant positive association with problematic use of Facebook. More importantly, the data demonstrated that need for social assurance served as a significant moderator of the relationship between social anxiety and problematic Facebook use. The positive association between social anxiety and problematic Facebook use was significant only for Facebook users with medium to high levels of need for social assurance but not for those with a low level of need for social assurance. Theoretical and practical implications of these findings were discussed.",
"title": ""
},
{
"docid": "d053f8b728f94679cd73bc91193f0ba6",
"text": "Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.",
"title": ""
},
{
"docid": "3ae51aede5a7a551cfb2aecbc77a9ecb",
"text": "We present the Crossfire attack -- a powerful attack that degrades and often cuts off network connections to a variety of selected server targets (e.g., servers of an enterprise, a city, a state, or a small country) by flooding only a few network links. In Crossfire, a small set of bots directs low intensity flows to a large number of publicly accessible servers. The concentration of these flows on the small set of carefully chosen links floods these links and effectively disconnects selected target servers from the Internet. The sources of the Crossfire attack are undetectable by any targeted servers, since they no longer receive any messages, and by network routers, since they receive only low-intensity, individual flows that are indistinguishable from legitimate flows. The attack persistence can be extended virtually indefinitely by changing the set of bots, publicly accessible servers, and target links while maintaining the same disconnection targets. We demonstrate the attack feasibility using Internet experiments, show its effects on a variety of chosen targets (e.g., servers of universities, US states, East and West Coasts of the US), and explore several countermeasures.",
"title": ""
},
{
"docid": "31be3d5db7d49d1bfc58c81efec83bdc",
"text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.",
"title": ""
},
{
"docid": "7f49cb5934130fb04c02db03bd40e83d",
"text": "BACKGROUND\nResearch literature on problematic smartphone use, or smartphone addiction, has proliferated. However, relationships with existing categories of psychopathology are not well defined. We discuss the concept of problematic smartphone use, including possible causal pathways to such use.\n\n\nMETHOD\nWe conducted a systematic review of the relationship between problematic use with psychopathology. Using scholarly bibliographic databases, we screened 117 total citations, resulting in 23 peer-reviewer papers examining statistical relations between standardized measures of problematic smartphone use/use severity and the severity of psychopathology.\n\n\nRESULTS\nMost papers examined problematic use in relation to depression, anxiety, chronic stress and/or low self-esteem. Across this literature, without statistically adjusting for other relevant variables, depression severity was consistently related to problematic smartphone use, demonstrating at least medium effect sizes. Anxiety was also consistently related to problem use, but with small effect sizes. Stress was somewhat consistently related, with small to medium effects. Self-esteem was inconsistently related, with small to medium effects when found. Statistically adjusting for other relevant variables yielded similar but somewhat smaller effects.\n\n\nLIMITATIONS\nWe only included correlational studies in our systematic review, but address the few relevant experimental studies also.\n\n\nCONCLUSIONS\nWe discuss causal explanations for relationships between problem smartphone use and psychopathology.",
"title": ""
},
{
"docid": "651db77789c5f5edaa933534255c88d6",
"text": "Abstract: Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work. Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work.",
"title": ""
}
] |
scidocsrr
|
6b33cd1d084266d3ba5dbca461383dbd
|
Fast Approximate kNN Graph Construction for High Dimensional Data via Recursive Lanczos Bisection
|
[
{
"docid": "da168a94f6642ee92454f2ea5380c7f3",
"text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.",
"title": ""
}
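The construction summarized in the passage above (kNN graph, heat-kernel weights, graph Laplacian, smallest generalized eigenvectors) is the Laplacian Eigenmaps embedding; a brute-force sketch follows. The kNN search here is an exact O(n^2) scan for clarity, the heat-kernel bandwidth t and neighborhood size k are arbitrary placeholders, and this is not the recursive-Lanczos graph approximation named in the query.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, k=10, t=1.0, n_components=2):
    """X: (n, d) data matrix. Returns an (n, n_components) embedding."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]        # k nearest neighbors (exact scan)
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)    # heat-kernel edge weights
    W = np.maximum(W, W.T)                       # symmetrize the kNN graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # unnormalized graph Laplacian
    vals, vecs = eigh(L, D)                      # generalized problem L f = lambda D f
    return vecs[:, 1:n_components + 1]           # skip the trivial constant eigenvector

Y = laplacian_eigenmaps(np.random.rand(200, 5))
print(Y.shape)                                   # (200, 2)
```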
] |
[
{
"docid": "f0ea9c96eed1480cc24a5776b116b49e",
"text": "With the advent of real-time dense scene reconstruction from handheld RGBD cameras [1], one key aspect to enable robust operation is the ability to relocalise in a previously mapped environment or after loss of measurement. Tasks such as operating on a workspace, where moving objects and occlusions are likely, require a recovery competence in order to be useful. For RGBD cameras, this must also include the ability to relocalise in areas with reduced visual texture.",
"title": ""
},
{
"docid": "8da2450cbcb9b43d07eee187e5bf07f1",
"text": "We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.",
"title": ""
},
{
"docid": "4997567b6a96c64ddfe8d105098dd961",
"text": "A chip-scale package structure for Si photonic optical transceivers has been developed. The foot print of the transceiver is 5 mm × 5mm. By using an optical pin array and glass interposer with through-glass vias (TGVs), high density optical and electrical I/O interfaces are configured on one side of the package. The optical pin acts as a vertical waveguide or spot-size converter (SSC). The combination of the optical pin and the O-band multimode transmission provides large misalignment tolerance for the optical interface. The developed optical transceiver has a high degree of usability for various applications, such as multi-chip modules and active optical cables, and is called “optical I/O core.” The optical I/O core demonstrated 25 Gbps/ch error-free operation over a 300-m multimode fiber. The optical I/O core is a promising solution for relieving the I/O bottleneck in high-bandwidth inter-chip data transmission.",
"title": ""
},
{
"docid": "9c8e773dde5e999ac31a1a4bd279c24d",
"text": "The efficiency of wireless power transfer (WPT) systems is highly dependent on the load, which may change in a wide range in field applications. Besides, the detuning of WPT systems caused by the component tolerance and aging of inductors and capacitors can also decrease the system efficiency. In order to track the maximum system efficiency under varied loads and detuning conditions in real time, an active single-phase rectifier (ASPR) with an auxiliary measurement coil (AMC) and its corresponding control method are proposed in this paper. Both the equivalent load impedance and the output voltage can be regulated by the ASPR and the inverter, separately. First, the fundamental harmonic analysis model is established to analyze the influence of the load and the detuning on the system efficiency. Second, the soft-switching conditions and the equivalent input impedance of ASPR with different phase shifts and pulse widths are investigated in detail. Then, the analysis of the AMC and the maximum efficiency control strategy are provided in detail. Finally, an 800-W prototype is set up to validate the performance of the proposed method. The experimental results show that with 10% tolerance of the resonant capacitor in the receiver side, the system efficiency with the proposed approach reaches 91.7% at rated 800-W load and 91.1% at 300-W light load, which has an improvement by 2% and 10% separately compared with the traditional diode rectifier.",
"title": ""
},
{
"docid": "77908ab362e0a26e395bc2d2bf07e0ee",
"text": "In this paper we consider the problem of exploring an unknown environment by a team of robots. As in single-robot exploration the goal is to minimize the overall exploration time. The key problem to be solved therefore is to choose appropriate target points for the individual robots so that they simultaneously explore different regions of their environment. We present a probabilistic approach for the coordination of multiple robots which, in contrast to previous approaches, simultaneously takes into account the costs of reaching a target point and the utility of target points. The utility of target points is given by the size of the unexplored area that a robot can cover with its sensors upon reaching a target position. Whenever a target point is assigned to a specific robot, the utility of the unexplored area visible from this target position is reduced for the other robots. This way, a team of multiple robots assigns different target points to the individual robots. The technique has been implemented and tested extensively in real-world experiments and simulation runs. The results given in this paper demonstrate that our coordination technique significantly reduces the exploration time compared to previous approaches. '",
"title": ""
},
{
"docid": "4ec8be0369514c8df163b2790ccd0225",
"text": "New dynamic particle image velocimetry (PIV) technology was applied to the study of the flow field associated with prosthetic heart valves. Four bileaflet prostheses, the St. Jude Medical (SJM) valve, the On-X valve with straight leaflets, the Jyros (JR) valve, and the Edwards MIRA (MIRA) valve with curved leaflets, were tested in the mitral position under pulsatile flow conditions to find the effect of the leaflet shape and overall valve design on the flow field, particularly in terms of the turbulent stress distribution, which may influence hemolysis, platelet activation, and thrombus formation. Comparison of the time-resolved flow fields associated with the opening, accelerating, peak, and closing phases of the diastolic flow revealed the effects of the leaflet shape and overall valve design on the flow field. Anatomically and antianatomically oriented bileaflet valves were also compared in the mitral position to study the effects of the orientation on the downstream flow field. The experimental program used a dynamic PIV system utilizing a high-speed, high-resolution video camera to map the true time-resolved velocity field inside the simulated ventricle. Based on the experimental data, the following general conclusions can be made. High-resolution dynamic PIV can capture true chronological changes in the velocity and turbulence fields. In the vertical measuring plane that passes the centers of both the aortic and mitral valves (A-A section), bileaflet valves show clear and simple circulatory flow patterns when the valve is installed in the antianatomical orientation. The SJM, the On-X, and the MIRA valves maintain a relatively high velocity through the central orifice. The curved leaflets of the JR valve generate higher velocities with a divergent flow during the accelerating and peak flow phases when the valve is installed in the anatomical orientation. In the velocity field directly below the mitral valve and normal to the previous measuring plane (B-B section), where characteristic differences in valve design on the three-dimensional flow should be visible, the symmetrical divergent nature of the flow generated by the two inclined half-disks installed in the antianatomical orientation was evident. The SJM valve, with a central downward flow near the valve, is contrasted with the JR valve, which has a peripherally strong downward circulation with higher turbulent stresses. The On-X valve has a strong central downward flow attributable to its large opening angle and flared inlet shape. The MIRA valve also has a relatively strong downward central flow. The MIRA valve, however, diverts the flow three-dimensionally due to its peripherally curved leaflets.",
"title": ""
},
{
"docid": "a9d7826ccc665c036de72caceebc32a9",
"text": "Current topic models often suffer from discovering topics not matching human intuition, unnatural switching of topics within documents and high computational demands. We address these shortcomings by proposing a topic model and an inference algorithm based on automatically identifying characteristic keywords for topics.",
"title": ""
},
{
"docid": "b3a7b289cd54ef0d8a8c175c40449577",
"text": "Global Internet threats have undergone a profound transformation from attacks designed solely to disable infrastructure to those that also target people and organizations. At the center of many of these attacks are collections of compromised computers, or Botnets, remotely controlled by the attackers, and whose members are located in homes, schools, businesses, and governments around the world [6]. In this survey paper we provide a brief look at how existing botnet research, the evolution and future of botnets, as well as the goals and visibility of today’s networks intersect to inform the field of botnet technology and defense.",
"title": ""
},
{
"docid": "31d3f4eabc7706cb30cfc9e8d9c37b32",
"text": "BACKGROUND\nTestosterone can motivate human approach and avoidance behavior. Specifically, the conscious recognition of and implicit reaction to angry facial expressions is influenced by testosterone. The study tested whether exogenous testosterone modulates the personal distance (PD) humans prefer in a social threat context.\n\n\nMETHODS\n82 healthy male participants underwent either transdermal testosterone (testosterone group) or placebo application (placebo group). Each participant performed a computerized stop-distance task before (T1) and 3.5h after (T2) treatment, during which they indicated how closely they would approach a human, animal or virtual character with varying emotional expression.\n\n\nRESULTS\nMen's PD towards humans and animals varied as a function of their emotional expression. In the testosterone group, a pre-post comparison indicated that the administration of 50mg testosterone was associated with a small but significant reduction of men's PD towards aggressive individuals. Men in the placebo group did not change the initially chosen PD after placebo application independent of the condition. However comparing the testosterone and placebo group after testosterone administration did not reveal significant differences. While the behavioral effect was small and only observed as within-group effect it was repeatedly and selectively shown for men's PD choices towards an angry woman, angry man and angry dog in the testosterone group. In line with the literature, our findings in young men support the influential role of exogenous testosterone on male's approach behavior during social confrontations.",
"title": ""
},
{
"docid": "d03d831ceddf508d58298a45c9373ccd",
"text": "Recent BIO-tagging-based neural semantic role labeling models are very high performing, but assume gold predicates as part of the input and cannot incorporate span-level features. We propose an endto-end approach for jointly predicting all predicates, arguments spans, and the relations between them. The model makes independent decisions about what relationship, if any, holds between every possible word-span pair, and learns contextualized span representations that provide rich, shared input features for each decision. Experiments demonstrate that this approach sets a new state of the art on PropBank SRL without gold predicates.1",
"title": ""
},
{
"docid": "c5bb89954e511fcfc7820338d2a7d745",
"text": "Microblogging is a communication paradigm in which users post bits of information (brief text updates or micro media such as photos, video or audio clips) that are visible by their communities. When a user finds a “meme” of another user interesting, she can eventually repost it, thus allowing memes to propagate virally trough a social network. In this paper we introduce the meme ranking problem, as the problem of selecting which k memes (among the ones posted their contacts) to show to users when they log into the system. The objective is to maximize the overall activity of the network, that is, the total number of reposts that occur. We deeply characterize the problem showing that not only exact solutions are unfeasible, but also approximated solutions are prohibitive to be adopted in an on-line setting. Therefore we devise a set of heuristics and we compare them trough an extensive simulation based on the real-world Yahoo! Meme social graph, and with parameters learnt from real logs of meme propagations. Our experimentation demonstrates the effectiveness and feasibility of these methods.",
"title": ""
},
{
"docid": "a8bd9e8470ad414c38f5616fb14d433d",
"text": "Detecting hidden communities from observed interactions is a classical problem. Theoretical analysis of community detection has so far been mostly limited to models with non-overlapping communities such as the stochastic block model. In this paper, we provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed as the mixed membership Dirichlet model, first introduced in Airoldi et al. (2008). This model allows for nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning communities in these models via a tensor spectral decomposition approach. Our estimator uses low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is based on simple linear algebraic operations such as singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters, and present a careful finite sample analysis of our learning method. Additionally, our results match the best known scaling requirements for the special case of the (homogeneous) stochastic block model.",
"title": ""
},
{
"docid": "9e88b710d55b90074a98ba70527e0cea",
"text": "In this paper we present a series of design directions for the development of affordable, modular, light-weight, intrinsically-compliant, underactuated robot hands, that can be easily reproduced using off-the-shelf materials. The proposed robot hands, efficiently grasp a series of everyday life objects and are considered to be general purpose, as they can be used for various applications. The efficiency of the proposed robot hands has been experimentally validated through a series of experimental paradigms, involving: grasping of multiple everyday life objects with different geometries, myoelectric (EMG) control of the robot hands in grasping tasks, preliminary results on a grasping capable quadrotor and autonomous grasp planning under object position and shape uncertainties.",
"title": ""
},
{
"docid": "6b83827500e4ea22c9fed3288d0506a7",
"text": "This study develops a high-performance stand-alone photovoltaic (PV) generation system. To make the PV generation system more flexible and expandable, the backstage power circuit is composed of a high step-up converter and a pulsewidth-modulation (PWM) inverter. In the dc-dc power conversion, the high step-up converter is introduced to improve the conversion efficiency in conventional boost converters to allow the parallel operation of low-voltage PV arrays, and to decouple and simplify the control design of the PWM inverter. Moreover, an adaptive total sliding-mode control system is designed for the voltage control of the PWM inverter to maintain a sinusoidal output voltage with lower total harmonic distortion and less variation under various output loads. In addition, an active sun tracking scheme without any light sensors is investigated to make the PV modules face the sun directly for capturing the maximum irradiation and promoting system efficiency. Experimental results are given to verify the validity and reliability of the high step-up converter, the PWM inverter control, and the active sun tracker for the high-performance stand-alone PV generation system.",
"title": ""
},
{
"docid": "7c89edeaffe5017adbfd1e4f810e2af8",
"text": "BACKGROUND\nAmbrisentan is a propanoic acid-based, A-selective endothelin receptor antagonist for the once-daily treatment of pulmonary arterial hypertension.\n\n\nMETHODS AND RESULTS\nAmbrisentan in Pulmonary Arterial Hypertension, Randomized, Double-Blind, Placebo-Controlled, Multicenter, Efficacy Study 1 and 2 (ARIES-1 and ARIES-2) were concurrent, double-blind, placebo-controlled studies that randomized 202 and 192 patients with pulmonary arterial hypertension, respectively, to placebo or ambrisentan (ARIES-1, 5 or 10 mg; ARIES-2, 2.5 or 5 mg) orally once daily for 12 weeks. The primary end point for each study was change in 6-minute walk distance from baseline to week 12. Clinical worsening, World Health Organization functional class, Short Form-36 Health Survey score, Borg dyspnea score, and B-type natriuretic peptide plasma concentrations also were assessed. In addition, a long-term extension study was performed. The 6-minute walk distance increased in all ambrisentan groups; mean placebo-corrected treatment effects were 31 m (P=0.008) and 51 m (P<0.001) in ARIES-1 for 5 and 10 mg ambrisentan, respectively, and 32 m (P=0.022) and 59 m (P<0.001) in ARIES-2 for 2.5 and 5 mg ambrisentan, respectively. Improvements in time to clinical worsening (ARIES-2), World Health Organization functional class (ARIES-1), Short Form-36 score (ARIES-2), Borg dyspnea score (both studies), and B-type natriuretic peptide (both studies) were observed. No patient treated with ambrisentan developed aminotransferase concentrations >3 times the upper limit of normal. In 280 patients completing 48 weeks of treatment with ambrisentan monotherapy, the improvement from baseline in 6-minute walk at 48 weeks was 39 m.\n\n\nCONCLUSIONS\nAmbrisentan improves exercise capacity in patients with pulmonary arterial hypertension. Improvements were observed for several secondary end points in each of the studies, although statistical significance was more variable. Ambrisentan is well tolerated and is associated with a low risk of aminotransferase abnormalities.",
"title": ""
},
{
"docid": "da51ce9ded3b79bb4a038addace19d7b",
"text": "The purpose of this research paper is to identify the significant role of Strategic Information Systems (SIS) in supporting the Competitive Advantage (CA). It also explains its role on the dimensions that increase the competitive advantage which are the operational efficiency, information quality and innovation. In order to achieve the goal of this study and to collect the primary data, the researchers designed a survey, in the form of an electronic questionnaire. This survey instrument consisted of 20 questions. It was distributed to members of the study sample, which contains the managers at all levels, and the employees in the Saudi banking sector. The number of the participants included in the survey was 147. The results of this study revealed that there is a significant role of strategic information systems on increasing operational efficiency, improving the quality of information and promoting innovation. This in turn enabled the organizations to achieve higher levels of competitive advantages. The strategic information systems have deep consequences for organizations that adopt them; managers could achieve great and sustainable competitive advantages from such systems if carefully considered and developed. On the other hand, this study was conducted in the banking sector in KSA context. So, more research is needed in other sectors and in the context of other countries; to confirm and generalize the results. Finally, the paper’s primary value lies in its ability to provide the evidence that the strategic information systems play a significant role in supporting and achieving the competitive advantages in Saudi Arabia, particularly in the banking sector. Since there was a lack of such research in the Saudi context, this paper can provide a theoretical basis for future researchers as well as practical implications for managers. Keywords—Strategic information systems (SIS); competitive advantage (CA); operational efficiency; information quality; innovation",
"title": ""
},
{
"docid": "9cb16594b916c5d11c189e80c0ac298a",
"text": "This paper describes the design of an innovative and low cost self-assistive technology that is used to facilitate the control of a wheelchair and home appliances by using advanced voice commands of the disabled people. This proposed system will provide an alternative to the physically challenged people with quadriplegics who is permanently unable to move their limbs (but who is able to speak and hear) and elderly people in controlling the motion of the wheelchair and home appliances using their voices to lead an independent, confident and enjoyable life. The performance of this microcontroller based and voice integrated design is evaluated in terms of accuracy and velocity in various environments. The results show that it could be part of an assistive technology for the disabled persons without any third person’s assistance.",
"title": ""
},
{
"docid": "210c6daca7335fa87f1a41e823b0ff34",
"text": "Mitochondrial DNA control region sequences of orangutans (Pongo pygmaeus) from six different populations on the island of Borneo were determined and analyzed for evidence of regional diversity and were compared separately with orangutans from the island of Sumatra. Within the Bornean population, four distinct subpopulations were identified. Furthermore, the results of this study revealed marked divergence, supportive evidence of speciation between Sumatran and Bornean orangutans. This study demonstrates that, as an entire population, Bornean orangutans have not experienced a serious genetic bottleneck, which has been suggested as the cause of low diversity in humans and east African chimpanzees. Based on these new data, it is estimated that Bornean and Sumatran orangutans diverged approximately 1.1 MYA and that the four distinct Bornean populations diverged 860,000 years ago. These findings have important implications for management, breeding, and reintroduction practices in orangutan conservation efforts.",
"title": ""
},
{
"docid": "8b252e706868440162e50a2c23255cb3",
"text": "Currently, most top-performing text detection networks tend to employ fixed-size anchor boxes to guide the search for text instances. ey usually rely on a large amount of anchors with different scales to discover texts in scene images, thus leading to high computational cost. In this paper, we propose an end-to-end boxbased text detector with scale-adaptive anchors, which can dynamically adjust the scales of anchors according to the sizes of underlying texts by introducing an additional scale regression layer. e proposed scale-adaptive anchors allow us to use a few number of anchors to handle multi-scale texts and therefore significantly improve the computational efficiency. Moreover, compared to discrete scales used in previous methods, the learned continuous scales are more reliable, especially for small texts detection. Additionally, we propose Anchor convolution to beer exploit necessary feature information by dynamically adjusting the sizes of receptive fields according to the learned scales. Extensive experiments demonstrate that the proposed detector is fast, taking only 0.28 second per image, while outperforming most state-of-the-art methods in accuracy.",
"title": ""
},
{
"docid": "ed9d6571634f30797fb338a928cc8361",
"text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"title": ""
}
] |
scidocsrr
|
f81c57594b7f4f3672b0981757eaaa2f
|
Social Computing and Social Media. Human Behavior
|
[
{
"docid": "61304d369ea790d80b24259336d6974c",
"text": "After searching for the keywords “information privacy” in ABI/Informs focusing on scholarly articles, we obtained a listing of 340 papers. We first eliminated papers that were anonymous, table of contents, interviews with experts, or short opinion pieces. We also removed articles not related to our focus on information privacy research in IS literature. A total of 218 articles were removed as explained in Table A1.",
"title": ""
}
] |
[
{
"docid": "625f54bb3157e429af1af8f0d04f0713",
"text": "Proof theory is a powerful tool for understanding computational phenomena, as most famously exemplified by the Curry–Howard isomorphism between intuitionistic logic and the simply-typed λ-calculus. In this paper, we identify a fragment of intuitionistic linear logic with least fixed points and establish a Curry–Howard isomorphism between a class of proofs in this fragment and deterministic finite automata. Proof-theoretically, closure of regular languages under complementation, union, and intersection can then be understood in terms of cut elimination. We also establish an isomorphism between a different class of proofs and subsequential string transducers. Because prior work has shown that linear proofs can be seen as session-typed processes, a concurrent semantics of transducer composition is obtained for free. 1998 ACM Subject Classification F.4.1 Mathematical Logic; F.1.1 Models of Computation",
"title": ""
},
{
"docid": "2858b796264102abf10fcf6507639883",
"text": "Privacy policies are a nearly ubiquitous feature of websites and online services, and the contents of such policies are legally binding for users. However, the obtuse language and sheer length of most privacy policies tend to discourage users from reading them. We describe a pilot experiment to use automatic text categorization to answer simple categorical questions about privacy policies, as a first step toward developing automated or semi-automated methods to retrieve salient features from these policies. Our results tentatively demonstrate the feasibility of this approach for answering selected questions about privacy policies, suggesting that further work toward user-oriented analysis of these policies could be fruitful.",
"title": ""
},
{
"docid": "de7adeaded669f10ff63bc36269ca384",
"text": "The posterior cruciate ligament (PCL) is recognized as an essential stabilizer of the knee. However, the complexity of the ligament has generated controversy about its definitive role and the recommended treatment after injury. A proper understanding of the functional role of the PCL is necessary to minimize residual instability, osteoarthritic progression, and failure of additional concomitant ligament graft reconstructions or meniscal repairs after treatment. Recent anatomic and biomechanical studies have elucidated the surgically relevant quantitative anatomy and confirmed the codominant role of the anterolateral and posteromedial bundles of the PCL. Although nonoperative treatment has historically been the initial treatment of choice for isolated PCL injury, possibly biased by the historically poorer objective outcomes postoperatively compared with anterior cruciate ligament reconstructions, surgical intervention has been increasingly used for isolated and combined PCL injuries. Recent studies have more clearly elucidated the biomechanical and clinical effects after PCL tears and resultant treatments. This article presents a thorough review of updates on the clinically relevant anatomy, epidemiology, biomechanical function, diagnosis, and current treatments for the PCL, with an emphasis on the emerging clinical and biomechanical evidence regarding each of the treatment choices for PCL reconstruction surgery. It is recommended that future outcomes studies use PCL stress radiographs to determine objective outcomes and that evidence level 1 and 2 studies be performed to assess outcomes between transtibial and tibial inlay reconstructions and also between single- and double-bundle PCL reconstructions.",
"title": ""
},
{
"docid": "764b20159244eac0b503d86636f5d62e",
"text": "Most modern Information Extraction (IE) systems are implemented as sequential taggers and focus on modelling local dependencies. Non-local and non-sequential context is, however, a valuable source of information to improve predictions. In this paper, we introduce GraphIE, a framework that operates over a graph representing both local and nonlocal dependencies between textual units (i.e. words or sentences). The algorithm propagates information between connected nodes through graph convolutions and exploits the richer representation to improve word-level predictions. The framework is evaluated on three different tasks, namely social media, textual and visual information extraction. Results show that GraphIE outperforms a competitive baseline (BiLSTM+CRF) in all tasks by a significant margin.",
"title": ""
},
{
"docid": "dca4c46bd96ae24a87dd8daef432a7b1",
"text": "This paper introduces a noncausal autoregressive process with Cauchy errors in application to the exchange rates of the Bitcoin electronic currency against the US Dollar. The dynamics of the daily Bitcoin/USD exchange rate series displays episodes of local trends, which can be modelled and interpreted as speculative bubbles. The bubbles may result from the speculative component in the on-line trading. The Bitcoin/USD exchange rates are modelled and predicted. JEL number: C14 · G32 · G23",
"title": ""
},
{
"docid": "d11d6df22b5c6212b27dad4e3ed96826",
"text": "We propose learning sentiment-specific word embeddings dubbed sentiment embeddings in this paper. Existing word embedding learning algorithms typically only use the contexts of words but ignore the sentiment of texts. It is problematic for sentiment analysis because the words with similar contexts but opposite sentiment polarity, such as good and bad, are mapped to neighboring word vectors. We address this issue by encoding sentiment information of texts (e.g., sentences and words) together with contexts of words in sentiment embeddings. By combining context and sentiment level evidences, the nearest neighbors in sentiment embedding space are semantically similar and it favors words with the same sentiment polarity. In order to learn sentiment embeddings effectively, we develop a number of neural networks with tailoring loss functions, and collect massive texts automatically with sentiment signals like emoticons as the training data. Sentiment embeddings can be naturally used as word features for a variety of sentiment analysis tasks without feature engineering. We apply sentiment embeddings to word-level sentiment analysis, sentence level sentiment classification, and building sentiment lexicons. Experimental results show that sentiment embeddings consistently outperform context-based embeddings on several benchmark datasets of these tasks. This work provides insights on the design of neural networks for learning task-specific word embeddings in other natural language processing tasks.",
"title": ""
},
{
"docid": "d95fb46b3857b55602af2cf271300f5a",
"text": "This paper proposes a new active interphase transformer for 24-pulse diode rectifier. The proposed scheme injects a compensation current into the secondary winding of either of the two first-stage interphase transformers. For only one of the first-stage interphase transformers being active, the inverter conducted the injecting current is with a lower kVA rating [1.26% pu (Po)] compared to conventional active interphase transformers. Moreover, the proposal scheme draws near sinusoidal input currents and the simulated and the experimental total harmonic distortion of overall line currents are only 1.88% and 2.27% respectively. When the inverter malfunctions, the input line current still can keep in the conventional 24-pulse situation. A digital-signal-processor (DSP) based digital controller is employed to calculate the desired compensation current and deals with the trigger signals needed for the inverter. Moreover, a 6kW prototype is built for test. Both simulation and experimental results demonstrate the validity of the proposed scheme.",
"title": ""
},
{
"docid": "4003b1a03be323c78e98650895967a07",
"text": "In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.",
"title": ""
},
{
"docid": "d2a9cd6bfbaff70302f2d6f455e87fcc",
"text": "A Deep-learning architecture is a representation learning method with multiple levels of abstraction. It finds out complex structure of nonlinear processing layer in large datasets for pattern recognition. From the earliest uses of deep learning, Convolution Neural Network (CNN) can be trained by simple mathematical method based gradient descent. One of the most promising improvement of CNN is the integration of intelligent heuristic algorithms for learning optimization. In this paper, we use the seven layer CNN, named ConvNet, for handwriting digit classification. The Particle Swarm Optimization algorithm (PSO) is adapted to evolve the internal parameters of processing layers.",
"title": ""
},
{
"docid": "be3640467394a0e0b5a5035749b442e9",
"text": "Data pre-processing is an important and critical step in the data mining process and it has a huge impact on the success of a data mining project.[1](3) Data pre-processing is a step of the Knowledge discovery in databases (KDD) process that reduces the complexity of the data and offers better conditions to subsequent analysis. Through this the nature of the data is better understood and the data analysis is performed more accurately and efficiently. Data pre-processing is challenging as it involves extensive manual effort and time in developing the data operation scripts. There are a number of different tools and methods used for pre-processing, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; normalization, which organizes data for more efficient access; and feature extraction, which pulls out specified data that is significant in some particular context. Pre-processing technique is also useful for association rules algo. LikeAprior, Partitioned, Princer-search algo. and many more algos.",
"title": ""
},
{
"docid": "6c02349f422020718990399b78ff23bc",
"text": "AIM\nTo compare the cyclic fatigue resistance of HyFlex CM, Twisted Files (TF), K3XF, Race, and K3, and evaluate the effect of autoclave sterilization on the cyclic fatigue resistance of these instruments both before and after the files were cycled.\n\n\nMETHODOLOGY\nFive types of NiTi instruments with similar size 30, .06 taper were selected: HyFlex CM, TF, K3XF, Race and K3. Files were tested in a simulated canal with a curvature of 60° and a radius of 3 mm. The number of cycles to failure of each instrument was determined to evaluate cyclic fatigue resistance. Each type of instruments was randomly divided into four experimental groups: group 1 (n = 20), unsterilized instruments; group 2 (n = 20), pre-sterilized instruments subjected to 10 cycles of autoclave sterilization; group 3 (n = 20), instruments tested were sterilized at 25%, 50% and 75% of the mean cycles to failure as determined in group 1, and then cycled to failure; group 4 (n = 20), instruments cycled in the same manner as group 3 but without sterilization. The fracture surfaces of instruments were examined by scanning electron microscopy (SEM).\n\n\nRESULTS\nHyFlex CM, TF and K3XF had significantly higher cyclic fatigue resistance than Race and K3 in the unsterilized group 1 (P < 0.05). Autoclave sterilization significantly increased the MCF of HyFlex CM and K3XF (P < 0.05) both before and after the files were cycled. SEM examination revealed a typical pattern of cyclic fatigue fracture in all instruments.\n\n\nCONCLUSIONS\nHyFlex CM, TF and K3XF instruments composed of new thermal-treated alloy were more resistant to fatigue failure than Race and K3. Autoclaving extended the cyclic fatigue life of HyFlex CM and K3XF.",
"title": ""
},
{
"docid": "09b35c40a65a0c2c0f58deb49555000d",
"text": "There are a wide range of forensic and analysis tools to examine digital evidence in existence today. Traditional tool design examines each source of digital evidence as a BLOB (binary large object) and it is up to the examiner to identify the relevant items from evidence. In the face of rapid technological advancements we are increasingly confronted with a diverse set of digital evidence and being able to identify a particular tool for conducting a specific analysis is an essential task. In this paper, we present a systematic study of contemporary forensic and analysis tools using a hypothesis based review to identify the different functionalities supported by these tools. We highlight the limitations of the forensic tools in regards to evidence corroboration and develop a case for building evidence correlation functionalities into these tools.",
"title": ""
},
{
"docid": "f315dca8c08645292c96aa1425d94a24",
"text": "WebRTC has quickly become popular as a video conferencing platform, partly due to the fact that many browsers support it. WebRTC utilizes the Google Congestion Control (GCC) algorithm to provide congestion control for realtime communications over UDP. The performance during a WebRTC call may be influenced by several factors, including the underlying WebRTC implementation, the device and network characteristics, and the network topology. In this paper, we perform a thorough performance evaluation of WebRTC both in emulated synthetic network conditions as well as in real wired and wireless networks. Our evaluation shows that WebRTC streams have a slightly higher priority than TCP flows when competing with cross traffic. In general, while in several of the considered scenarios WebRTC performed as expected, we observed important cases where there is room for improvement. These include the wireless domain and the newly added support for the video codecs VP9 and H.264 that does not perform as expected.",
"title": ""
},
{
"docid": "7dfb278b5ab80242c4d2a3344b876ab7",
"text": "BACKGROUND\nWith 244 million international migrants, and significantly more people moving within their country of birth, there is an urgent need to engage with migration at all levels in order to support progress towards global health and development targets. In response to this, the 2nd Global Consultation on Migration and Health- held in Colombo, Sri Lanka in February 2017 - facilitated discussions concerning the role of research in supporting evidence-informed health responses that engage with migration.\n\n\nCONCLUSIONS\nDrawing on discussions with policy makers, research scholars, civil society, and United Nations agencies held in Colombo, we emphasize the urgent need for quality research on international and domestic (in-country) migration and health to support efforts to achieve the Sustainable Development Goals (SDGs). The SDGs aim to 'leave no-one behind' irrespective of their legal status. An ethically sound human rights approach to research that involves engagement across multiple disciplines is required. Researchers need to be sensitive when designing and disseminating research findings as data on migration and health may be misused, both at an individual and population level. We emphasize the importance of creating an 'enabling environment' for migration and health research at national, regional and global levels, and call for the development of meaningful linkages - such as through research reference groups - to support evidence-informed inter-sectoral policy and priority setting processes.",
"title": ""
},
{
"docid": "e6a92df6b717a55f86425b0164e9aa3a",
"text": "The COmpound Semiconductor Materials On Silicon (COSMOS) program of the U.S. Defense Advanced Research Projects Agency (DARPA) focuses on developing transistor-scale heterogeneous integration processes to intimately combine advanced compound semiconductor (CS) devices with high-density silicon circuits. The technical approaches being explored in this program include high-density micro assembly, monolithic epitaxial growth, and epitaxial layer printing processes. In Phase I of the program, performers successfully demonstrated world-record differential amplifiers through heterogeneous integration of InP HBTs with commercially fabricated CMOS circuits. In the current Phase II, complex wideband, large dynamic range, high-speed digital-to-analog convertors (DACs) are under development based on the above heterogeneous integration approaches. These DAC designs will utilize InP HBTs in the critical high-speed, high-voltage swing circuit blocks and will employ sophisticated in situ digital correction techniques enabled by CMOS transistors. This paper will also discuss the Phase III program plan as well as future directions for heterogeneous integration technology that will benefit mixed signal circuit applications.",
"title": ""
},
{
"docid": "1797cf48a3db9a2204dc97517b9f4039",
"text": "This communication presents a wideband aperture-coupled patch antenna array based on ridge gap waveguide feed layer for 60-GHz applications. The novelty of this antenna lies in the combination of relatively new gap waveguide technology along with conventional patch antenna arrays allowing to achieve a wideband patch antenna array with high gain and high radiation efficiency. An <inline-formula> <tex-math notation=\"LaTeX\">$8 \\times 8$ </tex-math></inline-formula>-element array antenna is designed, fabricated, and tested. Experimental results show that the bandwidth of VSWR < 2 is up to 15.5% (57.5–67.2 GHz). More than 75% efficiency and higher than 21.5-dBi gain are achieved over the operational bandwidth. The results are valuable for the design and evaluation of wideband planar antenna arrays at millimeter-wave frequencies.",
"title": ""
},
{
"docid": "615891cdd2860247d7837634bc3478f8",
"text": "An exact probabilistic formulation of the “square root law” conjectured byPrice is given and a probability distribution satisfying this law is defined, for which the namePrice distribution is suggested. Properties of thePrice distribution are discussed, including its relationship with the laws ofLotka andZipf. No empirical support of applicability ofPrice distribution as a model for publication productivity could be found.",
"title": ""
},
{
"docid": "0f4750f3998766e8f2a506a2d432f3bf",
"text": "Presently sustainability of fashion in the worldwide is the major considerable issue. The much talked concern is for the favor of fashion’s sustainability around the world. Many organizations and fashion conscious personalities have come forward to uphold the further extension of the campaign of good environment for tomorrows. On the other hand, fashion for the morality or ethical issues is one of the key concepts for the humanity and sustainability point of view. Main objectives of this study to justify the sustainability concern of fashion companies and their policy. In this paper concerned brands are focused on the basis of their present activities related fashion from the manufacturing to the marketing process. Most of the cases celebrities are in the forwarded stages for the upheld of the fashion sustainability. For the conservation of the environment, sustainability of the fashion is the utmost need in the present fastest growing world. Nowadays, fashion is considered the vital issue for the ecological aspect with morality concern. The research is based on the rigorously study with the reading materials. The data have been gathered from various sources, mainly academic literature, research article, conference article, PhD thesis, under graduate & post graduate dissertation and a qualitative research method approach has been adopted for this research. For the convenience of the reader and future researchers, Analysis and Findings have done in the same time.",
"title": ""
},
{
"docid": "70e2716835f789398e6d7a50aed9df46",
"text": "Human spatial behavior and experience cannot be investigated independently from the shape and configuration of environments. Therefore, comparative studies in architectural psychology and spatial cognition would clearly benefit from operationalizations of space that provide a common denominator for capturing its behavioral and psychologically relevant properties. This paper presents theoretical and methodological issues arising from the practical application of isovist-based graphs for the analysis of architectural spaces. Based on recent studies exploring the influence of spatial form and structure on behavior and experience in virtual environments, the following topics are discussed: (1) the derivation and empirical verification of meaningful descriptor variables on the basis of classic qualitative theories of environmental psychology relating behavior and experience to spatial properties; (2) methods to select reference points for the analysis of architectural spaces at a local level; furthermore, based on two experiments exploring the phenomenal conception of the spatial structure of architectural environments, formalized strategies for (3) the selection of reference points at a global level, and for (4), their integration into a sparse yet plausible comprehensive graph structure, are proposed. Taken together, a well formalized and psychologically oriented methodology for the efficient description of spatial properties of environments at the architectural scale level is outlined. This method appears useful for a wide range of applications, ranging from abstract architectural analysis over behavioral experiments to studies on mental representations in cognitive science. doi:10.1068/b33050 }Formerly also associated to Cognitive Neuroscience, Department of Zoology, University of Tu« bingen. Currently at the Centre for Cognitive Science, University of Freiburg, Friedrichstrasse 50, 79098 Freiburg, Germany. because, in reality, various potentially relevant factors coexist. In order to obtain better predictions under such complex conditions, either a comprehensive model or at least additional knowledge on the relative weights of individual factors and their potential interactions is required. As an intermediate step towards such more comprehensive approaches, existing theories have to be formulated qualitatively and translated to a common denominator. In this paper an integrative framework for describing the shape and structure of environments is outlined that allows for a quantitative formulation and test of theories on behavioral and emotional responses to environments. It is based on the two basic elements isovist and place graph. This combination appears particularly promising, since its sparseness allows an efficient representation of both geometrical and topological properties at a wide range of scales, and at the same time it seems capable and flexible enough to retain a substantial share of psychologically and behaviorally relevant detail features. Both the isovist and the place graph are established analysis techniques within their scientific communities of space syntax and spatial cognition respectively. Previous combinations of graphs and isovists (eg Batty, 2001; Benedikt, 1979; Turner et al, 2001) were based on purely formal criteria, whereas many placegraph applications made use of their inherent flexibility but suffered from a lack of formalization (cf Franz et al, 2005a). 
The methodology outlined in this paper seeks to combine both approaches by defining well-formalized rules for flexible graphs based on empirical findings on the human conception of the spatial structure. In sections 3 and 4, methodological issues of describing local properties on the basis of isovists are discussed. This will be done on the basis of recent empirical studies that tested the behavioral relevance of a selection of isovist measurands. The main issues are (a) the derivation of meaningful isovist measurands, based on classic qualitative theories from environmental psychology, and (b) strategies to select reference points for isovist analysis in environments consisting of few subspaces. Sections 5 and 6 then discuss issues arising when using an isovist-based description system for operationalizing larger environments consisting of multiple spaces: (c) on the basis of an empirical study in which humans identified subspaces by marking their centers, psychologically plausible selection criteria for sets of reference points are proposed and formalized; (d) a strategy to derive a topological graph on the basis of the previously identified elements is outlined. Taken together, a viable methodology is proposed which describes spatial properties of environments efficiently and comprehensively in a psychologically and behaviorally plausible manner.",
"title": ""
},
{
"docid": "7e7d6eac8e70bbdd008209aeb21c5e10",
"text": "Recent research on Internet traffic classification has produced a number of approaches for distinguishing types of traffic. However, a rigorous comparison of such proposed algorithms still remains a challenge, since every proposal considers a different benchmark for its experimental evaluation. A lack of clear consensus on an objective and cientific way for comparing results has made researchers uncertain of fundamental as well as relative contributions and limitations of each proposal. In response to the growing necessity for an objective method of comparing traffic classifiers and to shed light on scientifically grounded traffic classification research, we introduce an Internet traffic classification benchmark tool, NeTraMark. Based on six design guidelines (Comparability, Reproducibility, Efficiency, Extensibility, Synergy, and Flexibility/Ease-of-use), NeTraMark is the first Internet traffic lassification benchmark where eleven different state-of-the-art traffic classifiers are integrated. NeTraMark allows researchers and practitioners to easily extend it with new classification algorithms and compare them with other built-in classifiers, in terms of three categories of performance metrics: per-whole-trace flow accuracy, per-application flow accuracy, and computational performance.",
"title": ""
}
] |
scidocsrr
|
c7b262551efebf71979614c6fdd39045
|
Context-aware recommendations from implicit data via scalable tensor factorization
|
[
{
"docid": "048ff79b90371eb86b9d62810cfea31f",
"text": "In October, 2006 Netflix released a dataset containing 100 million anonymous movie ratings and challenged the data mining, machine learning and computer science communities to develop systems that could beat the accuracy of its recommendation system, Cinematch. We briefly describe the challenge itself, review related work and efforts, and summarize visible progress to date. Other potential uses of the data are outlined, including its application to the KDD Cup 2007.",
"title": ""
},
{
"docid": "544426cfa613a31ac903041afa946d89",
"text": "Recommender systems have the effect of guiding users in a personalized way to interesting objects in a large space of possible options. Content-based recommendation systems try to recommend items similar to those a given user has liked in the past. Indeed, the basic process performed by a content-based recommender consists in matching up the attributes of a user profile in which preferences and interests are stored, with the attributes of a content object (item), in order to recommend to the user new interesting items. This chapter provides an overview of content-based recommender systems, with the aim of imposing a degree of order on the diversity of the different aspects involved in their design and implementation. The first part of the chapter presents the basic concepts and terminology of contentbased recommender systems, a high level architecture, and their main advantages and drawbacks. The second part of the chapter provides a review of the state of the art of systems adopted in several application domains, by thoroughly describing both classical and advanced techniques for representing items and user profiles. The most widely adopted techniques for learning user profiles are also presented. The last part of the chapter discusses trends and future research which might lead towards the next generation of systems, by describing the role of User Generated Content as a way for taking into account evolving vocabularies, and the challenge of feeding users with serendipitous recommendations, that is to say surprisingly interesting items that they might not have otherwise discovered. Pasquale Lops Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: [email protected] Marco de Gemmis Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: [email protected] Giovanni Semeraro Department of Computer Science, University of Bari “Aldo Moro”, Via E. Orabona, 4, Bari (Italy) e-mail: [email protected]",
"title": ""
},
{
"docid": "37a6d22411148cde4be4cb5a4dfe8bde",
"text": "When you write papers, how many times do you want to make some citations at a place but you are not sure which papers to cite? Do you wish to have a recommendation system which can recommend a small number of good candidates for every place that you want to make some citations? In this paper, we present our initiative of building a context-aware citation recommendation system. High quality citation recommendation is challenging: not only should the citations recommended be relevant to the paper under composition, but also should match the local contexts of the places citations are made. Moreover, it is far from trivial to model how the topic of the whole paper and the contexts of the citation places should affect the selection and ranking of citations. To tackle the problem, we develop a context-aware approach. The core idea is to design a novel non-parametric probabilistic model which can measure the context-based relevance between a citation context and a document. Our approach can recommend citations for a context effectively. Moreover, it can recommend a set of citations for a paper with high quality. We implement a prototype system in CiteSeerX. An extensive empirical evaluation in the CiteSeerX digital library against many baselines demonstrates the effectiveness and the scalability of our approach.",
"title": ""
},
{
"docid": "e26c73004a3f29b1abbadd515a0ca748",
"text": "The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods.\n We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime.",
"title": ""
},
{
"docid": "4984f9e1995cd69aac609374778d45c0",
"text": "We discuss the video recommendation system in use at YouTube, the world's most popular online video community. The system recommends personalized sets of videos to users based on their activity on the site. We discuss some of the unique challenges that the system faces and how we address them. In addition, we provide details on the experimentation and evaluation framework used to test and tune new algorithms. We also present some of the findings from these experiments.",
"title": ""
}
] |
[
{
"docid": "90f1e303325d2d9f56fdcc905924c7bf",
"text": "giving a statistic image for each contrast. P values for activations in the amygdala were corrected for the volume of brain analysed (specified as a sphere with radius 8 mm) 29. Anatomical localization for the group mean-condition-specific activations are reported in standard space 28. In all cases, the localization of the group mean activations was confirmed by registration with the subject's own MRIs. In an initial conditioning phase immediately before scanning, subjects viewed a sequence of greyscale images of four faces taken from a standard set of pictures of facial affect 30. Images of a single face were presented on a computer monitor screen for 75 ms at intervals of 15–25 s (mean 20 s). Each of the four faces was shown six times in a pseudorandom order. Two of the faces had angry expressions (A1 and A2), the other two being neutral (N1 and N2). One of the angry faces (CS+) was always followed by a 1-s 100-dB burst of white noise. In half of the subjects A1 was the CS+ face; in the other half, A2 was used. None of the other faces was ever paired with the noise. Before each of the 12 scanning windows, which occurred at 8-min intervals, a shortened conditioning sequence was played consisting of three repetitions of the four faces. During the 90-s scanning window, which seamlessly followed the conditioning phase, 12 pairs of faces, consisting of a target and mask, were shown at 5-s intervals. The target face was presented for 30 ms and was immediately followed by the masking face for 45 ms (Fig. 1). These stimulus parameters remained constant throughout all scans and effectively prevented any reportable awareness of the target face (which might be a neutral face or an angry face). There were four different conditions (Fig. 1), masked conditioned, non-masked conditioned, masked unconditioned, and non-masked unconditioned. Throughout the experiment, subjects performed the same explicit task, which was to detect any occurrence, however fleeting, of the angry faces. Immediately before the first conditioning sequence, subjects were shown the two angry faces and were instructed, for each stimulus presentation, to press a response button with the index finger of the right hand if one the angry faces appeared, or another button with the middle finger of the right hand if they did not see either of the angry faces. Throughout the acquisition and extinction phases, subjects' SCRs were monitored to …",
"title": ""
},
{
"docid": "c30fe4a7563090638a3bcc943c1cb328",
"text": "In order to investigate the role of facial movement in the recognition of emotions, faces were covered with black makeup and white spots. Video recordings of such faces were played back so that only the white spots were visible. The results demonstrated that moving displays of happiness, sadness, fear, surprise, anger and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions. This indicated that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions. Normally illuminated dynamic displays of these expressions, however, were recognized more accurately than displays of moving spots. The relative effectiveness of upper and lower facial areas for the recognition of these six emotions was also investigated using normally illuminated and spots-only displays. In both instances the results indicated that different facial regions are more informative for different emitions. The movement patterns characterizing the various emotional expressions as well as common confusions between emotions are also discussed.",
"title": ""
},
{
"docid": "a45b098b22e8d84b484617d276874601",
"text": "Subjectivity detection is the task of identifying objective and subjective sentences. Objective sentences are those which do not exhibit any sentiment. So, it is desired for a sentiment analysis engine to find and separate the objective sentences for further analysis, e.g., polarity detection. In subjective sentences, opinions can often be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about.",
"title": ""
},
{
"docid": "853d3d6584a32fff4f4e7c483bf0972d",
"text": "Nutritional modulation remains central to the management of metabolic syndrome. Intervention with cinnamon in individuals with metabolic syndrome remains sparsely researched. We investigated the effect of oral cinnamon consumption on body composition and metabolic parameters of Asian Indians with metabolic syndrome. In this 16-week double blind randomized control trial, 116 individuals with metabolic syndrome were randomized to two dietary intervention groups, cinnamon [6 capsules (3 g) daily] or wheat flour [6 capsules (2.5 g) daily]. Body composition, blood pressure and metabolic parameters were assessed. Significantly greater decrease [difference between means, (95% CI)] in fasting blood glucose (mmol/L) [0.3 (0.2, 0.5) p = 0.001], glycosylated haemoglobin (mmol/mol) [2.6 (0.4, 4.9) p = 0.023], waist circumference (cm) [4.8 (1.9, 7.7) p = 0.002] and body mass index (kg/m2 ) [1.3 (0.9, 1.5) p = 0.001] was observed in the cinnamon group compared to placebo group. Other parameters which showed significantly greater improvement were: waist-hip ratio, blood pressure, serum total cholesterol, low-density lipoprotein cholesterol, serum triglycerides, and high-density lipoprotein cholesterol. Prevalence of defined metabolic syndrome was significantly reduced in the intervention group (34.5%) vs. the placebo group (5.2%). A single supplement intervention with 3 g cinnamon for 16 weeks resulted in significant improvements in all components of metabolic syndrome in a sample of Asian Indians in north India. The clinical trial was retrospectively registered (after the recruitment of the participants) in ClinicalTrial.gov under the identification number: NCT02455778 on 25th May 2015.",
"title": ""
},
{
"docid": "688d6f57a4567b7d23a849e33ae584d4",
"text": "Whereas traditional theories of gender development have focused on individualistic paths, recent analyses have argued for a more social categorical approach to children's understanding of gender. Using a modeling paradigm based on K. Bussey and A. Bandura (1984), 3 experiments (N = 62, N = 32, and N = 64) examined preschoolers' (M age = 52.9 months) imitation of, and memory for, behaviors of same-sex and opposite-sex children and adults. In all experiments, children's imitation of models varied according to the emphasis given to the particular category of models, despite equal attention being paid to both categories. It is suggested that the categorical nature of gender, or age, informs children's choice of imitative behaviors.",
"title": ""
},
{
"docid": "8e3b1f49ca8a5afe20a9b66e0088a56a",
"text": "Describing the contents of images is a challenging task for machines to achieve. It requires not only accurate recognition of objects and humans, but also their attributes and relationships as well as scene information. It would be even more challenging to extend this process to identify falls and hazardous objects to aid elderly or users in need of care. This research makes initial attempts to deal with the above challenges to produce multi-sentence natural language description of image contents. It employs a local region based approach to extract regional image details and combines multiple techniques including deep learning and attribute learning through the use of machine learned features to create high level labels that can generate detailed description of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection and sentence generation. We have also further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which poses great challenges to many existing methods. Our system is evaluated on a subset from Flickr8k and Pascal VOC 2012 and achieves an impressive average BLEU score of 46 and outperforms related research by a significant margin of 10 BLEU score when evaluated with a small dataset of images containing falls and hazardous objects. It also shows impressive performance when evaluated using a subset of IAPR TC-12 dataset.",
"title": ""
},
{
"docid": "875917534e19961b06e09b6c1a872914",
"text": "It is found that the current manual water quality monitoring entails tedious process and is time consuming. To alleviate the problems caused by the manual monitoring and the lack of effective system for prawn farming, a remote water quality monitoring for prawn farming pond is proposed. The proposed system is leveraging on wireless sensors in detecting the water quality and Short Message Service (SMS) technology in delivering alert to the farmers upon detection of degradation of the water quality. Three water quality parameters that are critical to the prawn health are monitored, which are pH, temperature and dissolved oxygen. In this paper, the details of system design and implementation are presented. The results obtained in the preliminary survey study served as the basis for the development of the system prototype. Meanwhile, the results acquired through the usability testing imply that the system is able to meet the users’ needs. Key-Words: Remote monitoring, Water Quality, Wireless sensors",
"title": ""
},
{
"docid": "fff21e37244f5c097dc9e8935bb92939",
"text": "For the purpose of enhancing the search ability of the cuckoo search (CS) algorithm, an improved robust approach, called HS/CS, is put forward to address the optimization problems. In HS/CS method, the pitch adjustment operation in harmony search (HS) that can be considered as a mutation operator is added to the process of the cuckoo updating so as to speed up convergence. Several benchmarks are applied to verify the proposed method and it is demonstrated that, in most cases, HS/CS performs better than the standard CS and other comparative methods. The parameters used in HS/CS are also investigated by various simulations.",
"title": ""
},
{
"docid": "e08854e0fc17a8f80ede1fc05a07805c",
"text": "While many researches have analyzed the psychological antecedents of mobile phone addiction and mobile phone usage behavior, their relationship with psychological characteristics remains mixed. We investigated the relationship between psychological characteristics, mobile phone addiction and use of mobile phones for 269 Taiwanese female university students who were administered Rosenberg’s selfesteem scale, Lai’s personality inventory, and a mobile phone usage questionnaire and mobile phone addiction scale. The result showing that: (1) social extraversion and anxiety have positive effects on mobile phone addiction, and self-esteem has negative effects on mobile phone addiction. (2) Mobile phone addiction has a positive predictive effect on mobile phone usage behavior. The results of this study identify personal psychological characteristics of Taiwanese female university students which can significantly predict mobile phone addiction; female university students with mobile phone addiction will make more phone calls and send more text messages. These results are discussed and suggestions for future research for school and university students are provided. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "67e16f36bb6d83c5d6eae959a7223b77",
"text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to say that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.",
"title": ""
},
{
"docid": "7eff56a5f17cef0b15b4ddc737ceeeed",
"text": "Many analysis tasks involve linked nodes, such as people connected by friendship links. Research on link-based classification (LBC) has studied how to leverage these connections to improve classification accuracy. Most such prior research has assumed the provision of a densely labeled training network. Instead, this article studies the common and challenging case when LBC must use a single sparsely labeled network for both learning and inference, a case where existing methods often yield poor accuracy. To address this challenge, we introduce a novel method that enables prediction via “neighbor attributes,” which were briefly considered by early LBC work but then abandoned due to perceived problems. We then explain, using both extensive experiments and loss decomposition analysis, how using neighbor attributes often significantly improves accuracy. We further show that using appropriate semi-supervised learning (SSL) is essential to obtaining the best accuracy in this domain and that the gains of neighbor attributes remain across a range of SSL choices and data conditions. Finally, given the challenges of label sparsity for LBC and the impact of neighbor attributes, we show that multiple previous studies must be re-considered, including studies regarding the best model features, the impact of noisy attributes, and strategies for active learning.",
"title": ""
},
{
"docid": "b4a9d5d0e76805500ea8156743e1989e",
"text": "The Systems that rely on Face Recognition (FR) biometric have acquired great importance ever since terrorist threats injected weakness among the implemented security systems. Rest biometrics as iris or Fingerprints recognition is not trustworthy in such situations whereas FR is considered as a better compromise. In Image processing, Occlusion refers to facade of the face image which can be due to hair, moustache, sunglasses, or wrapping of facial image by scarf or other accessories. Efforts on FR appears in controlled environment have been in the picture for past several years; however identification under uncontrolled condition like partial occlusion is typically quite a matter of concern. Based on review of literature and its analysis so far, a classification made in this paper to solve the challenges in recognition of face in the presence of partial occlusion. The methods used are INPAINTING based methods that make use of Exemplar-based Inpainting, Feature-Extraction, and Fast Weighted-Principal component analysis (FWPCA),etc. The presented approach in this paper describes the removal of Occlusion from images or restore the occluded part of image using Exemplar-based Image Inpainting technique, feature extraction and FW-PCA(Restoration) combinations.",
"title": ""
},
{
"docid": "72f3800a072c2844f6ec145788c0749e",
"text": "In Augmented Reality (AR), interfaces consist of a blend of both real and virtual content. In this paper we examine existing gaming styles played in the real world or on computers. We discuss the strengths and weaknesses of these mediums within an informal model of gaming experience split into four aspects; physical, mental, social and emotional. We find that their strengths are mostly complementary, and argue that games built in AR can blend them to enhance existing game styles and open up new ones. To illustrate these ideas, we present our work on AR Worms, a re-implementation of the classic computer game Worms using Augmented Reality. We discuss how AR has enabled us to start exploring interfaces for gaming, and present informal observations of players at several demonstrations. Finally, we present some ideas for AR games in the area of strategy and role playing games.",
"title": ""
},
{
"docid": "54a1257346f9a1ead514bb8077b0e7ca",
"text": "Recent years has witnessed growing interest in hyperspectral image (HSI) processing. In practice, however, HSIs always suffer from huge data size and mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most of the conventional image encoding algorithms mainly focus on the spatial dimensions, and they need not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Then, the similar tensor patches are grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed to a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be simply obtained by the product of the coefficient tensor and dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. The extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms the traditional image compression approaches and other tensor-based methods.",
"title": ""
},
{
"docid": "bff34a024324774d28ccaa23722e239e",
"text": "We review the Philippine frogs of the genus Leptobrachuim. All previous treatments have referred Philippine populations to L. hasseltii, a species we restrict to Java and Bali, Indonesia. We use external morphology, body proportions, color pattern, advertisement calls, and phylogenetic analysis of molecular sequence data to show that Philippine populations of Leptobrachium represent three distinct and formerly unrecognized evolutionary lineages, and we describe each (populations on Mindoro, Palawan, and Mindanao Island groups) as new species. Our findings accentuate the degree to which the biodiversity of Philippine amphibians is currently underestimated and in need of comprehensive review with new and varied types of data. LAGOM: Pinagbalik aralan namin ang mga palaka sa Pilipinas mula sa genus Leptobrachium. Ang nakaraang mga palathala ay tumutukoy sa populasyon ng L. hasseltii, ang uri ng palaka na aming tinakda lamang sa Java at Bali, Indonesia. Ginamit namin ang panglabas na morpolohiya, proporsiyon ng pangangatawan, kulay disenyo, pantawag pansin, at phylogenetic na pagsusuri ng molekular na pagkakasunod-sunod ng datos upang maipakita na ang populasyon sa Pilipinas ng Leptobrachium ay kumakatawan sa tatlong natatangi at dating hindi pa nakilalang ebolusyonaryong lipi. Inilalarawan din naming ang bawat isa (populasyon sa Mindoro, Palawan, at mga grupo ng isla sa Mindanao) na bagong uri ng palaka. Ang aming natuklasan ay nagpapatingkad sa antas kung saan ang biodibersidad ng amphibians sa Pilipinas sa kasalukuyan ay may mababang pagtatantya at nangangailangan ng malawakang pagbabalik-aral ng mga bago at iba’t ibang uri ng",
"title": ""
},
{
"docid": "b54215466bcdf86442f9a6e87e831069",
"text": "In this paper, we consider the problem of tracking human motion with a 22-DOF kinematic model from depth images. In contrast to existing approaches, our system naturally scales to multiple sensors. The motivation behind our approach, termed Multiple Depth Camera Approach (MDCA), is that by using several cameras, we can significantly improve the tracking quality and reduce ambiguities as for example caused by occlusions. By fusing the depth images of all available cameras into one joint point cloud, we can seamlessly incorporate the available information from multiple sensors into the pose estimation. To track the high-dimensional human pose, we employ state-of-the-art annealed particle filtering and partition sampling. We compute the particle likelihood based on the truncated signed distance of each observed point to a parameterized human shape model. We apply a coarse-to-fine scheme to recognize a wide range of poses to initialize the tracker. In our experiments, we demonstrate that our approach can accurately track human motion in real-time (15Hz) on a GPGPU. In direct comparison to two existing trackers (OpenNI, Microsoft Kinect SDK), we found that our approach is significantly more robust for unconstrained motions and under (partial) occlusions.",
"title": ""
},
{
"docid": "86ac69a113d41fe7e0914c2ab2c9c700",
"text": "A 6.5kV 25A dual IGBT module is customized and packaged specially for high voltage low current application like solid state transformer and its characteristics and losses have been tested under the low current operation and compared with 10kV SiC MOSFET. Based on the test results, the switching losses under different frequencies in a 20kVA Solid-State Transformer (SST) has been calculated for both devices. The result shows 10kV SiC MOSFET has 7–10 times higher switching frequency capability than 6.5kV Si IGBT in the SST application.",
"title": ""
},
{
"docid": "78229ed553e824250f5514b81a3d5ba1",
"text": "In the context of simulating the frictional contact dynamic s of large systems of rigid bodies, this paper reviews a novel method for solving large cone complementarity proble ms by means of a fixed-point iteration algorithm. The method is an extension of the Gauss-Seidel and Gauss-Jacobi methods with overrelaxation for symmetric convex linear complementarity problems. Convergent under fairly standa rd assumptions, the method is implemented in a parallel framework by using a single instruction multiple data compu tation paradigm promoted by the Compute Unified Device Architecture library for graphical processing unit progra mming. The framework supports the simulation of problems with more than 1 million bodies in contact. Simulation thus b ecomes a viable tool for investigating the dynamics of complex systems such as ground vehicles running on sand, pow der composites, and granular material flow.",
"title": ""
},
{
"docid": "d81ca3c2965ac963a51d35193aa1255b",
"text": "In this paper we compare di!erent nonlinear control design methods by applying them to the planar model of a ducted fan engine. The methods used range from Jacobian linearization of the nonlinear plant and designing an LQR controller, to using model predictive control and linear parameter varying methods. The controller design can be divided into two steps. The \"rst step requires the derivation of a control Lyapunov function (CLF), while the second involves using an existing CLF to generate a controller. The main premise of this paper is that by combining the best of these two phases, it is possible to \"nd controllers that achieve superior performance when compared to those that apply each phase independently. All of the results are compared to the optimal solution which is approximated by solving a trajectory optimization problem with a su$ciently large time horizon. 2001 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "3630c575bf7b5250930c7c54d8a1c6d0",
"text": "The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past 2 years focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations including integration with external biological resources, such as gene and drug databases, new visualization tools and improved support for the mobile web. We also describe access to data files, web services and open access software components to enable software developers to more effectively mine the PDB archive and related annotations. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.",
"title": ""
}
] |
scidocsrr
|
1d99bcdc5a006f1a5d3b391ec64a9b0e
|
Monolingual Machine Translation for Paraphrase Generation
|
[
{
"docid": "4361b4d2d77d22f46b9cd5920a4822c8",
"text": "While paraphrasing is critical both for interpretation and generation of natural language, current systems use manual or semi-automatic methods to collect paraphrases. We present an unsupervised learning algorithm for identification of paraphrases from a corpus of multiple English translations of the same source text. Our approach yields phrasal and single word lexical paraphrases as well as syntactic paraphrases.",
"title": ""
},
{
"docid": "fc164dc2d55cec2867a99436d37962a1",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
}
] |
[
{
"docid": "e0b96837b0908aa859fa56a2b0a5701c",
"text": "Being able to automatically describe the content of an image using properly formed English sentences is a challenging task, but it could have great impact by helping visually impaired people better understand their surroundings. Most modern mobile phones are able to capture photographs, making it possible for the visually impaired to make images of their environments. These images can then be used to generate captions that can be read out loud to the visually impaired, so that they can get a better sense of what is happening around them. In this paper, we present a deep recurrent architecture that automatically generates brief explanations of images. Our models use a convolutional neural network (CNN) to extract features from an image. These features are then fed into a vanilla recurrent neural network (RNN) or a Long Short-Term Memory (LSTM) network to generate a description of the image in valid English. Our models achieve comparable to state of the art performance, and generate highly descriptive captions that can potentially greatly improve the lives of visually impaired people.",
"title": ""
},
{
"docid": "2518564949f7488a7f01dff74e3b6e2d",
"text": "Although it is commonly believed that women are kinder and more cooperative than men, there is conflicting evidence for this assertion. Current theories of sex differences in social behavior suggest that it may be useful to examine in what situations men and women are likely to differ in cooperation. Here, we derive predictions from both sociocultural and evolutionary perspectives on context-specific sex differences in cooperation, and we conduct a unique meta-analytic study of 272 effect sizes-sampled across 50 years of research-on social dilemmas to examine several potential moderators. The overall average effect size is not statistically different from zero (d = -0.05), suggesting that men and women do not differ in their overall amounts of cooperation. However, the association between sex and cooperation is moderated by several key features of the social context: Male-male interactions are more cooperative than female-female interactions (d = 0.16), yet women cooperate more than men in mixed-sex interactions (d = -0.22). In repeated interactions, men are more cooperative than women. Women were more cooperative than men in larger groups and in more recent studies, but these differences disappeared after statistically controlling for several study characteristics. We discuss these results in the context of both sociocultural and evolutionary theories of sex differences, stress the need for an integrated biosocial approach, and outline directions for future research.",
"title": ""
},
{
"docid": "c16f21fd2b50f7227ea852882004ef5b",
"text": "We study a stock dealer’s strategy for submitting bid and ask quotes in a limit order book. The agent faces an inventory risk due to the diffusive nature of the stock’s mid-price and a transactions risk due to a Poisson arrival of market buy and sell orders. After setting up the agent’s problem in a maximal expected utility framework, we derive the solution in a two step procedure. First, the dealer computes a personal indifference valuation for the stock, given his current inventory. Second, he calibrates his bid and ask quotes to the market’s limit order book. We compare this ”inventory-based” strategy to a ”naive” best bid/best ask strategy by simulating stock price paths and displaying the P&L profiles of both strategies. We find that our strategy has a P&L profile that has both a higher return and lower variance than the benchmark strategy.",
"title": ""
},
{
"docid": "a7204a8f1e525731a77c34386553b141",
"text": "This paper suggests a definition of the term Cloud Federation, a concept of service aggregation characterized by interoperability features, which addresses the economic problems of vendor lock-in and provider integration. Furthermore, it approaches challenges like performance and disaster-recovery through methods such as co-location and geographic distribution. The concept of Cloud Federation enables further reduction of costs due to partial outsourcing to more cost-efficient regions, may satisfy security requirements through techniques like fragmentation and provides new prospects in terms of legal aspects. Based on this concept, we discuss a reference architecture that enables new service models by horizontal and vertical integration. The definition along with the reference architecture serves as a common vocabulary for discussions and suggests a template for creating value-added software solutions.",
"title": ""
},
{
"docid": "c7ab6bc685029cc61a02f4596fef8818",
"text": "UPON Lite focuses on users, typically domain experts without ontology expertise, minimizing the role of ontology engineers.",
"title": ""
},
{
"docid": "815fe60934f0313c56e631d73b998c95",
"text": "The scientific credibility of findings from clinical trials can be undermined by a range of problems including missing data, endpoint switching, data dredging, and selective publication. Together, these issues have contributed to systematically distorted perceptions regarding the benefits and risks of treatments. While these issues have been well documented and widely discussed within the profession, legislative intervention has seen limited success. Recently, a method was described for using a blockchain to prove the existence of documents describing pre-specified endpoints in clinical trials. Here, we extend the idea by using smart contracts - code, and data, that resides at a specific address in a blockchain, and whose execution is cryptographically validated by the network - to demonstrate how trust in clinical trials can be enforced and data manipulation eliminated. We show that blockchain smart contracts provide a novel technological solution to the data manipulation problem, by acting as trusted administrators and providing an immutable record of trial history.",
"title": ""
},
{
"docid": "8f228c6b2d25d6922acc8e9843011b2c",
"text": "BACKGROUND\nPrevious studies have reported that the quality of cardiopulmonary resuscitation (CPR) is important for patient survival. Real time objective feedback during manikin training has been shown to improve CPR performance. Objective measurement could facilitate competition and help motivate participants to improve their CPR performance. The aims of this study were to investigate whether real time objective feedback on manikins helps improve CPR performance and whether competition between separate European Emergency Medical Services (EMS) and between participants at each EMS helps motivation to train.\n\n\nMETHODS\nTen European EMS took part in the study and was carried out in two stages. At Stage 1, each EMS provided 20 pre-hospital professionals. A questionnaire was completed and standardised assessment scenarios were performed for adult and infant out of hospital cardiac arrest (OHCA). CPR performance was objectively measured and recorded but no feedback given. Between Stage 1 and 2, each EMS was given access to manikins for 6 months and instructed on how to use with objective real-time CPR feedback available. Stage 2 was undertaken and was a repeat of Stage 1 with a questionnaire with additional questions relating to usefulness of feedback and the competition nature of the study (using a 10 point Likert score). The EMS that improved the most from Stage 1 to Stage 2 was declared the winner. An independent samples Student t-test was used to analyse the objective CPR metrics with the significance level taken as p < 0.05.\n\n\nRESULTS\nOverall mean Improvement of CPR performance from Stage 1 to Stage 2 was significant. The improvement was greater for the infant assessment. The participants thought the real-time feedback very useful (mean score of 8.5) and very easy to use (mean score of 8.2). Competition between EMS organisations recorded a mean score of 5.8 and competition between participants recorded a mean score of 6.0.\n\n\nCONCLUSIONS\nThe results suggest that the use of real time objective feedback can significantly help improve CPR performance. Competition, especially between participants, appeared to encourage staff to practice and this study suggests that competition might have a useful role to help motivate staff to perform CPR training.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman [email protected] 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "6b7c0ce61dba453ac26684b9214a752f",
"text": "After a century of controversy, the notion that the immune system regulates cancer development is experiencing a new resurgence. An overwhelming amount of data from animal models--together with compelling data from human patients--indicate that a functional cancer immunosurveillance process indeed exists that acts as an extrinsic tumor suppressor. However, it has also become clear that the immune system can facilitate tumor progression, at least in part, by sculpting the immunogenic phenotype of tumors as they develop. The recognition that immunity plays a dual role in the complex interactions between tumors and the host prompted a refinement of the cancer immunosurveillance hypothesis into one termed \"cancer immunoediting.\" In this review, we summarize the history of the cancer immunosurveillance controversy and discuss its resolution and evolution into the three Es of cancer immunoediting--elimination, equilibrium, and escape.",
"title": ""
},
{
"docid": "f267c096ffe69c40b5bd987450cdde84",
"text": "Recent breakthroughs in cryptanalysis of standard hash functions like SHA-1 and MD5 raise the need for alternatives. The MD6 hash function is developed by a team led by Professor Ronald L. Rivest in response to the call for proposals for a SHA-3 cryptographic hash algorithm by the National Institute of Standards and Technology. The hardware performance evaluation of hash chip design mainly includes efficiency and flexibility. In this paper, a RAM-based reconfigurable FPGA implantation of the MD6-224/256/384 /512 hash function is presented. The design achieves a throughput ranges from 118 to 227 Mbps at the maximum frequency of 104MHz on low-cost Cyclone III device. The implementation of MD6 core functionality uses mainly embedded Block RAMs and small resources of logic elements in Altera FPGA, which satisfies the needs of most embedded applications, including wireless communication. The implementation results also show that the MD6 hash function has good reconfigurability.",
"title": ""
},
{
"docid": "47baaddefd3476ce55d39a0f111ade5a",
"text": "We propose a novel method for classifying resume data of job applicants into 27 different job categories using convolutional neural networks. Since resume data is costly and hard to obtain due to its sensitive nature, we use domain adaptation. In particular, we train a classifier on a large number of freely available job description snippets and then use it to classify resume data. We empirically verify a reasonable classification performance of our approach despite having only a small amount of labeled resume data available.",
"title": ""
},
{
"docid": "26cecceea22566025c22e66376dbb138",
"text": "The development of technologies related to the Internet of Things (IoT) provides a new perspective on applications pertaining to smart cities. Smart city applications focus on resolving issues facing people in everyday life, and have attracted a considerable amount of research interest. The typical issue encountered in such places of daily use, such as stations, shopping malls, and stadiums is crowd dynamics management. Therefore, we focus on crowd dynamics management to resolve the problem of congestion using IoT technologies. Real-time crowd dynamics management can be achieved by gathering information relating to congestion and propose less crowded places. Although many crowd dynamics management applications have been proposed in various scenarios and many models have been devised to this end, a general model for evaluating the control effectiveness of crowd dynamics management has not yet been developed in IoT research. Therefore, in this paper, we propose a model to evaluate the performance of crowd dynamics management applications. In other words, the objective of this paper is to present the proof-of-concept of control effectiveness of crowd dynamics management. Our model uses feedback control theory, and enables an integrated evaluation of the control effectiveness of crowd dynamics management methods under various scenarios. We also provide extensive numerical results to verify the effectiveness of the model.",
"title": ""
},
{
"docid": "2855a1f420ed782317c1598c9d9c185e",
"text": "Ranking authors is vital for identifying a researcher’s impact and his standing within a scientific field. There are many different ranking methods (e.g., citations, publications, h-index, PageRank, and weighted PageRank), but most of them are topic-independent. This paper proposes topic-dependent ranks based on the combination of a topic model and a weighted PageRank algorithm. The Author-Conference-Topic (ACT) model was used to extract topic distribution of individual authors. Two ways for combining the ACT model with the PageRank algorithm are proposed: simple combination (I_PR) or using a topic distribution as a weighted vector for PageRank (PR_t). Information retrieval was chosen as the test field and representative authors for different topics at different time phases were identified. Principal Component Analysis (PCA) was applied to analyze the ranking difference between I_PR and PR_t.",
"title": ""
},
{
"docid": "668b47c7f7aade33506aeffeba3711db",
"text": "B ioinformatics and data mining provide exciting and challenging research and application areas for computational science. Bioinformatics is the science of managing, mining, and interpreting information from biological sequences and structures. Advances such as genome-sequencing initiatives, microarrays, proteomics, and functional and structural genomics have pushed the frontiers of human knowledge. In addition , data mining and machine learning have been advancing in strides in recent years, with high-impact applications from marketing to science. Although researchers have spent much effort on data mining for bioinformatics (see the sidebar), the two areas have largely been developing separately.",
"title": ""
},
{
"docid": "a3e383cb19c97af5a4e501c7b13d9088",
"text": "Rapid diagnosis and treatment of acute neurological illnesses such as stroke, hemorrhage, and hydrocephalus are critical to achieving positive outcomes and preserving neurologic function—‘time is brain’1–5. Although these disorders are often recognizable by their symptoms, the critical means of their diagnosis is rapid imaging6–10. Computer-aided surveillance of acute neurologic events in cranial imaging has the potential to triage radiology workflow, thus decreasing time to treatment and improving outcomes. Substantial clinical work has focused on computer-assisted diagnosis (CAD), whereas technical work in volumetric image analysis has focused primarily on segmentation. 3D convolutional neural networks (3D-CNNs) have primarily been used for supervised classification on 3D modeling and light detection and ranging (LiDAR) data11–15. Here, we demonstrate a 3D-CNN architecture that performs weakly supervised classification to screen head CT images for acute neurologic events. Features were automatically learned from a clinical radiology dataset comprising 37,236 head CTs and were annotated with a semisupervised natural-language processing (NLP) framework16. We demonstrate the effectiveness of our approach to triage radiology workflow and accelerate the time to diagnosis from minutes to seconds through a randomized, double-blinded, prospective trial in a simulated clinical environment. A deep-learning algorithm is developed to provide rapid and accurate diagnosis of clinical 3D head CT-scan images to triage and prioritize urgent neurological events, thus potentially accelerating time to diagnosis and care in clinical settings.",
"title": ""
},
{
"docid": "48903eded4e1a88114e3917e2e6173b6",
"text": "The problem of generating maps with mobile robots has received considerable attention over the past years. Most of the techniques developed so far have been designed for situations in which the environment is static during the mapping process. Dynamic objects, however, can lead to serious errors in the resulting maps such as spurious objects or misalignments due to localization errors. In this paper we consider the problem of creating maps with mobile robots in dynamic environments. We present a new approach that interleaves mapping and localization with a probabilistic technique to identify spurious measurements. In several experiments we demonstrate that our algorithm generates accurate 2d and 3d in different kinds of dynamic indoor and outdoor environments. We also use our algorithm to isolate the dynamic objects and to generate three-dimensional representation of them.",
"title": ""
},
{
"docid": "bf7e9cb3e7eb376582ae6b279ab27a7b",
"text": "Although control mechanisms have been widely studied in IS research within and between organizations, there is still a lack of research on control mechanisms applied in software-based platforms, on which the complex interactions between a platform owner and a myriad of third-party developers have to be coordinated. Drawing on IS control literature and self-determination theory, our study presents the findings of a laboratory experiment with 138 participants in which we examined how formal (i.e., output and process) and informal (i.e., self) control mechanisms affect third-party developers’ intentions to stick with a mobile app development platform. We demonstrate that selfcontrol plays a significantly more important role than formal control modes in increasing platform stickiness. We also find that the relationship between control modes and platform stickiness is fully mediated by developers’ perceived autonomy. Taken together, our study highlights the theoretically important finding that self-determination and self-regulation among third-party developers are stronger driving forces for staying with a platform than typical hierarchical control mechanisms. Implications for research and practice are discussed.",
"title": ""
},
{
"docid": "bbd5a204986f546b00dbcba8fbca75be",
"text": "We present a novel keyword spotting (KWS) system that uses contextual automatic speech recognition (ASR). For voice-activated devices, it is common that a KWS system is run on the device in order to quickly detect a trigger phrase (e.g. “Ok Google”). After the trigger phrase is detected, the audio corresponding to the voice command that follows is streamed to the server. The audio is transcribed by the server-side ASR system and semantically processed to generate a response which is sent back to the device. Due to limited resources on the device, the device KWS system might introduce false accepts (FA) and false rejects (FR) that can cause an unsatisfactory user experience. We describe a system that uses server-side contextual ASR and trigger phrase non-terminals to improve overall KWS accuracy. We show that this approach can significantly reduce the FA rate (by 89%) while minimally increasing the FR rate (by 0.2%). Furthermore, we show that this system significantly improves the ASR quality, reducing Word Error Rate (WER) (by 10% to 50% relative), and allows the user to speak seamlessly, without pausing between the trigger phrase and the voice command.",
"title": ""
},
{
"docid": "c85e22b314f14a453524dfe390d8f9dc",
"text": "Wide spread monitoring cameras on construction sites provide large amount of information for construction management. The emerging of computer vision and machine learning technologies enables automatic recognition of construction activities from videos. As the executors of construction, the activities of construction workers have strong impact on productivity and progress. Compared to machine work, manual work is more subjective and may differ largely in operation flow and productivity from one worker to another. Hence only a handful of work study on vision based activity recognition of construction workers. Lacking of publicly available datasets is one of the main reasons that currently hinder advancement. The paper studies manual work of construction workers comprehensively, selects 11 common types of activities and establishes a new real world video dataset with 1176 instances. For activity recognition, a cutting-edge video description method, dense trajectories, has been applied. Support vector machines are integrated with a bag-of-features pipeline for activity learning and classification. Performance on multiple types of descriptors (Histograms of Oriented Gradients HOG, Histograms of Optical Flow HOF, Motion Boundary Histogram MBH) and their combination has been evaluated. Experimental results show that the proposed system has achieved a state-of-art performance on the new dataset.",
"title": ""
}
] |
scidocsrr
|
1bab41ed6b79fbadfd5d0b52f055fc57
|
Enabling secure and resource-efficient blockchain networks with VOLT
|
[
{
"docid": "1315247aa0384097f5f9e486bce09bd4",
"text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.",
"title": ""
},
{
"docid": "c19863ef5fa4979f288763837e887a7c",
"text": "Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced it a top priority to alternative paradigms. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community have rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensible ingredient of a distributed consensus protocol, particularly one that is to underly a high-value cryptocurrency system. In this work, we seek to address the following basic questions: • What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application? • Can we design a provably secure protocol that satisfies these requirements? To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.",
"title": ""
}
] |
[
{
"docid": "f82ca8db3c8183839e4a91f1fd6b45a9",
"text": "Recently, we developed a series of cytotoxic peptide conjugates containing 14-O-glutaryl esters of doxorubicin (DOX) or 2-pyrrolino-DOX (AN-201). Serum carboxylesterase enzymes (CE) can partially hydrolyze these conjugates in the circulation, releasing the cytotoxic radical, before the targeting is complete. CE activity in serum of nude mice is about 10 times higher than in human serum. Thus, we found that the t(1/2) of AN-152, an analog of luteinizing hormone-releasing hormone (LH-RH) containing DOX, at 0.3 mg/ml is 19. 49 +/- 0.74 min in mouse serum and 126.06 +/- 3.03 min in human serum in vitro. The addition of a CE inhibitor, diisopropyl fluorophosphate (DFP), to mouse serum in vitro significantly (P < 0. 01) prolongs the t(1/2) of AN-152 to 69.63 +/- 4.44 min. When DFP is used in vivo, 400 nmol/kg cytotoxic somatostatin analog AN-238 containing AN-201 is well tolerated by mice, whereas all animals die after the same dose without DFP. In contrast, DFP has no effect on the tolerance of AN-201. A better tolerance to AN-238 after DFP treatment is due to the selective uptake of AN-238 by somatostatin receptor-positive tissues. Our results demonstrate that the suppression of the CE activity in nude mice greatly decreases the toxicity of cytotoxic hybrids containing 2-pyrrolino-DOX 14-O-hemiglutarate and brings this animal model closer to the conditions that exist in humans. The use of DFP together with these peptide conjugates in nude mice permits a better understanding of their mechanism of action and improves the clinical predictability of the oncological and toxicological results.",
"title": ""
},
{
"docid": "94dadbee2ca05ab17298dae45e8aebdc",
"text": "Cloud storage enables users to remotely store their data and enjoy the on-demand high quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service is also relinquishing users' physical possession of their outsourced data, which inevitably poses new security risks toward the correctness of the data in cloud. In order to address this new problem and further achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of misbehaving server. Considering the cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.",
"title": ""
},
{
"docid": "647b8a609738551aafe0e346fd563562",
"text": "We study the profit-maximization problem of a monopolistic market-maker. The sequential decision problem is hard because the state space is a function. We demonstrate that the belief state is well approximated by a Gaussian distribution. We prove a key monotonicity property of the Gaussian state update which makes the problem tractable. The algorithm leads to a surprising insight: an optimal monopolist can provide more liquidity than perfectly competitive market-makers, because a monopolist is willing to absorb initial losses in order to learn a new valuation rapidly so she can extract higher profits later.",
"title": ""
},
{
"docid": "223b74ccdafcd3fafa372cd6a4fbb6cb",
"text": "Android OS experiences a blazing popularity since the last few years. This predominant platform has established itself not only in the mobile world but also in the Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequences classification using deep learning techniques. Starting from the raw sequence of the app's API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1 K to 33 K malware apps, and 38 K benign apps. The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%e99% and a false positive rate of 0.06% e2%, under all tested datasets and settings. © 2018 The Author(s). Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "9a283f62dad38887bc6779c3ea61979d",
"text": "Recent evidence supports that alterations in hepatocyte-derived exosomes (HDE) may play a role in the pathogenesis of drug-induced liver injury (DILI). HDE-based biomarkers also hold promise to improve the sensitivity of existing in vitro assays for predicting DILI liability. Primary human hepatocytes (PHH) provide a physiologically relevant in vitro model to explore the mechanistic and biomarker potential of HDE in DILI. However, optimal methods to study exosomes in this culture system have not been defined. Here we use HepG2 and HepaRG cells along with PHH to optimize methods for in vitro HDE research. We compared the quantity and purity of HDE enriched from HepG2 cell culture medium by 3 widely used methods: ultracentrifugation (UC), OptiPrep density gradient ultracentrifugation (ODG), and ExoQuick (EQ)-a commercially available exosome precipitation reagent. Although EQ resulted in the highest number of particles, UC resulted in more exosomes as indicated by the relative abundance of exosomal CD63 to cellular prohibitin-1 as well as the comparative absence of contaminating extravesicular material. To determine culture conditions that best supported exosome release, we also assessed the effect of Matrigel matrix overlay at concentrations ranging from 0 to 0.25 mg/ml in HepaRG cells and compared exosome release from fresh and cryopreserved PHH from same donor. Sandwich culture did not impair exosome release, and freshly prepared PHH yielded a higher number of HDE overall. Taken together, our data support the use of UC-based enrichment from fresh preparations of sandwich-cultured PHH for future studies of HDE in DILI.",
"title": ""
},
{
"docid": "5a4f5a9323737bbdb7e8a0fd8bc57317",
"text": "A repetitive high voltage pulse adder based on solid state switches with isolated recharge is described. All capacitors in the modulator are recharged separately through a corresponding magnetic core transformer by a common high frequency power supply. This circuit configuration simplifies the insulation design and arrangement. This scheme is suitable for high frequency pulse discharge due to its high charge efficiency. At the same time it retains the inherent switch protection capability similar to a solid-state Marx modulator. In our present work, a 22-stage test modulator has been built and successfully operated at output voltage of 20 kV with repetitive rate of 20 kHz, rise time of 150 ns and pulse width of 3 ¿s. The primary experiments verified the system feasibility.",
"title": ""
},
{
"docid": "ada320bb2747d539ff6322bbd46bd9f0",
"text": "Real job applicants completed a 5-factor model personality measure as part of the job application process. They were rejected; 6 months later they (n = 5,266) reapplied for the same job and completed the same personality measure. Results indicated that 5.2% or fewer improved their scores on any scale on the 2nd occasion; moreover, scale scores were as likely to change in the negative direction as the positive. Only 3 applicants changed scores on all 5 scales beyond a 95% confidence threshold. Construct validity of the personality scales remained intact across the 2 administrations, and the same structural model provided an acceptable fit to the scale score matrix on both occasions. For the small number of applicants whose scores changed beyond the standard error of measurement, the authors found the changes were systematic and predictable using measures of social skill, social desirability, and integrity. Results suggest that faking on personality measures is not a significant problem in real-world selection settings.",
"title": ""
},
{
"docid": "c4d204b8ceda86e9d8e4ca56214f0ba3",
"text": "This article may be used for research, teaching and private study purposes. Any substantial or systematic reproduction, redistribution , reselling , loan or sub-licensing, systematic supply or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "878617f145544f66e79f7d2d3404cbdf",
"text": "In this paper we address the problem of classifying cited work into important and non-important to the developments presented in a research publication. This task is vital for the algorithmic techniques that detect and follow emerging research topics and to qualitatively measure the impact of publications in increasingly growing scholarly big data. We consider cited work as important to a publication if that work is used or extended in some way. If a reference is cited as background work or for the purpose of comparing results, the cited work is considered to be non-important. By employing five classification techniques (Support Vector Machine, Naïve Bayes, Decision Tree, K-Nearest Neighbors and Random Forest) on an annotated dataset of 465 citations, we explore the effectiveness of eight previously published features and six novel features (including context based, cue words based and textual based). Within this set, our new features are among the best performing. Using the Random Forest classifier we achieve an overall classification accuracy of 0.91 AUC.",
"title": ""
},
{
"docid": "483c95f5f42388409dceb8cdb3792d19",
"text": "The world of e-commerce is reshaping marketing strategies based on the analysis of e-commerce data. Huge amounts of data are being collecting and can be analyzed for some discoveries that may be used as guidance for people sharing same interests but lacking experience. Indeed, recommendation systems are becoming an essential business strategy tool from just a novelty. Many large e-commerce web sites are already encapsulating recommendation systems to provide a customer friendly environment by helping customers in their decision-making process. A recommendation system learns from a customer behavior patterns and recommend the most valuable from available alternative choices. In this paper, we developed a two-stage algorithm using self-organizing map (SOM) and fuzzy k-means with an improved distance function to classify users into clusters. This will lead to have in the same cluster users who mostly share common interests. Results from the combination of SOM and fuzzy K-means revealed better accuracy in identifying user related classes or clusters. We validated our results using various datasets to check the accuracy of the employed clustering approach. The generated groups of users form the domain for transactional datasets to find most valuable products for customers.",
"title": ""
},
{
"docid": "1cacd8fca21d8e00d5296ef0d6e0fc2d",
"text": "There appears to be a lack of new ideas in driver behavior modeling. Although behavioral research is under some pressure, it seems too facile to attribute this deplorable state of affairs only to a lack of research funds. In my opinion the causal chain may well run in the opposite direction. An analysis of what is wrong has led me to the conclusion that human factors research in the area of driver behavior has hardly been touched by the “cognitive revolution” that ‘swept psychology in the past fifteen years. A more cognitive approach might seem advisable and the “promise of progress” of such an approach should be assessed.",
"title": ""
},
{
"docid": "f631ceda1a738c12ea71846650a11372",
"text": "An object recognition engine needs to extract discriminative features from data representing an object and accurately classify the object to be of practical use in robotics. Furthermore, the classification of the object must be rapidly performed in the presence of a voluminous stream of data. These conditions call for a distributed and scalable architecture that can utilize a cloud computing infrastructure for performing object recognition. This paper introduces a Cloud-based Object Recognition Engine (CORE) to address these needs. CORE is able to train on large-scale datasets, perform classification of 3D point cloud data, and efficiently transfer data in a robotic network.",
"title": ""
},
{
"docid": "e644b698d2977a2c767fe86a1445e23c",
"text": "This paper describes the E2E data, a new dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, which is ten times bigger than existing, frequently used datasets in this area. The E2E dataset poses new challenges: (1) its human reference texts show more lexical richness and syntactic variation, including discourse phenomena; (2) generating from this set requires content selection. As such, learning from this dataset promises more natural, varied and less template-like system utterances. We also establish a baseline on this dataset, which illustrates some of the difficulties associated with this data.",
"title": ""
},
{
"docid": "1a02d963590683c724a814f341f94f92",
"text": "The concept of the quality attribute scenario was introduced in 2003 to support the development of software architectures. This concept is useful because it provides an operational means to represent the quality requirements of a system. It also provides a more concrete basis with which to teach software architecture. Teaching this concept however has some unexpected issues. In this paper, I present my experiences of teaching quality attribute scenarios and outline Bus Tracker, a case study I have developed to support my teaching.",
"title": ""
},
{
"docid": "8be48d08aec21ecdf8a124fa3fef8d48",
"text": "Topic modeling has become a widely used tool for document management. However, there are few topic models distinguishing the importance of documents on different topics. In this paper, we propose a framework LIMTopic to incorporate link based importance into topic modeling. To instantiate the framework, RankTopic and HITSTopic are proposed by incorporating topical pagerank and topical HITS into topic modeling respectively. Specifically, ranking methods are first used to compute the topical importance of documents. Then, a generalized relation is built between link importance and topic modeling. We empirically show that LIMTopic converges after a small number of iterations in most experimental settings. The necessity of incorporating link importance into topic modeling is justified based on KL-Divergences between topic distributions converted from topical link importance and those computed by basic topic models. To investigate the document network summarization performance of topic models, we propose a novel measure called log-likelihood of ranking-integrated document-word matrix. Extensive experimental results show that LIMTopic performs better than baseline models in generalization performance, document clustering and classification, topic interpretability and document network summarization performance. Moreover, RankTopic has comparable performance with relational topic model (RTM) and HITSTopic performs much better than baseline models in document clustering and classification.",
"title": ""
},
{
"docid": "244be1e978813811e3f5afc1941cd4f5",
"text": "In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as SUPPORTED, REFUTED or NOTENOUGHINFO by annotators achieving 0.6841 in Fleiss κ. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.",
"title": ""
},
{
"docid": "b8e123dc9baa469fe93e9853de0f2d3f",
"text": "The synapse is the functional unit of the brain. During the last several decades we have acquired a great deal of information on its structure, molecular components, and physiological function. It is clear that synapses are morphologically and molecularly diverse and that this diversity is recruited to different functions. One of the most intriguing findings is that the size of the synaptic response in not invariant, but can be altered by a variety of homo- and heterosynaptic factors such as past patterns of use or modulatory neurotransmitters. Perhaps the most difficult challenge in neuroscience is to design experiments that reveal how these basic building blocks of the brain are put together and how they are regulated to mediate the information flow through neural circuits that is necessary to produce complex behaviors and store memories. In this review we will focus on studies that attempt to uncover the role of synaptic plasticity in the regulation of whole-animal behavior by learning and memory.",
"title": ""
},
{
"docid": "220e4d6e207ea14beaa1526383fbaccb",
"text": "A millimeter-wave sinusoidally modulated (SM) leaky-wave antenna (LWA) based on inset dielectric waveguide (IDW) is presented in this paper. The proposed antenna, radiating at 10° from broadside at 60 GHz, consists of a SM IDW, a rectangular waveguide for excitation and a transition for impedance matching. Fundamental TE01 mode is excited by the IDW with the leaky wave generated by the SM inset groove depth. The electric field is normal to the metallic waveguide wall and thus reduces the conductor loss. As a proof of concept, the modulated dielectric inset as well as the dielectric transition are conveniently fabricated by 3-D printing (tan δ = 0.02). Measurements of the antenna prototype show that the main beam can be scanned from -9° to 40° in a frequency range from 50 to 85 GHz within a gain variation between 9.1 and 14.2 dBi. Meanwhile, the reflection coefficient |S11| is kept below -13.4 dB over the whole frequency band. The measured results agree reasonably well with simulations. Furthermore, the gain of the proposed antenna can be enhanced by extending its length and using low-loss dielectric materials such as Teflon (tan δ <; 0.002).",
"title": ""
},
{
"docid": "38a8d294694d9430a3ea829a94e6b15d",
"text": "Strain technology has been successfully integrated into CMOS fabrication to improve carrier transport properties since 90nm node. Due to the non-uniform stress distribution in the channel, the enhancement in carrier mobility, velocity, and threshold voltage shift strongly depend on circuit layout, leading to systematic performance variations among transistors. A compact stress model that physically captures this behavior is essential to bridge the process technology with design optimization. In this paper, starting from the first principle, a new layout-dependent stress model is proposed as a function of layout, temperature, and other device parameters. Furthermore, a method of layout decomposition is developed to partition the layout into a set of simple patterns for efficient model extraction. These solutions significantly reduce the complexity in stress modeling and simulation. They are comprehensively validated by TCAD simulation and published Si-data, including the state-of-the-art strain technologies and the STI stress effect. By embedding them into circuit analysis, the interaction between layout and circuit performance is well benchmarked at 45nm node.",
"title": ""
},
{
"docid": "eb87bd5be6f183d039b5d0964a1f5e67",
"text": "One obstacle to applying reinforcement learning algorithms to real-world problems is the lack of suitable reward functions. Designing such reward functions is difficult in part because the user only has an implicit understanding of the task objective. This gives rise to the agent alignment problem: how do we create agents that behave in accordance with the user’s intentions? We outline a high-level research direction to solve the agent alignment problem centered around reward modeling: learning a reward function from interaction with the user and optimizing the learned reward function with reinforcement learning. We discuss the key challenges we expect to face when scaling reward modeling to complex and general domains, concrete approaches to mitigate these challenges, and ways to establish trust in the resulting agents.",
"title": ""
}
] |
scidocsrr
|
e86e4d416d54a1f49e12400a783ed2f2
|
A deep network model for paraphrase detection in short text messages
|
[
{
"docid": "0ce06f95b1dafcac6dad4413c8b81970",
"text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.",
"title": ""
},
{
"docid": "bcee978b0c7b8d533b05ce64daca92e3",
"text": "Sentiment analysis of short texts is challenging because of the limited contextual information they usually contain. In recent years, deep learning models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied to text sentiment analysis with comparatively remarkable results. In this paper, we describe a jointed CNN and RNN architecture, taking advantage of the coarse-grained local features generated by CNN and long-distance dependencies learned via RNN for sentiment analysis of short texts. Experimental results show an obvious improvement upon the state-of-the-art on three benchmark corpora, MR, SST1 and SST2, with 82.28%, 51.50% and 89.95% accuracy, respectively. 1",
"title": ""
},
{
"docid": "f3fb98614d1d8ff31ca977cbf6a15a9c",
"text": "Paraphrase Identification and Semantic Similarity are two different yet well related tasks in NLP. There are many studies on these two tasks extensively on structured texts in the past. However, with the strong rise of social media data, studying these tasks on unstructured texts, particularly, social texts in Twitter is very interesting as it could be more complicated problems to deal with. We investigate and find a set of simple features which enables us to achieve very competitive performance on both tasks in Twitter data. Interestingly, we also confirm the significance of using word alignment techniques from evaluation metrics in machine translation in the overall performance of these tasks.",
"title": ""
}
] |
[
{
"docid": "a19e10548c395cdd03fdc80bb8c25ce1",
"text": "The need to make default assumptions is frequently encountered in reasoning'about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non.monotonJcity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected. The gods did not reveal, from the beginning, All things to us, but in the course of time Through seeking we may learn and know things better. But as for certain truth, no man has known it, Nor shall he know it, neither of the gods Nor yet of all the things of which I speak. For even if by chance he were to utter The final truth, he would himself not know it: For all is but a woven web of guesses. Xenophanes",
"title": ""
},
{
"docid": "13a0f14bef295428bd4eabae1594513a",
"text": "The FPGA replay attack, where an attacker downgrades an FPGA-based system to the previous version with known vulnerabilities, has become a serious security and privacy concern for FPGA design. Current FPGA intellectual property (IP) protection mechanisms target the protection of FPGA configuration bitstreams by watermarking or encryption or binding. However, these mechanisms fail to prevent replay attacks. In this article, based on a recently reported PUF-FSM binding method that protects the usage of configuration bitstreams, we propose to reconfigure both the physical unclonable functions (PUFs) and the locking scheme of the finite state machine (FSM) in order to defeat the replay attack. We analyze the proposed scheme and demonstrate how replay attack would fail in attacking systems protected by the reconfigurable binding method. We implement two ways to build reconfigurable PUFs and propose two practical methods to reconfigure the locking scheme. Experimental results show that the two reconfigurable PUFs can generate significantly distinct responses with average reconfigurability of more than 40%. The reconfigurable locking schemes only incur a timing overhead less than 1%.",
"title": ""
},
{
"docid": "af983aa7ac103dd41dfd914af452758f",
"text": "The fast-growing nature of instant messaging applications usage on Android mobile devices brought about a proportional increase on the number of cyber-attack vectors that could be perpetrated on them. Android mobile phones store significant amount of information in the various memory partitions when Instant Messaging (IM) applications (WhatsApp, Skype, and Facebook) are executed on them. As a result of the enormous crimes committed using instant messaging applications, and the amount of electronic based traces of evidence that can be retrieved from the suspect’s device where an investigation could convict or refute a person in the court of law and as such, mobile phones have become a vulnerable ground for digital evidence mining. This paper aims at using forensic tools to extract and analyse left artefacts digital evidence from IM applications on Android phones using android studio as the virtual machine. Digital forensic investigation methodology by Bill Nelson was applied during this research. Some of the key results obtained showed how digital forensic evidence such as call logs, contacts numbers, sent/retrieved messages, and images can be mined from simulated android phones when running these applications. These artefacts can be used in the court of law as evidence during cybercrime investigation.",
"title": ""
},
{
"docid": "b3160bf7e40ab6cee122894af276cead",
"text": "This article describes existing and expected benefits of the SP theory of intelligence, and some potential applications. The theory aims to simplify and integrate ideas across artificial intelligence, mainstream computing, and human perception and cognition, with information compression as a unifying theme. It combines conceptual simplicity with descriptive and explanatory power across several areas of computing and cognition. In the SP machine—an expression of the SP theory which is currently realized in the form of a computer model—there is potential for an overall simplification of computing systems, including software. The SP theory promises deeper insights and better solutions in several areas of application including, most notably, unsupervised learning, natural language processing, autonomous robots, computer vision, intelligent databases, software engineering, information compression, medical diagnosis and big data. There is also potential in areas such as the semantic web, bioinformatics, structuring of documents, the detection of computer viruses, data fusion, new kinds of computer, and the development of scientific theories. The theory promises seamless integration of structures and functions within and between different areas of application. The potential value, worldwide, of these benefits and applications is at least $190 billion each year. Further development would be facilitated by the creation of a high-parallel, open-source version of the SP machine, available to researchers everywhere.",
"title": ""
},
{
"docid": "1ebb333d5a72c649cd7d7986f5bf6975",
"text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under stand. The notion of plans is introduced to ac count for general knowledge about novel situa tions. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys tem or any working computational system the res triction of world knowledge need not critically concern him. Our feeling is that an effective characteri zation of knowledge can result in a real under standing system in the not too distant future. We expect that programs based on the theory we out …",
"title": ""
},
{
"docid": "2225179bd84be6cc0cdce7928245e884",
"text": "The p53 tumor suppressor gene is commonly altered in human tumors, predominantly through missense mutations that result in accumulation of mutant p53 protein. These mutations may confer dominant-negative or gain-of-function properties to p53. To ascertain the physiological effects of p53 point mutation, the structural mutant p53R172H and the contact mutant p53R270H (codons 175 and 273 in humans) were engineered into the endogenous p53 locus in mice. p53R270H/+ and p53R172H/+ mice are models of Li-Fraumeni Syndrome; they developed allele-specific tumor spectra distinct from p53+/- mice. In addition, p53R270H/- and p53R172H/- mice developed novel tumors compared to p53-/- mice, including a variety of carcinomas and more frequent endothelial tumors. Dominant effects that varied by allele and function were observed in primary cells derived from p53R270H/+ and p53R172H/+ mice. These results demonstrate that point mutant p53 alleles expressed under physiological control have enhanced oncogenic potential beyond the simple loss of p53 function.",
"title": ""
},
{
"docid": "36162ebd7d7c5418e4c78bad5bbba8ab",
"text": "In this paper we discuss the design of human-robot interaction focussing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system which has been previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao’s presentation by natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.",
"title": ""
},
{
"docid": "fb67e237688deb31bd684c714a49dca5",
"text": "In order to mitigate investments, stock price forecasting has attracted more attention in recent years. Aiming at the discreteness, non-normality, high-noise in high-frequency data, a support vector machine regression (SVR) algorithm is introduced in this paper. However, the characteristics in different periods of the same stock, or the same periods of different stocks are significantly different. So, SVR with fixed parameters is difficult to satisfy with the constantly changing data flow. To tackle this problem, an adaptive SVR was proposed for stock data at three different time scales, including daily data, 30-min data, and 5-min data. Experiments show that the improved SVR with dynamic optimization of learning parameters by particle swarm optimization can get a better result than compared methods including SVR and back-propagation neural network.",
"title": ""
},
{
"docid": "f6031e9a3fbe7cb6f2e4d0d71d02a275",
"text": "This paper presents an ultra low power reconfigurable, multi-standard (IEEE802.15.4, BLE, 5Mbps proprietary) ISM2.4GHz band transceiver compliant to FCC, ETSI class 2 and ARIB regulations. It uses a DPLL with counter based area and power efficient re-circulating TDC, current reuse low area DCO, dynamic divider, class-AB PA, and fully integrated LDOs. The RX is reconfigurable between zero-IF/low-IF along with antenna diversity. The transceiver consumes 3.5mA (TX), 3.1mA (RX) from 3.0V battery with on-chip DCDC converter, and occupies 1.1mm2 in 65nm CMOS process. The RX front-end provides 42dB gain, 6dB NF, and -34dBm input P1dB.",
"title": ""
},
{
"docid": "7df97a9d3ae19fce1c86322c1f5ac929",
"text": "This study examined the effects of background music on test performance. In a repeated-measures design 30 undergraduates completed two cognitive tests, one in silence and the other with background music. Analysis suggested that music facilitated cognitive performance compared with the control condition of no music: more questions were completed and more answers were correct. There was no difference in heart rate under the two conditions. The improved performance under the music condition might be directly related to the type of music used.",
"title": ""
},
{
"docid": "119a4b04bc042b68f4b32480a069f6d4",
"text": "Preserving the availability and integrity of the power grid critical infrastructures in the face of fast-spreading intrusions requires advances in detection techniques specialized for such large-scale cyber-physical systems. In this paper, we present a security-oriented cyber-physical state estimation (SCPSE) system, which, at each time instant, identifies the compromised set of hosts in the cyber network and the maliciously modified set of measurements obtained from power system sensors. SCPSE fuses uncertain information from different types of distributed sensors, such as power system meters and cyber-side intrusion detectors, to detect the malicious activities within the cyber-physical system. We implemented a working prototype of SCPSE and evaluated it using the IEEE 24-bus benchmark system. The experimental results show that SCPSE significantly improves on the scalability of traditional intrusion detection techniques by using information from both cyber and power sensors. Furthermore, SCPSE was able to detect all the attacks against the control network in our experiments.",
"title": ""
},
{
"docid": "c009c5cf0e85081f71247815d0a1ae29",
"text": "This paper describes a low-power receiver front-end in a bidirectional near-ground source-series terminated (SST) interface implemented in a 40-nm CMOS process, which supports low-common mode differential NRZ signaling up to 16-Gb/s data rates. The high-speed operation is enabled by utilizing a common-gate amplifier stage with replica transconductance impedance calibration that accurately terminates the channel in the presence of receiver input loading. The near-ground low-impedance receiver also incorporates common-mode gain cancellation and in-situ equalization calibration to achieve reliable data reception at 16 Gb/s with better than 0.4 mW/Gb/s power efficiency over a memory link with more than 15 dB loss at the Nyquist frequency.",
"title": ""
},
{
"docid": "b966960849c979f32a56742048ce1ed1",
"text": "This research concerns how children learn the distinction between substance names and ohjcct names. Quinc (1969) proposed that children learn the distinction through learning the syntactic distinctions inherent in countlmasc grammar. However. Soja et al. ( 1991) found that English-speaking 2-ycar-olds. who did not seem to have acquired countlmass grammar. distinguished objects from substances in a word extension task, suggesting a pre-lingut\\tic ontological distinction. T o test whether the distinction between ohject names and substance names i s conceptually or linguistically driven. we repeated Soja et al.'s study with Englishand Japanese-spaking 2-, 2.5-. and 4-year-olds and adults. Japanese does not make a countmass grammatical distinction: all inanimate nouns are treated alikc. Thus i f young Japanese children made the object-substance distinction i n word meaning. this would support the early ontology position over the linguistic influence position. We used three types o f standards: srthsrmices (e.g.. sand i n an S-shape), simple ohjecrs (e.g.. a kidney-shaped piece o f paraffin) and complex ohjccfs (e.$.. a W C W ~ whisk). The suhjccts learned novel nouns in ncutral syntax denoting each standard entity. They were then asked which of the two alternatives one matching i n shape hut not material and the other matching in material hut not shape would also be named by the same lahel. The results suggest the universal use o f ontological knowledge i n early word learning. Children in hoth languages showed differentiation between (complex) ohjects and suh\\tances as early as 2 years o f age. However. there were also early cross-linguiztic differences. American apd Japanese children generabed the simple object instances and the \\(thstance instances differently. We speculate that children universally make a distinction hctween individuals and non-individuals i n word learning hut that the nature of the categories and the hnundary between them is influenced by language. 01997 Elsevier Science B.V. A l l rights reserved. * Comsponding author. E-mail: [email protected].",
"title": ""
},
{
"docid": "2d787b0deca95ce212e11385ae60c36d",
"text": "In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task.",
"title": ""
},
{
"docid": "88bc4f8a24a2e81a9c133d11a048ca10",
"text": "In this paper, we give an overview of the HDF5 technology suite and some of its applications. We discuss the HDF5 data model, the HDF5 software architecture and some of its performance enhancing capabilities.",
"title": ""
},
{
"docid": "98e7492293b295200b78c99cce8824dd",
"text": "Ann Campbell Burke examines the development and evolution [5] of vertebrates, in particular, turtles [6]. Her Harvard University [7] experiments, described in \"Development of the Turtle Carapace [4]: Implications for the Evolution of a Novel Bauplan,\" were published in 1989. Burke used molecular techniques to investigate the developmental mechanisms responsible for the formation of the turtle shell. Burke's work with turtle embryos has provided empirical evidence for the hypothesis that the evolutionary origins of turtle morphology [8] depend on changes in the embryonic and developmental mechanisms underpinning shell production.",
"title": ""
},
{
"docid": "c07eedc87181fa7af8494b95a0c454d3",
"text": "Studies on fault detection and diagnosis of planetary gearboxes are quite limited compared with those of fixed-axis gearboxes. Different from fixed-axis gearboxes, planetary gearboxes exhibit unique behaviors, which invalidate fault diagnosis methods that work well for fixed-axis gearboxes. It is a fact that for systems as complex as planetary gearboxes, multiple sensors mounted on different locations provide complementary information on the health condition of the systems. On this basis, a fault detection method based on multi-sensor data fusion is introduced in this paper. In this method, two features developed for planetary gearboxes are used to characterize the gear health conditions, and an adaptive neuro-fuzzy inference system (ANFIS) is utilized to fuse all features from different sensors. In order to demonstrate the effectiveness of the proposed method, experiments are carried out on a planetary gearbox test rig, on which multiple accelerometers are mounted for data collection. The comparisons between the proposed method and the methods based on individual sensors show that the former achieves much higher accuracies in detecting planetary gearbox faults.",
"title": ""
},
{
"docid": "0c8c05b492e32407339843badeec4a20",
"text": "In contemplating the function and origin of music, a number of scholars have considered whether music might be an evolutionary adaptation. This article reviews the basic arguments related to evolutionary claims for music. Although evolutionary theories about music remain wholly speculative, musical behaviors satisfy a number of basic conditions, which suggests that there is indeed merit in pursuing possible evolutionary accounts.",
"title": ""
},
{
"docid": "7a9387636f01bb462aef2d3b32627c67",
"text": "The Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), a fleet of quadrotor helicopters, has been developed as a testbed for novel algorithms that enable autonomous operation of aerial vehicles. This paper develops an autonomous vehicle trajectory tracking algorithm through cluttered environments for the STARMAC platform. A system relying on a single optimization must trade off the complexity of the planned path with the rate of update of the control input. In this paper, a trajectory tracking controller for quadrotor helicopters is developed to decouple the two problems. By accepting as inputs a path of waypoints and desired velocities, the control input can be updated frequently to accurately track the desired path, while the path planning occurs as a separate process on a slower timescale. To enable the use of planning algorithms that do not consider dynamic feasibility or provide feedforward inputs, a computationally efficient algorithm using space-indexed waypoints is presented to modify the speed profile of input paths to guarantee feasibility of the planned trajectory and minimum time traversal of the planned. The algorithm is an efficient alternative to formulating a nonlinear optimization or mixed integer program. Both indoor and outdoor flight test results are presented for path tracking on the STARMAC vehicles.",
"title": ""
},
{
"docid": "0a14a4d38f1f05aec6e0ea5d658defcf",
"text": "In this work, we investigate the use of sparsity-inducing regularizers during training of Convolution Neural Networks (CNNs). These regularizers encourage that fewer connections in the convolution and fully connected layers take non-zero values and in effect result in sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost involved in deploying the learned CNNs. We show that training with such regularization can still be performed using stochastic gradient descent implying that it can be used easily in existing codebases. Experimental evaluation of our approach on MNIST, CIFAR, and ImageNet datasets shows that our regularizers can result in dramatic reductions in memory requirements. For instance, when applied on AlexNet, our method can reduce the memory consumption by a factor of four with minimal loss in accuracy.",
"title": ""
}
] |
scidocsrr
|
87b7a3b9c5bf7b4faaea7b65ed464a9c
|
StreamScope: Continuous Reliable Distributed Processing of Big Data Streams
|
[
{
"docid": "fe31348bce3e6e698e26aceb8e99b2d8",
"text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.",
"title": ""
}
] |
[
{
"docid": "2c91e6ca6cf72279ad084c4a51b27b1c",
"text": "Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of expensive high-precision map, we use the vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an Add-On to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.",
"title": ""
},
{
"docid": "7c3bd683a927626c97ec9ae31b0bae3e",
"text": "Project portfolio management in relation to innovation has increasingly gained the attention of practitioners and academics during the last decade. While significant progress has been made in the pursuit of a process approach to achieve an effective project portfolio management, limited attention has been paid to the issue of how to integrate sustainability into innovation portfolio management decision making. The literature is lacking insights on how to manage the innovation project portfolio throughout the strategic analysis phase to the monitoring of the portfolio performance in relation to sustainability during the development phase of projects. This paper presents a 5step framework for integrating sustainability in the innovation project portfolio management process in the field of product development. The framework can be applied for the management of a portfolio of three project categories that involve breakthrough projects, platform projects and derivative projects. It is based on the assessment of various methods of project evaluation and selection, and a case analysis in the automotive industry. It enables the integration of the three dimensions of sustainability into the innovation project portfolio management process within firms. The three dimensions of sustainability involve ecological sustainability, social sustainability and economic sustainability. Another benefit is enhancing the ability of firms to achieve an effective balance of investment between the three dimensions of sustainability, taking the competitive approach of a firm toward the marketplace into account. 2014 Published by Elsevier B.V. * Corresponding author. Tel.: +31 6 12990878. E-mail addresses: [email protected] (J.W. Brook), [email protected] (F. Pagnanelli). G Models ENGTEC-1407; No. of Pages 17 Please cite this article in press as: Brook, J.W., Pagnanelli, F., Integrating sustainability into innovation project portfolio management – A strategic perspective. J. Eng. Technol. Manage. (2014), http://dx.doi.org/10.1016/j.jengtecman.2013.11.004",
"title": ""
},
{
"docid": "b57cbb1f6eeb34946df47f2be390aaf8",
"text": "The automatic detection of software vulnerabilities is an important research problem. However, existing solutions to this problem rely on human experts to define features and often miss many vulnerabilities (i.e., incurring high false negative rate). In this paper, we initiate the study of using deep learning-based vulnerability detection to relieve human experts from the tedious and subjective task of manually defining features. Since deep learning is motivated to deal with problems that are very different from the problem of vulnerability detection, we need some guiding principles for applying deep learning to vulnerability detection. In particular, we need to find representations of software programs that are suitable for deep learning. For this purpose, we propose using code gadgets to represent programs and then transform them into vectors, where a code gadget is a number of (not necessarily consecutive) lines of code that are semantically related to each other. This leads to the design and implementation of a deep learning-based vulnerability detection system, called Vulnerability Deep Pecker (VulDeePecker). In order to evaluate VulDeePecker, we present the first vulnerability dataset for deep learning approaches. Experimental results show that VulDeePecker can achieve much fewer false negatives (with reasonable false positives) than other approaches. We further apply VulDeePecker to 3 software products (namely Xen, Seamonkey, and Libav) and detect 4 vulnerabilities, which are not reported in the National Vulnerability Database but were “silently” patched by the vendors when releasing later versions of these products; in contrast, these vulnerabilities are almost entirely missed by the other vulnerability detection systems we experimented with.",
"title": ""
},
{
"docid": "771ddd19549c46ecfb50ee96bdcc3dfa",
"text": "A metamaterial 1:4 series power divider that provides equal power split to all four output ports over a large bandwidth is presented, which can be extended to an arbitrary number of output ports. The divider comprises four nonradiating metamaterial lines in series, incurring a zero insertion phase over a large bandwidth, while simultaneously maintaining a compact length of /spl lambda//sub 0//8. Compared to a series power divider employing conventional one-wavelength long meandered transmission lines to provide in-phase signals at the output ports, the metamaterial divider provides a 165% increase in the input return-loss bandwidth and a 155% and 154% increase in the through-power bandwidth to ports 3 and 4, respectively. In addition, the metamaterial divider is significantly more compact, occupying only 2.6% of the area that the transmission line divider occupies. The metamaterial and transmission line dividers exhibit comparable insertion losses.",
"title": ""
},
{
"docid": "945f129f81e9b7a69a6ba9dc982ed7c6",
"text": "Geographic location of a person is important contextual information that can be used in a variety of scenarios like disaster relief, directional assistance, context-based advertisements, etc. GPS provides accurate localization outdoors but is not useful inside buildings. We propose an coarse indoor localization approach that exploits the ubiquity of smart phones with embedded sensors. GPS is used to find the building in which the user is present. The Accelerometers are used to recognize the user’s dynamic activities (going up or down stairs or an elevator) to determine his/her location within the building. We demonstrate the ability to estimate the floor-level of a user. We compare two techniques for activity classification, one is naive Bayes classifier and the other is based on dynamic time warping. The design and implementation of a localization application on the HTC G1 platform running Google Android is also presented.",
"title": ""
},
{
"docid": "68aad74ce40e9f44997a078df5e54a23",
"text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) based on the concept of traveling-wave excitation is presented. A lumped resistively loaded monofilar-spiral-slot is used to excite the rectangular DRA. The proposed DRA is theoretically and experimentally analyzed, including design concept, design guideline, parameter study, and experimental verification. It is found that by using such an excitation, a wide 3-dB axial-ratio (AR) bandwidth of 18.7% can be achieved.",
"title": ""
},
{
"docid": "152e5d8979eb1187e98ecc0424bb1fde",
"text": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model (DGPLVM), named GaussianFace, for face verification. In contrast to relying unrealistically on a single training data source, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. To enhance discriminative power, we introduced a more efficient equivalent form of Kernel Fisher Discriminant Analysis to DGPLVM. To speed up the process of inference and prediction, we exploited the low rank approximation method. Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, the accuracy of our algorithm achieved an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.",
"title": ""
},
{
"docid": "673c0d74b0df4cfe698d1a7397fc1365",
"text": "The intense growth of Internet of Things (IoTs), its multidisciplinary nature and broadcasting communication pattern made it very challenging for research community/domain. Operating systems for IoTs plays vital role in this regard. Through this research contribution, the objective is to present an analytical study on the recent developments on operating systems specifically designed or fulfilled the needs of IoTs. Starting from study and advances in the field of IoTs with focus on existing operating systems specifically for IoTs. Finally the existing operating systems for IoTs are evaluated and compared on some set criteria and facts and findings are presented.",
"title": ""
},
{
"docid": "ebb70af20b550c911a63757b754c6619",
"text": "This paper presents a vehicle price prediction system by using the supervised machine learning technique. The research uses multiple linear regression as the machine learning prediction method which offered 98% prediction precision. Using multiple linear regression, there are multiple independent variables but one and only one dependent variable whose actual and predicted values are compared to find precision of results. This paper proposes a system where price is dependent variable which is predicted, and this price is derived from factors like vehicle’s model, make, city, version, color, mileage, alloy rims and power steering.",
"title": ""
},
{
"docid": "781f1db88568af0e2d5804424ae470e0",
"text": "Two cassettes with tetracycline-resistance (TcR) and kanamycin-resistance (KmR) determinants have been developed for the construction of insertion and deletion mutants of cloned genes in Escherichia coli. In both cassettes, the resistance determinants are flanked by the short direct repeats (FRT sites) required for site-specific recombination mediated by the yeast Flp recombinase. In addition, a plasmid with temperature-sensitive replication for temporal production of the Flp enzyme in E. coli has been constructed. After a gene disruption or deletion mutation is constructed in vitro by insertion of one of the cassettes into a given gene, the mutated gene is transferred to the E. coli chromosome by homologous recombination and selection for the antibiotic resistance provided by the cassette. If desired, the resistance determinant can subsequently be removed from the chromosome in vivo by Flp action, leaving behind a short nucleotide sequence with one FRT site and with no polar effect on downstream genes. This system was applied in the construction of an E. coli endA deletion mutation which can be transduced by P1 to the genetic background of interest using TcR as a marker. The transductant can then be freed of the TcR if required.",
"title": ""
},
{
"docid": "668b8d1475bae5903783159a2479cc32",
"text": "As environmental concerns and energy consumption continue to increase, utilities are looking at cost effective strategies for improved network operation and consumer consumption. Smart grid is a collection of next generation power delivery concepts that includes new power delivery components, control and monitoring throughout the power grid and more informed customer options. This session will cover utilization of AMI networks to realize some of the smart grid goals.",
"title": ""
},
{
"docid": "1c60ddeb7e940992094cb8f3913e811a",
"text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet",
"title": ""
},
{
"docid": "e831e47d09429ef0838366ffb07ed353",
"text": "This paper studies the effects of boosting in the context of different classification methods for text categorization, including Decision Trees, Naive Bayes, Support Vector Machines (SVMs) and a Rocchio-style classifier. We identify the inductive biases of each classifier and explore how boosting, as an error-driven resampling mechanism, reacts to those biases. Our experiments on the Reuters-21578 benchmark show that boosting is not effective in improving the performance of the base classifiers on common categories. However, the effect of boosting for rare categories varies across classifiers: for SVMs and Decision Trees, we achieved a 13-17% performance improvement in macro-averaged F1 measure, but did not obtain substantial improvement for the other two classifiers. This interesting finding of boosting on rare categories has not been reported before.",
"title": ""
},
{
"docid": "6961b34ae6e5043be5f777dbd7818ebf",
"text": "Sign language is the communication medium for the deaf and the mute people. It uses hand gestures along with the facial expressions and the body language to convey the intended message. This paper proposes a novel approach of interpreting the sign language using the portable smart glove. LED-LDR pair on each finger senses the signing gesture and couples the analog voltage to the microcontroller. The microcontroller MSP430G2553 converts these analog voltage values to digital samples and the ASCII code of the letter gestured is wirelessly transmitted using the ZigBee. Upon reception, the letter corresponding to the received ASCII code is displayed on the computer and the corresponding audio is played.",
"title": ""
},
{
"docid": "f7c47b9447af707e9ce212fc35a1f404",
"text": "The article describes the method of malware activities identification using ontology and rules. The method supports detection of malware at host level by observing its behavior. It sifts through hundred thousands of regular events and allows to identify suspicious ones. They are then passed on to the second building block responsible for malware tracking and matching stored models with observed malicious actions. The presented method was implemented and verified in the infected computer environment. As opposed to signature-based antivirus mechanisms it allows to detect malware the code of which has been obfuscated.",
"title": ""
},
{
"docid": "e173580f0dd327c78fd0b16b234112a1",
"text": "Multi-view data is very popular in real-world applications, as different view-points and various types of sensors help to better represent data when fused across views or modalities. Samples from different views of the same class are less similar than those with the same view but different class. We consider a more general case that prior view information of testing data is inaccessible in multi-view learning. Traditional multi-view learning algorithms were designed to obtain multiple view-specific linear projections and would fail without this prior information available. That was because they assumed the probe and gallery views were known in advance, so the correct view-specific projections were to be applied in order to better learn low-dimensional features. To address this, we propose a Low-Rank Common Subspace (LRCS) for multi-view data analysis, which seeks a common low-rank linear projection to mitigate the semantic gap among different views. The low-rank common projection is able to capture compatible intrinsic information across different views and also well-align the within-class samples from different views. Furthermore, with a low-rank constraint on the view-specific projected data and that transformed by the common subspace, the within-class samples from multiple views would concentrate together. Different from the traditional supervised multi-view algorithms, our LRCS works in a weakly supervised way, where only the view information gets observed. Such a common projection can make our model more flexible when dealing with the problem of lacking prior view information of testing data. Two scenarios of experiments, robust subspace learning and transfer learning, are conducted to evaluate our algorithm. Experimental results on several multi-view datasets reveal that our proposed method outperforms state-of-the-art, even when compared with some supervised learning methods.",
"title": ""
},
{
"docid": "a0555ab4fce4608110aa309412d3fb38",
"text": "An electrocardiogram (ECG) measures the electric activity of the heart and has been widely used for detecting heart diseases due to its simplicity and non-invasive nature. By analyzing the electrical signal of each heartbeat, i.e., the combination of action impulse waveforms produced by different specialized cardiac tissues found in the heart, it is possible to detect some of its abnormalities. In the last decades, several works were developed to produce automatic ECG-based heartbeat classification methods. In this work, we survey the current state-of-the-art methods of ECG-based automated abnormalities heartbeat classification by presenting the ECG signal preprocessing, the heartbeat segmentation techniques, the feature description methods and the learning algorithms used. In addition, we describe some of the databases used for evaluation of methods indicated by a well-known standard developed by the Association for the Advancement of Medical Instrumentation (AAMI) and described in ANSI/AAMI EC57:1998/(R)2008 (ANSI/AAMI, 2008). Finally, we discuss limitations and drawbacks of the methods in the literature presenting concluding remarks and future challenges, and also we propose an evaluation process workflow to guide authors in future works.",
"title": ""
},
{
"docid": "19813f09d0f8e3eafcfc9491be7f9341",
"text": "Sentiment analysis of scientific citations has received much attention in recent years because of the increased availability of scientific publications. Scholarly databases are valuable sources for publications and citation information where researchers can publish their ideas and results. Sentiment analysis of scientific citations aims to analyze the authors’ sentiments within scientific citations. During the last decade, some review papers have been published in the field of sentiment analysis. Despite the growth in the size of scholarly databases and researchers’ interests, no one as far as we know has carried out an in-depth survey in a specific area of sentiment analysis in scientific citations. This paper presents a comprehensive survey of sentiment analysis of scientific citations. In this review, the process of scientific citation sentiment analysis is introduced and recently proposed methods with the main challenges are presented, analyzed and discussed. Further, we present related fields such as citation function classification and citation recommendation that have recently gained enormous attention. Our contributions include identifying the most important challenges as well as the analysis and classification of recent methods used in scientific citation sentiment analysis. Moreover, it presents the normal process, and this includes citation context extraction, public data sources, and feature selection. We found that most of the papers use classical machine learning methods. However, due to limitations of performance and manual feature selection in machine learning, we believe that in the future hybrid and deep learning methods can possibly handle the problems of scientific citation sentiment analysis more efficiently and reliably.",
"title": ""
},
{
"docid": "2b32087daf5c104e60f91ebf19cd744d",
"text": "A large amount of food photos are taken in restaurants for diverse reasons. This dish recognition problem is very challenging, due to different cuisines, cooking styles and the intrinsic difficulty of modeling food from its visual appearance. Contextual knowledge is crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and geolocation of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then we reformulate the problem using a probabilistic model connecting dishes, restaurants and geolocations. We apply that model in three different tasks: dish recognition, restaurant recognition and geolocation refinement. Experiments on a dataset including 187 restaurants and 701 dishes show that combining multiple evidences (visual, geolocation, and external knowledge) can boost the performance in all tasks.",
"title": ""
}
] |
scidocsrr
|
c38d314732279790e290e0f9fd033405
|
The Faults in Our Pi Stars: Security Issues and Open Challenges in Deep Reinforcement Learning
|
[
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
}
] |
[
{
"docid": "c14621efc992a1bc0856a8c4126cf665",
"text": "This paper researched the waveguide-to-microstrip antipodal finline transition at W band, analyzed and simulated antipodal finline transition of Cosine-squared taper function curve and flexible spline curve, designed and fabricated an spline curve antipodal finline transition of back-to-back structure. The measured results show that the insertion loss is less than 1.3dB, and the return loss is less than -13dB in whole W-band, the insertion loss of a single transition is less than 1.0dB, and the return loss is less than -20dB in 90GHz~ 99GHz, achieve good performance. It can provide low transitions between the wave guide and micro strip line. It is loss easy designing and machining, it achieve good performance in a wide band range. It is useful to design circuit in W-band.",
"title": ""
},
{
"docid": "b9f2639c0cda5c98865d9c6fc9003104",
"text": "We propose a statistical model applicable to character level language modeling and show that it is a good fit for both, program source code and English text. The model is parameterized by a program from a domain-specific language (DSL) that allows expressing non-trivial data dependencies. Learning is done in two phases: (i) we synthesize a program from the DSL, essentially learning a good representation for the data, and (ii) we learn parameters from the training data – the process is done via counting, as in simple language models such as n-gram. Our experiments show that the precision of our model is comparable to that of neural networks while sharing a number of advantages with n-gram models such as fast query time and the capability to quickly add and remove training data samples. Further, the model is parameterized by a program that can be manually inspected, understood and updated, addressing a major problem of neural networks.",
"title": ""
},
{
"docid": "c4b60f6f97f7a2c9c9f7ae045bac590f",
"text": "Vaccines containing novel adjuvant formulations are increasingly reaching advanced development and licensing stages, providing new tools to fill previously unmet clinical needs. However, many adjuvants fail during product development owing to factors such as manufacturability, stability, lack of effectiveness, unacceptable levels of tolerability or safety concerns. This Review outlines the potential benefits of adjuvants in current and future vaccines and describes the importance of formulation and mechanisms of action of adjuvants. Moreover, we emphasize safety considerations and other crucial aspects in the clinical development of effective adjuvants that will help facilitate effective next-generation vaccines against devastating infectious diseases.",
"title": ""
},
{
"docid": "34630bf9d122e546f82dba54d832ec78",
"text": "A growth model with shocks to technology is studied. Labor is indivisible, so all variability in hours worked is due to fluctuations in the number employed. We find that, unlike previous equilibrium models of the business cycle, this economy displays large fluctuations in hours worked and relatively small fluctuations in productivity. This finding is independent of individuals’ willingness to substitute leisure across time. This and other findings are the result of studying and comparing summary statistics describing this economy, an economy with divisible labor, and post-war U.S. time series.",
"title": ""
},
{
"docid": "4f1949af3455bd5741e731a9a60ecdf1",
"text": "BACKGROUND\nGuava leaf tea (GLT), exhibiting a diversity of medicinal bioactivities, has become a popularly consumed daily beverage. To improve the product quality, a new process was recommended to the Ser-Tou Farmers' Association (SFA), who began field production in 2005. The new process comprised simplified steps: one bud-two leaves were plucked at 3:00-6:00 am, in the early dawn period, followed by withering at ambient temperature (25-28 °C), rolling at 50 °C for 50-70 min, with or without fermentation, then drying at 45-50 °C for 70-90 min, and finally sorted.\n\n\nRESULTS\nThe product manufactured by this new process (named herein GLTSF) exhibited higher contents (in mg g(-1), based on dry ethyl acetate fraction/methanolic extract) of polyphenolics (417.9 ± 12.3) and flavonoids (452.5 ± 32.3) containing a compositional profile much simpler than previously found: total quercetins (190.3 ± 9.1), total myricetin (3.3 ± 0.9), total catechins (36.4 ± 5.3), gallic acid (8.8 ± 0.6), ellagic acid (39.1 ± 6.4) and tannins (2.5 ± 9.1).\n\n\nCONCLUSION\nWe have successfully developed a new process for manufacturing GLTSF with a unique polyphenolic profile. Such characteristic compositional distribution can be ascribed to the right harvesting hour in the early dawn and appropriate treatment process at low temperature, avoiding direct sunlight.",
"title": ""
},
{
"docid": "56a6ea3418b9a1edf591b860f128ea82",
"text": "Convolutional Neural Networks (CNNs) have gained a remarkable success on many real-world problems in recent years. However, the performance of CNNs is highly relied on their architectures. For some state-of-the-art CNNs, their architectures are hand-crafted with expertise in both CNNs and the investigated problems. To this end, it is difficult for researchers, who have no extended expertise in CNNs, to explore CNNs for their own problems of interest. In this paper, we propose an automatic architecture design method for CNNs by using genetic algorithms, which is capable of discovering a promising architecture of a CNN on handling image classification tasks. The proposed algorithm does not need any pre-processing before it works, nor any post-processing on the discovered CNN, which means it is completely automatic. The proposed algorithm is validated on widely used benchmark datasets, by comparing to the state-of-the-art peer competitors covering eight manually designed CNNs, four semi-automatically designed CNNs and additional four automatically designed CNNs. The experimental results indicate that the proposed algorithm achieves the best classification accuracy consistently among manually and automatically designed CNNs. Furthermore, the proposed algorithm also shows the competitive classification accuracy to the semi-automatic peer competitors, while reducing 10 times of the parameters. In addition, on the average the proposed algorithm takes only one percentage of computational resource compared to that of all the other architecture discovering algorithms. Experimental codes and the discovered architectures along with the trained weights are made public to the interested readers.",
"title": ""
},
{
"docid": "2f30301143dc626a3013eb24629bfb45",
"text": "A vast array of devices, ranging from industrial robots to self-driven cars or smartphones, require increasingly sophisticated processing of real-world input data (image, voice, radio, ...). Interestingly, hardware neural network accelerators are emerging again as attractive candidate architectures for such tasks. The neural network algorithms considered come from two, largely separate, domains: machine-learning and neuroscience. These neural networks have very different characteristics, so it is unclear which approach should be favored for hardware implementation. Yet, few studies compare them from a hardware perspective. We implement both types of networks down to the layout, and we compare the relative merit of each approach in terms of energy, speed, area cost, accuracy and functionality.\n Within the limit of our study (current SNN and machine-learning NN algorithms, current best effort at hardware implementation efforts, and workloads used in this study), our analysis helps dispel the notion that hardware neural network accelerators inspired from neuroscience, such as SNN+STDP, are currently a competitive alternative to hardware neural networks accelerators inspired from machine-learning, such as MLP+BP: not only in terms of accuracy, but also in terms of hardware cost for realistic implementations, which is less expected. However, we also outline that SNN+STDP carry potential for reduced hardware cost compared to machine-learning networks at very large scales, if accuracy issues can be controlled (or for applications where they are less important). We also identify the key sources of inaccuracy of SNN+STDP which are less related to the loss of information due to spike coding than to the nature of the STDP learning algorithm. Finally, we outline that for the category of applications which require permanent online learning and moderate accuracy, SNN+STDP hardware accelerators could be a very cost-efficient solution.",
"title": ""
},
{
"docid": "7681a78f2d240afc6b2e48affa0612c1",
"text": "Web usage mining applies data mining procedures to analyze user access of Web sites. As with any KDD (knowledge discovery and data mining) process, WUM contains three main steps: preprocessing, knowledge extraction, and results analysis. We focus on data preprocessing, a fastidious, complex process. Analysts aim to determine the exact list of users who accessed the Web site and to reconstitute user sessions-the sequence of actions each user performed on the Web site. Intersites WUM deals with Web server logs from several Web sites, generally belonging to the same organization. Thus, analysts must reassemble the users' path through all the different Web servers that they visited. Our solution is to join all the log files and reconstitute the visit. Classical data preprocessing involves three steps: data fusion, data cleaning, and data structuration. Our solution for WUM adds what we call advanced data preprocessing. This consists of a data summarization step, which will allow the analyst to select only the information of interest. We've successfully tested our solution in an experiment with log files from INRIA Web sites.",
"title": ""
},
{
"docid": "9e8e57ef22d3dfe139f4b9c9992b0884",
"text": "It has been suggested that when the variance assumptions of a repeated measures ANOVA are not met, the df of the mean square ratio should be adjusted by the sample estimate of the Box correction factor, e. This procedure works well when e is low, but the estimate is seriously biased when this is not the case. An alternate estimate is proposed which is shown by Monte Carlo methods to be less biased for moderately large e.",
"title": ""
},
{
"docid": "4c96561217bb77cf7ca899fbba06bbde",
"text": "The state of advice given to people today on how to stay safe online has plenty of room for improvement. Too many things are asked of them, which may be unrealistic, time consuming, or not really worth the effort. To improve the security advice, our community must find out what practices people use and what recommendations, if messaged well, are likely to bring the highest benefit while being realistic to ask of people. In this paper, we present the results of a study which aims to identify which practices people do that they consider most important at protecting their security online. We compare self-reported security practices of non-experts to those of security experts (i.e., participants who reported having five or more years of experience working in computer security). We report on the results of two online surveys—one with 231 security experts and one with 294 MTurk participants—on what the practices and attitudes of each group are. Our findings show a discrepancy between the security practices that experts and non-experts report taking. For instance, while experts most frequently report installing software updates, using two-factor authentication and using a password manager to stay safe online, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently.",
"title": ""
},
{
"docid": "e652a726883bd507c34c464991ff9a68",
"text": "A compact wideband open-end slot antenna is proposed for DCS, PCS, UMTS, WLAN, and WiMAX band applications. The antenna is composed of an open-ended slot and a central line within the slot. The open-end slot decreases the antenna size in half, compared to a conventional slot-ring antenna. Furthermore, the narrowed slot and central line miniaturize the antenna and obtain wideband performance using the inherent impedance-matching network of the central line. The proposed antenna obtained an omnidirectional radiation pattern with an antenna gain of larger than 1 dBi and an antenna efficiency of greater than 50% over 1710-3660 MHz.",
"title": ""
},
{
"docid": "6cf7a5286a03190b0910380830968351",
"text": "In this paper, the mechanical and aerodynamic design, carbon composite production, hierarchical control system design and vertical flight tests of a new unmanned aerial vehicle, which is capable of VTOL (vertical takeoff and landing) like a helicopter and long range horizontal flight like an airplane, are presented. Real flight tests show that the aerial vehicle can successfully operate in VTOL mode. Kalman filtering is employed to obtain accurate roll and pitch angle estimations.",
"title": ""
},
{
"docid": "b78d5e7047d340ebef8f4e80d28ab4d9",
"text": "Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the image captured. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing techniques can handle light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of the possible presence of an artifical light source into consideration. Once the depth map, i.e., distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source is employed during the image capturing process. After compensating the effect of artifical light, the haze phenomenon and discrepancy in wavelength attenuation along the underwater propagation path to camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the WCID proposed.",
"title": ""
},
{
"docid": "78982bfdcf476081bd708c8aa2e5c5bd",
"text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While sparse point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work incorporates a real-time deep-learned object detector to the monocular SLAM framework for representing generic objects as quadrics that permit detections to be seamlessly integrated while allowing the real-time performance. Finer reconstruction of an object, learned by a CNN network, is also incorporated and provides a shape prior for the quadric leading further refinement. To capture the dominant structure of the scene, additional planar landmarks are detected by a CNN-based plane detector and modelled as landmarks in the map. Experiments show that the introduced plane and object landmarks and the associated constraints, using the proposed monocular plane detector and incorporated object detector, significantly improve camera localization and lead to a richer semantically more meaningful map.",
"title": ""
},
{
"docid": "d2ca6d41e582c798bc7c53e932fd8dec",
"text": "How to measure usability is an important question in HCI research and user interface evaluation. We review current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguish several problems with the measures, including whether they actually measure usability, if they cover usability broadly, how they are reasoned about, and if they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, we discuss challenges for studies of usability and for research into how to measure usability. The challenges are to distinguish and empirically compare subjective and objective measures of usability; to focus on developing and employing measures of learning and retention; to study long-term use and usability; to extend measures of satisfaction beyond post-use questionnaires; to validate and standardize the host of subjective satisfaction questionnaires used; to study correlations between usability measures as a means for validation; and to use both micro and macro tasks and corresponding measures of usability. In conclusion, we argue that increased attention to the problems identified and challenges discussed may strengthen studies of usability and usability research. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6da5d72c237948b03cc6a818884ff937",
"text": "This paper develops a model of conversion behavior (i.e., converting store visits into purchases) that predicts each customers probability of purchasing based on an observed history of visits and purchases. We offer an individual-level probability model that allows for consumer heterogeneity in a very flexible manner. We allow visits to play very different roles in the purchasing process. For example, some visits are motivated by a planned purchase while others are simply browsing visits. The Conversion Model in this paper has the flexibility to accommodate a number of visit-to-purchase relationships. Finally, consumers shopping behavior may evolve over time as a function of past experiences. Thus, the Conversion Model also allows for non-stationarity in behavior. Specifically, our Conversion Model decomposes an individuals purchasing conversion behavior into a visit effect and a purchasing threshold effect. Each component is allowed to vary across households as well as over time. We then apply this model to the problem of managing visitor traffic. By predicting purchasing probabilities for a given visit, the Conversion Model can identify those visits that are likely to result in a purchase. These visits should be re-directed to a server that will provide a better shopping experience while those visitors that are less likely to result in a purchase may be identified as targets for a promotion.",
"title": ""
},
{
"docid": "44ca351c024e61b06b1709ba0e4db44f",
"text": "Rootkits affect system security by modifying kernel data structures to achieve a variety of malicious goals. While early rootkits modified control data structures, such as the system call table and values of function pointers, recent work has demonstrated rootkits that maliciously modify noncontrol data. Most prior techniques for rootkit detection have focused solely on detecting control data modifications and, therefore, fail to detect such rootkits. This paper presents a novel technique to detect rootkits that modify both control and noncontrol data. The main idea is to externally observe the execution of the kernel during an inference phase and hypothesize invariants on kernel data structures. A rootkit detection phase uses these invariants as specifications of data structure integrity. During this phase, violation of invariants indicates an infection. We have implemented Gibraltar, a prototype tool that infers kernel data structure invariants and uses them to detect rootkits. Experiments show that Gibraltar can effectively detect previously known rootkits, including those that modify noncontrol data structures.",
"title": ""
},
{
"docid": "93e93d2278706638859f5f4b1601bfa6",
"text": "To acquire accurate, real-time hyperspectral images with high spatial resolution, we develop two types of low-cost, lightweight Whisk broom hyperspectral sensors that can be loaded onto lightweight unmanned autonomous vehicle (UAV) platforms. A system is composed of two Mini-Spectrometers, a polygon mirror, references for sensor calibration, a GPS sensor, a data logger and a power supply. The acquisition of images with high spatial resolution is realized by a ground scanning along a direction perpendicular to the flight direction based on the polygon mirror. To cope with the unstable illumination condition caused by the low-altitude observation, skylight radiation and dark current are acquired in real-time by the scanning structure. Another system is composed of 2D optical fiber array connected to eight Mini-Spectrometers and a telephoto lens, a convex lens, a micro mirror, a GPS sensor, a data logger and a power supply. The acquisition of images is realized by a ground scanning based on the rotation of the micro mirror.",
"title": ""
},
{
"docid": "31835855800826146ac689ab5d000344",
"text": "The article emphasizes the ability of new nonlinear resistive materials to be tuned for specific applications, which opens up new possibilities in designing electric field control of products for medium and high voltage applications.",
"title": ""
},
{
"docid": "62493d2bf56e2de680d396e407bf41d5",
"text": "In this paper, we investigate the problem of finding the common tangents of two convex polygons that intersect in two (unknown) points. First, we give a @(log’ n) bound for algorithms that store the polygons in independent arrays. Second, we show how to beat the lower bound if the vertices of the convex polygons are drawn from a fixed set of n points. We introduce a data structure called a compact interval tree that supports common tangent computations, as well as the standard binary-search-based queries, in O(log n) time apiece. Third, we apply compact interval trees to solve the subpath hull query problem: given a simple path, preprocess it so that we can find the convex hull of a query subpath quickly. With O(n log n) preprocessing, we can assemble a compact interval tree that represents the convex hull of a query subpath in O(log n) time. In order to represent arrangements of lines implicitly, Edelsbrunner et al. [l] used a less efficient structure, called bridge trees, to solve the subpath hull query problem. Our compact interval trees improve their results by a factor of O(logn). Thus, the present paper replaces the paper on bridge trees referred to by [l].",
"title": ""
}
] |
scidocsrr
|
51c6bdc6b899f54df1b3a92bdc7d4234
|
Clinical Outcomes of Impending Nasal Skin Necrosis Related to Nose and Nasolabial Fold Augmentation with Hyaluronic Acid Fillers.
|
[
{
"docid": "01ed88c12ed9b2ca96cdf46700005493",
"text": "Using soft tissue fillers to correct postrhinoplasty deformities in the nose is appealing. Fillers are minimally invasive and can potentially help patients who are concerned with the financial expense, anesthetic risk, or downtime generally associated with a surgical intervention. A variety of filler materials are currently available and have been used for facial soft tissue augmentation. Of these, hyaluronic acid (HA) derivatives, calcium hydroxylapatite gel (CaHA), and silicone have most frequently been used for treating nasal deformities. While effective, silicone is known to cause severe granulomatous reactions in some patients and should be avoided. HA and CaHA are likely safer, but still may occasionally lead to complications such as infection, thinning of the skin envelope, and necrosis. Nasal injection technique must include sub-SMAS placement to eliminate visible or palpable nodularity. Restricting the use of fillers to the nasal dorsum and sidewalls minimizes complications because more adverse events occur after injections to the nasal tip and alae. We believe that HA and CaHA are acceptable for the treatment of postrhinoplasty deformities in carefully selected patients; however, patients who are treated must be followed closely for complications. The use of any soft tissue filler in the nose should always be approached with great caution and with a thorough consideration of a patient's individual circumstances.",
"title": ""
}
] |
[
{
"docid": "552a1dae3152fcc2c19a83eb26bc1021",
"text": "Several new algorithms for camera-based fall detection have been proposed in the literature recently, with the aim to monitor older people at home so nurses or family members can be warned in case of a fall incident. However, these algorithms are evaluated almost exclusively on data captured in controlled environments, under optimal conditions (simple scenes, perfect illumination and setup of cameras), and with falls simulated by actors. In contrast, we collected a dataset based on real life data, recorded at the place of residence of four older persons over several months. We showed that this poses a significantly harder challenge than the datasets used earlier. The image quality is typically low. Falls are rare and vary a lot both in speed and nature. We investigated the variation in environment parameters and context during the fall incidents. We found that various complicating factors, such as moving furniture or the use of walking aids, are very common yet almost unaddressed in the literature. Under such circumstances and given the large variability of the data in combination with the limited number of examples available to train the system, we posit that simple yet robust methods incorporating, where available, domain knowledge (e.g. the fact that the background is static or that a fall usually involves a downward motion) seem to be most promising. Based on these observations, we propose a new fall detection system. It is based on background subtraction and simple measures extracted from the dominant foreground object such as aspect ratio, fall angle and head speed. We discuss the results obtained, with special emphasis on particular difficulties encountered under real world circumstances.",
"title": ""
},
{
"docid": "2ded5ae948ef41d64a857c824f0c2246",
"text": "While social media offer great communication opportunities, they also increase the vulnerability of young people to threatening situations online. Recent studies report that cyberbullying constitutes a growing problem among youngsters. Successful prevention depends on the adequate detection of potentially harmful messages and the information overload on the Web requires intelligent systems to identify potential risks automatically. The focus of this paper is on automatic cyberbullying detection in social media text by modelling posts written by bullies, victims, and bystanders of online bullying. We describe the collection and fine-grained annotation of a cyberbullying corpus for English and Dutch and perform a series of binary classification experiments to determine the feasibility of automatic cyberbullying detection. We make use of linear support vector machines exploiting a rich feature set and investigate which information sources contribute the most for the task. Experiments on a hold-out test set reveal promising results for the detection of cyberbullying-related posts. After optimisation of the hyperparameters, the classifier yields an F1 score of 64% and 61% for English and Dutch respectively, and considerably outperforms baseline systems.",
"title": ""
},
{
"docid": "72fb6765b43f47abc129c073bfdcdba5",
"text": "The General Data Protection Regulation (GDPR) is a European Union regulation that will replace the existing Data Protection Directive on 25 May 2018. The most significant change is a huge increase in the maximum fine that can be levied for breaches of the regulation. Yet fewer than half of UK companies are fully aware of GDPR—and a number of those who were preparing for it stopped doing so when the Brexit vote was announced. A last-minute rush to become compliant is therefore expected, and numerous companies are starting to offer advice, checklists and consultancy on how to comply with GDPR. In such an environment, artificial intelligence technologies ought to be able to assist by providing best advice; asking all and only the relevant questions; monitoring activities; and carrying out assessments. The paper considers four areas of GDPR compliance where rule based technologies and/or machine learning techniques may be relevant: Following compliance checklists and codes of conduct; Supporting risk assessments; Complying with the new regulations regarding technologies that perform automatic profiling; Complying with the new regulations concerning recognising and reporting breaches of security. It concludes that AI technology can support each of these four areas. The requirements that GDPR (or organisations that need to comply with GDPR) state for explanation and justification of reasoning imply that rule-based approaches are likely to be more helpful than machine learning approaches. However, there may be good business reasons to take a different approach in some circumstances.",
"title": ""
},
{
"docid": "aac326acf267f3299f03b9b426c8c9ac",
"text": "Recently, Internet of Things (IoT) and cloud computing (CC) have been widely studied and applied in many fields, as they can provide a new method for intelligent perception and connection from M2M (including man-to-man, man-to-machine, and machine-to-machine), and on-demand use and efficient sharing of resources, respectively. In order to realize the full sharing, free circulation, on-demand use, and optimal allocation of various manufacturing resources and capabilities, the applications of the technologies of IoT and CC in manufacturing are investigated in this paper first. Then, a CC- and IoT-based cloud manufacturing (CMfg) service system (i.e., CCIoT-CMfg) and its architecture are proposed, and the relationship among CMfg, IoT, and CC is analyzed. The technology system for realizing the CCIoT-CMfg is established. Finally, the advantages, challenges, and future works for the application and implementation of CCIoT-CMfg are discussed.",
"title": ""
},
{
"docid": "6d11d47e6549ac4d9f369772e78884d8",
"text": "A novel analytical model of inductively coupled wireless power transfer is presented. For the first time, the effects of coil misalignment and geometry are addressed in a single mathematical expression. In the applications envisaged, such as radio frequency identification (RFID) and biomedical implants, the receiving coil is normally significantly smaller than the transmitting coil. Formulas are derived for the magnetic field at the receiving coil when it is laterally and angularly misaligned from the transmitting coil. Incorporating this magnetic field solution with an equivalent circuit for the inductive link allows us to introduce a power transfer formula that combines coil characteristics and misalignment factors. The coil geometries considered are spiral and short solenoid structures which are currently popular in the RFID and biomedical domains. The novel analytical power transfer efficiency expressions introduced in this study allow the optimization of coil geometry for maximum power transfer and misalignment tolerance. The experimental results show close correlation with the theoretical predictions. This analytic technique can be widely applied to inductive wireless power transfer links without the limitations imposed by numerical methods.",
"title": ""
},
{
"docid": "90469bbf7cf3216b2ab1ee8441fbce14",
"text": "This work presents the evolution of a solution for predictive maintenance to a Big Data environment. The proposed adaptation aims for predicting failures on wind turbines using a data-driven solution deployed in the cloud and which is composed by three main modules. (i) A predictive model generator which generates predictive models for each monitored wind turbine by means of Random Forest algorithm. (ii) A monitoring agent that makes predictions every 10 minutes about failures in wind turbines during the next hour. Finally, (iii) a dashboard where given predictions can be visualized. To implement the solution Apache Spark, Apache Kafka, Apache Mesos and HDFS have been used. Therefore, we have improved the previous work in terms of data process speed, scalability and automation. In addition, we have provided fault-tolerant functionality with a centralized access point from where the status of all the wind turbines of a company localized all over the world can be monitored, reducing O&M costs.",
"title": ""
},
{
"docid": "b510e02eccd0155d5d411be805f81009",
"text": "Dynamically typed languages trade flexibility and ease of use for safety, while statically typed languages prioritize the early detection of bugs, and provide a better framework for structure large programs. The idea of optional typing is to combine the two approaches in the same language: the programmer can begin development with dynamic types, and migrate to static types as the program matures. The challenge is designing a type system that feels natural to the programmer that is used to programming in a dynamic language.\n This paper presents the initial design of Typed Lua, an optionally-typed extension of the Lua scripting language. Lua is an imperative scripting language with first class functions and lightweight metaprogramming mechanisms. The design of Typed Lua's type system has a novel combination of features that preserves some of the idioms that Lua programmers are used to, while bringing static type safety to them. We show how the major features of the type system type these idioms with some examples, and discuss some of the design issues we faced.",
"title": ""
},
{
"docid": "6e73ea43f02dc41b96e5d46bafe3541d",
"text": "Learning discriminative representations for unseen person images is critical for person re-identification (ReID). Most of the current approaches learn deep representations in classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily get over-fitted on a discriminative human body part on the training set. To gain the discriminative power on unseen person images, we propose a deep representation learning procedure named part loss network, to minimize both the empirical classification risk on training person images and the representation learning risk on unseen person images. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with traditional global classification loss, simultaneously considering part loss enforces the deep network to learn representations for different body parts and gain the discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.",
"title": ""
},
{
"docid": "f95bc42d41f4c7448950fa4e1a47ac9a",
"text": "In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models suffer from reasoning over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset ‘NarrativeQA’. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62% (ROUGE-L) relative improvement.",
"title": ""
},
{
"docid": "05bcc85ca42945987a6f0c6c2839fa0a",
"text": "Abstract. Blockchain has many benefits including decentralization, availability, persistency, consistency, anonymity, auditability and accountability, and it also covers a wide spectrum of applications ranging from cryptocurrency, financial services, reputation system, Internet of Things, sharing economy to public and social services. Not only may blockchain be regarded as a by-product of Bitcoin cryptocurrency systems, but also it is a type of distributed ledger technology through using a trustworthy, decentralized log of totally ordered transactions. By summarizing the literature of blockchain, it is found that more papers focus on engineering implementation and realization, while little work has been done on basic theory, for example, mathematical models (Markov processes, queueing theory and game models), performance analysis and optimization of blockchain systems. In this paper, we develop queueing theory of blockchain systems and provide system performance evaluation. To do this, we design a Markovian batch-service queueing system with two different service stages, while the two stages are suitable to well express the mining process in the miners pool and the building of a new blockchain. By using the matrix-geometric solution, we obtain a system stable condition and express three key performance measures: (a) The number of transactions in the queue, (b) the number of transactions in a block, and (c) the transaction-confirmation time. Finally, We use numerical examples to verify computability of our theoretical results. Although our queueing model is simple under exponential or Poisson assumptions, our analytic method will open a series of potentially promising research in queueing theory of blockchain systems.",
"title": ""
},
{
"docid": "35502104f98e7ced7c39d622ed7a82ea",
"text": "When security incidents occur, several challenges exist for conducting an effective forensic investigation of SCADA systems, which run 24/7 to control and monitor industrial and infrastructure processes. The Web extra at http://youtu.be/L0EFnr-famg is an audio interview with Irfan Ahmed about SCADA (supervisory control and data acquisition) systems.",
"title": ""
},
{
"docid": "1c16d6b5072283cfc9301f6ae509ede1",
"text": "T paper introduces a model of collective creativity that explains how the locus of creative problem solving shifts, at times, from the individual to the interactions of a collective. The model is grounded in observations, interviews, informal conversations, and archival data gathered in intensive field studies of work in professional service firms. The evidence suggests that although some creative solutions can be seen as the products of individual insight, others should be regarded as the products of a momentary collective process. Such collective creativity reflects a qualitative shift in the nature of the creative process, as the comprehension of a problematic situation and the generation of creative solutions draw from—and reframe—the past experiences of participants in ways that lead to new and valuable insights. This research investigates the origins of such moments, and builds a model of collective creativity that identifies the precipitating roles played by four types of social interaction: help seeking, help giving, reflective reframing, and reinforcing. Implications of this research include shifting the emphasis in research and management of creativity from identifying and managing creative individuals to understanding the social context and developing interactive approaches to creativity, and from a focus on relatively constant contextual variables to the alignment of fluctuating variables and their precipitation of momentary phenomena.",
"title": ""
},
{
"docid": "bd3776d1dc36d6a91ea73d3c12ca326c",
"text": "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: //github.com/tensorflow/models/tree/master/research/deeplab.",
"title": ""
},
{
"docid": "3a3d6fecb580c2448c21838317aec3e2",
"text": "The Vehicle Routing Problem with Time windows (VRPTW) is an extension of the capacity constrained Vehicle Routing Problem (VRP). The VRPTW is NP-Complete and instances with 100 customers or more are very hard to solve optimally. We represent the VRPTW as a multi-objective problem and present a genetic algorithm solution using the Pareto ranking technique. We use a direct interpretation of the VRPTW as a multi-objective problem, in which the two objective dimensions are number of vehicles and total cost (distance). An advantage of this approach is that it is unnecessary to derive weights for a weighted sum scoring formula. This prevents the introduction of solution bias towards either of the problem dimensions. We argue that the VRPTW is most naturally viewed as a multi-objective problem, in which both vehicles and cost are of equal value, depending on the needs of the user. A result of our research is that the multi-objective optimization genetic algorithm returns a set of solutions that fairly consider both of these dimensions. Our approach is quite effective, as it provides solutions competitive with the best known in the literature, as well as new solutions that are not biased toward the number of vehicles. A set of well-known benchmark data are used to compare the effectiveness of the proposed method for solving the VRPTW.",
"title": ""
},
{
"docid": "3024c0cd172eb2a3ec33e0383ac8ba18",
"text": "The Android packaging model offers ample opportunities for malware writers to piggyback malicious code in popular apps, which can then be easily spread to a large user base. Although recent research has produced approaches and tools to identify piggybacked apps, the literature lacks a comprehensive investigation into such phenomenon. We fill this gap by: 1) systematically building a large set of piggybacked and benign apps pairs, which we release to the community; 2) empirically studying the characteristics of malicious piggybacked apps in comparison with their benign counterparts; and 3) providing insights on piggybacking processes. Among several findings providing insights analysis techniques should build upon to improve the overall detection and classification accuracy of piggybacked apps, we show that piggybacking operations not only concern app code, but also extensively manipulates app resource files, largely contradicting common beliefs. We also find that piggybacking is done with little sophistication, in many cases automatically, and often via library code.",
"title": ""
},
{
"docid": "0afbce731c55b9a3d3ced22ad59aa0ef",
"text": "In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semisupervised learning, and adapt the translated model to better fit the data distribution of the target language.",
"title": ""
},
{
"docid": "4bce6150e9bc23716a19a0d7c02640c0",
"text": "A Data Mining Framework for Constructing Features and Models for Intrusion Detection Systems",
"title": ""
},
{
"docid": "02f97b35b014a55b4a36e22981877784",
"text": "BACKGROUND\nCough is an extremely common problem in pediatrics, mostly triggered and perpetuated by inflammatory processes or mechanical irritation leading to viscous mucous production and increased sensitivity of the cough receptors. Protecting the mucosa might be very useful in limiting the contact with micro-organisms and irritants thus decreasing the inflammation and mucus production. Natural molecular complexes can act as a mechanical barrier limiting cough stimuli with a non pharmacological approach but with an indirect anti-inflammatory action.\n\n\nOBJECTIVE\nAim of the study was to assess the efficacy of a medical device containing natural functional components in the treatment of cough persisting more than 7 days.\n\n\nMETHODS\nIn this randomized, parallel groups, double-blind vs. placebo study, children with cough persisting more than 7 days were enrolled. The clinical efficacy of the study product was assessed evaluating changes in day- and night-time cough scores after 4 and 8 days (t4 and t8) of product administration.\n\n\nRESULTS\nIn the inter-group analysis, in the study product group compared with the placebo group, a significant difference (t4 study treatment vs. t4 placebo, p = 0.03) was observed at t4 in night-time cough score.Considering the intra-group analysis, only the study product group registered a significant improvement from t0 to t4 in both day-time (t0 vs. t4, p = 0.04) and night-time (t0 vs. t4, p = 0.003) cough scores.A significant difference, considering the study product, was also found in the following intra-group analyses: day-time scores at t4 vs. t8 (p =0.01) and at t0 vs. t8 (p = 0.001); night-time scores at t4 vs. t8 (p = 0.05), and at t0 vs. t8 (p = 0.005). Considering a subgroup of patients with higher cough (≥ 3) scores, 92.9% of them in the study product group improved at t0 vs. t4 day-time.\n\n\nCONCLUSIONS\nGrintuss® pediatric syrup showed to possess an interesting profile of efficacy and safety in the treatment of cough persisting more than 7 days.",
"title": ""
},
{
"docid": "073b17e195cec320c20533f154d4ab7f",
"text": "Automatic segmentation of cell nuclei is an essential step in image cytometry and histometry. Despite substantial progress, there is a need to improve accuracy, speed, level of automation, and adaptability to new applications. This paper presents a robust and accurate novel method for segmenting cell nuclei using a combination of ideas. The image foreground is extracted automatically using a graph-cuts-based binarization. Next, nuclear seed points are detected by a novel method combining multiscale Laplacian-of-Gaussian filtering constrained by distance-map-based adaptive scale selection. These points are used to perform an initial segmentation that is refined using a second graph-cuts-based algorithm incorporating the method of alpha expansions and graph coloring to reduce computational complexity. Nuclear segmentation results were manually validated over 25 representative images (15 in vitro images and 10 in vivo images, containing more than 7400 nuclei) drawn from diverse cancer histopathology studies, and four types of segmentation errors were investigated. The overall accuracy of the proposed segmentation algorithm exceeded 86%. The accuracy was found to exceed 94% when only over- and undersegmentation errors were considered. The confounding image characteristics that led to most detection/segmentation errors were high cell density, high degree of clustering, poor image contrast and noisy background, damaged/irregular nuclei, and poor edge information. We present an efficient semiautomated approach to editing automated segmentation results that requires two mouse clicks per operation.",
"title": ""
}
] |
scidocsrr
|
b18b5873bea1e7c849b518f3f3bab4cb
|
High-Content Screening for Quantitative Cell Biology.
|
[
{
"docid": "6104736f53363991d675c2a03ada8c82",
"text": "The term machine learning refers to a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction, based on models derived from existing data. Two facets of mechanization should be acknowledged when considering machine learning in broad terms. Firstly, it is intended that the classification and prediction tasks can be accomplished by a suitably programmed computing machine. That is, the product of machine learning is a classifier that can be feasibly used on available hardware. Secondly, it is intended that the creation of the classifier should itself be highly mechanized, and should not involve too much human input. This second facet is inevitably vague, but the basic objective is that the use of automatic algorithm construction methods can minimize the possibility that human biases could affect the selection and performance of the algorithm. Both the creation of the algorithm and its operation to classify objects or predict events are to be based on concrete, observable data. The history of relations between biology and the field of machine learning is long and complex. An early technique [1] for machine learning called the perceptron constituted an attempt to model actual neuronal behavior, and the field of artificial neural network (ANN) design emerged from this attempt. Early work on the analysis of translation initiation sequences [2] employed the perceptron to define criteria for start sites in Escherichia coli. Further artificial neural network architectures such as the adaptive resonance theory (ART) [3] and neocognitron [4] were inspired from the organization of the visual nervous system. In the intervening years, the flexibility of machine learning techniques has grown along with mathematical frameworks for measuring their reliability, and it is natural to hope that machine learning methods will improve the efficiency of discovery and understanding in the mounting volume and complexity of biological data. This tutorial is structured in four main components. Firstly, a brief section reviews definitions and mathematical prerequisites. Secondly, the field of supervised learning is described. Thirdly, methods of unsupervised learning are reviewed. Finally, a section reviews methods and examples as implemented in the open source data analysis and visualization language R (http://www.r-project.org).",
"title": ""
}
] |
[
{
"docid": "e849812c12446d78885c0f0dc9e4b318",
"text": "OBJECTIVES\nTo differentiate the porphyrias by clinical and biochemical methods.\n\n\nDESIGN AND METHODS\nWe describe levels of blood, urine, and fecal porphyrins and their precursors in the porphyrias and present an algorithm for their biochemical differentiation. Diagnoses were established using clinical and biochemical data. Porphyrin analyses were performed by high performance liquid chromatography.\n\n\nRESULTS AND CONCLUSIONS\nPlasma and urine porphyrin patterns were useful for diagnosis of porphyria cutanea tarda, but not the acute porphyrias. Erythropoietic protoporphyria was confirmed by erythrocyte protoporphyrin assay and erythrocyte fluorescence. Acute intermittent porphyria was diagnosed by increases in urine delta-aminolevulinic acid and porphobilinogen and confirmed by reduced erythrocyte porphobilinogen deaminase activity and normal or near-normal stool porphyrins. Variegate porphyria and hereditary coproporphyria were diagnosed by their characteristic stool porphyrin patterns. This appears to be the most convenient diagnostic approach until molecular abnormalities become more extensively defined and more widely available.",
"title": ""
},
{
"docid": "aa223de93696eec79feb627f899f8e8d",
"text": "The standard life events methodology for the prediction of psychological symptoms was compared with one focusing on relatively minor events, namely, the hassles and uplifts of everyday life. Hassles and Uplifts Scales were constructed and administered once a month for 10 consecutive months to a community sample of middle-aged adults. It was found that the Hassles Scale was a better predictor of concurrent and subsequent psychological symptoms than were the life events scores, and that the scale shared most of the variance in symptoms accounted for by life events. When the effects of life events scores were removed, hassles and symptoms remained significantly correlated. Uplifts were positively related to symptoms for women but not for men. Hassles and uplifts were also shown to be related, although only modestly so, to positive and negative affect, thus providing discriminate validation for hassles and uplifts in comparison to measures of emotion. It was concluded that the assessment of daily hassles and uplifts may be a better approach to the prediction of adaptational outcomes than the usual life events approach.",
"title": ""
},
{
"docid": "ee665e5a3d032a4e9b4e95cddac0f95c",
"text": "On p. 219, we describe the data we collected from BuzzSumo as “the total number of times each article was shared on Facebook” (emph. added). In fact, the BuzzSumo data are the number of engagements with each article, defined as the sum of shares, comments, and other interactions such as “likes.” All references to counts of Facebook shares in the paper and the online appendix based on the BuzzSumo data should be replaced with references to counts of Facebook engagements. None of the tables or figures in either the paper or the online appendix are affected by this change, nor does the change affect the results based on our custom survey. None of the substantive conclusions of the paper are affected with one exception discussed below, where our substantive conclusion is strengthened. Examples of cases where the text should be changed:",
"title": ""
},
{
"docid": "11b687aab787bc65f31d3f3037c2d1ed",
"text": "A review of the literature on the influence of beavers on the environment has been presented with regard to following aspects: (1) specific features of the ecology of beavers crucial for understanding their effects on the environment: (2) changes in the physical characteristics of habitats due to the activity of beavers (beavers as engineers); (3) the role of the beaver as a phytophage; (4) long-term changes of vegetation in beavers’ habitats and the possible consequences of these changes for beavers.",
"title": ""
},
{
"docid": "ca2f6c435c4eac77d6eecaf8d6feea18",
"text": "The fifth edition of the diagnostic and statistical manual of mental disorders (DSM-5) (APA in diagnostic and statistical manual of mental disorders, Author, Washington, 2013) has decided to merge the subtypes of pervasive developmental disorders into a single category of autism spectrum disorder (ASD) on the assumption that they cannot be reliably differentiated from one another. The purpose of this review is to analyze the basis of this assumption by examining the comparative studies between Asperger's disorder (AsD) and autistic disorder (AD), and between pervasive developmental disorder not otherwise specified (PDDNOS) and AD. In all, 125 studies compared AsD with AD. Of these, 30 studies concluded that AsD and AD were similar conditions while 95 studies found quantitative and qualitative differences between them. Likewise, 37 studies compared PDDNOS with AD. Nine of these concluded that PDDNOS did not differ significantly from AD while 28 reported quantitative and qualitative differences between them. Taken together, these findings do not support the conceptualization of AD, AsD and PDDNOS as a single category of ASD. Irrespective of the changes proposed by the DSM-5, future research and clinical practice will continue to find ways to meaningfully subtype the ASD.",
"title": ""
},
{
"docid": "c86f0c79fc2c823083f57e2425a0ab26",
"text": "Social media services such as Twitter generate phenomenal volume of content for most real-world events on a daily basis. Digging through the noise and redundancy to understand the important aspects of the content is a very challenging task. We propose a search and summarization framework to extract relevant representative tweets from an unfiltered tweet stream in order to generate a coherent and concise summary of an event. We introduce two topic models that take advantage of temporal correlation in the data to extract relevant tweets for summarization. The summarization framework has been evaluated using Twitter data on four real-world events. Evaluations are performed using Wikipedia articles on the events as well as using Amazon Mechanical Turk (MTurk) with human readers (MTurkers). Both experiments show that the proposed models outperform traditional LDA and lead to informative summaries.",
"title": ""
},
{
"docid": "38d9a18ba942e401c3d0638f88bc948c",
"text": "The question whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated for a long time in the research community. In fact, especially under fixed priority systems, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations.",
"title": ""
},
{
"docid": "0c6db1dc918d3e1b4b444b2b1410fb7f",
"text": "The concurrent use of drugs and herbal products is becoming increasingly prevalent over the last decade. Several herbal products have been known to modulate cytochrome P450 (CYP) enzymes and P-glycoprotein (P-gp) which are recognized as representative drug metabolizing enzymes and drug transporter, respectively. Thus, a summary of knowledge on the modulation of CYP and P-gp by commonly used herbs can provide robust fundamentals for optimizing CYP and/or P-gp substrate drug-based therapy. Herein, we review ten popular medicinal and/or dietary herbs as perpetrators of CYP- and P-gp-mediated pharmacokinetic herb-drug interactions. The main focus is placed on previous works on the ability of herbal extracts and their phytochemicals to modulate the expression and function of CYP and P-gp in several in vitro and in vivo animal and human systems.",
"title": ""
},
{
"docid": "02eec4b9078af92a774f6e46b36808f7",
"text": "Cancer cell migration is a plastic and adaptive process integrating cytoskeletal dynamics, cell-extracellular matrix and cell-cell adhesion, as well as tissue remodeling. In response to molecular and physical microenvironmental cues during metastatic dissemination, cancer cells exploit a versatile repertoire of invasion and dissemination strategies, including collective and single-cell migration programs. This diversity generates molecular and physical heterogeneity of migration mechanisms and metastatic routes, and provides a basis for adaptation in response to microenvironmental and therapeutic challenge. We here summarize how cytoskeletal dynamics, protease systems, cell-matrix and cell-cell adhesion pathways control cancer cell invasion programs, and how reciprocal interaction of tumor cells with the microenvironment contributes to plasticity of invasion and dissemination strategies. We discuss the potential and future implications of predicted \"antimigration\" therapies that target cytoskeletal dynamics, adhesion, and protease systems to interfere with metastatic dissemination, and the options for integrating antimigration therapy into the spectrum of targeted molecular therapies.",
"title": ""
},
{
"docid": "be593352763133428b837f1c593f30cf",
"text": "Deep Learning’s recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.",
"title": ""
},
{
"docid": "f5431531ce8e3d67de883d7630a17ed4",
"text": "The dentate gyrus of the hippocampus continues to produce new neurons throughout adulthood. Adult neurogenesis has been linked to hippocampal function, including learning and memory, anxiety regulation and feedback of the stress response. It is thus not surprising that stress, which affects hippocampal function, also alters the production and survival of new neurons. Glucocorticoids, along with other neurochemicals, have been implicated in stress-induced impairment of adult neurogenesis. Paradoxically, increases in corticosterone levels are sometimes associated with enhanced adult neurogenesis in the dentate gyrus. In these circumstances, the factors that buffer against the suppressive influence of elevated glucocorticoids remain unknown; their discovery may provide clues to reversing pathological processes arising from chronic exposure to aversive stress.",
"title": ""
},
{
"docid": "f3dbd127e5d76706b592c6154528a909",
"text": "Due to the undeniable advantage of prediction and proactivity, many research areas and industrial applications are accelerating the pace to keep up with data science and predictive analytics. However and due to three well-known facts, the reactive Complex Event Processing (CEP) technology might lag behind when prediction becomes a requirement. 1st fact: The one and only inference mechanism in this domain is totally guided by CEP rules. 2nd fact: The only way to define a CEP rule is by writing it manually with the help of a human expert. 3rd fact: Experts tend to write reactive CEP rules, because and regardless of the level of expertise, it is nearly impossible to manually write predictive CEP rules. Combining these facts together, the CEP is---and will stay--- a reactive computing technique. Therefore in this article, we present a novel data mining-based approach that automatically learns predictive CEP rules. The approach proposes a new learning algorithm where complex patterns from multivariate time series are learned. Then at run-time, a seamless transformation into the CEP world takes place. The result is a ready-to-use CEP engine with enrolled predictive CEP rules. Many experiments on publicly-available data sets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "560cadfecdf5207851d333b4a122a06d",
"text": "Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack the schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which the resources can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema, as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution to the problem. This is especially promising, since NELL and ReVerb typically achieve a very large coverage, but still still lack a fullfledged clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].",
"title": ""
},
{
"docid": "10d90e9e1ef3b2759cd26e90997879bb",
"text": "Levels of genetic differentiation between populations can be highly variable across the genome, with divergent selection contributing to such heterogeneous genomic divergence. For example, loci under divergent selection and those tightly physically linked to them may exhibit stronger differentiation than neutral regions with weak or no linkage to such loci. Divergent selection can also increase genome-wide neutral differentiation by reducing gene flow (e.g. by causing ecological speciation), thus promoting divergence via the stochastic effects of genetic drift. These consequences of divergent selection are being reported in recently accumulating studies that identify: (i) 'outlier loci' with higher levels of divergence than expected under neutrality, and (ii) a positive association between the degree of adaptive phenotypic divergence and levels of molecular genetic differentiation across population pairs ['isolation by adaptation' (IBA)]. The latter pattern arises because as adaptive divergence increases, gene flow is reduced (thereby promoting drift) and genetic hitchhiking increased. Here, we review and integrate these previously disconnected concepts and literatures. We find that studies generally report 5-10% of loci to be outliers. These selected regions were often dispersed across the genome, commonly exhibited replicated divergence across different population pairs, and could sometimes be associated with specific ecological variables. IBA was not infrequently observed, even at neutral loci putatively unlinked to those under divergent selection. Overall, we conclude that divergent selection makes diverse contributions to heterogeneous genomic divergence. Nonetheless, the number, size, and distribution of genomic regions affected by selection varied substantially among studies, leading us to discuss the potential role of divergent selection in the growth of regions of differentiation (i.e. genomic islands of divergence), a topic in need of future investigation.",
"title": ""
},
{
"docid": "5f6b9395a3cd7af42c4822e2cf7eda7c",
"text": "Unilateral below-knee amputees develop abnormal gait characteristics that include bilateral asymmetries and an elevated metabolic cost relative to non-amputees. In addition, long-term prosthesis use has been linked to an increased prevalence of joint pain and osteoarthritis in the intact leg knee. To improve amputee mobility, prosthetic feet that utilize elastic energy storage and return (ESAR) have been designed, which perform important biomechanical functions such as providing body support and forward propulsion. However, the prescription of appropriate design characteristics (e.g., stiffness) is not well-defined since its influence on foot function and important in vivo biomechanical quantities such as metabolic cost and joint loading remain unclear. The design of feet that improve these quantities could provide considerable advancements in amputee care. Therefore, the purpose of this study was to couple design optimization with dynamic simulations of amputee walking to identify the optimal foot stiffness that minimizes metabolic cost and intact knee joint loading. A musculoskeletal model and distributed stiffness ESAR prosthetic foot model were developed to generate muscle-actuated forward dynamics simulations of amputee walking. Dynamic optimization was used to solve for the optimal muscle excitation patterns and foot stiffness profile that produced simulations that tracked experimental amputee walking data while minimizing metabolic cost and intact leg internal knee contact forces. Muscle and foot function were evaluated by calculating their contributions to the important walking subtasks of body support, forward propulsion and leg swing. The analyses showed that altering a nominal prosthetic foot stiffness distribution by stiffening the toe and mid-foot while making the ankle and heel less stiff improved ESAR foot performance by offloading the intact knee during early to mid-stance of the intact leg and reducing metabolic cost. The optimal design also provided moderate braking and body support during the first half of residual leg stance, while increasing the prosthesis contributions to forward propulsion and body support during the second half of residual leg stance. Future work will be directed at experimentally validating these results, which have important implications for future designs of prosthetic feet that could significantly improve amputee care.",
"title": ""
},
{
"docid": "7cd655bbea3b088618a196382b33ed1e",
"text": "Story generation is a well-recognized task in computational creativity research, but one that can be difficult to evaluate empirically. It is often inefficient and costly to rely solely on human feedback for judging the quality of generated stories. We address this by examining the use of linguistic analyses for automated evaluation, using metrics from existing work on predicting writing quality. We apply these metrics specifically to story continuation, where a model is given the beginning of a story and generates the next sentence, which is useful for systems that interactively support authors’ creativity in writing. We compare sentences generated by different existing models to human-authored ones according to the analyses. The results show some meaningful differences between the models, suggesting that this evaluation approach may be advantageous for future research.",
"title": ""
},
{
"docid": "ec5a556a8330ef2ef0f9465d5cb6e9b8",
"text": "We show that telco big data can make churn prediction much more easier from the $3$V's perspectives: Volume, Variety, Velocity. Experimental results confirm that the prediction performance has been significantly improved by using a large volume of training data, a large variety of features from both business support systems (BSS) and operations support systems (OSS), and a high velocity of processing new coming data. We have deployed this churn prediction system in one of the biggest mobile operators in China. From millions of active customers, this system can provide a list of prepaid customers who are most likely to churn in the next month, having $0.96$ precision for the top $50000$ predicted churners in the list. Automatic matching retention campaigns with the targeted potential churners significantly boost their recharge rates, leading to a big business value.",
"title": ""
},
{
"docid": "da36aa77b26e5966bdb271da19bcace3",
"text": "We present Brian, a new clock driven simulator for spiking neural networks which is available on almost all platforms. Brian is easy to learn and use, highly flexible and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is very well adapted to these goals. Python is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation and a large community of users providing support and extension packages. Brian allows you to write very concise, natural and readable code for simulations, and makes it quick and efficient to play with these models (for example, changing the differential equations doesn't require a recompile of the code). Figure 1 shows an example of a complete network implemented in Brian, a randomly connected network of integrate and fire neurons with exponential inhibitory and excitatory currents (the CUBA network from [1]). Defining the model, running from Seventeenth Annual Computational Neuroscience Meeting: CNS*2008 Portland, OR, USA. 19–24 July 2008",
"title": ""
},
{
"docid": "5a40dc82635b3e9905b652da114eb3f4",
"text": "Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go. However, contemporary DRL systems inherit a number of shortcomings from the current generation of deep learning techniques. For example, they require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings. As proof-ofconcept, we present a preliminary implementation of the architecture and apply it to several variants of a simple video game. We show that the resulting system – though just a prototype – learns effectively, and, by acquiring a set of symbolic rules that are easily comprehensible to humans, dramatically outperforms a conventional, fully neural DRL system on a stochastic variant of the game.",
"title": ""
},
{
"docid": "42908bdaa9e72da204630d2ac25ed830",
"text": "We propose FINET, a system for detecting the types of named entities in short inputs—such as sentences or tweets—with respect to WordNet’s super fine-grained type system. FINET generates candidate types using a sequence of multiple extractors, ranging from explicitly mentioned types to implicit types, and subsequently selects the most appropriate using ideas from word-sense disambiguation. FINET combats data scarcity and noise from existing systems: It does not rely on supervision in its extractors and generates training data for type selection from WordNet and other resources. FINET supports the most fine-grained type system so far, including types with no annotated training data. Our experiments indicate that FINET outperforms state-of-the-art methods in terms of recall, precision, and granularity of extracted types.",
"title": ""
}
] |
scidocsrr
|
8747ae793c49f360839924cb20ab8f18
|
Pronunciation and silence probability modeling for ASR
|
[
{
"docid": "3e0741fb69ee9bdd3cc455577aab4409",
"text": "Recurrent neural network architectures have been shown to efficiently model long term temporal dependencies between acoustic events. However the training time of recurrent networks is higher than feedforward networks due to the sequential nature of the learning algorithm. In this paper we propose a time delay neural network architecture which models long term temporal dependencies with training times comparable to standard feed-forward DNNs. The network uses sub-sampling to reduce computation during training. On the Switchboard task we show a relative improvement of 6% over the baseline DNN model. We present results on several LVCSR tasks with training data ranging from 3 to 1800 hours to show the effectiveness of the TDNN architecture in learning wider temporal dependencies in both small and large data scenarios.",
"title": ""
},
{
"docid": "8fd5c54b5c45b6380980caa6d5fe7cfb",
"text": "In reverberant environments there are long term interactions between speech and corrupting sources. In this paper a time delay neural network (TDNN) architecture, capable of learning long term temporal relationships and translation invariant representations, is used for reverberation robust acoustic modeling. Further, iVectors are used as an input to the neural network to perform instantaneous speaker and environment adaptation, providing 10% relative improvement in word error rate. By subsampling the outputs at TDNN layers across time steps, training time is reduced. Using a parallel training algorithm we show that the TDNN can be trained on ∼ 5500 hours of speech data in 3 days using up to 32 GPUs. The TDNN is shown to provide results competitive with state of the art systems in the IARPA ASpIRE challenge, with 27.7% WER on the dev test set.",
"title": ""
}
] |
[
{
"docid": "16c58710e1285a55d75f996c2816b9b0",
"text": "Face morphing is an effect that shows a transition from one face image to another face image smoothly. It has been widely used in various fields of work, such as animation, movie production, games, and mobile applications. Two types of methods have been used to conduct face morphing. Semi automatic mapping methods, which allow users to map corresponding pixels between two face images, can produce a smooth transition of result images. Mapping the corresponding pixel between two human face images is usually not trivial. Fully automatic methods have also been proposed for morphing between two images having similar face properties, where the results depend on the similarity of the input face images. In this project, we apply a critical point filter to determine facial features for automatically mapping the correspondence of the input face images. The critical point filters can be used to extract the main features of input face images, including color, position and edge of each facial component in the input images. An energy function is also proposed for mapping the corresponding pixels between pixels of the input face images. The experimental results show that position of each face component plays a more important role than the edge and color of the face. We can summarize that, using the critical point filter, the proposed method to generate face morphing can produce a smooth image transition with our adjusted weight function.",
"title": ""
},
{
"docid": "d62dcea792acd1710b7a9f45dacd9336",
"text": "Hashing methods have been widely used for applications of large-scale image retrieval and classification. Non-deep hashing methods using handcrafted features have been significantly outperformed by deep hashing methods due to their better feature representation and end-to-end learning framework. However, the most striking successes in deep hashing have mostly involved discriminative models, which require labels. In this paper, we propose a novel unsupervised deep hashing method, named Deep Discrete Hashing (DDH), for large-scale image retrieval and classification. In the proposed framework, we address two main problems: 1) how to directly learn discrete binary codes? 2) how to equip the binary representation with the ability of accurate image retrieval and classification in an unsupervised way? We resolve these problems by introducing an intermediate variable and a loss function steering the learning process, which is based on the neighborhood structure in the original space. Experimental results on standard datasets (CIFAR-10, NUS-WIDE, and Oxford-17) demonstrate that our DDH significantly outperforms existing hashing methods by large margin in terms of mAP for image retrieval and object recognition. Code is available at https://github.com/htconquer/ddh.",
"title": ""
},
{
"docid": "b09cacfb35cd02f6a5345c206347c6ae",
"text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.",
"title": ""
},
{
"docid": "11b20602fc9d6e97a5bcc857da7902d0",
"text": "This research investigates the Quality of Service (QoS) interaction at the edge of differentiated service (DiffServ) domain, denoted by video gateway (VG). VG is responsible for coordinating the QoS mapping between video applications and DiffServ enabled network. To accomplish the goal of achieving economical and high-quality end-to-end video streaming, which utilizes its awareness of relative service differentiation, the proposed QoS control framework includes the following three components: 1) the relative priority based indexing and categorization of streaming video content at sender, 2) the differentiated QoS levels with load variation in DiffServ networks, and 3) the feedforward and feedback mechanisms assisting QoS mapping of categorized index to DS level at the proposed VG. Especially, we focus on building a framework for dynamic QoS mapping, which intends to overcome both the QoS demand variations of CM applications (e.g., varying priorities from aggregated/categorized packets) and the QoS supply variations of DiffServ network (e.g., varying loss/delay due to fluctuating network loads). Thus, with the proposed QoS controls in both feedforward and feedback fashion, enhanced quality provisioning for CM applications (especially video streaming) is investigated under the given pricing model (e.g., DS level differentiated price/packet).",
"title": ""
},
{
"docid": "3a5d43d86d39966aca2d93d1cf66b13d",
"text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.",
"title": ""
},
{
"docid": "f9258f6e4b0613bedf56b82a44d3c82a",
"text": "Recent surveys and sample collection have confirmed the endemicity of Dermanyssus gallinae in poultry farming worldwide. The reduction in number and efficacy of many acaricide products has accentuated the prevalence rates of this poultry ectoparasite observed more often in non intensive systems such as free-range, barns or backyards and more often in laying hens than in broiler birds. The lack of knowledge from producers and the utilisation of inadequate, ineffective or illegal chemicals in many countries have been responsible for the increase in infestation rates due to the spread of acaricide resistance. The costs for control methods and treatment are showing the tremendous economic impact of this ectoparasite on poultry meat and egg industries. This paper reviews the prevalence rates of this poultry pest in different countries and for different farming systems and the production parameters which could be linked to this pest proliferation.",
"title": ""
},
{
"docid": "1ef82e0ef6860f66aadce8073617eb99",
"text": "The emergence and availability of remote storage providers prompted work in the security community that allows a client to verify integrity and availability of the data she outsourced to an untrusted remove storage server at a relatively low cost. Most recent solutions to this problem allow the client to read and update (insert, modify, or delete) stored data blocks while trying to lower the overhead associated with verifying data integrity. In this work we develop a novel and efficient scheme, computation and communication overhead of which is orders of magnitude lower than those of other state-of-the-art schemes. Our solution has a number of new features such as a natural support for operations on ranges of blocks, and revision control. The performance guarantees that we achieve stem from a novel data structure, termed balanced update tree, and removing the need to verify update operations.",
"title": ""
},
{
"docid": "2a7062cb92736cdaa782759fb5c941ec",
"text": "The main goal of agile development methods is to reduce risk thereby resulting in more successful and effective information systems. Indeed, Finding out, analyzing, prioritising, and planning risks are important activities in all development approaches, including Agile development. However, while there is an extensive body of academic literature on risk management, few little researches have attempted to study risk management in Agile development projects. This paper discusses the Risk Management (RM) activities in Agile software development, and shows how using Agile can help with traditional RM process. Finally, the paper sheds light on the benefits and limitations of some Agile methodologies from a RM perspective and highlights directions for future work in this topic.",
"title": ""
},
{
"docid": "8a8841e81793f19fe82106fbe5df91d9",
"text": "In this paper, we present anO(n log3 n) time algorithm for finding shortest paths in a planar graph with real weights. This can be compared to the best previous strongly polynomial time algorithm developed by Lipton, Rose, and Tarjan in 1978 which ran inO(n3=2) time, and the best polynomial algorithm developed by Henzinger, Klein, Subramanian, and Rao in 1994 which ran ine O(n4=3) time. We also present significantly improved algorithms for query and dynamic versions of the shortest path problems.",
"title": ""
},
{
"docid": "52da42b320e23e069519c228f1bdd8b5",
"text": "Over the last few years, C-RAN is proposed as a transformative architecture for 5G cellular networks that brings the flexibility and agility of cloud computing to wireless communications. At the same time, content caching in wireless networks has become an essential solution to lower the content- access latency and backhaul traffic loading, leading to user QoE improvement and network cost reduction. In this article, a novel cooperative hierarchical caching (CHC) framework in C-RAN is introduced where contents are jointly cached at the BBU and at the RRHs. Unlike in traditional approaches, the cache at the BBU, cloud cache, presents a new layer in the cache hierarchy, bridging the latency/capacity gap between the traditional edge-based and core-based caching schemes. Trace-driven simulations reveal that CHC yields up to 51 percent improvement in cache hit ratio, 11 percent decrease in average content access latency, and 18 percent reduction in backhaul traffic load compared to the edge-only caching scheme with the same total cache capacity. Before closing the article, we discuss the key challenges and promising opportunities for deploying content caching in C-RAN in order to make it an enabler technology in 5G ultra-dense systems.",
"title": ""
},
{
"docid": "2f05b806a6ee9fc8ee54a316269919c1",
"text": "Computers are increasingly expected to make smart decisions based on what humans consider commonsense. This would require computers to understand their environment, including properties of objects in the environment (e.g., a wheel is round), relations between objects (e.g., two wheels are part of a bike, or a bike is slower than a car) and interactions of objects (e.g., a driver drives a car on the road). The goal of this dissertation is to investigate automated methods for acquisition of large-scale, semantically organized commonsense knowledge. This goal poses challenges because commonsense knowledge is: (i) implicit and sparse as humans do not explicitly express the obvious, (ii) multimodal as it is spread across textual and visual contents, (iii) affected by reporting bias as uncommon facts are reported disproportionally, (iv) context dependent and thus holds merely with a certain confidence. Prior state-of-the-art methods to acquire commonsense are either not automated or based on shallow representations. Thus, they cannot produce large-scale, semantically organized commonsense knowledge. To achieve the goal, we divide the problem space into three research directions, making up the core contributions of this dissertation: • Properties of objects: acquisition of properties like hasSize, hasShape, etc. We develop WebChild, a semi-supervised method to compile semantically organized properties. • Relationships between objects: acquisition of relations like largerThan, partOf, memberOf, etc. We develop CMPKB, a linear-programming based method to compile comparative relations, and, we develop PWKB, a method based on statistical and logical inference to compile part-whole relations. • Interactions between objects: acquisition of activities like drive a car, park a car, etc., with attributes such as temporal or spatial attributes. We develop Knowlywood, a method based on semantic parsing and probabilistic graphical models to compile activity knowledge. Together, these methods result in the construction of a large, clean and semantically organized Commonsense Knowledge Base that we call WebChild KB.",
"title": ""
},
{
"docid": "5f4b28e79b7f193b290da2049238cdc2",
"text": "3D information visualization has existed for more than 100 years. 3D offers intrinsic attributes such as an extra dimension for encoding position and length, meshes and surfaces; lighting and separation. Further 3D can aid mental models for configuration of data within a 3D spatial framework. Perceived issues with 3D are solvable and successful, specialized information visualizations can be built.",
"title": ""
},
{
"docid": "c29349c32074392e83f51b1cd214ec8a",
"text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"title": ""
},
{
"docid": "047976efa58042e7e822e9337bbe582e",
"text": "Macrophages are central players in immune response, manifesting divergent phenotypes to control inflammation and innate immunity through release of cytokines and other signaling factors. Recently, the focus on metabolism has been reemphasized as critical signaling and regulatory pathways of human pathophysiology, ranging from cancer to aging, often converge on metabolic responses. Here, we used genome-scale modeling and multi-omics (transcriptomics, proteomics, and metabolomics) analysis to assess metabolic features that are critical for macrophage activation. We constructed a genome-scale metabolic network for the RAW 264.7 cell line to determine metabolic modulators of activation. Metabolites well-known to be associated with immunoactivation (glucose and arginine) and immunosuppression (tryptophan and vitamin D3) were among the most critical effectors. Intracellular metabolic mechanisms were assessed, identifying a suppressive role for de-novo nucleotide synthesis. Finally, underlying metabolic mechanisms of macrophage activation are identified by analyzing multi-omic data obtained from LPS-stimulated RAW cells in the context of our flux-based predictions. Our study demonstrates metabolism's role in regulating activation may be greater than previously anticipated and elucidates underlying connections between activation and metabolic effectors.",
"title": ""
},
{
"docid": "5182d5c7bff7ebc4b2a3491e115bd602",
"text": "Planning problems are among the most important and well-studied problems in artificial intelligence. They are most typically solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back-up those evaluations to the root of a search tree. Among these algorithms, Monte-Carlo tree search (MCTS) is one of the most general, powerful and widely used. A typical implementation of MCTS uses cleverly designed rules, optimised to the particular characteristics of the domain. These rules control where the simulation traverses, what to evaluate in the states that are reached, and how to back-up those evaluations. In this paper we instead learn where, what and how to search. Our architecture, which we call an MCTSnet, incorporates simulation-based search inside a neural network, by expanding, evaluating and backing-up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimisation. When applied to small searches in the well-known planning problem Sokoban, the learned search algorithm significantly outperformed MCTS baselines.",
"title": ""
},
{
"docid": "376786b0ac83d5ebd3b28ede1793ad25",
"text": "Now we have a number of database technologies called usually NoSQL, like key-value, column-oriented, and document stores as well as search engines and graph databases. Whereas SQL software vendors offer advanced products with the capability to handle highly complex queries and transactions, NoSQL databases share rather characteristics concerning scaling and performance, as e.g. auto-sharding, distributed query support, and integrated caching. Their drawbacks can be a lack of schema or data consistency, difficulty in testing and maintaining, and absence of a higher query language. Complex data modelling and the SQL language as the only access tool to data are missing here. On the other hand, last studies show that both SQL and NoSQL databases have value for both for transactional and analytical Big Data. Top databases providers offer rearchitected database technologies combining row data stores with columnar in-memory compression enabling processing large data sets and analytical querying, often over massive, continuous data streams. The technological progress led to development of massively parallel processing analytic databases. The paper presents some details of current database technologies, their pros and cons in different application environments, and emerging trends in this area.",
"title": ""
},
{
"docid": "05f8bae694ca21d35d6a30fa6fa62f04",
"text": "To enhance developer productivity, all modern integrated development environments (IDEs) include code suggestion functionality that proposes likely next tokens at the cursor. While current IDEs work well for statically-typed languages, their reliance on type annotations means that they do not provide the same level of support for dynamic programming languages as for statically-typed languages. Moreover, suggestion engines in modern IDEs do not propose expressions or multi-statement idiomatic code. Recent work has shown that language models can improve code suggestion systems by learning from software repositories. This paper introduces a neural language model with a sparse pointer network aimed at capturing very longrange dependencies. We release a large-scale code suggestion corpus of 41M lines of Python code crawled from GitHub. On this corpus, we found standard neural language models to perform well at suggesting local phenomena, but struggle to refer to identifiers that are introduced many tokens in the past. By augmenting a neural language model with a pointer network specialized in referring to predefined classes of identifiers, we obtain a much lower perplexity and a 5 percentage points increase in accuracy for code suggestion compared to an LSTM baseline. In fact, this increase in code suggestion accuracy is due to a 13 times more accurate prediction of identifiers. Furthermore, a qualitative analysis shows this model indeed captures interesting long-range dependencies, like referring to a class member defined over 60 tokens in the past.",
"title": ""
},
{
"docid": "503d57c4a643791cb31817de83ca0b87",
"text": "The proponents of cyberspace promise that online discourse will increase political participation and pave the road for a democratic utopia. This article explores the potential for civil discourse in cyberspace by examining the level of civility in 287 discussion threads in political newsgroups. While scholars often use civility and politeness interchangeably, this study argues that this conflation ignores the democratic merit of robust and heated discussion. Therefore, civility was defined in a broader sense, by identifying as civil behaviors that enhance democratic conversation. In support of this distinction, the study results revealed that most messages posted on political newsgroups were civil, and further suggested that because the absence of face-to-face communication fostered more heated discussion, cyberspace might actually promote Lyotard’s vision of democratic emancipation through disagreement and anarchy (Lyotard, 1984). Thus, this study supported the internet’s potential to revive the public sphere, provided that greater diversity and volume of discussion is present.",
"title": ""
},
{
"docid": "cdc7632a3650ed6d392c9ddcf4003ff9",
"text": "An image related question defines a specific visual task that is required in order to produce an appropriate answer. The answer may depend on a minor detail in the image and require complex reasoning and use of prior knowledge. When humans perform this task, they are able to do it in a flexible and robust manner, integrating modularly any novel visual capability with diverse options for various elaborations of the task. In contrast, current approaches to solve this problem by a machine are based on casting the problem as an end-to-end learning problem, which lacks such abilities. We present a different approach, inspired by the aforementioned human capabilities. The approach is based on the compositional structure of the question. The underlying idea is that a question has an abstract representation based on its structure, which is compositional in nature. The question can consequently be answered by a composition of procedures corresponding to its substructures. The basic elements of the representation are logical patterns, which are put together to represent the question. These patterns include a parametric representation for object classes, properties and relations. Each basic pattern is mapped into a basic procedure that includes meaningful visual tasks, and the patterns are composed to produce the overall answering procedure. The UnCoRd (Understand Compose and Respond) system, based on this approach, integrates existing detection and classification schemes for a set of object classes, properties and relations. These schemes are incorporated in a modular manner, typical also to human vision. The logical composition of real visual tasks allows using meaningful intermediate results to elaborate the answer (e.g. reasoning) and provide corrections and alternatives when answers are negative. In addition, an external knowledge base is also integrated into the process, to supply common-knowledge information that may be required to understand the question and produce an answer. We performed a qualitative analysis of the system, which demonstrates its representation capabilities and provide suggestions for future developments.",
"title": ""
},
{
"docid": "0f24b6c36586505c1f4cc001e3ddff13",
"text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.",
"title": ""
}
] |
scidocsrr
|
75b11edb91505989222b61cf8d93a431
|
How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)
|
[
{
"docid": "034bf47c5982756a1cf1c1ccd777d604",
"text": "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.",
"title": ""
},
{
"docid": "10ae6cdb445e4faf1e6bed5cad6eb3ba",
"text": "It this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will be made available at https://github.com/DmitryUlyanov/texture_nets.",
"title": ""
}
] |
[
{
"docid": "a6213ad508c996c0e62f71e6619654f0",
"text": "Angiogenesis is essential for normal tissue and even more so for solid malignancies. At present, inhibition of tumor angiogenesis is a major focus of anticancer drug development. Bevacizumab, a humanized antibody against VEGF, was the first antiangiogenic agent to be approved for advanced non-small cell lung cancer, breast cancer and colorectal cancer. The most commonly observed adverse events are hypertension, proteinuria, bleeding and thrombosis. Sunitinib, a small molecule blocking intracellular VEGF, KIT, Flt3 and PDGF receptors, which regulate angiogenesis and cell growth, is approved for the treatment of advanced renal cell cancer (RCC) and malignant gastrointestinal stromal tumor. The most frequent adverse events include hand-foot syndrome, stomatitis, diarrhea, fatigue, hypothyroidism and hypertension. Sorafenib, an oral multikinase inhibitor, is approved for the second-line treatment of advanced RCC and upfront treatment of advanced hepatocellular carcinoma. Most common adverse events with sorafenib are dermatologic (hand-foot skin reaction, rash, desquamation), fatigue, diarrhea, nausea, hypothyroidism and hypertension. More recently, cardiovascular toxicity has increasingly been recognized as a potential adverse event associated with sunitinib and sorafenib treatment. Elderly patients are at increased risk of thromboembolic events when receiving bevacizumab, and potentially for cardiac dysfunction when receiving sunitinib or sorafenib. The safety of antiangiogenic drugs is of special concern when taking these agents for longer-term adjuvant or maintenance treatment. Furthermore, newer investigational antiangiogenic drugs are briefly reviewed.",
"title": ""
},
{
"docid": "cf1332882cb6f68549d3c64029db3e9a",
"text": "In this paper, we look at the historical place that chickens have held in media depictions and as entertainment, analyse several types of representations of chickens in video games, and draw out reflections on society in the light of these representations. We also look at real-life, modern historical, and archaeological evidence of chicken treatment and the evolution of social attitudes with regard to animal rights, and deconstruct the depiction of chickens in video games in this light.",
"title": ""
},
{
"docid": "611fdf1451bdd5c683c5be00f46460b8",
"text": "Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.",
"title": ""
},
{
"docid": "8df2c8cf6f6662ed60280b8777c64336",
"text": "In comparative genomics, functional annotations are transferred from one organism to another relying on sequence similarity. With more than 20 million citations in PubMed, text mining provides the ideal tool for generating additional large-scale homology-based predictions. To this end, we have refined a recent dataset of biomolecular events extracted from text, and integrated these predictions with records from public gene databases. Accounting for lexical variation of gene symbols, we have implemented a disambiguation algorithm that uniquely links the arguments of 11.2 million biomolecular events to well-defined gene families, providing interesting opportunities for query expansion and hypothesis generation. The resulting MySQL database, including all 19.2 million original events as well as their homology-based variants, is publicly available at http://bionlp.utu.fi/.",
"title": ""
},
{
"docid": "29bc53c2e50de52e073b7d0e304d0f5f",
"text": "UNLABELLED\nA theory is presented that attempts to answer two questions. What visual contents can an observer consciously access at one moment?\n\n\nANSWER\nonly one feature value (e.g., green) per dimension, but those feature values can be associated (as a group) with multiple spatially precise locations (comprising a single labeled Boolean map). How can an observer voluntarily select what to access?\n\n\nANSWER\nin one of two ways: (a) by selecting one feature value in one dimension (e.g., selecting the color red) or (b) by iteratively combining the output of (a) with a preexisting Boolean map via the Boolean operations of intersection and union. Boolean map theory offers a unified interpretation of a wide variety of visual attention phenomena usually treated in separate literatures. In so doing, it also illuminates the neglected phenomena of attention to structure.",
"title": ""
},
{
"docid": "c776fccb35d9aa43e965604573156c6a",
"text": "BACKGROUND\nMalnutrition in children is a major public health concern. This study aimed to determine the association between dietary diversity and stunting, underweight, wasting, and diarrhea and that between consumption of each specific food group and these nutritional and health outcomes among children.\n\n\nMETHODS\nA nationally representative household survey of 6209 children aged 12 to 59 months was conducted in Cambodia. We examined the consumption of food in the 24 hours before the survey and stunting, underweight, wasting, and diarrhea that had occurred in the preceding 2 weeks. A food variety score (ranging from 0 to 9) was calculated to represent dietary diversity.\n\n\nRESULTS\nStunting was negatively associated with dietary diversity (adjusted odd ratios [ORadj] 0.95, 95% confident interval [CI] 0.91-0.99, P = 0.01) after adjusting for socioeconomic and geographical factors. Consumption of animal source foods was associated with reduced risk of stunting (ORadj 0.69, 95% CI 0.54-0.89, P < 0.01) and underweight (ORadj 0.74, 95% CI 0.57-0.96, P = 0.03). On the other hand, the higher risk of diarrhea was significantly associated with consumption of milk products (ORadj 1.46, 95% CI 1.10-1.92, P = 0.02) and it was significantly pronounced among children from the poorer households (ORadj 1.85, 95% CI 1.17-2.93, P < 0.01).\n\n\nCONCLUSIONS\nConsumption of a diverse diet was associated with a reduction in stunting. In addition to dietary diversity, animal source food was a protective factor of stunting and underweight. Consumption of milk products was associated with an increase in the risk of diarrhea, particularly among the poorer households. Both dietary diversity and specific food types are important considerations of dietary recommendation.",
"title": ""
},
{
"docid": "126b52ab2e2585eabf3345ef7fb39c51",
"text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.",
"title": ""
},
{
"docid": "1e1500d70f232acfd6fd8fe9a629da77",
"text": "Transcranial direct current stimulation (tDCS) is a non-invasive technique for inducing prolonged functional changes in the human cerebral cortex. This simple and safe neurostimulation technique for modulating motor functions in Parkinson’s disease could extend treatment option for patients with movement disorders. We assessed whether tDCS applied daily over the cerebellum (cerebellar tDCS) and motor cortex (M1-tDCS) improves motor and cognitive symptoms and levodopa-induced dyskinesias in patients with Parkinson’s disease (PD). Nine patients (aged 60–85 years; four women; Hoehn & Yahr scale score 2–3) diagnosed as having idiopathic PD were recruited. To evaluate how tDCS (cerebellar tDCS or M1-tDCS) affects motor and cognitive function in PD, we delivered bilateral anodal (2 mA, 20 min, five consecutive days) and sham tDCS, in random order, in three separate experimental sessions held at least 1 month apart. In each session, as outcome variables, patients underwent the Unified Parkinson’s Disease Rating Scale (UPDRS III and IV) and cognitive testing before treatment (baseline), when treatment ended on day 5 (T1), 1 week later (T2), and then 4 weeks later (T3), at the same time each day. After patients received anodal cerebellar tDCS and M1-tDCS for five days, the UPDRS IV (dyskinesias section) improved (p < 0.001). Conversely, sham tDCS, cerebellar tDCS, and M1-tDCS left the other variables studied unchanged (p > 0.05). Despite the small sample size, our preliminary results show that anodal tDCS applied for five consecutive days over the motor cortical areas and cerebellum improves parkinsonian patients’ levodopa-induced dyskinesias.",
"title": ""
},
{
"docid": "8c0d117602ecadee24215f5529e527c6",
"text": "We present the first open-set language identification experiments using one-class classification models. We first highlight the shortcomings of traditional feature extraction methods and propose a hashing-based feature vectorization approach as a solution. Using a dataset of 10 languages from different writing systems, we train a One-Class Support Vector Machine using only a monolingual corpus for each language. Each model is evaluated against a test set of data from all 10 languages and we achieve an average F-score of 0.99, demonstrating the effectiveness of this approach for open-set language identification.",
"title": ""
},
{
"docid": "3f9a46f472ab276c39fb96b78df132ee",
"text": "In this paper, we present a novel technique that enables capturing of detailed 3D models from flash photographs integrating shading and silhouette cues. Our main contribution is an optimization framework which not only captures subtle surface details but also handles changes in topology. To incorporate normals estimated from shading, we employ a mesh-based deformable model using deformation gradient. This method is capable of manipulating precise geometry and, in fact, it outperforms previous methods in terms of both accuracy and efficiency. To adapt the topology of the mesh, we convert the mesh into an implicit surface representation and then back to a mesh representation. This simple procedure removes self-intersecting regions of the mesh and solves the topology problem effectively. In addition to the algorithm, we introduce a hand-held setup to achieve multi-view photometric stereo. The key idea is to acquire flash photographs from a wide range of positions in order to obtain a sufficient lighting variation even with a standard flash unit attached to the camera. Experimental results showed that our method can capture detailed shapes of various objects and cope with topology changes well.",
"title": ""
},
{
"docid": "4ded87cea91470b39a68a3f93b775146",
"text": "The aim of this article is to develop a GPS/IMU Multisensor fusion algorithm, taking context into consideration. Contextual variables are introduced to define fuzzy validity domains of each sensor. The algorithm increases the reliability of the position information. A simulation of this algorithm is then made by fusing GPS and IMU data coming from real tests on a land vehicle. Bad data delivered by GPS sensor are detected and rejected using contextual information thus increasing reliability. Moreover, because of a lack of credibility of GPS signal in some cases and because of the drift of the INS, GPS/INS association is not satisfactory at the moment. In order to avoid this problem, the authors propose to feed the fusion process based on a multisensor Kalman filter directly with the acceleration provided by the IMU. Moreover, the filter developed here gives the possibility to easily add other sensors in order to achieve performances required.",
"title": ""
},
{
"docid": "6fd79de4d6c78245a7a50fa6608d12ab",
"text": "Data-dependent hashing has recently attracted attention due to being able to support efficient retrieval and storage of high-dimensional data, such as documents, images, and videos. In this paper, we propose a novel learning-based hashing method called “supervised discrete hashing with relaxation” (SDHR) based on “supervised discrete hashing” (SDH). SDH uses ordinary least squares regression and traditional zero-one matrix encoding of class label information as the regression target (code words), thus fixing the regression target. In SDHR, the regression target is instead optimized. The optimized regression target matrix satisfies a large margin constraint for correct classification of each example. Compared with SDH, which uses the traditional zero-one matrix, SDHR utilizes the learned regression target matrix and, therefore, more accurately measures the classification error of the regression model and is more flexible. As expected, SDHR generally outperforms SDH. Experimental results on two large-scale image data sets (CIFAR-10 and MNIST) and a large-scale and challenging face data set (FRGC) demonstrate the effectiveness and efficiency of SDHR.",
"title": ""
},
{
"docid": "5efc720f54c94dffc52390d9d5eb7d3f",
"text": "Software-Defined Networking (SDN) is an emerging technology which brings flexibility and programmability to networks and introduces new services and features. However, most SDN architectures have been designed for wired infrastructures, especially in the data center space, and primary trends for wireless and mobile SDN are on the access network and the wireless backhaul. In this paper, we propose several designs for SDN-based Mobile Cloud architectures, focusing on Ad hoc networks. We present the required core components to build SDN-based Mobile Cloud, including variations that are required to accommodate different wireless environments, such as mobility and unreliable wireless link conditions. We also introduce several instances of the proposed architectures based on frequency selection of wireless transmission that are designed around different use cases of SDN-based Mobile Cloud. We demonstrate the feasibility of our architecture by implementing SDN-based routing in the mobile cloud and comparing it with traditional Mobile Ad Hoc Network (MANET) routing. The feasibility of our architecture is shown by achieving high packet delivery ratio with acceptable overhead.",
"title": ""
},
{
"docid": "aeeac5c99b992d56e8d0d7aaa7409a2f",
"text": "1 This working paper addresses the methods and approaches by which firms can successfully improve knowledge work processes. It is derived from a broader research project on \" Managing and Improving Knowledge Work Processes, \" itself a component project of the \" Mastering Information & Technology \" sponsored research program at Ernst & Young's Center for Business Innovation. Other working papers address issues of viewing knowledge work as a process 1 and the content of changes in knowledge work processes.",
"title": ""
},
{
"docid": "e54b9897e79391b86327883164781dff",
"text": "This review paper gives a detailed account of the development of mesh generation techniques on planar regions, over curved surfaces and within volumes for the past years. Emphasis will be on the generation of the unstructured meshes for purpose of complex industrial applications and adaptive refinement finite element analysis. Over planar domains and on curved surfaces, triangular and quadrilateral elements will be used, whereas for three-dimensional structures, tetrahedral and hexahedral elements have to be generated. Recent advances indicate that mesh generation on curved surfaces is quite mature now that elements following closely to surface curvatures could be generated more or less in an automatic manner. As the boundary recovery procedure are getting more and more robust and efficient, discretization of complex solid objects into tetrahedra by means of Delaunay triangulation and other techniques becomes routine work in industrial applications. However, the decomposition of a general object into hexahedral elements in a robust and efficient manner remains as a challenge for researchers in the mesh generation community. Algorithms for the generation of anisotropic meshes on 2D and 3D domains have also been proposed for problems where elongated elements along certain directions are required. A web-site for the latest development in meshing techniques is included for the interested readers.",
"title": ""
},
{
"docid": "d6c3896357022a27513f63a5e3f8b4d3",
"text": "The aging of the world's population presents vast societal and individual challenges. The relatively shrinking workforce to support the growing population of the elderly leads to a rapidly increasing amount of technological innovations in the field of elderly care. In this paper, we present an integrated framework consisting of various intelligent agents with their own expertise and responsibilities working in a holistic manner to assist, care, and accompany the elderly around the clock in the home environment. To support the independence of the elderly for Aging-In-Place (AIP), the intelligent agents must well understand the elderly, be fully aware of the home environment, possess high-level reasoning and learning capabilities, and provide appropriate tender care in the physical, cognitive, emotional, and social aspects. The intelligent agents sense in non-intrusive ways from different sources and provide wellness monitoring, recommendations, and services across diverse platforms and locations. They collaborate together and interact with the elderly in a natural and holistic manner to provide all-around tender care reactively and proactively. We present our implementation of the collaboration framework with a number of realized functionalities of the intelligent agents, highlighting its feasibility and importance in addressing various challenges in AIP.",
"title": ""
},
{
"docid": "054627999c113979429aa6e1fec22257",
"text": "A singular stochastic control problem with state constraints in two-dimensions is studied. We show that the value function is C and its directional derivatives are the value functions of certain optimal stopping problems. Guided by the optimal stopping problem we then introduce the associated no-action region and the free boundary and show that, under appropriate conditions, an optimally controlled process is a Brownian motion in the no-action region with reflection at the free boundary. This proves a conjecture of Martins, Shreve and Soner [22] on the form of an optimal control for this class of singular control problems. An important issue in our analysis is that the running cost is Lipschitz but not C. This lack of smoothness is one of the key obstacles in establishing regularity of the free boundary and of the value function. We show that the free boundary is Lipschitz and that the value function is C in the interior of the no-action region. We then use a verification argument applied to a suitable C approximation of the value function to establish optimality of the conjectured control.",
"title": ""
},
{
"docid": "df78e51c3ed3a6924bf92db6000062e1",
"text": "We study the problem of computing all Pareto-optimal journeys in a dynamic public transit network for two criteria: arrival time and number of transfers. Existing algorithms consider this as a graph problem, and solve it using variants of Dijkstra’s algorithm. Unfortunately, this leads to either high query times or suboptimal solutions. We take a different approach. We introduce RAPTOR, our novel round-based public transit router. Unlike previous algorithms, it is not Dijkstrabased, looks at each route (such as a bus line) in the network at most once per round, and can be made even faster with simple pruning rules and parallelization using multiple cores. Because it does not rely on preprocessing, RAPTOR works in fully dynamic scenarios. Moreover, it can be easily extended to handle flexible departure times or arbitrary additional criteria, such as fare zones. When run on London’s complex public transportation network, RAPTOR computes all Paretooptimal journeys between two random locations an order of magnitude faster than previous approaches, which easily enables interactive applications.",
"title": ""
},
{
"docid": "1a79ce41cde04dedfa662131d80caaca",
"text": "Modern CCD cameras are usually capable of a spatial accuracy greater than 1/50 of the pixel size. However, such accuracy is not easily attained due to various error sources that can affect the image formation process. Current calibration methods typically assume that the observations are unbiased, the only error is the zero-mean independent and identically distributed random noise in the observed image coordinates, and the camera model completely explains the mapping between the 3-D coordinates and the image coordinates. In general, these conditions are not met, causing the calibration results to be less accurate than expected. In this paper, a calibration procedure for precise 3-D computer vision applications is described. It introduces bias correction for circular control points and a non-recursive method for reversing the distortion model. The accuracy analysis is presented and the error sources that can reduce the theoretical accuracy are discussed. The tests with synthetic images indicate improvements in the calibration results in limited error conditions. In real images, the suppression of external error sources becomes a prerequisite for successful calibration.",
"title": ""
},
{
"docid": "b14a77c6e663af1445e466a3e90d4e5f",
"text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.",
"title": ""
}
] |
scidocsrr
|
b092ec94d8d28dbca6227e821130e632
|
Geometric control of multiple quadrotor UAVs transporting a cable-suspended rigid body
|
[
{
"docid": "0004d4b53ff64b7c9ceca944d5091970",
"text": "In this paper we consider the problem of controlling multiple robots manipulating and transporting a payload in three dimensions via cables. We develop robot configurations that ensure static equilibrium of the payload at a desired pose while respecting constraints on the tension and provide analysis of payload stability for these configurations. We demonstrate our methods on a team of aerial robots via simulation and experimentation.",
"title": ""
},
{
"docid": "03b3aa5c74eb4d66c1bd969fbce835c7",
"text": "In the past few decades, unmanned aerial vehicles (UAVs) have become promising mobile platforms capable of navigating semiautonomously or autonomously in uncertain environments. The level of autonomy and the flexible technology of these flying robots have rapidly evolved, making it possible to coordinate teams of UAVs in a wide spectrum of tasks. These applications include search and rescue missions; disaster relief operations, such as forest fires [1]; and environmental monitoring and surveillance. In some of these tasks, UAVs work in coordination with other robots, as in robot-assisted inspection at sea [2]. Recently, radio-controlled UAVs carrying radiation sensors and video cameras were used to monitor, diagnose, and evaluate the situation at Japans Fukushima Daiichi nuclear plant facility [3].",
"title": ""
}
] |
[
{
"docid": "1a3cad2f10dd5c6a5aacb3676ca8917a",
"text": "BACKGROUND\nRecent findings suggest that the mental health costs of unemployment are related to both short- and long-term mental health scars. The main policy tools for dealing with young people at risk of labor market exclusion are Active Labor Market Policy programs for youths (youth programs). There has been little research on the potential effects of participation in youth programs on mental health and even less on whether participation in such programs alleviates the long-term mental health scarring caused by unemployment. This study compares exposure to open youth unemployment and exposure to youth program participation between ages 18 and 21 in relation to adult internalized mental health immediately after the end of the exposure period at age 21 and two decades later at age 43.\n\n\nMETHODS\nThe study uses a five wave Swedish 27-year prospective cohort study consisting of all graduates from compulsory school in an industrial town in Sweden initiated in 1981. Of the original 1083 participants 94.3% of those alive were still participating at the 27-year follow up. Exposure to open unemployment and youth programs were measured between ages 18-21. Mental health, indicated through an ordinal level three item composite index of internalized mental health symptoms (IMHS), was measured pre-exposure at age 16 and post exposure at ages 21 and 42. Ordinal regressions of internalized mental health at ages 21 and 43 were performed using the Polytomous Universal Model (PLUM). Models were controlled for pre-exposure internalized mental health as well as other available confounders.\n\n\nRESULTS\nResults show strong and significant relationships between exposure to open youth unemployment and IMHS at age 21 (OR = 2.48, CI = 1.57-3.60) as well as at age 43 (OR = 1.71, CI = 1.20-2.43). No such significant relationship is observed for exposure to youth programs at age 21 (OR = 0.95, CI = 0.72-1.26) or at age 43 (OR = 1.23, CI = 0.93-1.63).\n\n\nCONCLUSIONS\nA considered and consistent active labor market policy directed at youths could potentially reduce the short- and long-term mental health costs of youth unemployment.",
"title": ""
},
{
"docid": "ab3c0d4fecf7722a4b592473eb0de8dc",
"text": "IOT( Internet of Things) relying on exchange of information through radio frequency identification(RFID), is emerging as one of important technologies that find its use in various applications ranging from healthcare, construction, hospitality to transportation sector and many more. This paper describes about IOT, concentrating its use in improving and securing future shopping. This paper shows how RFID technology makes life easier and secure and thus helpful in the future. KeywordsIOT,RFID, Intelligent shopping, RFID tags, RFID reader, Radio frequency",
"title": ""
},
{
"docid": "13d7abc974d44c8c3723c3b9c8534fec",
"text": "We propose a novel approach to automatically produce multiple colorized versions of a grayscale image. Our method results from the observation that the task of automated colorization is relatively easy given a low-resolution version of the color image. We first train a conditional PixelCNN to generate a low resolution color for a given grayscale image. Then, given the generated low-resolution color image and the original grayscale image as inputs, we train a second CNN to generate a high-resolution colorization of an image. We demonstrate that our approach produces more diverse and plausible colorizations than existing methods, as judged by human raters in a ”Visual Turing Test”.",
"title": ""
},
{
"docid": "cc37744c95e5e41cb46b166132da53f6",
"text": "This work is part of research to build a system to combine facial and prosodic information to recognize commonly occurring user states such as delight and frustration. We create two experimental situations to elicit two emotional states: the first involves recalling situations while expressing either delight or frustration; the second experiment tries to elicit these states directly through a frustrating experience and through a delightful video. We find two significant differences in the nature of the acted vs. natural occurrences of expressions. First, the acted ones are much easier for the computer to recognize. Second, in 90% of the acted cases, participants did not smile when frustrated, whereas in 90% of the natural cases, participants smiled during the frustrating interaction, despite self-reporting significant frustration with the experience. This paper begins to explore the differences in the patterns of smiling that are seen under natural frustration and delight conditions, to see if there might be something measurably different about the smiles in these two cases, which could ultimately improve the performance of classifiers applied to natural expressions.",
"title": ""
},
{
"docid": "77ea0e24066d028d085069cb8f6733e0",
"text": "Road scene reconstruction is a fundamental and crucial module at the perception phase for autonomous vehicles, and will influence the later phase, such as object detection, motion planing and path planing. Traditionally, self-driving car uses Lidar, camera or fusion of the two kinds of sensors for sensing the environment. However, single Lidar or camera-based approaches will miss crucial information, and the fusion-based approaches often consume huge computing resources. We firstly propose a conditional Generative Adversarial Networks (cGANs)-based deep learning model that can rebuild rich semantic scene images from upsampled Lidar point clouds only. This makes it possible to remove cameras to reduce resource consumption and improve the processing rate. Simulation on the KITTI dataset also demonstrates that our model can reestablish color imagery from a single Lidar point cloud, and is effective enough for real time sensing on autonomous driving vehicles.",
"title": ""
},
{
"docid": "8b06564cbdd04df3c18215a05c55f787",
"text": "Floorplanning is the initial step in the process of designing layout of the chip. It is employed to plan the positions and shapes of modules during the process of VLSI Design cycle to optimize the cost metrics like layout area and wirelength. In this paper, a Hybrid Particle Swarm Optimization-Firefly (HPSOFF) algorithm is proposed which integrates Particle Swarm Optimization (PSO), Firefly (FF) and Modified Corner List (MCL) algorithms. Initially, PSO algorithm utilizes MCL algorithm for non-slicing floorplan representations and fitness value evaluation. The solutions obtained from PSO are provided as initial solutions to FF algorithm. Fitness function evaluation and floorplan representations for FF algorithm are again carried out using MCL algorithm. The proposed algorithm is illustrated using Microelectronics Centre of North Carolina (MCNC) and Gigascale Systems Research Centre (GSRC) benchmark circuits. The results obtained are compared with the solutions derived from other stochastic algorithms and the proposed algorithm provides better solutions for both the benchmark circuits. 9",
"title": ""
},
{
"docid": "dec89c3035ce2456c23e547252c5824a",
"text": "This is a survey of some of the nice properties of the associahedron (also called Stasheff polytope) from several points of views: topological, geometrical, combinatorial and algebraic.",
"title": ""
},
{
"docid": "8ef6a44e42dbc3ba2418a5b72243cdd4",
"text": "This study aims to contribute empirical computational results to the understanding of tonality and harmonic structure. It analyses aspects of tonal harmony and harmonic patterns based on a statistical, computational corpus analysis of Bach’s chorales. This is carried out using a novel heuristic method of segmentation developed specifically for that purpose. Analyses of distributions of single pc sets, chord classes and pc set transitions reveal very different structural patterns in both modes, many, but not all of which accord with standard music theory. In addition, most frequent chord transitions are found to exhibit a large degree of asymmetry, or, directedness, in way that for two pc sets A,B the transition frequencies f(A→B) and f(B→A) may differ to a large extent. Distributions of unigrams and bigrams are found to follow a Zipf distribution, i.e. decay in frequency roughly according to 1/x which implies that the majority of the musical structure is governed by a few frequent elements. The findings provide evidence for an underlying harmonic syntax which results in distinct statistical patterns. A subsequent hierarchical cluster analysis of pc sets based on respective antecedent and consequent patterns finds that this information suffices to group chords into meaningful functional groups solely on intrinsic statistical grounds without regard to pitch",
"title": ""
},
{
"docid": "d9830ad99cc9339d62f3c3f5ec1d460a",
"text": "The notion of value and of value creation has raised interest over the last 30 years for both researchers and practitioners. Although several studies have been conducted in marketing, value remains and elusive and often ill-defined concept. A clear understanding of value and value determinants can increase the awareness in strategic decisions and pricing choices. Objective of this paper is to preliminary discuss the main kinds of entity that an ontology of economic value should deal with.",
"title": ""
},
{
"docid": "504cb4e0f2b054f4e0b90fd7d9ab2253",
"text": "A monolithic radio frequency power amplifier for 1.9- 2.6 GHz has been realized in a 0.25 µm SiGe-bipolar technology. The balanced 2-stage push-pull power amplifier uses two on-chip transformers as input-balun and for interstage matching and is operating down to supply voltages as low as 1 V. A microstrip line balun acts as output matching network. At 1 V, 1.5 V, 2 V supply voltages output powers of 20 dBm, 23.5 dBm, 26 dBm are achieved at 2.45 GHz. The respective power added efficiency is 36%, 49.5%, 53%. The small-signal gain is 33 dB.",
"title": ""
},
{
"docid": "c05fc37d9f33ec94f4c160b3317dda00",
"text": "We consider the coordination control for multiagent systems in a very general framework where the position and velocity interactions among agents are modeled by independent graphs. Different algorithms are proposed and analyzed for different settings, including the case without leaders and the case with a virtual leader under fixed position and velocity interaction topologies, as well as the case with a group velocity reference signal under switching velocity interaction. It is finally shown that the proposed algorithms are feasible in achieving the desired coordination behavior provided the interaction topologies satisfy the weakest possible connectivity conditions. Such conditions relate only to the structure of the interactions among agents while irrelevant to their magnitudes and thus are easy to verify. Rigorous convergence analysis is preformed based on a combined use of tools from algebraic graph theory, matrix analysis as well as the Lyapunov stability theory.",
"title": ""
},
{
"docid": "8efee8d7c3bf229fa5936209c43a7cff",
"text": "This research investigates the meaning of “human-computer relationship” and presents techniques for constructing, maintaining, and evaluating such relationships, based on research in social psychology, sociolinguistics, communication and other social sciences. Contexts in which relationships are particularly important are described, together with specific benefits (like trust) and task outcomes (like improved learning) known to be associated with relationship quality. We especially consider the problem of designing for long-term interaction, and define relational agents as computational artifacts designed to establish and maintain long-term social-emotional relationships with their users. We construct the first such agent, and evaluate it in a controlled experiment with 101 users who were asked to interact daily with an exercise adoption system for a month. Compared to an equivalent task-oriented agent without any deliberate social-emotional or relationship-building skills, the relational agent was respected more, liked more, and trusted more, even after four weeks of interaction. Additionally, users expressed a significantly greater desire to continue working with the relational agent after the termination of the study. We conclude by discussing future directions for this research together with ethical and other ramifications of this work for HCI designers.",
"title": ""
},
{
"docid": "7ec5faf2081790e7baa1832d5f9ab5bd",
"text": "Text detection in complex background images is a challenging task for intelligent vehicles. Actually, almost all the widely-used systems focus on commonly used languages while for some minority languages, such as the Uyghur language, text detection is paid less attention. In this paper, we propose an effective Uyghur language text detection system in complex background images. First, a new channel-enhanced maximally stable extremal regions (MSERs) algorithm is put forward to detect component candidates. Second, a two-layer filtering mechanism is designed to remove most non-character regions. Third, the remaining component regions are connected into short chains, and the short chains are extended by a novel extension algorithm to connect the missed MSERs. Finally, a two-layer chain elimination filter is proposed to prune the non-text chains. To evaluate the system, we build a new data set by various Uyghur texts with complex backgrounds. Extensive experimental comparisons show that our system is obviously effective for Uyghur language text detection in complex background images. The F-measure is 85%, which is much better than the state-of-the-art performance of 75.5%.",
"title": ""
},
{
"docid": "46dc618a779bd658bfa019117c880d3a",
"text": "The concept and deployment of Internet of Things (IoT) has continued to develop momentum over recent years. Several different layered architectures for IoT have been proposed, although there is no consensus yet on a widely accepted architecture. In general, the proposed IoT architectures comprise three main components: an object layer, one or more middle layers, and an application layer. The main difference in detail is in the middle layers. Some include a cloud services layer for managing IoT things. Some propose virtual objects as digital counterparts for physical IoT objects. Sometimes both cloud services and virtual objects are included.In this paper, we take a first step toward our eventual goal of developing an authoritative family of access control models for a cloud-enabled Internet of Things. Our proposed access-control oriented architecture comprises four layers: an object layer, a virtual object layer, a cloud services layer, and an application layer. This 4-layer architecture serves as a framework to build access control models for a cloud-enabled IoT. Within this architecture, we present illustrative examples that highlight some IoT access control issues leading to a discussion of needed access control research. We identify the need for communication control within each layer and across adjacent layers (particularly in the lower layers), coupled with the need for data access control (particularly in the cloud services and application layers).",
"title": ""
},
{
"docid": "fe6c80ee021c2b73420fa41248772119",
"text": "Given the significant role of people in the management of security, attention has recently been paid to the issue of how to motivate employees to improve security performance of organizations. However, past work has been dependent on deterrence theory rooted in an extrinsic motivation model to help understand why employees do or do not follow security rules in their organization. We postulated that we could better explain employees’ security-related rule-following behavior with an approach rooted in an intrinsic motivation model. We therefore developed a model of employees’ motivation to comply with IS security policies which incorporated both extrinsic and intrinsic models of human behavior. It was tested with data collected through a survey of 602 employees in the United States. We found that variables rooted in the intrinsic motivation model contributed significantly more to the explained variance of employees’ compliance than did those rooted in the extrinsic motivation model. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ab2096798261a8976846c5f72eeb18ee",
"text": "ion Description and Purpose Variable names Provide human readable names to data addresses Function names Provide human readable names to function addresses Control structures Eliminate ‘‘spaghetti’’ code (The ‘‘goto’’ statement is no longer necessary.) Argument passing Default argument values, keyword specification of arguments, variable length argument lists, etc. Data structures Allow conceptual organization of data Data typing Binds the type of the data to the type of the variable Static Insures program correctness, sacrificing generality. Dynamic Greater generality, sacrificing guaranteed correctness. Inheritance Allows creation of families of related types and easy re-use of common functionality Message dispatch Providing one name to multiple implementations of the same concept Single dispatch Dispatching to a function based on the run-time type of one argument Multiple dispatch Dispatching to a function based on the run-time type of multiple arguments. Predicate dispatch Dispatching to a function based on run-time state of arguments Garbage collection Automated memory management Closures Allow creation, combination, and use of functions as first-class values Lexical binding Provides access to values in the defining context Dynamic binding Provides access to values in the calling context (.valueEnvir in SC) Co-routines Synchronous cooperating processes Threads Asynchronous processes Lazy evaluation Allows the order of operations not to be specified. Infinitely long processes and infinitely large data structures can be specified and used as needed. Applying Language Abstractions to Computer Music The SuperCollider language provides many of the abstractions listed above. SuperCollider is a dynamically typed, single-inheritance, single-argument dispatch, garbage-collected, object-oriented language similar to Smalltalk (www.smalltalk.org). In SuperCollider, everything is an object, including basic types like letters and numbers. Objects in SuperCollider are organized into classes. The UGen class provides the abstraction of a unit generator, and the Synth class represents a group of UGens operating as a group to generate output. An instrument is constructed functionally. That is, when one writes a sound-processing function, one is actually writing a function that creates and connects unit generators. This is different from a procedural or static object specification of a network of unit generators. Instrument functions in SuperCollider can generate the network of unit generators using the full algorithmic capability of the language. For example, the following code can easily generate multiple versions of a patch by changing the values of the variables that specify the dimensions (number of exciters, number of comb delays, number of allpass delays). In a procedural language like Csound or a ‘‘wire-up’’ environment like Max, a different patch would have to be created for different values for the dimensions of the patch.",
"title": ""
},
{
"docid": "0f9151aa44b4175710af869082263631",
"text": "In order to research characteristics of unbalanced rotor system with external excitations, a dynamic model of rotor was established. This model not only considered the influences of the gyroscopic effect and the gravity, but also includes two kinds of unbalance which named static/dynamic unbalance. Use the hypothesis of small angle, expression of forces and torque which caused by translation and rotational of these two unbalances and gravity was derived in detail. Using the Lagrange method with six degree of freedom, motion equations of the system were derived. Combined Runge-Kutta approach, dynamic equations of the model were solved. Moreover, nonlinear vibration characteristics were analyzed by means of three kinds of diagrams. Thus, theoretical foundations are established for optimization design and fault diagnosis of rotor-bearing system.",
"title": ""
},
{
"docid": "820768d9fc4e8f9fb4452e4aeeafd270",
"text": "Lateral epicondylitis (Tennis Elbow) is the most frequent type of myotendinosis and can be responsible for substantial pain and loss of function of the affected limb. Muscular biomechanics characteristics and equipment are important in preventing the conditions. This article present on overview of the current knowledge on lateral Epicondylitis and focuses on Etiology, Diagnosis and treatment strategies, conservative treatment are discussed and recent surgical techniques are outlined. This information should assist health care practitioners who treat patients with this disorder.",
"title": ""
},
{
"docid": "2df6a72762190955d88c8ddb62f338c6",
"text": "Comparative studies have implicated the nucleus accumbens (NAcc) in the anticipation of incentives, but the relative responsiveness of this neural substrate during anticipation of rewards versus punishments remains unclear. Using event-related functional magnetic resonance imaging, we investigated whether the anticipation of increasing monetary rewards and punishments would increase NAcc blood oxygen level-dependent contrast (hereafter, \"activation\") in eight healthy volunteers. Whereas anticipation of increasing rewards elicited both increasing self-reported happiness and NAcc activation, anticipation of increasing punishment elicited neither. However, anticipation of both rewards and punishments activated a different striatal region (the medial caudate). At the highest reward level ($5.00), NAcc activation was correlated with individual differences in self-reported happiness elicited by the reward cues. These findings suggest that whereas other striatal areas may code for expected incentive magnitude, a region in the NAcc codes for expected positive incentive value.",
"title": ""
},
{
"docid": "dee922c700479ea808e59fd323193e48",
"text": "In this article we present a novel mapping system that robustly generates highly accurate 3D maps using an RGB-D camera. Our approach does not require any further sensors or odometry. With the availability of low-cost and light-weight RGB-D sensors such as the Microsoft Kinect, our approach applies to small domestic robots such as vacuum cleaners as well as flying robots such as quadrocopters. Furthermore, our system can also be used for free-hand reconstruction of detailed 3D models. In addition to the system itself, we present a thorough experimental evaluation on a publicly available benchmark dataset. We analyze and discuss the influence of several parameters such as the choice of the feature descriptor, the number of visual features, and validation methods. The results of the experiments demonstrate that our system can robustly deal with challenging scenarios such as fast cameras motions and feature-poor environments while being fast enough for online operation. Our system is fully available as open-source and has already been widely adopted by the robotics community.",
"title": ""
}
] |
scidocsrr
|
f6482aab3677432f68a19f764f308c58
|
The Impact of Agile Methodology ( DSDM ) on Software Project
|
[
{
"docid": "22d17576fef96e5fcd8ef3dd2fb0cc5f",
"text": "I n a previous article (\" Agile Software Development: The Business of Innovation , \" Computer, Sept. 2001, pp. 120-122), we introduced agile software development through the problem it addresses and the way in which it addresses the problem. Here, we describe the effects of working in an agile style. Over recent decades, while market forces, systems requirements, implementation technology, and project staff were changing at a steadily increasing rate, a different development style showed its advantages over the traditional one. This agile style of development directly addresses the problems of rapid change. A dominant idea in agile development is that the team can be more effective in responding to change if it can • reduce the cost of moving information between people, and • reduce the elapsed time between making a decision to seeing the consequences of that decision. To reduce the cost of moving information between people, the agile team works to • place people physically closer, • replace documents with talking in person and at whiteboards, and • improve the team's amicability—its sense of community and morale— so that people are more inclined to relay valuable information quickly. To reduce the time from decision to feedback, the agile team • makes user experts available to the team or, even better, part of the team and • works incrementally. Making user experts available as part of the team gives developers rapid feedback on the implications to the user of their design choices. The user experts, seeing the growing software in its earliest stages, learn both what the developers misunderstood and also which of their requests do not work as well in practice as they had thought. The term agile, coined by a group of people experienced in developing software this way, has two distinct connotations. The first is the idea that the business and technology worlds have become turbulent , high speed, and uncertain, requiring a process to both create change and respond rapidly to change. The first connotation implies the second one: An agile process requires responsive people and organizations. Agile development focuses on the talents and skills of individuals and molds process to specific people and teams, not the other way around. The most important implication to managers working in the agile manner is that it places more emphasis on people factors in the project: amicability, talent, skill, and communication. These qualities become a primary concern …",
"title": ""
},
{
"docid": "105f34c3fa2d4edbe83d184b7cf039aa",
"text": "Software development methodologies are constantly evolving due to changing technologies and new demands from users. Today's dynamic business environment has given rise to emergent organizations that continuously adapt their structures, strategies, and policies to suit the new environment [12]. Such organizations need information systems that constantly evolve to meet their changing requirements---but the traditional, plan-driven software development methodologies lack the flexibility to dynamically adjust the development process.",
"title": ""
},
{
"docid": "62ad7d2ce0451e9bdeafe541174730ef",
"text": "Objectives: The student who successfully completes this course. 1. Understand the genesis of project, program, and portfolio management and their importance to enterprise success. 2. Describes the various approaches for selecting projects, programs, and portfolios. 3. Demonstrates knowledge of project management terms and techniques, such as: • The triple constraint of project management • The project management knowledge areas and process groups • The project life cycle • Tools and techniques of project management, such as: Project selection methods Work breakdown structures Network diagrams, critical path analysis, and critical chain scheduling Cost estimates Earned value management Motivation theory and team building 4. Applies project management concepts by working on a group project as a project",
"title": ""
}
] |
[
{
"docid": "38450c8c93a3a7807972443fc2b59962",
"text": "UNLABELLED\nWe have created a Shiny-based Web application, called Shiny-phyloseq, for dynamic interaction with microbiome data that runs on any modern Web browser and requires no programming, increasing the accessibility and decreasing the entrance requirement to using phyloseq and related R tools. Along with a data- and context-aware dynamic interface for exploring the effects of parameter and method choices, Shiny-phyloseq also records the complete user input and subsequent graphical results of a user's session, allowing the user to archive, share and reproduce the sequence of steps that created their result-without writing any new code themselves.\n\n\nAVAILABILITY AND IMPLEMENTATION\nShiny-phyloseq is implemented entirely in the R language. It can be hosted/launched by any system with R installed, including Windows, Mac OS and most Linux distributions. Information technology administrators can also host Shiny--phyloseq from a remote server, in which case users need only have a Web browser installed. Shiny-phyloseq is provided free of charge under a GPL-3 open-source license through GitHub at http://joey711.github.io/shiny-phyloseq/.",
"title": ""
},
{
"docid": "262be71d64eef2534fab547ec3db6b9a",
"text": "In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance. To detect and mitigate these network attacks, researchers, academicians, and practitioners developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of IDS, since without a timely response IDSs may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, IDSs should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response option for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staffs in understanding how to tackle different attacks with state-of-the-art technologies.",
"title": ""
},
{
"docid": "a69747683329667c0d697f3127fa58c1",
"text": "Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. Our paper introduces an effective algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its effectiveness.",
"title": ""
},
{
"docid": "d02a1619b53ba42e1dbc1fa7a2c65da8",
"text": "The 117 manuscripts submitted for the Hypertext '91 conference were assigned to members of the review committee, using a variety of automated methods based on information retrieval principles and Latent Semantic Indexing. Fifteen reviewers provided exhaustive ratings for the submitted abstracts, indicating how well each abstract matched their interests. The automated methods do a fairly good job of assigning relevant papers for review, but they are still somewhat poorer than assignments made manually by human experts and substantially poorer than an assignment perfectly matching the reviewers' own ranking of the papers. A new automated assignment method called “n of 2n” achieves better performance than human experts by sending reviewers more papers than they actually have to review and then allowing them to choose part of their review load themselves.",
"title": ""
},
{
"docid": "705b2a837b51ac5e354e1ec0df64a52a",
"text": "BACKGROUND\nGeneralized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in general population and the severe limitations it causes, point out the necessity to find new efficient strategies to treat it. Together with the cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to be learned. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD.\n\n\nMETHODS/DESIGN\nThe trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support, and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables.\n\n\nCONCLUSION\nWe argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.\n\n\nTRIAL REGISTRATION\nNCT00602212 (ClinicalTrials.gov).",
"title": ""
},
{
"docid": "b4a1661dd8e83c44cbee154046650681",
"text": "Thermal imaging has shown potential in assisting many aspects of smart irrigation management. This article examines key technical and legal issues and requirements supporting the use of Cloud of Things for managing water source-related data prior to discussing potential solutions.",
"title": ""
},
{
"docid": "350aeae5c69db969c35c673c0be2a98a",
"text": "Driver yawning detection is one of the key technologies used in driver fatigue monitoring systems. Real-time driver yawning detection is a very challenging problem due to the dynamics in driver's movements and lighting conditions. In this paper, we present a yawning detection system that consists of a face detector, a nose detector, a nose tracker and a yawning detector. Deep learning algorithms are developed for detecting driver face area and nose location. A nose tracking algorithm that combines Kalman filter with a dedicated open-source TLD (Track-Learning-Detection) tracker is developed to generate robust tracking results under dynamic driving conditions. Finally a neural network is developed for yawning detection based on the features including nose tracking confidence value, gradient features around corners of mouth and face motion features. Experiments are conducted on real-world driving data, and results show that the deep convolutional networks can generate a satisfactory classification result for detecting driver's face and nose when compared with other pattern classification methods, and the proposed yawning detection system is effective in real-time detection of driver's yawning states.",
"title": ""
},
{
"docid": "28d75588fdb4ff45929da124b001e8cc",
"text": "We present a novel training framework for neural sequence models, particularly for grounded dialog generation. The standard training paradigm for these models is maximum likelihood estimation (MLE), or minimizing the cross-entropy of the human responses. Across a variety of domains, a recurring problem with MLE trained generative neural dialog models (G) is that they tend to produce ‘safe’ and generic responses (‘I don’t know’, ‘I can’t tell’). In contrast, discriminative dialog models (D) that are trained to rank a list of candidate human responses outperform their generative counterparts; in terms of automatic metrics, diversity, and informativeness of the responses. However, D is not useful in practice since it can not be deployed to have real conversations with users. Our work aims to achieve the best of both worlds – the practical usefulness of G and the strong performance of D – via knowledge transfer from D to G. Our primary contribution is an end-to-end trainable generative visual dialog model, where G receives gradients from D as a perceptual (not adversarial) loss of the sequence sampled from G. We leverage the recently proposed Gumbel-Softmax (GS) approximation to the discrete distribution – specifically, a RNN augmented with a sequence of GS samplers, coupled with the straight-through gradient estimator to enable end-to-end differentiability. We also introduce a stronger encoder for visual dialog, and employ a self-attention mechanism for answer encoding along with a metric learning loss to aid D in better capturing semantic similarities in answer responses. Overall, our proposed model outperforms state-of-the-art on the VisDial dataset by a significant margin (2.67% on recall@10). The source code can be downloaded from https://github.com/jiasenlu/visDial.pytorch",
"title": ""
},
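The passage above relies on the Gumbel-Softmax relaxation to keep a discrete sampling step differentiable. The following NumPy sketch illustrates only that sampling trick under stated assumptions; the temperature and the toy 5-word vocabulary are illustrative choices, not values taken from the paper.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature=1.0, rng=np.random):
    """Draw a relaxed one-hot sample from a categorical distribution.

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-scaled
    softmax, so the result stays differentiable with respect to the logits.
    """
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    y = (logits + gumbel_noise) / temperature
    y = y - y.max()            # shift for numerical stability
    exp_y = np.exp(y)
    return exp_y / exp_y.sum()

# Toy example: logits over a hypothetical 5-word vocabulary.
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.0])
soft_sample = gumbel_softmax_sample(logits, temperature=0.5)
# Straight-through variant: use the hard argmax in the forward pass.
hard_sample = np.eye(len(logits))[soft_sample.argmax()]
print(soft_sample, hard_sample)
```

In a full model the soft sample would feed the next decoding step while gradients flow through the softmax; the hard one-hot vector shown last is the straight-through forward value.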
{
"docid": "77362cc72d7a09dbbb0f067c11fe8087",
"text": "The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention of academia, industries, and government bodies. Now, it has emerged as the backbone of modern economy by offering subscription-based services anytime, anywhere following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. The recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.",
"title": ""
},
{
"docid": "0028061d8bd57be4aaf6a01995b8c3bb",
"text": "Steganography is the art of concealing the existence of information within seemingly harmless carriers. It is a method similar to covert channels, spread spectrum communication and invisible inks which adds another step in security. A message in cipher text may arouse suspicion while an invisible message will not. A digital image is a flexible medium used to carry a secret message because the slight modification of a cover image is hard to distinguish by human eyes. In this paper, we propose a revised version of information hiding scheme using Sudoku puzzle. The original work was proposed by Chang et al. in 2008, and their work was inspired by Zhang and Wang's method and Sudoku solutions. Chang et al. successfully used Sudoku solutions to guide cover pixels to modify pixel values so that secret messages can be embedded. Our proposed method is a modification of Chang et al’s method. Here a 27 X 27 Reference matrix is used instead of 256 X 256 reference matrix as proposed in the previous method. The earlier version is for a grayscale image but the proposed method is for a colored image.",
"title": ""
},
{
"docid": "f6f45817e0f88c336c9f8d2ada653382",
"text": "Memory-based computing using associative memory is a promising way to reduce the energy consumption of important classes of streaming applications by avoiding redundant computations. A set of frequent patterns that represent basic functions are pre-stored in Ternary Content Addressable Memory (TCAM) and reused. The primary limitation to using associative memory in modern parallel processors is the large search energy required by TCAMs. In TCAMs, all rows that match, except hit rows, precharge and discharge for every search operation, resulting in high energy consumption. In this paper, we propose a new Multiple-Access Single-Charge (MASC) TCAM architecture which is capable of searching TCAM contents multiple times with only a single precharge cycle. In contrast to previous designs, the MASC TCAM keeps the match-line voltage of all miss-rows high and uses their charge for the next search operation, while only the hit rows discharge. We use periodic refresh to control the accuracy of the search. We also implement a new type of approximate associative memory by setting longer refresh times for MASC TCAMs, which yields search results within 1–2 bit Hamming distances of the exact value. To further decrease the energy consumption of MASC TCAM and reduce the area, we implement MASC with crossbar TCAMs. Our evaluation on AMD Southern Island GPU shows that using MASC (crossbar MASC) associative memory can improve the average floating point units energy efficiency by 33.4, 38.1, and 36.7 percent (37.7, 42.6, and 43.1 percent) for exact matching, selective 1-HD and 2-HD approximations respectively, providing an acceptable quality of service (PSNR > 30 dB and average relative error <10 percent). This shows that MASC (crossbar MASC) can achieve 1.77X (1.93X) higher energy savings as compared to the state of the art implementation of GPGPU that uses voltage overscaling on TCAM.",
"title": ""
},
{
"docid": "75e14669377727660391ab3870d1627e",
"text": "Knowledge base (KB) completion aims to infer missing facts from existing ones in a KB. Among various approaches, path ranking (PR) algorithms have received increasing attention in recent years. PR algorithms enumerate paths between entitypairs in a KB and use those paths as features to train a model for missing fact prediction. Due to their good performances and high model interpretability, several methods have been proposed. However, most existing methods suffer from scalability (high RAM consumption) and feature explosion (trains on an exponentially large number of features) problems. This paper proposes a Context-aware Path Ranking (C-PR) algorithm to solve these problems by introducing a selective path exploration strategy. C-PR learns global semantics of entities in the KB using word embedding and leverages the knowledge of entity semantics to enumerate contextually relevant paths using bidirectional random walk. Experimental results on three large KBs show that the path features (fewer in number) discovered by C-PR not only improve predictive performance but also are more interpretable than existing baselines.",
"title": ""
},
{
"docid": "9f1acbd886cdf792fcaeafad9bfdfed3",
"text": "In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to provide scammers with remote access to their machines, who will then “diagnose the problem”, before offering their support services which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web. In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved, experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcing agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.",
"title": ""
},
{
"docid": "fde0b02f0dbf01cd6a20b02a44cdc6cf",
"text": "This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with an high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. We also show comparisons to real objects under the same illumination.",
"title": ""
},
{
"docid": "c4bc226e59648be0191b95b91b3b9f33",
"text": "In this paper we present a new class of side-channel attacks on computer hard drives. Hard drives contain one or more spinning disks made of a magnetic material. In addition, they contain different magnets which rapidly move the head to a target position on the disk to perform a write or a read. The magnetic fields from the disk’s material and head are weak and well shielded. However, we show that the magnetic field due to the moving head can be picked up by sensors outside of the hard drive. With these measurements, we are able to deduce patterns about ongoing operations. For example, we can detect what type of the operating system is booting up or what application is being started. Most importantly, no special equipment is necessary. All attacks can be performed by using an unmodified smartphone placed in proximity of a hard drive.",
"title": ""
},
{
"docid": "8ccfe92400218bdc2aa1e9589337317a",
"text": "Much of the worlds data is streaming, time-series data, where anomalies give significant information in critical situations. Yet detecting anomalies in streaming data is a difficult task, requiring detectors to process data in real-time, and learn while simultaneously making predictions. We present a novel anomaly detection technique based on an on-line sequence memory algorithm called Hierarchical Temporal Memory (HTM). We show results from a live application that detects anomalies in financial metrics in realtime. We also test the algorithm on NAB, a published benchmark for real-time anomaly detection, where our algorithm achieves best-in-class results.",
"title": ""
},
{
"docid": "c4094c8b273d6332f36b6f452886de6a",
"text": "This paper presents original research on prevalence, user characteristics and effect profile of N,N-dimethyltryptamine (DMT), a potent hallucinogenic which acts primarily through the serotonergic system. Data were obtained from the Global Drug Survey (an anonymous online survey of people, many of whom have used drugs) conducted between November and December 2012 with 22,289 responses. Lifetime prevalence of DMT use was 8.9% (n=1980) and past year prevalence use was 5.0% (n=1123). We explored the effect profile of DMT in 472 participants who identified DMT as the last new drug they had tried for the first time and compared it with ratings provided by other respondents on psilocybin (magic mushrooms), LSD and ketamine. DMT was most often smoked and offered a strong, intense, short-lived psychedelic high with relatively few negative effects or \"come down\". It had a larger proportion of new users compared with the other substances (24%), suggesting its popularity may increase. Overall, DMT seems to have a very desirable effect profile indicating a high abuse liability that maybe offset by a low urge to use more.",
"title": ""
},
{
"docid": "33f0a2bbda3f701dab66a8ffb67d5252",
"text": "Microglia, the resident macrophages of the CNS, are exquisitely sensitive to brain injury and disease, altering their morphology and phenotype to adopt a so-called activated state in response to pathophysiological brain insults. Morphologically activated microglia, like other tissue macrophages, exist as many different phenotypes, depending on the nature of the tissue injury. Microglial responsiveness to injury suggests that these cells have the potential to act as diagnostic markers of disease onset or progression, and could contribute to the outcome of neurodegenerative diseases. The persistence of activated microglia long after acute injury and in chronic disease suggests that these cells have an innate immune memory of tissue injury and degeneration. Microglial phenotype is also modified by systemic infection or inflammation. Evidence from some preclinical models shows that systemic manipulations can ameliorate disease progression, although data from other models indicates that systemic inflammation exacerbates disease progression. Systemic inflammation is associated with a decline in function in patients with chronic neurodegenerative disease, both acutely and in the long term. The fact that diseases with a chronic systemic inflammatory component are risk factors for Alzheimer disease implies that crosstalk occurs between systemic inflammation and microglia in the CNS.",
"title": ""
},
{
"docid": "9ca27ddd53d13db68a5f2c4477c13967",
"text": "Humans have a remarkable ability to use physical commonsense and predict the effect of collisions. But do they understand the underlying factors? Can they predict if the underlying factors have changed? Interestingly, in most cases humans can predict the effects of similar collisions with different conditions such as changes in mass, friction, etc. It is postulated this is primarily because we learn to model physics with meaningful latent variables. This does not imply we can estimate the precise values of these meaningful variables (estimate exact values of mass or friction). Inspired by this observation, we propose an interpretable intuitive physics model where specific dimensions in the bottleneck layers correspond to different physical properties. In order to demonstrate that our system models these underlying physical properties, we train our model on collisions of different shapes (cube, cone, cylinder, spheres etc.) and test on collisions of unseen combinations of shapes. Furthermore, we demonstrate our model generalizes well even when similar scenes are simulated with different underlying properties.",
"title": ""
},
{
"docid": "1d6024cacf033182eaf97897934c296c",
"text": "Older adults with cognitive impairments often have difficulty performing instrumental activities of daily living (IADLs). Prompting technologies have gained popularity over the last decade and have the potential to assist these individuals with IADLs in order to live independently. Although prompting techniques are routinely used by caregivers and health care providers to aid individuals with cognitive impairment in maintaining their independence with everyday activities, there is no clear consensus or gold standard regarding prompt content, method of instruction, timing of delivery, or interface of prompt delivery in the gerontology or technology literatures. In this paper, we demonstrate how cognitive rehabilitation principles can inform and advance the development of more effective assistive prompting technologies that could be employed in smart environments. We first describe cognitive rehabilitation theory (CRT) and show how it provides a useful theoretical foundation for guiding the development of assistive technologies for IADL completion. We then use the CRT framework to critically review existing smart prompting technologies to answer questions that will be integral to advancing development of effective smart prompting technologies. Finally, we raise questions for future exploration as well as challenges and suggestions for future directions in this area of research.",
"title": ""
}
] |
scidocsrr
|
08f68f1b66a7f10b15854fb1b6ba0d18
|
Traffic sign recognition based on the NVIDIA Jetson TX1 embedded system using convolutional neural networks
|
[
{
"docid": "7d472441fb112f0851bcfe6854b8663e",
"text": "Detection and recognition of traffic sign, including various road signs and text, play an important role in autonomous driving, mapping/navigation and traffic safety. In this paper, we proposed a traffic sign detection and recognition system by applying deep convolutional neural network (CNN), which demonstrates high performance with regard to detection rate and recognition accuracy. Compared with other published methods which are usually limited to a predefined set of traffic signs, our proposed system is more comprehensive as our target includes traffic signs, digits, English letters and Chinese characters. The system is based on a multi-task CNN trained to acquire effective features for the localization and classification of different traffic signs and texts. In addition to the public benchmarking datasets, the proposed approach has also been successfully evaluated on a field-captured Chinese traffic sign dataset, with performance confirming its robustness and suitability to real-world applications.",
"title": ""
}
] |
[
{
"docid": "3bc90ab07ba35412d7f2b33da3ef56ab",
"text": "Laminated soft magnetic steel is very often used to manufacture the stator cores of axial-flux PM machines. However, as the magnetic flux typically has main components parallel to the lamination plane, different magnetic flux density levels may occur over the radial direction: High flux densities near the saturation level are found at the inner radius, while the laminations at the outer radius are used inefficiently. To obtain a leveled magnetic flux density, this paper introduces a radially varying air gap: At the inner radius, the air gap is increased, while at the outer radius, the air gap remains unchanged. This results in equal flux densities in the different lamination layers. As the total flux in the stator cores is decreased due to the variable air gap, the permanent-magnet thickness should be increased to compensate for this. The effect of a variable air gap is tested for both a low-grade non-oriented and a high-grade grain-oriented material. For both materials, the redistribution of the magnetic flux due to the variable air gap results in a significant decrease of the iron losses. In the presented prototype machine, the iron losses are reduced up to 8% by introducing a variable air gap. Finally, a prototype machine is constructed using an efficient manufacturing procedure to construct the laminated magnetic stator cores with variable air gap.",
"title": ""
},
{
"docid": "534fd7868826681596586f00f47cd819",
"text": "Locally weighted projection regression is a new algorithm that achieves nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it uses locally linear models, spanned by a small number of univariate regressions in selected directions in input space. This paper evaluates different methods of projection regression and derives a nonlinear function approximator based on them. This nonparametric local learning system i) learns rapidly with second order learning methods based on incremental training, ii) uses statistically sound stochastic cross validation to learn iii) adjusts its weighting kernels based on local information only, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of possibly redundant inputs, as shown in evaluations with up to 50 dimensional data sets. To our knowledge, this is the first truly incremental spatially localized learning method to combine all these properties.",
"title": ""
},
{
"docid": "a00e157ed3160d880baad5a16952e00c",
"text": "In current minimally invasive surgery techniques, the tactile information available to the surgeon is limited. Improving tactile sensation could enhance the operability of surgical instruments. Considering surgical applications, requirements such as having electrical safety, a simple structure, and sterilization capability should be considered. The current study sought to develop a grasper that can measure grasping force at the tip, based on a previously proposed tactile sensing method using acoustic reflection. This method can satisfy the requirements for surgical applications because it has no electrical element within the part that is inserted into the patient’s body. We integrated our acoustic tactile sensing method into a conventional grasping forceps instrument. We designed the instrument so that acoustic cavities within a grasping arm and a fork sleeve were connected by a small cavity in a pivoting joint. In this design, when the angle between the two grasping arms changes during grasping, the total length and local curvature of the acoustic cavity remain unchanged. Thus, the grasping force can be measured regardless of the orientation of the grasping arm. We developed a prototype sensorized grasper based on our proposed design. Fundamental tests revealed that sensor output increased with increasing contact force applied to the grasping arm, and the angle of the grasping arm did not significantly affect the sensor output. Moreover, the results of a grasping test, in which objects with different softness characteristics were held by the grasper, revealed that the grasping force could be appropriately adjusted to handle different objects on the basis of sensor output. Experimental results demonstrated that the prototype grasper can measure grasping force, enabling safe and stable grasping.",
"title": ""
},
{
"docid": "ef21435ba48421b5bf26eeadc92c1cc5",
"text": "The goal of semantic role labeling (SRL) is to discover the predicate-argument structure of a sentence, which plays a critical role in deep processing of natural language. This paper introduces simple yet effective auxiliary tags for dependency-based SRL to enhance a syntaxagnostic model with multi-hop self-attention. Our syntax-agnostic model achieves competitive performance with state-of-the-art models on the CoNLL-2009 benchmarks both for English and Chinese.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "9e0186c53e0a55744f60074145d135e3",
"text": "Two new low-power, and high-performance 1bit Full Adder cells are proposed in this paper. These cells are based on low-power XOR/XNOR circuit and Majority-not gate. Majority-not gate, which produces Cout (Output Carry), is implemented with an efficient method, using input capacitors and a static CMOS inverter. This kind of implementation benefits from low power consumption, a high degree of regularity and simplicity. Eight state-of-the-art 1-bit Full Adders and two proposed Full Adders are simulated with HSPICE using 0.18μm CMOS technology at several supply voltages ranging from 2.4v down to 0.8v. Although low power consumption is targeted in implementation of our designs, simulation results demonstrate great improvement in terms of power consumption and also PDP.",
"title": ""
},
{
"docid": "d220b50011d5911c5272f2b633645d24",
"text": "We show an O(1.344) = O(2) algorithm for edge-coloring an n-vertex graph using three colors. Our algorithm uses polynomial space. This improves over the previous, O(2) algorithm of Beigel and Eppstein [1]. We apply a very natural approach of generating inclusion-maximal matchings of the graph. The time complexity of our algorithm is estimated using the “measure and con-",
"title": ""
},
{
"docid": "5bf90680117b7db4315cce18bc9aefa2",
"text": "Motivated by aiding human operators in the detection of dangerous objects in passenger luggage, such as in airports, we develop an automatic object detection approach for multi-view X-ray image data. We make three main contributions: First, we systematically analyze the appearance variations of objects in X-ray images from inspection systems. We then address these variations by adapting standard appearance-based object detection approaches to the specifics of dual-energy X-ray data and the inspection scenario itself. To that end we reduce projection distortions, extend the feature representation, and address both in-plane and out-of-plane object rotations, which are a key challenge compared to many detection tasks in photographic images. Finally, we propose a novel multi-view (multi-camera) detection approach that combines single-view detections from multiple views and takes advantage of the mutual reinforcement of geometrically consistent hypotheses. While our multi-view approach can be used atop arbitrary single-view detectors, thus also for multi-camera detection in photographic images, we evaluate our method on detecting handguns in carry-on luggage. Our results show significant performance gains from all components.",
"title": ""
},
{
"docid": "9cbd8a5ac00fc940baa63cf0fb4d2220",
"text": "— The paper presents a technique for anomaly detection in user behavior in a smart-home environment. Presented technique can be used for a service that learns daily patterns of the user and proactively detects unusual situations. We have identified several drawbacks of previously presented models such as: just one type of anomaly-inactivity, intricate activity classification into hierarchy, detection only on a daily basis. Our novelty approach desists these weaknesses, provides additional information if the activity is unusually short/long, at unusual location. It is based on a semi-supervised clustering model that utilizes the neural network Self-Organizing Maps. The input to the system represents data primarily from presence sensors, however also other sensors with binary output may be used. The experimental study is realized on both synthetic data and areal database collected in our own smart-home installation for the period of two months.",
"title": ""
},
{
"docid": "879af50edd27c74bde5b656d0421059a",
"text": "In this thesis we present an approach to adapt the Single Shot multibox Detector (SSD) for face detection. Our experiments are performed on the WIDER dataset which contains a large amount of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition is changing the SSD architecture to an image pyramid architecture. The second addition is creating a selection criteria on each of the different branches of the image pyramid architecture. The results show that increasing the resolution, even during inference, increases the performance for the small/hard subset. By combining resolutions in an image pyramid structure we observe that the performance keeps consistent across different sizes of faces. Finally, the results show that adding a selection criteria on each branch of the image pyramid further increases performance, because the selection criteria negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but keeps on performing well on the large subset.",
"title": ""
},
{
"docid": "8c89db0cd8c5dc666d7d6b244d35326b",
"text": "Cervical cancer, as the fourth most common cause of death from cancer among women, has no symptoms in the early stage. There are few methods to diagnose cervical cancer precisely at present. Support vector machine (SVM) approach is introduced in this paper for cervical cancer diagnosis. Two improved SVM methods, support vector machine-recursive feature elimination and support vector machine-principal component analysis (SVM-PCA), are further proposed to diagnose the malignant cancer samples. The cervical cancer data are represented by 32 risk factors and 4 target variables: Hinselmann, Schiller, Cytology, and Biopsy. All four targets have been diagnosed and classified by the three SVM-based approaches, respectively. Subsequently, we make the comparison among these three methods and compare our ranking result of risk factors with the ground truth. It is shown that SVM-PCA method is superior to the others.",
"title": ""
},
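The SVM-PCA combination described in the passage above can be prototyped in a few lines with scikit-learn. The sketch below is a generic pipeline on placeholder data, not the authors' exact configuration; the number of principal components, the kernel, and the synthetic data shape (32 risk factors, one binary target) are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for 32 risk factors and one binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)

# Scale the features, reduce dimensionality with PCA, then classify with an SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```

In a real experiment the placeholder arrays would be replaced by the cervical cancer risk-factor matrix and one of the four diagnostic targets, and the pipeline would be refit once per target.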
{
"docid": "f69ce8f6d19cbf783d3be5a4daa116e2",
"text": "The pocket-sized ThrowBot is a sub-kilogram-class robot that provides short-range remote eyes and ears for urban combat. This paper provides an overview of lessons learned from experience, testing, and evaluation of the iRobot ThrowBot developed under the Defense Advanced Research Projects Agency (DARPA) Tactical Mobile Robots (TMR) program. Emphasis has been placed on investigating requirements for the next generation of ThrowBots to be developed by iRobot Corporation and SPAWAR Systems Center, San Diego (SSC San Diego) Unmanned Systems Branch. Details on recent evaluation activities performed at the Military Operations in Urban Terrain (MOUT) test site at Fort Benning, GA, are included, along with insights obtained throughout the development of the ThrowBot since its inception in 1999 as part of the TMR program.",
"title": ""
},
{
"docid": "8f85901b4577e310036ac7ef8dedc3d5",
"text": "State-of-the-art Chinese word segmentation systems typically exploit supervised models trained on a standard manually-annotated corpus, achieving performances over 95% on a similar standard testing corpus. However, the performances may drop significantly when the same models are applied onto Chinese microtext. One major challenge is the issue of informal words in the microtext. Previous studies show that informal word detection can be helpful for microtext processing. In this work, we investigate it under the neural setting, by proposing a joint segmentation model that integrates the detection of informal words simultaneously. In addition, we generate training corpus for the joint model by using existing corpus automatically. Experimental results show that the proposed model is highly effective for segmentation of Chinese microtext.",
"title": ""
},
{
"docid": "a64f1bb761ac8ee302a278df03eecaa8",
"text": "We analyze StirTrace towards benchmarking face morphing forgeries and extending it by additional scaling functions for the face biometrics scenario. We benchmark a Benford's law based multi-compression-anomaly detection approach and acceptance rates of morphs for a face matcher to determine the impact of the processing on the quality of the forgeries. We use 2 different approaches for automatically creating 3940 images of morphed faces. Based on this data set, 86614 images are created using StirTrace. A manual selection of 183 high quality morphs is used to derive tendencies based on the subjective forgery quality. Our results show that the anomaly detection seems to be able to detect anomalies in the morphing regions, the multi-compression-anomaly detection performance after the processing can be differentiated into good (e.g. cropping), partially critical (e.g. rotation) and critical results (e.g. additive noise). The influence of the processing on the biometric matcher is marginal.",
"title": ""
},
{
"docid": "599ebc69238c6d46e6bbd24dcbbcb2c5",
"text": "Horizon or skyline detection plays a vital role towards mountainous visual geo-localization, however most of the recently proposed visual geo-localization approaches rely on user-in-the-loop skyline detection methods. Detecting such a segmenting boundary fully autonomously would definitely be a step forward for these localization approaches. This paper provides a quantitative comparison of four such methods for autonomous horizon/sky line detection on an extensive data set. Specifically, we provide the comparison between four recently proposed segmentation methods; one explicitly targeting the problem of horizon detection[2], second focused on visual geo-localization but relying on accurate detection of skyline [15] and other two proposed for general semantic segmentation — Fully Convolutional Networks (FCN) [21] and SegNet[22]. Each of the first two methods is trained on a common training set [11] comprised of about 200 images while models for the third and fourth method are fine tuned for sky segmentation problem through transfer learning using the same data set. Each of the method is tested on an extensive test set (about 3K images) covering various challenging geographical, weather, illumination and seasonal conditions. We report average accuracy and average absolute pixel error for each of the presented formulation.",
"title": ""
},
{
"docid": "699ef9eecd9d7fbef01930915c3480f0",
"text": "Disassembly of the cone-shaped HIV-1 capsid in target cells is a prerequisite for establishing a life-long infection. This step in HIV-1 entry, referred to as uncoating, is critical yet poorly understood. Here we report a novel strategy to visualize HIV-1 uncoating using a fluorescently tagged oligomeric form of a capsid-binding host protein cyclophilin A (CypA-DsRed), which is specifically packaged into virions through the high-avidity binding to capsid (CA). Single virus imaging reveals that CypA-DsRed remains associated with cores after permeabilization/removal of the viral membrane and that CypA-DsRed and CA are lost concomitantly from the cores in vitro and in living cells. The rate of loss is modulated by the core stability and is accelerated upon the initiation of reverse transcription. We show that the majority of single cores lose CypA-DsRed shortly after viral fusion, while a small fraction remains intact for several hours. Single particle tracking at late times post-infection reveals a gradual loss of CypA-DsRed which is dependent on reverse transcription. Uncoating occurs both in the cytoplasm and at the nuclear membrane. Our novel imaging assay thus enables time-resolved visualization of single HIV-1 uncoating in living cells, and reveals the previously unappreciated spatio-temporal features of this incompletely understood process.",
"title": ""
},
{
"docid": "79a9208d16541c7ed4fbc9996a82ef6a",
"text": "Query processing in data integration occurs over network-bound, autonomous data sources. This requires extensions to traditional optimization and execution techniques for three reasons: there is an absence of quality statistics about the data, data transfer rates are unpredictable and bursty, and slow or unavailable data sources can often be replaced by overlapping or mirrored sources. This paper presents the Tukwila data integration system, designed to support adaptivity at its core using a two-pronged approach. Interleaved planning and execution with partial optimization allows Tukwila to quickly recover from decisions based on inaccurate estimates. During execution, Tukwila uses adaptive query operators such as the double pipelined hash join, which produces answers quickly, and the dynamic collector, which robustly and efficiently computes unions across overlapping data sources. We demonstrate that the Tukwila architecture extends previous innovations in adaptive execution (such as query scrambling, mid-execution re-optimization, and choose nodes), and we present experimental evidence that our techniques result in behavior desirable for a data integration system.",
"title": ""
},
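The double pipelined hash join named in the Tukwila passage above can be sketched as a symmetric hash join: every arriving tuple is inserted into its own side's hash table and immediately probed against the other side, so join results stream out before either input is exhausted. The sketch below is a simplified, single-threaded illustration of that idea, not Tukwila's actual operator; the alternating pull from the two inputs and the toy relations are assumptions.

```python
from collections import defaultdict

def double_pipelined_hash_join(left, right, key_left, key_right):
    """Yield joined (left, right) pairs as tuples arrive, alternating between inputs."""
    left_table, right_table = defaultdict(list), defaultdict(list)
    left_iter, right_iter = iter(left), iter(right)
    exhausted = {"left": False, "right": False}

    while not (exhausted["left"] and exhausted["right"]):
        for side, it, table, other, keyf in (
            ("left", left_iter, left_table, right_table, key_left),
            ("right", right_iter, right_table, left_table, key_right),
        ):
            if exhausted[side]:
                continue
            try:
                tup = next(it)
            except StopIteration:
                exhausted[side] = True
                continue
            k = keyf(tup)
            table[k].append(tup)          # build into this side's hash table
            for match in other[k]:        # probe the other side immediately
                yield (tup, match) if side == "left" else (match, tup)

# Toy usage: join orders to customers on customer id; results appear incrementally.
orders = [(1, "book"), (2, "pen"), (1, "lamp")]
customers = [(1, "Ada"), (2, "Bob")]
for pair in double_pipelined_hash_join(orders, customers, lambda o: o[0], lambda c: c[0]):
    print(pair)
```

Because both hash tables are built incrementally, the first matches are emitted as soon as both sides have delivered tuples with a common key, which is the property that makes the operator attractive for slow, bursty network sources.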
{
"docid": "81f71bf0f923ff07a770ae30321382f6",
"text": "The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is publication in peer-reviewed journals, is still increasing although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time publication using new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases. This means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rate, including computer science and engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of the declining coverage and this challenge it is problematic that SCI has been used and is used as the dominant source for science indicators based on publication and citation numbers. The limited data available for social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and Arts and Humanities Citation Index (AHCI). Therefore the declining coverage of the citation databases problematizes the use of this source.",
"title": ""
},
{
"docid": "30a296ed8bbb51f198be99853078b8fc",
"text": "In the big data era, scalability has become a crucial requirement for any useful computational model. Probabilistic graphical models are very useful for mining and discovering data insights, but they are not scalable enough to be suitable for big data problems. Bayesian Networks particularly demonstrate this limitation when their data is represented using few random variables with a massive set of outcome values for each of them. With hierarchical data - data that is arranged in a treelike structure with several levels - one would expect to see hundreds of thousands or millions of values distributed over even just a small number of levels. When modeling this kind of hierarchical data across large data sets, Bayesian networks become unsuitable for representing the probability distributions for the following reasons: i) each level represents a single random variable with hundreds of thousands of values, ii) the number of levels is usually small, so there are also few random variables, and iii) the structure of the network is predefined since the dependency is modeled top-down from each parent to each of its child nodes. In this paper we propose a scalable probabilistic graphical model to overcome these limitations for massive hierarchical data. We believe the proposed model will lead to an easily-scalable, more readable, and expressive implementation for problems that require probabilistic-based solutions for massive amounts of hierarchical data. We successfully applied this model to solve two different challenging probabilistic-based problems on massive hierarchical data sets for different domains, namely, bioinformatics and latent semantic discovery over search logs.",
"title": ""
},
{
"docid": "934b1a0959389d32382978cdd411ba87",
"text": "Human language is colored by a broad range of topics, but existing text analysis tools only focus on a small number of them. We present Empath, a tool that can generate and validate new lexical categories on demand from a small set of seed terms (like \"bleed\" and \"punch\" to generate the category violence). Empath draws connotations between words and phrases by deep learning a neural embedding across more than 1.8 billion words of modern fiction. Given a small set of seed words that characterize a category, Empath uses its neural embedding to discover new related terms, then validates the category with a crowd-powered filter. Empath also analyzes text across 200 built-in, pre-validated categories we have generated from common topics in our web dataset, like neglect, government, and social media. We show that Empath's data-driven, human validated categories are highly correlated (r=0.906) with similar categories in LIWC.",
"title": ""
}
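The category-generation step described in the Empath passage above, expanding a handful of seed words into a lexical category by nearest-neighbour search in a word-embedding space, can be approximated with a short script. The sketch below uses a made-up random embedding table as a stand-in; Empath's own fiction-trained model, vocabulary, and crowd-validation step are not reproduced here.

```python
import numpy as np

def expand_category(seeds, embeddings, top_k=5):
    """Return the top_k words closest (by cosine similarity) to the mean seed vector."""
    seed_vec = np.mean([embeddings[w] for w in seeds], axis=0)
    seed_vec = seed_vec / np.linalg.norm(seed_vec)
    scored = []
    for word, vec in embeddings.items():
        if word in seeds:
            continue
        sim = float(vec @ seed_vec / np.linalg.norm(vec))
        scored.append((sim, word))
    return [w for _, w in sorted(scored, reverse=True)[:top_k]]

# Tiny made-up embedding table; a real system would load vectors trained on a large corpus.
rng = np.random.default_rng(1)
vocab = ["punch", "bleed", "kick", "wound", "banana", "ballot", "bruise", "vote"]
embeddings = {w: rng.normal(size=16) for w in vocab}
print(expand_category({"punch", "bleed"}, embeddings, top_k=3))
```

With real embeddings the returned neighbours would form the candidate category (e.g. violence-related words for the seeds "punch" and "bleed"), which a crowd filter could then prune.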
] |
scidocsrr
|
eeebf035b4075ae1f4ec7fdee5956d0d
|
Characterizing Geographic Variation in Well-Being Using Tweets
|
[
{
"docid": "b33553d660c21cb493f428ed03bc9037",
"text": "A review of relevant literatures led to the construction of a self-report instrument designed to measure two subtypes of student engagement with school: cognitive and psychological engagement. The psychometric properties of this measure, the Student Engagement Instrument (SEI), were assessed based on responses of an ethnically and economically diverse urban sample of 1931 ninth grade students. Factor structures were obtained using exploratory factor analyses (EFAs) on half of the dataset, with model fit examined using confirmatory factor analyses (CFAs) on the other half of the dataset. The model displaying the best empirical fit consisted of six factors, and these factors correlated with expected educational outcomes. Further research is suggested in the iterative process of developing the SEI, and the implications of these findings are discussed. D 2006 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f515695b3d404d29a12a5e8e58a91fc0",
"text": "One area of positive psychology analyzes subjective well-being (SWB), people's cognitive and affective evaluations of their lives. Progress has been made in understanding the components of SWB, the importance of adaptation and goals to feelings of well-being, the temperament underpinnings of SWB, and the cultural influences on well-being. Representative selection of respondents, naturalistic experience sampling measures, and other methodological refinements are now used to study SWB and could be used to produce national indicators of happiness.",
"title": ""
}
] |
[
{
"docid": "35ce8c11fa7dd22ef0daf9d0bd624978",
"text": "Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, which propagate through pipeline systems impacting the performance of downstream applications. The detection of OOV regions in the output of a LVCSR system is typically addressed as a binary classification task, where each region is independently classified using local information. In this paper, we show that jointly predicting OOV regions, and including contextual information from each region, leads to substantial improvement in OOV detection. Compared to the state-of-the-art, we reduce the missed OOV rate from 42.6% to 28.4% at 10% false alarm rate.",
"title": ""
},
{
"docid": "656baf66e6dd638d9f48ea621593bac3",
"text": "Recent evidence suggests that a particular gut microbial community may favour occurrence of the metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoration of the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria including levels of bifidobacteria, a dominant member of the intestinal microbiota, which is seen as physiologically positive. As expected, HF-OFS-fed mice had totally restored quantities of bifidobacteria. HF-feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia significantly and negatively correlated with Bifidobacterium spp., but no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated-mice, Bifidobacterium spp. significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and normalised inflammatory tone (decreased endotoxaemia, plasma and adipose tissue proinflammatory cytokines). Together, these findings suggest that the gut microbiota contribute towards the pathophysiological regulation of endotoxaemia and set the tone of inflammation for occurrence of diabetes and/or obesity. Thus, it would be useful to develop specific strategies for modifying gut microbiota in favour of bifidobacteria to prevent the deleterious effect of HF-diet-induced metabolic diseases.",
"title": ""
},
{
"docid": "1052a1454d421290dfdd8fdb448a50cc",
"text": "Viola and Jones [9] introduced a method to accurately and rapidly detect faces within an image. This technique can be adapted to accurately detect facial features. However, the area of the image being analyzed for a facial feature needs to be regionalized to the location with the highest probability of containing the feature. By regionalizing the detection area, false positives are eliminated and the speed of detection is increased due to the reduction of the area examined. INTRODUCTION The human face poses even more problems than other objects since the human face is a dynamic object that comes in many forms and colors [7]. However, facial detection and tracking provides many benefits. Facial recognition is not possible if the face is not isolated from the background. Human Computer Interaction (HCI) could greatly be improved by using emotion, pose, and gesture recognition, all of which require face and facial feature detection and tracking [2]. Although many different algorithms exist to perform face detection, each has its own weaknesses and strengths. Some use flesh tones, some use contours, and other are even more complex involving templates, neural networks, or filters. These algorithms suffer from the same problem; they are computationally expensive [2]. An image is only a collection of color and/or light intensity values. Analyzing these pixels for face detection is time consuming and difficult to accomplish because of the wide variations of shape and JCSC 21, 4 (April 2006) 128 Figure 1 Common Haar Features pigmentation within a human face. Pixels often require reanalysis for scaling and precision. Viola and Jones devised an algorithm, called Haar Classifiers, to rapidly detect any object, including human faces, using AdaBoost classifier cascades that are based on Haar-like features and not pixels [9]. HAAR CASCADE CLASSIFIERS The core basis for Haar classifier object detection is the Haar-like features. These features, rather than using the intensity values of a pixel, use the change in contrast values between adjacent rectangular groups of pixels. The contrast variances between the pixel groups are used to determine relative light and dark areas. Two or three adjacent groups with a relative contrast variance form a Haar-like feature. Haar-like features, as shown in Figure 1 are used to detect an image [8]. Haar features can easily be scaled by increasing or decreasing the size of the pixel group being examined. This allows features to be used to detect objects of various sizes. Integral Image The simple rectangular features of an image are c a l c u l a t e d u s i n g a n intermediate representation of an image, called the integral image [9]. The integral image is an array containing the sums of the pixels’ intensity values located directly to the left of a pixel and directly above the pixel at location (x, y) inclusive. So if A[x,y] is the original image and AI[x,y] is the integral image then the integral image is computed as shown in equation 1 and illustrated in Figure 2. (1) [ ] AI x y A x y x x y y , ( ' , ' ) ' , ' =",
"title": ""
},
{
"docid": "f6deeee48e0c8f1ed1d922093080d702",
"text": "Foreword: The ACM SIGCHI (Association for Computing Machinery Special Interest Group in Computer Human Interaction) community conducted a deliberative process involving a high-visibility committee, a day-long workshop at CHI99 (Pittsburgh, PA, May 15, 1999) and a collaborative authoring process. This interim report is offered to produce further discussion and input leading to endorsement by the SIGCHI Executive Committee and then other professional societies. The scope of this research agenda included advanced information and communications technology research that could yield benefits over the next two to five years.",
"title": ""
},
{
"docid": "946d36c96f6631577aa22bfd7cd396ba",
"text": "We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video. Our approach reconstructs articulated human skeleton motion as well as medium-scale non-rigid surface deformations in general scenes. Human performance capture is a challenging problem due to the large range of articulation, potentially fast motion, and considerable non-rigid deformations, even from multi-view data. Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem. We tackle these challenges by a novel approach that employs sparse 2D and 3D human pose detections from a convolutional neural network using a batch-based pose estimation strategy. Joint recovery of per-batch motion allows us to resolve the ambiguities of the monocular reconstruction problem based on a low-dimensional trajectory subspace. In addition, we propose refinement of the surface geometry based on fully automatically extracted silhouettes to enable medium-scale non-rigid alignment. We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free viewpoint video, previously infeasible from monocular video. Our qualitative and quantitative evaluation demonstrates that our approach significantly outperforms previous monocular methods in terms of accuracy, robustness, and scene complexity that can be handled.",
"title": ""
},
{
"docid": "23e32a61107fe286e432d5f2ecda7bad",
"text": "How do we scale information extraction to the massive size and unprecedented heterogeneity of the Web corpus? Beginning in 2003, our KnowItAll project has sought to extract high-quality knowledge from the Web. In 2007, we introduced the Open Information Extraction (Open IE) paradigm which eschews handlabeled training examples, and avoids domainspecific verbs and nouns, to develop unlexicalized, domain-independent extractors that scale to the Web corpus. Open IE systems have extracted billions of assertions as the basis for both commonsense knowledge and novel question-answering systems. This paper describes the second generation of Open IE systems, which rely on a novel model of how relations and their arguments are expressed in English sentences to double precision/recall compared with previous systems such as TEXTRUNNER and WOE.",
"title": ""
},
{
"docid": "581f8909adca17194df618cc951749cd",
"text": "In this paper the problem of emotion recognition using physiological signals is presented. Firstly the problems with acquisition of physiological signals related to specific human emotions are described. It is not a trivial problem to elicit real emotions and to choose stimuli that always, and for all people, elicit the same emotion. Also different kinds of physiological signals for emotion recognition are considered. A set of the most helpful biosignals is chosen. An experiment is described that was performed in order to verify the possibility of eliciting real emotions using specially prepared multimedia presentations, as well as finding physiological signals that are most correlated with human emotions. The experiment was useful for detecting and identifying many problems and helping to find their solutions. The results of this research can be used for creation of affect-aware applications, for instance video games, that will be able to react to user's emotions.",
"title": ""
},
{
"docid": "12fa7a50132468598cf20ac79f51b540",
"text": "As medical organizations modernize their operations, they are increasingly adopting electronic health records (EHRs) and deploying new health information technology systems that create, gather, and manage their information. As a result, the amount of data available to clinicians, administrators, and researchers in the healthcare system continues to grow at an unprecedented rate. However, despite the substantial evidence showing the benefits of EHR adoption, e-prescriptions, and other components of health information exchanges, healthcare providers often report only modest improvements in their ability to make better decisions by using more comprehensive clinical information. The large volume of clinical data now being captured for each patient poses many challenges to (a) clinicians trying to combine data from different disparate systems and make sense of the patient’s condition within the context of the patient’s medical history, (b) administrators trying to make decisions grounded in data, (c) researchers trying to understand differences in population outcomes, and (d) patients trying to make use of their own medical data. In fact, despite the many hopes that access to more information would lead to more informed decisions, access to comprehensive and large-scale clinical data resources has instead made some analytical processes even more difficult. Visual analytics is an emerging discipline that has shown significant promise in addressing many of these information overload challenges. Visual analytics is the science of analytical reasoning facilitated by advanced interactive visual interfaces. In order to facilitate reasoning over, and interpretation of, complex data, visual analytics techniques combine concepts from data mining, machine learning, human computing interaction, and human cognition. As the volume of healthrelated data continues to grow at unprecedented rates and new information systems are deployed to those already overrun with too much data, there is a need for exploring how visual analytics methods can be used to avoid information overload. Information overload is the problem that arises when individuals try to analyze a number of variables that surpass the limits of human cognition. Information overload often leads to users ignoring, overlooking, or misinterpreting crucial information. The information overload problem is widespread in the healthcare domain and can result in incorrect interpretations of data, wrong diagnoses, and missed warning signs of impending changes to patient conditions. The multi-modal and heterogeneous properties of EHR data together with the frequency of redundant, irrelevant, and subjective measures pose significant challenges to users trying to synthesize the information and obtain actionable insights. Yet despite these challenges, the promise of big data in healthcare remains. There is a critical need to support research and pilot projects to study effective ways of using visual analytics to support the analysis of large amounts of medical data. Currently new interactive interfaces are being developed to unlock the value of large-scale clinical databases for a wide variety of different tasks. For instance, visual analytics could help provide clinicians with more effective ways to combine the longitudinal clinical data with the patient-generated health data to better understand patient progression. Patients could be supported in understanding personalized wellness plans and comparing their health measurements against similar patients. 
Researchers could use visual analytics tools to help perform population-based analysis and obtain insights from large amounts of clinical data. Hospital administrators could use visual analytics to better understand the productivity of an organization, gaps in care, outcomes measurements, and patient satisfaction. Visual analytics systems—by combining advanced interactive visualization methods with statistical inference and correlation models—have the potential to support intuitive analysis for all of these user populations while masking the underlying complexity of the data. This special focus issue of JAMIA is dedicated to new research, applications, case studies, and approaches that use visual analytics to support the analysis of complex clinical data.",
"title": ""
},
{
"docid": "d0a68fbbca8e81f1ed9da8264278c1c5",
"text": "Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications. In this paper, we give an overview of the components and capabilities of this large-scale distributed computing platform, and offer some insight into its architecture, design principles, operation, and management.",
"title": ""
},
{
"docid": "adeb87cdd2e1f9dc7901e8496a432bf9",
"text": "Vector representation of words improves performance in various NLP tasks, but the high-dimensional word vectors are very difficult to interpret. We apply several rotation algorithms to the vector representation of words to improve the interpretability. Unlike previous approaches that induce sparsity, the rotated vectors are interpretable while preserving the expressive performance of the original vectors. Furthermore, any pre-built word vector representation can be rotated for improved interpretability. We apply rotation to skipgrams and glove and compare the expressive power and interpretability with the original vectors and the sparse overcomplete vectors. The results show that the rotated vectors outperform the original and the sparse overcomplete vectors for interpretability and expressiveness tasks.",
"title": ""
},
{
"docid": "f97244b3ca9641b43dc4f4592e30f48b",
"text": "In many real applications of machine learning and data mining, we are often confronted with high-dimensional data. How to cluster high-dimensional data is still a challenging problem due to the curse of dimensionality. In this paper, we try to address this problem using joint dimensionality reduction and clustering. Different from traditional approaches that conduct dimensionality reduction and clustering in sequence, we propose a novel framework referred to as discriminative embedded clustering which alternates them iteratively. Within this framework, we are able not only to view several traditional approaches and reveal their intrinsic relationships, but also to be stimulated to develop a new method. We also propose an effective approach for solving the formulated nonconvex optimization problem. Comprehensive analyses, including convergence behavior, parameter determination, and computational complexity, together with the relationship to other related approaches, are also presented. Plenty of experimental results on benchmark data sets illustrate that the proposed method outperforms related state-of-the-art clustering approaches and existing joint dimensionality reduction and clustering methods.",
"title": ""
},
{
"docid": "3f05325680ecc8c826a77961281b9748",
"text": "The purpose of this paper is to determine which variables influence consumers’ intentions towards purchasing natural cosmetics. Several variables are included in the regression analysis such as age, gender, consumers’ purchase tendency towards organic food, consumers’ new natural cosmetics brands and consumers’ tendency towards health consciousness. The data was collected through an online survey questionnaire using the purposive sample of 204 consumers from the Dubrovnik-Neretva County in March and April of 2015. Various statistical analyses were used such as binary logistic regression and correlation analysis. Binary logistic regression results show that gender, consumers’ purchase tendency towards organic food and consumers’ purchase tendency towards new natural cosmetics brands have an influence on consumer purchase intentions. However, consumers’ tendency towards health consciousness has no influence on consumers’ intentions towards purchasing natural cosmetics. Results of the correlation analysis indicate that there is a strong positive correlation between purchase intentions towards natural cosmetics and consumer references of natural cosmetics. The findings may be useful to online retailers, as well as marketers and practitioners to recognize and better understand the new trends that occur in the industry of natural cosmetics.",
"title": ""
},
{
"docid": "e256e4a70476b299658d455dfbd243dd",
"text": "The field of big code relies on mining large corpora of code to perform some learning task. A significant threat to this approach has been recently identified by Lopes et al. [19] who found a large amount of near-duplicate code on GitHub. However, the impact of code duplication has not been noticed by researchers devising machine learning models for source code. In this article, we study the effect of code duplication to machine learning models showing that reported metrics are sometimes inflated by up to 100% when testing on duplicated code corpora compared to the performance on de-duplicated corpora which more accurately represent how machine learning models of code are used by software engineers. We present an “errata” for widely used datasets, list best practices for collecting code corpora and evaluating machine learning models on them, and release tools to help the community avoid this problem in",
"title": ""
},
{
"docid": "90b59d264de9bc4054f4905c47e22596",
"text": "Bronson (1974) reviewed evidence in support of the claim that the development of visually guided behavior in the human infant over the first few months of life represents a shift from subcortical to cortical visual processing. Recently, this view has been brought into question for two reasons; first, evidence revealing apparently sophisticated perceptual abilities in the newborn, and second, increasing evidence for multiple cortica streams of visual processing. The present paper presents a reanalysis of the relation between the maturation of cortical pathways and the development of visually guided behavior, focusing in particular on how the maturational state of the primary visual cortex may constrain the functioning of neural pathways subserving oculomotor control.",
"title": ""
},
{
"docid": "5e2be59bde2a07a97c51193f7b064fae",
"text": "A real-time neural network model, called the vector-integration-to-endpoint (VITE) model is developed and used to simulate quantitatively behavioral and neural data about planned and passive arm movements. Invariants o farm movements emerge through network interactions rather than through an explicitly precomputed trajectory. Motor planning occurs in the form of a target position command (TPC), which specifies where the arm intends to move, and an independently controlled GO command, which specifies the movement's overall speed. Automatic processes convert this information into an arm trajectory with invariant properties. These automatic processes include computation of a present position command (PPC) and a difference vector (DV). The DV is the difference between the PPC and the TPC at any time. The PPC is gradually updated by integrating the DV through time. The GO signal multiplies the DV before it is integrated by the PPC. The PPC generates an outflow movement command to its target muscle groups. Opponent interactions regulate the PPCs to agonist and antagonist muscle groups. This system generates synchronous movements across synergetic muscles by automatically compensating for the different total contractions that each muscle group must undergo. Quantitative simulations are provided of Woodworth's law, of the speed-accuracy trade-offknown as Fitts's law, of isotonic arm-movement properties before and after deafferentation, of synchronous and compensatory \"central-error-correction\" properties of isometric contractions, of velocity amplification during target switching, of velocity profile invariance and asymmetry, of the changes in velocity profile asymmetry at higher movement speeds, of the automarie compensation for staggered onset times of synergetic muscles, of vector cell properties in precentral motor cortex, of the inverse relation between movement duration and peak velocity, and of peak acceleration as a function of movement amplitude and duration. It is shown that TPC, PPC, and DV computations are needed to actively modulate, or gate, the learning of associative maps between TPCs of different modalities, such as between the eye-head system and the hand-arm system. By using such an associative map, looking at an object can activate a TPC of the hand-arm system, as Piaget noted. Then a VITE circuit can translate this TPC into an invariant movement trajectory. An auxiliary circuit, called the Passive Update of Position (PUP) model is described for using inflow signals to update the PPC during passive arm movements owing to external forces. Other uses of outflow and inflow signals are also noted, such as for adaptive linearization of a nonlinear muscle plant, and sequential readout of TPCs during a serial plan, as in reaching and grasping. Comparisons are made with other models of motor control, such as the mass-spring and minimumjerk models.",
"title": ""
},
{
"docid": "6d3e19c44f7af5023ef991b722b078c5",
"text": "Volatile substances are commonly misused with easy-to-obtain commercial products, such as glue, shoe polish, nail polish remover, butane lighter fluid, gasoline and computer duster spray. This report describes a case of sudden death of a 29-year-old woman after presumably inhaling gas cartridge butane from a plastic bag. Autopsy, pathological and toxicological analyses were performed in order to determine the cause of death. Pulmonary edema was observed pathologically, and the toxicological study revealed 2.1μL/mL of butane from the blood. The causes of death from inhalation of volatile substances have been explained by four mechanisms; cardiac arrhythmia, anoxia, respiratory depression, and vagal inhibition. In this case, the cause of death was determined to be asphyxia from anoxia. Additionally, we have gathered fatal butane inhalation cases with quantitative analyses of butane concentrations, and reviewed other reports describing volatile substance abuse worldwide.",
"title": ""
},
{
"docid": "242b854de904075d04e7044e680dc281",
"text": "Adopting a motivational perspective on adolescent development, these two companion studies examined the longitudinal relations between early adolescents' school motivation (competence beliefs and values), achievement, emotional functioning (depressive symptoms and anger), and middle school perceptions using both variable- and person-centered analytic techniques. Data were collected from 1041 adolescents and their parents at the beginning of seventh and the end of eight grade in middle school. Controlling for demographic factors, regression analyses in Study 1 showed reciprocal relations between school motivation and positive emotional functioning over time. Furthermore, adolescents' perceptions of the middle school learning environment (support for competence and autonomy, quality of relationships with teachers) predicted their eighth grade motivation, achievement, and emotional functioning after accounting for demographic and prior adjustment measures. Cluster analyses in Study 2 revealed several different patterns of school functioning and emotional functioning during seventh grade that were stable over 2 years and that were predictably related to adolescents' reports of their middle school environment. Discussion focuses on the developmental significance of schooling for multiple adjustment outcomes during adolescence.",
"title": ""
},
{
"docid": "353fae3edb830aa86db682f28f64fd90",
"text": "The penetration of renewable resources in power system has been increasing in recent years. Many of these resources are uncontrollable and variable in nature, wind in particular, are relatively unpredictable. At high penetration levels, volatility of wind power production could cause problems for power system to maintain system security and reliability. One of the solutions being proposed to improve reliability and performance of the system is to integrate energy storage devices into the network. In this paper, unit commitment and dispatch schedule in power system with and without energy storage is examined for different level of wind penetration. Battery energy storage (BES) is considered as an alternative solution to store energy. The SCUC formulation and solution technique with wind power and BES is presented. The proposed formulation and model is validated with eight-bus system case study. Further, a discussion on the role of BES on locational pricing, economic, peak load shaving, and transmission congestion management had been made.",
"title": ""
},
{
"docid": "7dec4f1b872b6092bd1c050ec5aa07a9",
"text": "Predictive models based on machine learning can be highly sensitive to data error. Training data are often combined from a variety of different sources, each susceptible to different types of inconsistencies, and as new data stream in during prediction time, the model may encounter previously unseen inconsistencies. An important class of such inconsistencies are domain value violations that occur when an attribute value is outside of an allowed domain. We explore automatically detecting and repairing such violations by leveraging the often available clean test labels to determine whether a given detection and repair combination will improve model accuracy. We present BoostClean which automatically selects an ensemble of error detection and repair combinations using statistical boosting. BoostClean selects this ensemble from an extensible library that is pre-populated general detection functions, including a novel detector based on the Word2Vec deep learning model, which detects errors across a diverse set of domains. Our evaluation on a collection of 12 datasets from Kaggle, the UCI repository, realworld data analyses, and production datasets that show that BoostClean can increase absolute prediction accuracy by up to 9% over the best non-ensembled alternatives. Our optimizations including parallelism, materialization, and indexing techniques show a 22.2× end-to-end speedup on a 16-core machine.",
"title": ""
},
{
"docid": "fddf6e71af23aba468989d6d09da989c",
"text": "The rapidly increasing pervasiveness and integration of computers in human society calls for a broad discipline under which this development can be studied. We argue that to design and use technology one needs to develop and use models of humans and machines in all their aspects, including cognitive and memory models, but also social influence and (artificial) emotions. We call this wider discipline Behavioural Computer Science (BCS), and argue in this paper for why BCS models should unify (models of) the behaviour of humans and machines when designing information and communication technology systems. Thus, one main point to be addressed is the incorporation of empirical evidence for actual human behaviour, instead of making inferences about behaviour based on the rational agent model. Empirical studies can be one effective way to constantly update the behavioural models. We are motivated by the future advancements in artificial intelligence which will give machines capabilities that from many perspectives will be indistinguishable from those of humans. Such machine behaviour would be studied using BCS models, looking at questions about machine trust like “Can a self driving car trust its passengers?”, or artificial influence like “Can the user interface adapt to the user’s behaviour, and thus influence this behaviour?”. We provide a few directions for approaching BCS, focusing on modelling of human and machine behaviour, as well as their interaction.",
"title": ""
}
] |
scidocsrr
|
ab5180adc80e09c34a8bb19aa44fc27b
|
SVDD-based outlier detection on uncertain data
|
[
{
"docid": "b15095887da032b74a1f4ea9844d8e56",
"text": "From the first appearance of network attacks, the internet worm, to the most recent one in which the servers of several famous e-business companies were paralyzed for several hours, causing huge financial losses, network-based attacks have been increasing in frequency and severity. As a powerful weapon to protect networks, intrusion detection has been gaining a lot of attention. Traditionally, intrusion detection techniques are classified into two broad categories: misuse detection and anomaly detection. Misuse detection aims to detect well-known attacks as well as slight variations of them, by characterizing the rules that govern these attacks. Due to its nature, misuse detection has low false alarms but it is unable to detect any attacks that lie beyond its knowledge. Anomaly detection is designed to capture any deviations from the established profiles of users and systems normal behavior pattern. Although in principle, anomaly detection has the ability to detect new attacks, in practice this is far from easy. Anomaly detection has the potential to generate too many false alarms, and it is very time consuming and labor expensive to sift true intrusions from the false alarms. As new network attacks emerge, the need for intrusion detection systems to detect novel attacks becomes pressing. As we stated before, this is one of the hardest tasks to accomplish, since no knowledge about the novel attacks is available. However, if we view the problem from another angle, we can find a solution. Attacks do something that is different from normal activities: if we have comprehensive knowledge about normal activities and their normal deviations, then all activities ∗This work has been funded by AFRL Rome Labs under the contract F 30602-00-2-0512. †All the authors are at George Mason University, Center for Secure Information Systems Fairfax, VA 22303",
"title": ""
},
{
"docid": "cdaca4beb6aa4c932e6776b63a2482db",
"text": "In recent years, a number of indirect data collection methodologies have lead to the proliferation of uncertain data. Such data points are often represented in the form of a probabilistic function, since the corresponding deterministic value is not known. This increases the challenge of mining and managing uncertain data, since the precise behavior of the underlying data is no longer known. In this paper, we provide a survey of uncertain data mining and management applications. In the field of uncertain data management, we will examine traditional methods such as join processing, query processing, selectivity estimation, OLAP queries, and indexing. In the field of uncertain data mining, we will examine traditional mining problems such as classification and clustering. We will also examine a general transform based technique for mining uncertain data. We discuss the models for uncertain data, and how they can be leveraged in a variety of applications. We discuss different methodologies to process and mine uncertain data in a variety of forms.",
"title": ""
}
] |
[
{
"docid": "fb904fc99acf8228ae7585e29074f96c",
"text": "One of the biggest problems in manufacturing is the failure of machine tools due to loss of surface material in cutting operations like drilling and milling. Carrying on the process with a dull tool may damage the workpiece material fabricated. On the other hand, it is unnecessary to change the cutting tool if it is still able to continue cutting operation. Therefore, an effective diagnosis mechanism is necessary for the automation of machining processes so that production loss and downtime can be avoided. This study concerns with the development of a tool wear condition-monitoring technique based on a two-stage fuzzy logic scheme. For this, signals acquired from various sensors were processed to make a decision about the status of the tool. In the first stage of the proposed scheme, statistical parameters derived from thrust force, machine sound (acquired via a very sensitive microphone) and vibration signals were used as inputs to fuzzy process; and the crisp output values of this process were then taken as the input parameters of the second stage. Conclusively, outputs of this stage were taken into a threshold function, the output of which is used to assess the condition of the tool. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8fd5c54b5c45b6380980caa6d5fe7cfb",
"text": "In reverberant environments there are long term interactions between speech and corrupting sources. In this paper a time delay neural network (TDNN) architecture, capable of learning long term temporal relationships and translation invariant representations, is used for reverberation robust acoustic modeling. Further, iVectors are used as an input to the neural network to perform instantaneous speaker and environment adaptation, providing 10% relative improvement in word error rate. By subsampling the outputs at TDNN layers across time steps, training time is reduced. Using a parallel training algorithm we show that the TDNN can be trained on ∼ 5500 hours of speech data in 3 days using up to 32 GPUs. The TDNN is shown to provide results competitive with state of the art systems in the IARPA ASpIRE challenge, with 27.7% WER on the dev test set.",
"title": ""
},
{
"docid": "a13a302e7e2fd5e09a054f1bf23f1702",
"text": "A number of machine learning (ML) techniques have recently been proposed to solve color constancy problem in computer vision. Neural networks (NNs) and support vector regression (SVR) in particular, have been shown to outperform many traditional color constancy algorithms. However, neither neural networks nor SVR were compared to simpler regression tools in those studies. In this article, we present results obtained with a linear technique known as ridge regression (RR) and show that it performs better than NNs, SVR, and gray world (GW) algorithm on the same dataset. We also perform uncertainty analysis for NNs, SVR, and RR using bootstrapping and show that ridge regression and SVR are more consistent than neural networks. The shorter training time and single parameter optimization of the proposed approach provides a potential scope for real time video tracking application.",
"title": ""
},
{
"docid": "ee1bc0f1681240eaf630421bb2e6fbb2",
"text": "The Caltech Multi-Vehicle Wireless Testbed (MVWT) is a platform designed to explore theoretical advances in multi-vehicle coordination and control, networked control systems and high confidence distributed computation. The contribution of this report is to present simulation and experimental results on the generation and implementation of optimal trajectories for the MVWT vehicles. The vehicles are nonlinear, spatially constrained and their input controls are bounded. The trajectories are generated using the NTG software package developed at Caltech. Minimum time trajectories and the application of Model Predictive Control (MPC) are investigated.",
"title": ""
},
{
"docid": "310aa30e2dd2b71c09780f7984a3663c",
"text": "E-governance is more than just a government website on the Internet. The strategic objective of e-governance is to support and simplify governance for all parties; government, citizens and businesses. The use of ICTs can connect all three parties and support processes and activities. In other words, in e-governance electronic means support and stimulate good governance. Therefore, the objectives of e-governance are similar to the objectives of good governance. Good governance can be seen as an exercise of economic, political, and administrative authority to better manage affairs of a country at all levels. It is not difficult for people in developed countries to imagine a situation in which all interaction with government can be done through one counter 24 hours a day, 7 days a week, without waiting in lines. However to achieve this same level of efficiency and flexibility for developing countries is going to be difficult. The experience in developed countries shows that this is possible if governments are willing to decentralize responsibilities and processes, and if they start to use electronic means. This paper is going to examine the legal and infrastructure issues related to e-governance from the perspective of developing countries. Particularly it will examine how far the developing countries have been successful in providing a legal framework.",
"title": ""
},
{
"docid": "dd1e7e027d88e58f9c85c8a43482b404",
"text": "Strepsiptera are obligate endoparasitoids that exhibit extreme sexual dimorphism and parasitize seven orders and 33 families of Insecta. The adult males and the first instar larvae in the Mengenillidia and Stylopidia are free-living, whereas the adult females in Mengenillidia are free-living but in the suborder Stylopidia they remain endoparasitic in the host. Parasitism occurs at the host larval/nymphal stage and continues in a mobile host until that host's adult stage. The life of the host is lengthened to allow the male strepsipteran to complete maturation and the viviparous female to release the first instar larvae when the next generation of the host's larvae/nymphs has been produced. The ability of strepsipterans to parasitize a wide range of hosts, in spite of being endoparasitoids, is perhaps due to their unique immune avoidance system. Aspects of virulence, heterotrophic heteronomy in the family Myrmecolacidae, cryptic species, genomics, immune response, and behavior of stylopized hosts are discussed in this chapter.",
"title": ""
},
{
"docid": "f693b26866ca8eb2a893dead7aa0fb21",
"text": "This paper deals with response signals processing in eddy current non-destructive testing. Non-sinusoidal excitation is utilized to drive eddy currents in a conductive specimen. The response signals due to a notch with variable depth are calculated by numerical means. The signals are processed in order to evaluate the depth of the notch. Wavelet transformation is used for this purpose. Obtained results are presented and discussed in this paper. Streszczenie. Praca dotyczy sygnałów wzbudzanych przy nieniszczącym testowaniu za pomocą prądów wirowych. Przy pomocy symulacji numerycznych wyznaczono sygnały odpowiedzi dla niesinusoidalnych sygnałów wzbudzających i defektów o różnej głębokości. Celem symulacji jest wyznaczenie zależności pozwalającej wyznaczyć głębokość defektu w zależności od odbieranego sygnału. W artykule omówiono wykorzystanie do tego celu transformaty falkowej. (Analiza falkowa impulsowych prądów wirowych)",
"title": ""
},
{
"docid": "19f96525e1e3dcc563a7b2138c8b1547",
"text": "The state of the art in bidirectional search has changed significantly a very short time period; we now can answer questions about unidirectional and bidirectional search that until very recently we were unable to answer. This paper is designed to provide an accessible overview of the recent research in bidirectional search in the context of the broader efforts over the last 50 years. We give particular attention to new theoretical results and the algorithms they inspire for optimal and nearoptimal node expansions when finding a shortest path. Introduction and Overview Shortest path algorithms have a long history dating to Dijkstra’s algorithm (DA) (Dijkstra 1959). DA is the canonical example of a best-first search which prioritizes state expansions by their g-cost (distance from the start state). Historically, there were two enhancements to DA developed relatively quickly: bidirectional search and the use of heuristics. Nicholson (1966) suggested bidirectional search where the search proceeds from both the start and the goal simultaneously. In a two dimensional search space a search to radius r will visit approximately r states. A bidirectional search will perform two searches of approximately (r/2) states, a reduction of a factor of two. In exponential state spaces the reduction is from b to 2b, an exponential gain in both memory and time. This is illustrated in Figure 1, where the large circle represents a unidirectional search towards the goal, while the smaller circles represent the two parts of a bidirectional search. Just two years later, DA was independently enhanced with admissible heuristics (distance estimates to the goal) that resulted in the A* algorithm (Hart, Nilsson, and Raphael 1968). A* is goal directed – the search is focused towards the goal by the heuristic. This significantly reduces the search effort required to find a path to the goal. The obvious challenge was whether these two enhancements could be effectively combined into bidirectional heuristic search (Bi-HS). Pohl (1969) first addressed this challenge showing that in practice unidirectional heuristic search (Uni-HS) seemed to beat out Bi-HS. Many Bi-HS Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. algorithms were developed over the years (see a short survey below), but no such algorithm was shown to consistently outperform Uni-HS. Barker and Korf (2015) recently hypothesized that in most cases one should either use bidirectional brute-force search (Bi-BS) or Uni-HS (e.g. A*), but that Bi-HS is never the best approach. This work spurred further research into Bi-HS, and has lead to new theoretical understanding on the nature of Bi-HS as well as new Bi-HS algorithms (e.g., MM, fMM and NBS described below) with strong theoretical guarantees. The purpose of this paper is to provide a high-level picture of this new line of work while placing it in the larger context of previous work on bidirectional search. While there are still many questions yet to answer, we have, for the first time, the full suite of analytic tools necessary to determine whether bidirectional search will be useful on a given problem instance. This is coupled with a Bi-HS algorithm that is guaranteed to expand no more than twice the minimum number of the necessary state expansions in practice. With these tools we can illustrate use-cases for bidirectional search and point to areas of future research. 
Terminology and Background We define a shortest-path problem as a n-tuple (start, goal, expF , expB , hF , hB), where the goal is to find the least-cost path between start and goal in a graph G. G is not provided a priori, but is provided implicitly through the expF and expB functions that can expand and return the forward (backwards) successors of any state. Bidirectional search algorithms interleave two separate searches, a search forward from start and a search backward from goal. We use fF , gF and hF to indicate f -, g-, and h-costs in the forward search and fB , gB and hB similarly in the backward search. Likewise, OpenF and OpenB store states generated in the forward and backward directions, respectively. Finally, gminF , gminB , fminF and fminB denote the minimal gand f -values in OpenF and OpenB respectively. d(x, y) denotes the shortest distance between x and y. Front-to-end algorithms use two heuristic functions. The forward heuristic, hF , is forward admissible iff hF (u) ≤ d(u, goal) for all u in G and is forward consistent iff hF (u) ≤ d(u, u′) + hF (u′) for all u and u′ in G. The backward heuristic, hB , is backward admissible iff hB(v) ≤",
"title": ""
},
{
"docid": "1cb2ffad7243e3e0b5c16fae12c7ee49",
"text": "OBJECTIVE\nTo determine if inadequate approaches to randomized controlled trial design and execution are associated with evidence of bias in estimating treatment effects.\n\n\nDESIGN\nAn observational study in which we assessed the methodological quality of 250 controlled trials from 33 meta-analyses and then analyzed, using multiple logistic regression models, the associations between those assessments and estimated treatment effects.\n\n\nDATA SOURCES\nMeta-analyses from the Cochrane Pregnancy and Childbirth Database.\n\n\nMAIN OUTCOME MEASURES\nThe associations between estimates of treatment effects and inadequate allocation concealment, exclusions after randomization, and lack of double-blinding.\n\n\nRESULTS\nCompared with trials in which authors reported adequately concealed treatment allocation, trials in which concealment was either inadequate or unclear (did not report or incompletely reported a concealment approach) yielded larger estimates of treatment effects (P < .001). Odds ratios were exaggerated by 41% for inadequately concealed trials and by 30% for unclearly concealed trials (adjusted for other aspects of quality). Trials in which participants had been excluded after randomization did not yield larger estimates of effects, but that lack of association may be due to incomplete reporting. Trials that were not double-blind also yielded larger estimates of effects (P = .01), with odds ratios being exaggerated by 17%.\n\n\nCONCLUSIONS\nThis study provides empirical evidence that inadequate methodological approaches in controlled trials, particularly those representing poor allocation concealment, are associated with bias. Readers of trial reports should be wary of these pitfalls, and investigators must improve their design, execution, and reporting of trials.",
"title": ""
},
{
"docid": "bade68b8f95fc0ae5a377a52c8b04b5c",
"text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future",
"title": ""
},
{
"docid": "f9a3f69cf26b279fa8600fd2ebbc3426",
"text": "We introduce Interactive Question Answering (IQA), the task of answering questions that require an autonomous agent to interact with a dynamic visual environment. IQA presents the agent with a scene and a question, like: \"Are there any apples in the fridge?\" The agent must navigate around the scene, acquire visual understanding of scene elements, interact with objects (e.g. open refrigerators) and plan for a series of actions conditioned on the question. Popular reinforcement learning approaches with a single controller perform poorly on IQA owing to the large and diverse state space. We propose the Hierarchical Interactive Memory Network (HIMN), consisting of a factorized set of controllers, allowing the system to operate at multiple levels of temporal abstraction. To evaluate HIMN, we introduce IQUAD V1, a new dataset built upon AI2-THOR [35], a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQUAD V1 has 75,000 questions, each paired with a unique scene configuration. Our experiments show that our proposed model outperforms popular single controller based methods on IQUAD V1. For sample questions and results, please view our video: https://youtu.be/pXd3C-1jr98.",
"title": ""
},
{
"docid": "8ff5fb7c1da449d311400757fdba8832",
"text": "There is a widespread concern in Western society about the visibility of pornography in public places and on the Internet. What are the consequences for young men and women, and how do they think about gender, sexuality, and pornography? Data was collected, through 22 individual interviews and seven focus groups, from 51 participants (36 women and 37 men aged 14-20 years) in Sweden. The results indicated a process of both normalization and ambivalence. Pornography was used as a form of social intercourse, a source of information, and a stimulus for sexual arousal. Pornography consumption was more common among the young men than among the women. For both the young men and women, the pornographic script functioned as a frame of reference in relation to bodily ideals and sexual performances. Most of the participants had acquired the necessary skills of how to deal with the exposure to pornography in a sensible and reflective manner.",
"title": ""
},
{
"docid": "f0e143229e788ab03637e72cfb0bf1d8",
"text": "Solid waste management is a key aspect of the environmental management of establishments belonging to the hospitality sector. In this study, we reviewed literature in this area, examining the current status of waste management for the hospitality sector, in general, with a focus on food waste management in particular. We specifically examined the for-profit subdivision of the hospitality sector, comprising primarily of hotels and restaurants. An account is given of the causes of the different types of waste encountered in this sector and what strategies may be used to reduce them. These strategies are further highlighted in terms of initiatives and practices which are already being implemented around the world to facilitate sustainable waste management. We also recommended a general waste management procedure to be followed by properties of the hospitality sector and described how waste mapping, an innovative yet simple strategy, can significantly reduce the waste generation of a hotel. Generally, we found that not many scholarly publications are available in this area of research. More studies need to be carried out on the implementation of sustainable waste management for the hospitality industry in different parts of the world and the challenges and opportunities involved.",
"title": ""
},
{
"docid": "e997f8468d132f1e28e0d6a8801f6fb1",
"text": "Change-blindness, occurs when large changes are missed under natural viewing conditions because they occur simultaneously with a brief visual disruption, perhaps caused by an eye movement,, a flicker, a blink, or a camera cut in a film sequence. We have found that this can occur even when the disruption does not cover or obscure the changes. When a few small, high-contrast shapes are briefly spattered over a picture, like mudsplashes on a car windscreen, large changes can be made simultaneously in the scene without being noticed. This phenomenon is potentially important in driving, surveillance or navigation, as dangerous events occurring in full view can go unnoticed if they coincide with even very small, apparently innocuous, disturbances. It is also important for understanding how the brain represents the world.",
"title": ""
},
{
"docid": "674da28b87322e7dfc7aad135d44ae55",
"text": "As the technology migrates into the deep submicron manufacturing(DSM) era, the critical dimension of the circuits is getting smaller than the lithographic wavelength. The unavoidable light diffraction phenomena in the sub-wavelength technologies have become one of the major factors in the yield rate. Optical proximity correction (OPC) is one of the methods adopted to compensate for the light diffraction effect as a post layout process.However, the process is time-consuming and the results are still limited by the original layout quality. In this paper, we propose a maze routing method that considers the optical effect in the routing algorithm. By utilizing the symmetrical property of the optical system, the light diffraction is efficiently calculated and stored in tables. The costs that guide the router to minimize the optical interferences are obtained from these look-up tables. The problem is first formulated as a constrained maze routing problem, then it is shown to be a multiple constrained shortest path problem. Based on the Lagrangian relaxation method, an effective algorithm is designed to solve the problem.",
"title": ""
},
{
"docid": "51760cbc4145561e23702b6624bfa9f8",
"text": "Plant Diseases and Pests are a major challenge in the agriculture sector. An accurate and a faster detection of diseases and pests in plants could help to develop an early treatment technique while substantially reducing economic losses. Recent developments in Deep Neural Networks have allowed researchers to drastically improve the accuracy of object detection and recognition systems. In this paper, we present a deep-learning-based approach to detect diseases and pests in tomato plants using images captured in-place by camera devices with various resolutions. Our goal is to find the more suitable deep-learning architecture for our task. Therefore, we consider three main families of detectors: Faster Region-based Convolutional Neural Network (Faster R-CNN), Region-based Fully Convolutional Network (R-FCN), and Single Shot Multibox Detector (SSD), which for the purpose of this work are called \"deep learning meta-architectures\". We combine each of these meta-architectures with \"deep feature extractors\" such as VGG net and Residual Network (ResNet). We demonstrate the performance of deep meta-architectures and feature extractors, and additionally propose a method for local and global class annotation and data augmentation to increase the accuracy and reduce the number of false positives during training. We train and test our systems end-to-end on our large Tomato Diseases and Pests Dataset, which contains challenging images with diseases and pests, including several inter- and extra-class variations, such as infection status and location in the plant. Experimental results show that our proposed system can effectively recognize nine different types of diseases and pests, with the ability to deal with complex scenarios from a plant's surrounding area.",
"title": ""
},
{
"docid": "0554ba7273fce60ee3866ee8628778d6",
"text": "In this paper, a review of the theories and experiments devoted to the understanding of the development of the electrical breakdown of a gas insulated gap, i. e., the switching delay, is presented. The presentation is chronological. The classical Townsend and streamer models for breakdown are discussed; followed by a brief account of the continuous acceleration and avalanche-chain models. These last two models have been proposed primarily to describe breakdown at large electric fields. Then, the two-group model for breakdown at voltages above approximately 20-percent self-breakdown is presented. Finally, a brief analysis is given of the present state of the field and the direction it is takdng.",
"title": ""
},
{
"docid": "082e747ab9f93771a71e2b6147d253b2",
"text": "Social networks are often grounded in spatial locality where individuals form relationships with those they meet nearby. However, the location of individuals in online social networking platforms is often unknown. Prior approaches have tried to infer individuals’ locations from the content they produce online or their online relations, but often are limited by the available location-related data. We propose a new method for social networks that accurately infers locations for nearly all of individuals by spatially propagating location assignments through the social network, using only a small number of initial locations. In five experiments, we demonstrate the effectiveness in multiple social networking platforms, using both precise and noisy data to start the inference, and present heuristics for improving performance. In one experiment, we demonstrate the ability to infer the locations of a group of users who generate over 74% of the daily Twitter message volume with an estimated median location error of 10km. Our results open the possibility of gathering large quantities of location-annotated data from social media platforms.",
"title": ""
},
{
"docid": "87ecd8c0331b6277cddb6a9a11cec42f",
"text": "OBJECTIVE\nThis study aimed to determine the principal factors contributing to the cost of avoiding a birth with Down syndrome by using cell-free DNA (cfDNA) to replace conventional screening.\n\n\nMETHODS\nA range of unit costs were assigned to each item in the screening process. Detection rates were estimated by meta-analysis and modeling. The marginal cost associated with the detection of additional cases using cfDNA was estimated from the difference in average costs divided by the difference in detection.\n\n\nRESULTS\nThe main factor was the unit cost of cfDNA testing. For example, replacing a combined test costing $150 with 3% false-positive rate and invasive testing at $1000, by cfDNA tests at $2000, $1500, $1000, and $500, the marginal cost is $8.0, $5.8, $3.6, and $1.4m, respectively. Costs were lower when replacing a quadruple test and higher for a 5% false-positive rate, but the relative importance of cfDNA unit cost was unchanged. A contingent policy whereby 10% to 20% women were selected for cfDNA testing by conventional screening was considerably more cost-efficient. Costs were sensitive to cfDNA uptake.\n\n\nCONCLUSION\nUniversal cfDNA screening for Down syndrome will only become affordable by public health purchasers if costs fall substantially. Until this happens, the contingent use of cfDNA is recommended.",
"title": ""
}
] |
scidocsrr
|
2f222df22dfbe11676b380cf3d593fe8
|
Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information
|
[
{
"docid": "6b00269aca800918836e1e0c759165fc",
"text": "We add an interpretable semantics to the paraphrase database (PPDB). To date, the relationship between phrase pairs in the database has been weakly defined as approximately equivalent. We show that these pairs represent a variety of relations, including directed entailment (little girl/girl) and exclusion (nobody/someone). We automatically assign semantic entailment relations to entries in PPDB using features derived from past work on discovering inference rules from text and semantic taxonomy induction. We demonstrate that our model assigns these relations with high accuracy. In a downstream RTE task, our labels rival relations from WordNet and improve the coverage of a proof-based RTE system by 17%.",
"title": ""
},
{
"docid": "273bb44ed02076008d5d2835baed9494",
"text": "Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform NLI from the data? If not, how can NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we aim to answer these questions by enriching the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models with external knowledge further improve the state of the art on the Stanford Natural Language Inference (SNLI) dataset.",
"title": ""
},
{
"docid": "87f0a390580c452d77fcfc7040352832",
"text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:",
"title": ""
}
] |
[
{
"docid": "b732824ec9677b639e34de68818aae50",
"text": "Although there is wide agreement that backfilling produces significant benefits in scheduling of parallel jobs, there is no clear consensus on which backfilling strategy is preferable e.g. should conservative backfilling be used or the more aggressive EASY backfilling scheme; should a First-Come First-Served(FCFS) queue-priority policy be used, or some other such as Shortest job First(SF) or eXpansion Factor(XF); In this paper, we use trace-based simulation to address these questions and glean new insights into the characteristics of backfilling strategies for job scheduling. We show that by viewing performance in terms of slowdowns and turnaround times of jobs within various categories based on their width (processor request size), length (job duration) and accuracy of the user’s estimate of run time, some consistent trends may be observed.",
"title": ""
},
{
"docid": "9f883ffe537afa07a38c90c0174f7b03",
"text": "The scope and purpose of this work is 2-fold: to synthesize the available evidence and to translate it into recommendations. This document provides recommendations only when there is evidence to support them. As such, they do not constitute a complete protocol for clinical use. Our intention is that these recommendations be used by others to develop treatment protocols, which necessarily need to incorporate consensus and clinical judgment in areas where current evidence is lacking or insufficient. We think it is important to have evidence-based recommendations to clarify what aspects of practice currently can and cannot be supported by evidence, to encourage use of evidence-based treatments that exist, and to encourage creativity in treatment and research in areas where evidence does not exist. The communities of neurosurgery and neuro-intensive care have been early pioneers and supporters of evidence-based medicine and plan to continue in this endeavor. The complete guideline document, which summarizes and evaluates the literature for each topic, and supplemental appendices (A-I) are available online at https://www.braintrauma.org/coma/guidelines.",
"title": ""
},
{
"docid": "c4c3a0bccbf4e093750e1ef356d2f09c",
"text": "We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension to the state in the decoding RNN. This memory-enhanced RNN decoder is called MEMDEC. At each time during decoding, MEMDEC will read from this memory and write to this memory once, both with content-based addressing. Unlike the unbounded memory in previous work(Bahdanau et al., 2014) to store the representation of source sentence, the memory in MEMDEC is a matrix with predetermined size designed to better capture the information important for the decoding process at each time step. Our empirical study on Chinese-English translation shows that it can improve by 4.8 BLEU upon Groundhog and 5.3 BLEU upon on Moses, yielding the best performance achieved with the same training set.",
"title": ""
},
{
"docid": "3db1f5eea78fc6a763e58c261502d156",
"text": "Deceptive opinion spam detection has attracted significant attention from both business and research communities. Existing approaches are based on manual discrete features, which can capture linguistic and psychological cues. However, such features fail to encode the semantic meaning of a document from the discourse perspective, which limits the performance. In this paper, we empirically explore a neural network model to learn document-level representation for detecting deceptive opinion spam. In particular, given a document, the model learns sentence representations with a convolutional neural network, which are combined using a gated recurrent neural network with attention mechanism to model discourse information and yield a document vector. Finally, the document representation is used directly as features to identify deceptive opinion spam. Experimental results on three domains (Hotel, Restaurant, and Doctor) show that our proposed method outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "194c1a9a16ee6dad00c41544fca74371",
"text": "Computers are not (yet?) capable of being reasonable any more than is a Second Lieutenant. Against stupidity, the Gods themselves contend in vain. Banking systems include the back-end bookkeeping systems that record customers' account details and transaction processing systems such as cash machine networks and high-value interbank money transfer systems that feed them with data. They are important for a number of reasons. First, bookkeeping was for many years the main business of the computer industry, and banking was its most intensive area of application. Personal applications such as Netscape and Powerpoint might now run on more machines, but accounting is still the critical application for the average business. So the protection of bookkeeping systems is of great practical importance. It also gives us a well-understood model of protection in which confidentiality plays almost no role, but where the integrity of records (and their immutability once made) is of paramount importance. Second, transaction processing systems—whether for small debits such as $50 cash machine withdrawals or multimillion-dollar wire transfers—were the applications that launched commercial cryptography. Banking applications drove the development not just of encryption algorithms and protocols, but also of the supporting technologies, such as tamper-resistant cryptographic processors. These processors provide an important and interesting example of a trusted computing base that is quite different from",
"title": ""
},
{
"docid": "81f7938d647ac9658995fb61f508aa0c",
"text": "This letter describes a robust voice activity detector using an ultrasonic Doppler sonar device. An ultrasonic beam is incident on the talker's face. Facial movements result in Doppler frequency shifts in the reflected signal that are sensed by an ultrasonic sensor. Speech-related facial movements result in identifiable patterns in the spectrum of the received signal that can be used to identify speech activity. These sensors are not affected by even high levels of ambient audio noise. Unlike most other non-acoustic sensors, the device need not be taped to a talker. A simple yet robust method of extracting the voice activity information from the ultrasonic Doppler signal is developed and presented in this letter. The algorithm is seen to be very effective and robust to noise, and it can be implemented in real time.",
"title": ""
},
{
"docid": "6a4595e71ad1c4e6196f17af20c8c1ef",
"text": "We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminatorD spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse. These in turn help G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D . Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.",
"title": ""
},
{
"docid": "06a241bc0483a910a3fecef8e7e7883a",
"text": "Linear programming duality yields e,cient algorithms for solving inverse linear programs. We show that special classes of conic programs admit a similar duality and, as a consequence, establish that the corresponding inverse programs are e,ciently solvable. We discuss applications of inverse conic programming in portfolio optimization and utility function identi0cation. c © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e2d63fece5536aa4668cd5027a2f42b9",
"text": "To ensure integrity, trust, immutability and authenticity of software and information (cyber data, user data and attack event data) in a collaborative environment, research is needed for cross-domain data communication, global software collaboration, sharing, access auditing and accountability. Blockchain technology can significantly automate the software export auditing and tracking processes. It allows to track and control what data or software components are shared between entities across multiple security domains. Our blockchain-based solution relies on role-based and attribute-based access control and prevents unauthorized data accesses. It guarantees integrity of provenance data on who updated what software module and when. Furthermore, our solution detects data leakages, made behind the scene by authorized blockchain network participants, to unauthorized entities. Our approach is used for data forensics/provenance, when the identity of those entities who have accessed/ updated/ transferred the sensitive cyber data or sensitive software is determined. All the transactions in the global collaborative software development environment are recorded in the blockchain public ledger and can be verified any time in the future. Transactions can not be repudiated by invokers. We also propose modified transaction validation procedure to improve performance and to protect permissioned IBM Hyperledger-based blockchains from DoS attacks, caused by bursts of invalid transactions.",
"title": ""
},
{
"docid": "535b093171db9cfafba4fc91c4254137",
"text": "Millimeter-wave communication is one way to alleviate the spectrum gridlock at lower frequencies while simultaneously providing high-bandwidth communication channels. MmWave makes use of MIMO through large antenna arrays at both the base station and the mobile station to provide sufficient received signal power. This article explains how beamforming and precoding are different in MIMO mmWave systems than in their lower-frequency counterparts, due to different hardware constraints and channel characteristics. Two potential architectures are reviewed: hybrid analog/digital precoding/combining and combining with low-resolution analog- to-digital converters. The potential gains and design challenges for these strategies are discussed, and future research directions are highlighted.",
"title": ""
},
{
"docid": "a5158a06583184c2d71b2c2f0d740834",
"text": "Next-generation sequencing is changing the paradigm of clinical genetic testing. Today there are numerous molecular tests available, including single-gene tests, gene panels, and exome sequencing or genome sequencing. As a result, ordering physicians face the conundrum of selecting the best diagnostic tool for their patients with genetic conditions. Single-gene testing is often most appropriate for conditions with distinctive clinical features and minimal locus heterogeneity. Next-generation sequencing–based gene panel testing, which can be complemented with array comparative genomic hybridization and other ancillary methods, provides a comprehensive and feasible approach for heterogeneous disorders. Exome sequencing and genome sequencing have the advantage of being unbiased regarding what set of genes is analyzed, enabling parallel interrogation of most of the genes in the human genome. However, current limitations of next-generation sequencing technology and our variant interpretation capabilities caution us against offering exome sequencing or genome sequencing as either stand-alone or first-choice diagnostic approaches. A growing interest in personalized medicine calls for the application of genome sequencing in clinical diagnostics, but major challenges must be addressed before its full potential can be realized. Here, we propose a testing algorithm to help clinicians opt for the most appropriate molecular diagnostic tool for each scenario.Genet Med 17 6, 444–451.",
"title": ""
},
{
"docid": "43af3570e8eeee6cf113991e6c0994cf",
"text": "The main goal of modeling human conversation is to create agents which can interact with people in both open-ended and goal-oriented scenarios. End-to-end trained neural dialog systems are an important line of research for such generalized dialog models as they do not resort to any situation-specific handcrafting of rules. However, incorporating personalization into such systems is a largely unexplored topic as there are no existing corpora to facilitate such work. In this paper, we present a new dataset of goal-oriented dialogs which are influenced by speaker profiles attached to them. We analyze the shortcomings of an existing end-toend dialog system based on Memory Networks and propose modifications to the architecture which enable personalization. We also investigate personalization in dialog as a multi-task learning problem, and show that a single model which shares features among various profiles outperforms separate models for each profile.",
"title": ""
},
{
"docid": "81c90998c5e456be34617e702dbfa4f5",
"text": "In this paper, a new unsupervised learning algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), is proposed. To exploit the discriminative information in unsupervised scenarios, we perform spectral clustering to learn the cluster labels of the input samples, during which the feature selection is performed simultaneously. The joint learning of the cluster labels and feature selection matrix enables NDFS to select the most discriminative features. To learn more accurate cluster labels, a nonnegative constraint is explicitly imposed to the class indicators. To reduce the redundant or even noisy features, `2,1-norm minimization constraint is added into the objective function, which guarantees the feature selection matrix sparse in rows. Our algorithm exploits the discriminative information and feature correlation simultaneously to select a better feature subset. A simple yet efficient iterative algorithm is designed to optimize the proposed objective function. Experimental results on different real world datasets demonstrate the encouraging performance of our algorithm over the state-of-the-arts. Introduction The dimension of data is often very high in many domains (Jain and Zongker 1997; Guyon and Elisseeff 2003), such as image and video understanding (Wang et al. 2009a; 2009b), and bio-informatics. In practice, not all the features are important and discriminative, since most of them are often correlated or redundant to each other, and sometimes noisy (Duda, Hart, and Stork 2001; Liu, Wu, and Zhang 2011). These features may result in adverse effects in some learning tasks, such as over-fitting, low efficiency and poor performance (Liu, Wu, and Zhang 2011). Consequently, it is necessary to reduce dimensionality, which can be achieved by feature selection or transformation to a low dimensional space. In this paper, we focus on feature selection, which is to choose discriminative features by eliminating the ones with little or no predictive information based on certain criteria. Many feature selection algorithms have been proposed, which can be classified into three main families: filter, wrapper, and embedded methods. The filter methods (Duda, Hart, Copyright c © 2012, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. and Stork 2001; He, Cai, and Niyogi 2005; Zhao and Liu 2007; Masaeli, Fung, and Dy 2010; Liu, Wu, and Zhang 2011; Yang et al. 2011a) use statistical properties of the features to filter out poorly informative ones. They are usually performed before applying classification algorithms. They select a subset of features only based on the intrinsic properties of the data. In the wrapper approaches (Guyon and Elisseeff 2003; Rakotomamonjy 2003), feature selection is “wrapped” in a learning algorithm and the classification performance of features is taken as the evaluation criterion. Embedded methods (Vapnik 1998; Zhu et al. 2003) perform feature selection in the process of model construction. In contrast with filter methods, wrapper and embedded methods are tightly coupled with in-built classifiers, which causes that they are less generality and computationally expensive. In this paper, we focus on the filter feature selection algorithm. Because of the importance of discriminative information in data analysis, it is beneficial to exploit discriminative information for feature selection, which is usually encoded in labels. 
However, how to select discriminative features in unsupervised scenarios is a significant but hard task due to the lack of labels. In light of this, we propose a novel unsupervised feature selection algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), in this paper. We perform spectral clustering and feature selection simultaneously to select the discriminative features for unsupervised learning. The cluster label indicators are obtained by spectral clustering to guide the feature selection procedure. Different from most of the previous spectral clustering algorithms (Shi and Malik 2000; Yu and Shi 2003), we explicitly impose a nonnegative constraint into the objective function, which is natural and reasonable as discussed later in this paper. With nonnegative and orthogonality constraints, the learned cluster indicators are much closer to the ideal results and can be readily utilized to obtain cluster labels. Our method exploits the discriminative information and feature correlation in a joint framework. For the sake of feature selection, the feature selection matrix is constrained to be sparse in rows, which is formulated as an ℓ2,1-norm minimization term. To solve the proposed problem, a simple yet effective iterative algorithm is proposed. Extensive experiments are conducted on different datasets, which show that the proposed approach outperforms the state-of-the-art in different applications.",
"title": ""
},
{
"docid": "ce58edfaeebe01207046bf059e9bfa5d",
"text": "This paper introduces a robotic technology based supporting device, the Robot Mask, to enhance facial expressiveness and support physiotherapy for facial paralyzed persons. The wearable device, which consists of Shape Memory Alloy (SMA) based linear actuators, functions by pulling the facial skin towards anatomically selected directions. Since facial expressions are silent, SMA were selected over electrical motors. This paper introduces a compact and fully controllable actuation unit with position feedback and a novel controlling scenario that uses the selected hybrid actuation of bidirectional multi segment SMA wires in series to pull the wires. When designing the actuators, a biomechanical analysis was conducted to find anatomical parameters of natural smiles, and the Robot Mask was evaluated for its suitability as a facial expression supporter.",
"title": ""
},
{
"docid": "68cb8836a07846d19118d21383f6361a",
"text": "Background: Dental rehabilitation of partially or totally edentulous patients with oral implants has become a routine treatment modality in the last decades, with reliable long-term results. However, unfavorable local conditions of the alveolar ridge, due to atrophy, periodontal disease, and trauma sequelae may provide insufficient bone volume or unfavorable vertical, horizontal, and sagittal intermaxillary relationships, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint. The aim of the current review is to discuss the different strategies for reconstruction of the alveolar ridge defect for implant placement. Study design: The study design includes a literature review of the articles that address the association between Reconstruction of Mandibular Alveolar Ridge Defects and Implant Placement. Results: Yet, despite an increasing number of publications related to the correction of deficient alveolar ridges, much controversy still exists concerning which is the more suitable and reliable technique. This is often because the publications are of insufficient methodological quality (inadequate sample size, lack of well-defined exclusion and inclusion criteria, insufficient follow-up, lack of well-defined success criteria, etc.). Conclusion: On the basis of available data it is difficult to conclude that a particular surgical procedure offered better outcome as compared to another. Hence the practical use of the available bone augmentation procedures for dental implants depends on the clinician’s preference in general and the clinical findings in the patient in particular. Surgical techniques that reduce trauma, preserve and augment the alveolar ridge represent key areas in the goal to optimize implant results.",
"title": ""
},
{
"docid": "309713b49b8ea3bd6feee408c351467a",
"text": "In this paper we describe a hybrid system that applies maximum entropy model (MaxEnt), language specific rules and gazetteers to the task of named entity recognition (NER) in Indian languages designed for the IJCNLP NERSSEAL shared task. Starting with named entity (NE) annotated corpora and a set of features we first build a baseline NER system. Then some language specific rules are added to the system to recognize some specific NE classes. Also we have added some gazetteers and context patterns to the system to increase the performance. As identification of rules and context patterns requires language knowledge, we were able to prepare rules and identify context patterns for Hindi and Bengali only. For the other languages the system uses the MaxEnt model only. After preparing the one-level NER system, we have applid a set of rules to identify the nested entities. The system is able to recognize 12 classes of NEs with 65.13% f-value in Hindi, 65.96% f-value in Bengali and 44.65%, 18.74%, and 35.47% f-value in Oriya, Telugu and Urdu respectively.",
"title": ""
},
{
"docid": "20e13726ebc2430f7305c75d70761a18",
"text": "The procedure of pancreaticoduodenectomy consists of three parts: resection, lymph node dissection, and reconstruction. A transection of the pancreas is commonly performed after a maneuver of the pancreatic head, exposing of the portal vein or lymph node dissection, and it should be confirmed as a safe method for pancreatic transection for decreasing the incidence of pancreatic fistula. However, there are only a few clinical trials with high levels of evidence for pancreatic surgery. In this report, we discuss the following issues: dissection of peripancreatic tissue, exposing the portal vein, pancreatic transection, dissection of the right hemicircle of the peri-superior mesenteric artery including plexus and lymph nodes, and dissection of the pancreatic parenchyma.",
"title": ""
},
{
"docid": "b86bd120f306cb5be466a691c0899399",
"text": "Multirate refresh techniques exploit the non-uniformity in retention times of DRAM cells to reduce the DRAM refresh overheads. Such techniques rely on accurate profiling of retention times of cells, and perform faster refresh only for a few rows which have cells with low retention times. Unfortunately, retention times of some cells can change at runtime due to Variable Retention Time (VRT), which makes it impractical to reliably deploy multirate refresh. Based on experimental data from 24 DRAM chips, we develop architecture-level models for analyzing the impact of VRT. We show that simply relying on ECC DIMMs to correct VRT failures is unusable as it causes a data error once every few months. We propose AVATAR, a VRT-aware multirate refresh scheme that adaptively changes the refresh rate for different rows at runtime based on current VRT failures. AVATAR provides a time to failure in the regime of several tens of years while reducing refresh operations by 62%-72%.",
"title": ""
},
{
"docid": "e6ecb07f4e5b03f576833169c32d702d",
"text": "This paper presents a new human-humanoid communication system by using a directional speaker. The directional speaker produces directional sound beams by using intermodulation of ultrasonic sound beams and non linearity in air. This technology solves problems and brings new ways of human-humanoid communication as follows: 1) A humanoid can recognize human speeches during speaking because a microphone installed in the humanoid does not capture self voices played by the directional speaker. 2) A humanoid can speak to a specific person as if it whispers. The directional speaker is installed at the position of the mouth of the humanoid. Preliminary experiments show the efficiency of the directional speaker to realize the above two functions.",
"title": ""
},
{
"docid": "8b5a06aab3e4bc184733eb108c1706ae",
"text": "Profiling data to determine metadata about a given dataset is an important and frequent activity of any IT professional and researcher and is necessary for various use-cases. It encompasses a vast array of methods to examine datasets and produce metadata. Among the simpler results are statistics, such as the number of null values and distinct values in a column, its data type, or the most frequent patterns of its data values. Metadata that are more difficult to compute involve multiple columns, namely correlations, unique column combinations, functional dependencies, and inclusion dependencies. Further techniques detect conditional properties of the dataset at hand. This survey provides a classification of data profiling tasks and comprehensively reviews the state of the art for each class. In addition, we review data profiling tools and systems from research and industry. We conclude with an outlook on the future of data profiling beyond traditional profiling tasks and beyond relational databases.",
"title": ""
}
] |
scidocsrr
|
fdd7565cf8687acd5acfade4b321f65c
|
Audio-visual speech recognition using deep bottleneck features and high-performance lipreading
|
[
{
"docid": "7d78ca30853ed8a84bbb56fe82e3b9ba",
"text": "Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21% relative over a baseline multi-stream audio-visual GMM/HMM system.",
"title": ""
},
{
"docid": "72f17106ad48b144ccab55b564fece7d",
"text": "We present an efficient and robust model matching method which uses a joint shape and texture appearance model to generate a set of region template detectors. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the AAM [1]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to human faces, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the original AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on two publicly available face data sets and improved tracking on a challenging set of in-car face sequences.",
"title": ""
}
] |
[
{
"docid": "1e4292950f907d26b27fa79e1e8fa41f",
"text": "All over the world every business and profit earning firm want to make their consumer loyal. There are many factors responsible for this customer loyalty but two of them are prominent. This research study is focused on that how customer satisfaction and customer retention contribute towards customer loyalty. For analysis part of this study, Universities students of Peshawar Region were targeted. A sample of 120 were selected from three universities of Peshawar. These universities were Preston University, Sarhad University and City University of Science and Information technology. Analysis was conducted with the help of SPSS 19. Results of the study shows that customer loyalty is more dependent upon Customer satisfaction in comparison of customer retention. Customer perceived value and customer perceived quality are the major factors which contribute for the customer loyalty of Universities students for mobile handsets.",
"title": ""
},
{
"docid": "916a8420e7f15d783aab7e23ad60e16a",
"text": "Motion detection in video is important for a number of applications and fields. In video surveillance, motion detection is an essential accompaniment to activity recognition for early warning systems. Robotics also has much to gain from motion detection and segmentation, particularly in high speed motion tracking for tactile systems. There are a myriad of techniques for detecting and masking motion in an image. Successful systems have used Gaussian Models to discern background from foreground in an image (motion from static imagery). However, particularly in the case of a moving camera or frame of reference, it is necessary to compensate for the motion of the camera when attempting to discern objects moving in the foreground. For example, it is possible to estimate motion of the camera through optical flow methods or temporal differencing and then compensate for this motion in a background subtraction model. We selection a method by Yi et al. using Dual-Mode Single Gaussian Models which does just this. We implement the technique in Intel’s Thread Building Blocks (TBB) and NVIDIA’s CUDA libraries. We then compare parallelization improvements with a theoretical analysis of speedups based on the characteristics of our selected model and attributes of both TBB and CUDA. We make our implementation available to the public.",
"title": ""
},
{
"docid": "c105fdde48fdcbab369dc9698dc9fce9",
"text": "Social link identification SIL, that is to identify accounts across different online social networks that belong to the same user, is an important task in social network applications. Most existing methods to solve this problem directly applied machine-learning classifiers on features extracted from user’s rich information. In practice, however, only some limited user information can be obtained because of privacy concerns. In addition, we observe the existing methods cannot handle huge amount of potential account pairs from different OSNs. In this paper, we propose an effective SIL method to address the above two challenges by expanding known anchor links (seed account pairs belonging to the same person). In particular, we leverage potentially useful information possessed by the existing anchor link, and then develop a local expansion model to identify new social links, which are taken as a generated anchor link to be used for iteratively identifying additional new social link. We evaluate our method on two most popular Chinese social networks. Experimental results show our proposed method achieves much better performance in terms of both the number of correct account pairs and efficiency.",
"title": ""
},
{
"docid": "1a4c510097bb45346590b611c75a78c4",
"text": "We augment adversarial training (AT) with worst case adversarial training (WCAT) which improves adversarial robustness by 11% over the current stateof-the-art result in the `2 norm on CIFAR-10. We obtain verifiable average case and worst case robustness guarantees, based on the expected and maximum values of the norm of the gradient of the loss. We interpret adversarial training as Total Variation Regularization, which is a fundamental tool in mathematical image processing, and WCAT as Lipschitz regularization.",
"title": ""
},
{
"docid": "bfbd291ce302fc2d7bd8909bd0f7e01a",
"text": "The correlative change analysis of state parameters can provide powerful technical supports for safe, reliable, and high-efficient operation of the power transformers. However, the analysis methods are primarily based on a single or a few state parameters, and hence the potential failures can hardly be found and predicted. In this paper, a data-driven method of association rule mining for transformer state parameters has been proposed by combining the Apriori algorithm and probabilistic graphical model. In this method the disadvantage that whenever the frequent items are searched the whole data items have to be scanned cyclically has been overcame. This method is used in mining association rules of the numerical solutions of differential equations. The result indicates that association rules among the numerical solutions can be accurately mined. Finally, practical measured data of five 500 kV transformers is analyzed by the proposed method. The association rules of various state parameters have been excavated, and then the mined association rules are used in modifying the prediction results of single state parameters. The results indicate that the application of the mined association rules improves the accuracy of prediction. Therefore, the effectiveness and feasibility of the proposed method in association rule mining has been proved.",
"title": ""
},
{
"docid": "c3f7f9b70763c012698cad8295e50f2c",
"text": "Recommender systems are widely used in many areas, especially in e-commerce. Recently, they are also applied in e-learning tasks such as recommending resources (e.g. papers, books,..) to the learners (students). In this work, we propose a novel approach which uses recommender system techniques for educational data mining, especially for predicting student performance. To validate this approach, we compare recommender system techniques with traditional regression methods such as logistic/linear regression by using educational data for intelligent tutoring systems. Experimental results show that the proposed approach can improve prediction results.",
"title": ""
},
{
"docid": "65017c1a19a0e0b131d894622cb5f14c",
"text": "One of the most important steps in building a recommender system is the interaction design process, which defines how the recommender system interacts with a user. It also shapes the experience the user gets, from the point she registers and provides her preferences to the system, to the point she receives recommendations generated by the system. A proper interaction design may improve user experience and hence may result in higher usability of the system, as well as, in higher satisfaction. In this paper, we focus on the interaction design of a mobile food recommender system that, through a novel interaction process, elicits users’ long-term and short-term preferences for recipes. User’s long-term preferences are captured by asking the user to rate and tag familiar recipes, while for collecting the short-term preferences, the user is asked to select the ingredients she would like to include in the recipe to be prepared. Based on the combined exploitation of both types of preferences, a set of personalized recommendations is generated. We conducted a user study measuring the usability of the proposed interaction. The results of the study show that the majority of users rates the quality of the recommendations high and the system achieves usability scores above the standard benchmark.",
"title": ""
},
{
"docid": "58047bd197ebeb760156cc33462c1335",
"text": "We present a nonlinear, dynamic controller for a 6DOF quadrotor operating in an estimated, spatially varying, turbulent wind field. The quadrotor dynamics include the aerodynamic effects of drag, rotor blade flapping, and induced thrust due to translational velocity and external wind fields. To control the quadrotor we use a dynamic input/output feedback linearization controller that estimates a parametric model of the wind field using a recursive Bayesian filter. Each rotor experiences a possibly different wind field, which introduces moments that are accounted for in the controller and allows flight in wind fields that vary over the length of the vehicle. We add noise to the wind field in the form of Dryden turbulence to simulate the algorithm in two applications: autonomous ship landing and quadrotor proximity flight.",
"title": ""
},
{
"docid": "986f55bb12d71e534e1e2fe970f610fb",
"text": "Code corpora, as observed in large software systems, are now known to be far more repetitive and predictable than natural language corpora. But why? Does the difference simply arise from the syntactic limitations of programming languages? Or does it arise from the differences in authoring decisions made by the writers of these natural and programming language texts? We conjecture that the differences are not entirely due to syntax, but also from the fact that reading and writing code is un-natural for humans, and requires substantial mental effort; so, people prefer to write code in ways that are familiar to both reader and writer. To support this argument, we present results from two sets of studies: 1) a first set aimed at attenuating the effects of syntax, and 2) a second, aimed at measuring repetitiveness of text written in other settings (e.g. second language, technical/specialized jargon), which are also effortful to write. We find that this repetition in source code is not entirely the result of grammar constraints, and thus some repetition must result from human choice. While the evidence we find of similar repetitive behavior in technical and learner corpora does not conclusively show that such language is used by humans to mitigate difficulty, it is consistent with that theory. This discovery of “non-syntactic” repetitive behaviour is actionable, and can be leveraged for statistically significant improvements on the code suggestion task. We discuss this finding, and other future implications on practice, and for research.",
"title": ""
},
{
"docid": "98dcb6001d3b487493e911cc2737ce47",
"text": "The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow to discriminate between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420 which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.",
"title": ""
},
{
"docid": "042431e96028ed9729e6b174a78d642d",
"text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.",
"title": ""
},
{
"docid": "c0e5dfd33b2cb87f91c58d47286fde40",
"text": "Recently, a variety of representation learning approaches have been developed in the literature to induce latent generalizable features across two domains. In this paper, we extend the standard hidden Markov models (HMMs) to learn distributed state representations to improve cross-domain prediction performance. We reformulate the HMMs by mapping each discrete hidden state to a distributed representation vector and employ an expectationmaximization algorithm to jointly learn distributed state representations and model parameters. We empirically investigate the proposed model on cross-domain part-ofspeech tagging and noun-phrase chunking tasks. The experimental results demonstrate the effectiveness of the distributed HMMs on facilitating domain adaptation.",
"title": ""
},
{
"docid": "33e41cf93ec8bb99c215dbce4afc34f8",
"text": "This paper presents a general, trainable system for object detection in unconstrained, cluttered scenes. The system derives much of its power from a representation that describes an object class in terms of an overcomplete dictionary of local, oriented, multiscale intensity differences between adjacent regions, efficiently computable as a Haar wavelet transform. This example-based learning approach implicitly derives a model of an object class by training a support vector machine classifier using a large set of positive and negative examples. We present results on face, people, and car detection tasks using the same architecture. In addition, we quantify how the representation affects detection performance by considering several alternate representations including pixels and principal components. We also describe a real-time application of our person detection system as part of a driver assistance system.",
"title": ""
},
{
"docid": "2ba8dbe9a5dd2b06d0ed5031b519c51f",
"text": "Machine Learning on graphs and manifolds are important ubiquitous tasks with applications ranging from network analysis to 3D shape analysis. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph or mesh. Recently, there has been an increasing interest in geometric deep learning [6] that automatically learns signals defined on graphs and manifolds. We are then motivated to apply such methods to address the multifaceted challenges arising in computational biology and computer graphics for decades, i.e. protein function prediction and 3D facial expression recognition. Here we propose a deep graph neural network to successfully address the semi-supervised multi-label classification problem (i.e. protein function prediction). With regard to 3D facial expression recognition, we propose a deep residual B-Spline graph convolution network, which allows for end-to-end training and inference without using hand-crafted feature descriptors. Our method outperforms the current baseline results on 4DFAB [10] dataset.",
"title": ""
},
{
"docid": "9e259cafd152ad35dcd04e6a9c7d65ab",
"text": "Second-order pooling, a.k.a. bilinear pooling, has proven effective for deep learning based visual recognition. However, the resulting second-order networks yield a final representation that is orders of magnitude larger than that of standard, first-order ones, making them memory-intensive and cumbersome to deploy. Here, we introduce a general, parametric compression strategy that can produce more compact representations than existing compression techniques, yet outperform both compressed and uncompressed second-order models. Our approach is motivated by a statistical analysis of the network’s activations, relying on operations that lead to a Gaussian-distributed final representation, as inherently used by first-order deep networks. As evidenced by our experiments, this lets us outperform the state-of-the-art first-order and second-order models on several benchmark recognition datasets.",
"title": ""
},
{
"docid": "ae43fc77cfe3e88f00a519744407eed7",
"text": "In this work we use the recent advances in representation learning to propose a neural architecture for the problem of natural language inference. Our approach is aligned to mimic how a human does the natural language inference process given two statements. The model uses variants of Long Short Term Memory (LSTM), attention mechanism and composable neural networks, to carry out the task. Each part of our model can be mapped to a clear functionality humans do for carrying out the overall task of natural language inference. The model is end-to-end differentiable enabling training by stochastic gradient descent. On Stanford Natural Language Inference(SNLI) dataset, the proposed model achieves better accuracy numbers than all published models in literature.",
"title": ""
},
{
"docid": "eafa6403e38d2ceb63ef7c00f84efe77",
"text": "We propose a novel approach to learning distributed representations of variable-length text sequences in multiple languages simultaneously. Unlike previous work which often derive representations of multi-word sequences as weighted sums of individual word vectors, our model learns distributed representations for phrases and sentences as a whole. Our work is similar in spirit to the recent paragraph vector approach but extends to the bilingual context so as to efficiently encode meaning-equivalent text sequences of multiple languages in the same semantic space. Our learned embeddings achieve state-of-theart performance in the often used crosslingual document classification task (CLDC) with an accuracy of 92.7 for English to German and 91.5 for German to English. By learning text sequence representations as a whole, our model performs equally well in both classification directions in the CLDC task in which past work did not achieve.",
"title": ""
},
{
"docid": "35792db324d1aaf62f19bebec6b1e825",
"text": "Keyphrases: Global Vectors for Word Representation (GloVe). Intrinsic and extrinsic evaluations. Effect of hyperparameters on analogy evaluation tasks. Correlation of human judgment with word vector distances. Dealing with ambiguity in word using contexts. Window classification. This set of notes first introduces the GloVe model for training word vectors. Then it extends our discussion of word vectors (interchangeably called word embeddings) by seeing how they can be evaluated intrinsically and extrinsically. As we proceed, we discuss the example of word analogies as an intrinsic evaluation technique and how it can be used to tune word embedding techniques. We then discuss training model weights/parameters and word vectors for extrinsic tasks. Lastly we motivate artificial neural networks as a class of models for natural language processing tasks.",
"title": ""
}
] |
scidocsrr
|
a6408e0dd5b3fe97a957388b5402eac6
|
IPv6 Addressing Strategies for IoT
|
[
{
"docid": "0ca445eed910eacccbb9f2cc9569181b",
"text": "Nanotechnology promises new solutions for many applications in the biomedical, industrial and military fields as well as in consumer and industrial goods. The interconnection of nanoscale devices with existing communication networks and ultimately the Internet defines a new networking paradigm that is further referred to as the Internet of Nano-Things. Within this context, this paper discusses the state of the art in electromagnetic communication among nanoscale devices. An in-depth view is provided from the communication and information theoretic perspective, by highlighting the major research challenges in terms of channel modeling, information encoding and protocols for nanonetworks and the Internet of Nano-Things.",
"title": ""
}
] |
[
{
"docid": "071b34508ab6aa0eefbc9f5966a127ee",
"text": "Existing single view, 3D face reconstruction methods can produce beautifully detailed 3D results, but typically only for near frontal, unobstructed viewpoints. We describe a system designed to provide detailed 3D reconstructions of faces viewed under extreme conditions, out of plane rotations, and occlusions. Motivated by the concept of bump mapping, we propose a layered approach which decouples estimation of a global shape from its mid-level details (e.g., wrinkles). We estimate a coarse 3D face shape which acts as a foundation and then separately layer this foundation with details represented by a bump map. We show how a deep convolutional encoder-decoder can be used to estimate such bump maps. We further show how this approach naturally extends to generate plausible details for occluded facial regions. We test our approach and its components extensively, quantitatively demonstrating the invariance of our estimated facial details. We further provide numerous qualitative examples showing that our method produces detailed 3D face shapes in viewing conditions where existing state of the art often break down.",
"title": ""
},
{
"docid": "1eb30a6cf31e5c256b9d1ca091e532cc",
"text": "The aim of this study was to evaluate the range of techniques used by radiologists performing shoulder, hip, and knee arthrography using fluoroscopic guidance. Questionnaires on shoulder, hip, and knee arthrography were distributed to radiologists at a national radiology meeting. We enquired regarding years of experience, preferred approaches, needle gauge, gadolinium dilution, and volume injected. For each approach, the radiologist was asked their starting and end needle position based on a numbered and lettered grid superimposed on a radiograph. Sixty-eight questionnaires were returned. Sixty-eight radiologists performed shoulder and hip arthrography, and 65 performed knee arthrograms. Mean experience was 13.5 and 12.8 years, respectively. For magnetic resonance arthrography, a gadolinium dilution of 1/200 was used by 69–71%. For shoulder arthrography, an anterior approach was preferred by 65/68 (96%). The most common site of needle end position, for anterior and posterior approaches, was immediately lateral to the humeral cortex. A 22-gauge needle was used by 46/66 (70%). Mean injected volume was 12.7 ml (5–30). For hip arthrography, an anterior approach was preferred by 51/68 (75%). The most common site of needle end position, for anterior and lateral approaches, was along the lateral femoral head/neck junction. A 22-gauge needle was used by 53/68 (78%). Mean injected volume was 11.5 ml (5–20). For knee arthrography, a lateral approach was preferred by 41/64 (64%). The most common site of needle end position, for lateral and medial approaches, was mid-patellofemoral joint level. A 22-gauge needle was used by 36/65 (56%). Mean injected volume was 28.2 ml (5–60). Arthrographic approaches for the shoulder, hip, and knee vary among radiologists over a wide range of experience levels.",
"title": ""
},
{
"docid": "099a291a9a0adaf1b6d276387ab73ca5",
"text": "BACKGROUND\nAround the world, populations are aging and there is a growing concern about ways that older adults can maintain their health and well-being while living in their homes.\n\n\nOBJECTIVES\nThe aim of this paper was to conduct a systematic literature review to determine: (1) the levels of technology readiness among older adults and, (2) evidence for smart homes and home-based health-monitoring technologies that support aging in place for older adults who have complex needs.\n\n\nRESULTS\nWe identified and analyzed 48 of 1863 relevant papers. Our analyses found that: (1) technology-readiness level for smart homes and home health monitoring technologies is low; (2) the highest level of evidence is 1b (i.e., one randomized controlled trial with a PEDro score ≥6); smart homes and home health monitoring technologies are used to monitor activities of daily living, cognitive decline and mental health, and heart conditions in older adults with complex needs; (3) there is no evidence that smart homes and home health monitoring technologies help address disability prediction and health-related quality of life, or fall prevention; and (4) there is conflicting evidence that smart homes and home health monitoring technologies help address chronic obstructive pulmonary disease.\n\n\nCONCLUSIONS\nThe level of technology readiness for smart homes and home health monitoring technologies is still low. The highest level of evidence found was in a study that supported home health technologies for use in monitoring activities of daily living, cognitive decline, mental health, and heart conditions in older adults with complex needs.",
"title": ""
},
{
"docid": "69e2cd21ca9b5d14a09820b83f77c105",
"text": "Stochastic Gradient Descent (SGD) is an important algorithm in machine learning. With constant learning rates, it is a stochastic process that, after an initial phase of convergence, generates samples from a stationary distribution. We show that SGD with constant rates can be effectively used as an approximate posterior inference algorithm for probabilistic modeling. Specifically, we show how to adjust the tuning parameters of SGD such as to match the resulting stationary distribution to the posterior. This analysis rests on interpreting SGD as a continuoustime stochastic process and then minimizing the Kullback-Leibler divergence between its stationary distribution and the target posterior. (This is in the spirit of variational inference.) In more detail, we model SGD as a multivariate Ornstein-Uhlenbeck process and then use properties of this process to derive the optimal parameters. This theoretical framework also connects SGD to modern scalable inference algorithms; we analyze the recently proposed stochastic gradient Fisher scoring under this perspective. We demonstrate that SGD with properly chosen constant rates gives a new way to optimize hyperparameters in probabilistic models.",
"title": ""
},
{
"docid": "f3727bfc3965bcb49d8897f144ac13a3",
"text": "Presenteeism refers to attending work while ill. Although it is a subject of intense interest to scholars in occupational medicine, relatively few organizational scholars are familiar with the concept. This article traces the development of interest in presenteeism, considers its various conceptualizations, and explains how presenteeism is typically measured. Organizational and occupational correlates of attending work when ill are reviewed, as are medical correlates of resulting productivity loss. It is argued that presenteeism has important implications for organizational theory and practice, and a research agenda for organizational scholars is presented. Copyright # 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "703174754f25cea5a9f1e1e7f1988b76",
"text": "This study proposes a data-driven approach to phone set construction for code-switching automatic speech recognition (ASR). Acoustic and context-dependent cross-lingual articulatory features (AFs) are incorporated into the estimation of the distance between triphone units for constructing a Chinese-English phone set. The acoustic features of each triphone in the training corpus are extracted for constructing an acoustic triphone HMM. Furthermore, the articulatory features of the \"last/first\" state of the corresponding preceding/succeeding triphone in the training corpus are used to construct an AF-based GMM. The AFs, extracted using a deep neural network (DNN), are used for code-switching articulation modeling to alleviate the data sparseness problem due to the diverse context-dependent phone combinations in intra-sentential code-switching. The triphones are then clustered to obtain a Chinese-English phone set based on the acoustic HMMs and the AF-based GMMs using a hierarchical triphone clustering algorithm. Experimental results on code-switching ASR show that the proposed method for phone set construction outperformed other traditional methods.",
"title": ""
},
{
"docid": "413df06d6ba695aa5baa13ea0913c6e6",
"text": "Time stamping is a technique used to prove the existence of certain digital data prior to a specific point in time. With the recent development of electronic commerce, time stamping is now widely recognized as an important technique used to ensure the integrity of digital data for a long time period. Various time stamping schemes and services have been proposed. When one uses a certain time stamping service, he should confirm in advance that its security level sufficiently meets his security requirements. However, time stamping schemes are generally so complicated that it is not easy to evaluate their security levels accurately. It is important for users to have a good grasp of current studies of time stamping schemes and to make use of such studies to select an appropriate time stamping service. Une and Matsumoto [2000], [2001a], [2001b] and [2002] have proposed a method of classifying time stamping schemes and evaluating their security systematically. Their papers have clarified the objectives, functions and entities involved in time stamping schemes and have discussed the conditions sufficient to detect the alteration of a time stamp in each scheme. This paper explains existing problems regarding the security evaluation of time stamping schemes and the results of Une and Matsumoto [2000], [2001a], [2001b] and [2002]. It also applies their results to some existing time stamping schemes and indicates possible directions of further research into time stamping schemes.",
"title": ""
},
{
"docid": "76a1efcbbbdd3ed81aa38925b804c589",
"text": "Typical recent neural network designs are primarily convolutional layers, but 1 the tricks enabling structured efficient linear layers (SELLs) have not yet been 2 adapted to the convolutional setting. We present a method to express the weight 3 tensor in a convolutional layer using diagonal matrices, discrete cosine transforms 4 (DCTs) and permutations that can be optimised using standard stochastic gradient 5 methods. A network composed of such structured efficient convolutional layers 6 (SECL) outperforms existing low-rank networks and demonstrates competitive 7 computational efficiency. 8",
"title": ""
},
{
"docid": "911b20694a9f8ba98a2cfe74705b0f86",
"text": "Synthetic hand pose data has been frequently used in vision based hand gesture recognition. However existing synthetic hand pose generators are not able to detect intersection between various hand parts and can synthesize self intersecting poses. Using such data may lead to learning wrong models. We propose a method to eliminate self intersecting synthetic hand poses by accurately detecting intersections between various hand parts. We model each hand part as a convex hull and calculate pairwise distance between the parts, labeling any pair with a negative distance as intersecting. A hand pose with at least one pair of intersecting parts is labeled as self intersecting. We show experimentally that our method is very accurate and performs better than existing techniques. We also show that it is fast enough for offline data generation.",
"title": ""
},
{
"docid": "83ae128f71bb154177881012dfb6a680",
"text": "Cell imbalance in large battery packs degrades their capacity delivery, especially for cells connected in series where the weakest cell dominates their overall capacity. In this article, we present a case study of exploiting system reconfigurations to mitigate the cell imbalance in battery packs. Specifically, instead of using all the cells in a battery pack to support the load, selectively skipping cells to be discharged may actually enhance the pack’s capacity delivery. Based on this observation, we propose CSR, a Cell Skipping-assisted Reconfiguration algorithm that identifies the system configuration with (near)-optimal capacity delivery. We evaluate CSR using large-scale emulation based on empirically collected discharge traces of 40 lithium-ion cells. CSR achieves close-to-optimal capacity delivery when the cell imbalance in the battery pack is low and improves the capacity delivery by about 20% and up to 1x in the case of a high imbalance.",
"title": ""
},
{
"docid": "cd982f21a3f8b782a267e7241f93a957",
"text": "Today's distributed file system architectures scale well to large amounts of data. Their performance, however, is often limited by their metadata server. In this paper, we reconsider the database backend of the metadata server and propose a design that simplifies implementation and enhances performance.In particular, we argue that the concept of log-structured merge (LSM) trees is a better foundation for the storage layer of a metadata server than the traditionally used B-trees. We present BabuDB, a database that relies on LSM-tree-like index structures, and describe how it stores file system metadata.We show that our solution offers better scalability and performance than equivalent ext4 and Berkeley DB-based metadata server implementations. Our experiments include real-world metadata traces from a Linux kernel build and an IMAP mail server. Results show that BabuDB is up to twice as fast as the ext4-based backend and outperforms a Berkeley DB setup by an order of magnitude.",
"title": ""
},
{
"docid": "3ee8ad2c9e07c33781fc53ac2e11cd6e",
"text": "Tapping into the \"folk knowledge\" needed to advance machine learning applications.",
"title": ""
},
{
"docid": "5deae44a9c14600b1a2460836ed9572d",
"text": "Grasping an object in a cluttered, unorganized environment is challenging because of unavoidable contacts and interactions between the robot and multiple immovable (static) and movable (dynamic) obstacles in the environment. Planning an approach trajectory for grasping in such situations can benefit from physics-based simulations that describe the dynamics of the interaction between the robot manipulator and the environment. In this work, we present a physics-based trajectory optimization approach for planning grasp approach trajectories. We present novel cost objectives and identify failure modes relevant to grasping in cluttered environments. Our approach uses rollouts of physics-based simulations to compute the gradient of the objective and of the dynamics. Our approach naturally generates behaviors such as choosing to push objects that are less likely to topple over, recognizing and avoiding situations which might cause a cascade of objects to fall over, and adjusting the manipulator trajectory to push objects aside in a direction orthogonal to the grasping direction. We present results in simulation for grasping in a variety of cluttered environments with varying levels of density of obstacles in the environment. Our experiments in simulation indicate that our approach outperforms a baseline approach that considers multiple straight-line trajectories modified to account for static obstacles by an aggregate success rate of 14% with varying degrees of object clutter.",
"title": ""
},
{
"docid": "90f90bee3fa1f66b7eb9c7da0f5a6d8e",
"text": "Stack Overflow is a popular questions and answers (Q&A) website among software developers. It counts more than two millions of users who actively contribute by asking and answering thousands of questions daily. Identifying and reviewing low quality posts preserves the quality of site's contents and it is crucial to maintain a good user experience. In Stack Overflow the identification of poor quality posts is performed by selected users manually. The system also uses an automated identification system based on textual features. Low quality posts automatically enter a review queue maintained by experienced users. We present an approach to improve the automated system in use at Stack Overflow. It analyzes both the content of a post (e.g., simple textual features and complex readability metrics) and community-related aspects (e.g., popularity of a user in the community). Our approach reduces the size of the review queue effectively and removes misclassified good quality posts.",
"title": ""
},
{
"docid": "1cbf4840e09a950a5adfcbbfbd476d6a",
"text": "We introduce an online neural sequence to sequence model that learns to alternate between encoding and decoding segments of the input as it is read. By independently tracking the encoding and decoding representations our algorithm permits exact polynomial marginalization of the latent segmentation during training, and during decoding beam search is employed to find the best alignment path together with the predicted output sequence. Our model tackles the bottleneck of vanilla encoder-decoders that have to read and memorize the entire input sequence in their fixedlength hidden states before producing any output. It is different from previous attentive models in that, instead of treating the attention weights as output of a deterministic function, our model assigns attention weights to a sequential latent variable which can be marginalized out and permits online generation. Experiments on abstractive sentence summarization and morphological inflection show significant performance gains over the baseline encoder-decoders.",
"title": ""
},
{
"docid": "cee3ad388fcc1e3083acec79aba6261f",
"text": "BACKGROUND\nThere is growing evidence that oxidative stress contributes to the pathogenesis of hypertension and endothelial dysfunction. Thus, dietary antioxidants may beneficially influence blood pressure (BP) and endothelial function by reducing oxidative stress.\n\n\nOBJECTIVE\nTo determine if vitamin C and polyphenols, alone or in combination, can lower BP, improve endothelial function and reduce oxidative stress in hypertensive individuals.\n\n\nDESIGN\nA total of 69 treated hypertensive individuals with a mean 24-h ambulatory systolic blood pressure > or = 125 mmHg participated in a randomized, double-blind, placebo-controlled, factorial trial. Following a 3-week washout, participants received 500 mg/day vitamin C, 1000 mg/day grape-seed polyphenols, both vitamin C and polyphenols, or neither for 6 weeks. At baseline and post-intervention, 24-h ambulatory BP, ultrasound-assessed endothelium-dependent and -independent vasodilation of the brachial artery, and markers of oxidative damage, (plasma and urinary F2-isoprostanes, oxidized low-density lipoproteins and plasma tocopherols), were measured.\n\n\nRESULTS\nA significant interaction between grape-seed and vitamin C treatments for effects on BP was observed. Vitamin C alone reduced systolic BP versus placebo (-1.8 +/- 0.8 mmHg, P = 0.03), while polyphenols did not (-1.3 +/- 0.8 mmHg, P = 0.12). However, treatment with the combination of vitamin C and polyphenols increased systolic BP (4.8 +/- 0.9 mmHg versus placebo; 6.6 +/- 0.8 mmHg versus vitamin C; 6.1 +/- 0.9 mmHg versus polyphenols mmHg, each P < 0.0001) and diastolic BP (2.7 +/- 0.6 mmHg, P < 0.0001 versus placebo; 1.5 +/- 0.6 mmHg, P = 0.016 versus vitamin C; 3.2 +/- 0.7 mmHg, P < 0.0001 versus polyphenols). Endothelium-dependent and -independent vasodilation, and markers of oxidative damage were not significantly altered.\n\n\nCONCLUSION\nAlthough the mechanism remains to be elucidated, these results suggest caution for hypertensive subjects taking supplements containing combinations of vitamin C and polyphenols.",
"title": ""
},
{
"docid": "67544e71b45acb84923a3db84534a377",
"text": "The precision of point-of-gaze (POG) estimation during a fixation is an important factor in determining the usability of a noncontact eye-gaze tracking system for real-time applications. The objective of this paper is to define and measure POG fixation precision, propose methods for increasing the fixation precision, and examine the improvements when the methods are applied to two POG estimation approaches. To achieve these objectives, techniques for high-speed image processing that allow POG sampling rates of over 400 Hz are presented. With these high-speed POG sampling rates, the fixation precision can be improved by filtering while maintaining an acceptable real-time latency. The high-speed sampling and digital filtering techniques developed were applied to two POG estimation techniques, i.e., the highspeed pupil-corneal reflection (HS P-CR) vector method and a 3-D model-based method allowing free head motion. Evaluation on the subjects has shown that when operating at 407 frames per second (fps) with filtering, the fixation precision for the HS P-CR POG estimation method was improved by a factor of 5.8 to 0.035deg (1.6 screen pixels) compared to the unfiltered operation at 30 fps. For the 3-D POG estimation method, the fixation precision was improved by a factor of 11 to 0.050deg (2.3 screen pixels) compared to the unfiltered operation at 30 fps.",
"title": ""
},
{
"docid": "37384ddd7497b823f7f2f7b8b2e9a824",
"text": "This paper presents a comparative analysis of the three widely used parallel sorting algorithms: OddEven sort, Rank sort and Bitonic sort in terms of sorting rate, sorting time and speed-up on CPU and different GPU architectures. Alongside we have implemented novel parallel algorithm: min-max butterfly network, for finding minimum and maximum in large data sets. All algorithms have been implemented exploiting data parallelism model, for achieving high performance, as available on multi-core GPUs using the OpenCL specification. Our results depicts minimum speed-up19x of bitonic sort against oddeven sorting technique for small queue sizes on CPU and maximum of 2300x speed-up for very large queue sizes on Nvidia Quadro 6000 GPU architecture. Our implementation of full-butterfly network sorting results in relatively better performance than all of the three sorting techniques: bitonic, odd-even and rank sort. For min-max butterfly network, our findings report high speed-up of Nvidia quadro 6000 GPU for high data set size reaching 2 24 with much lower sorting time.",
"title": ""
},
{
"docid": "1cac08a96e946fb6d98290aa8bb6c434",
"text": "Accelerated in vitro release testing methodology has been developed as an indicator of product performance to be used as a discriminatory quality control (QC) technique for the release of clinical and commercial batches of biodegradable microspheres. While product performance of biodegradable microspheres can be verified by in vivo and/or in vitro experiments, such evaluation can be particularly challenging because of slow polymer degradation, resulting in extended study times, labor, and expense. Three batches of Leuprolide poly(lactic-co-glycolic acid) (PLGA) microspheres having varying morphology (process variants having different particle size and specific surface area) were manufactured by the solvent extraction/evaporation technique. Tests involving in vitro release, polymer degradation and hydration of the microspheres were performed on the three batches at 55°C. In vitro peptide release at 55°C was analyzed using a previously derived modification of the Weibull function termed the modified Weibull equation (MWE). Experimental observations and data analysis confirm excellent reproducibility studies within and between batches of the microsphere formulations demonstrating the predictability of the accelerated experiments at 55°C. The accelerated test method was also successfully able to distinguish the in vitro product performance between the three batches having varying morphology (process variants), indicating that it is a suitable QC tool to discriminate product or process variants in clinical or commercial batches of microspheres. Additionally, data analysis utilized the MWE to further quantify the differences obtained from the accelerated in vitro product performance test between process variants, thereby enhancing the discriminatory power of the accelerated methodology at 55°C.",
"title": ""
},
{
"docid": "7c4104651e484e4cbff5735d62f114ef",
"text": "A pair of salient tradeoffs have driven the multiple-input multiple-output (MIMO) systems developments. More explicitly, the early era of MIMO developments was predominantly motivated by the multiplexing-diversity tradeoff between the Bell Laboratories layered space-time and space-time block coding. Later, the linear dispersion code concept was introduced to strike a flexible tradeoff. The more recent MIMO system designs were motivated by the performance-complexity tradeoff, where the spatial modulation and space-time shift keying concepts eliminate the problem of inter-antenna interference and perform well with the aid of low-complexity linear receivers without imposing a substantial performance loss on generic maximum-likelihood/max a posteriori -aided MIMO detection. Against the background of the MIMO design tradeoffs in both uncoded and coded MIMO systems, in this treatise, we offer a comprehensive survey of MIMO detectors ranging from hard decision to soft decision. The soft-decision MIMO detectors play a pivotal role in approaching to the full-performance potential promised by the MIMO capacity theorem. In the near-capacity system design, the soft-decision MIMO detection dominates the total complexity, because all the MIMO signal combinations have to be examined, when both the channel’s output signal and the a priori log-likelihood ratios gleaned from the channel decoder are taken into account. Against this background, we provide reduced-complexity design guidelines, which are conceived for a wide-range of soft-decision MIMO detectors.",
"title": ""
}
] |
scidocsrr
|
76baf3a5db643215d7f1b9acd50ee4fc
|
DECAF: A Platform-Neutral Whole-System Dynamic Binary Analysis Platform
|
[
{
"docid": "cc7033023e1c5a902dfa10c8346565c4",
"text": "Satisfiability Modulo Theories (SMT) problem is a decision problem for logical first order formulas with respect to combinations of background theories such as: arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3 is a new and efficient SMT Solver freely available from Microsoft Research. It is used in various software verification and analysis applications.",
"title": ""
},
{
"docid": "bb36611c41a3a4ffccb6c0ce55d8e13c",
"text": "Dynamic taint analysis (DTA) is a powerful technique for, among other things, tracking the flow of sensitive information. However, it is vulnerable to false negative errors caused by implicit flows, situations in which tainted data values affect control flow, which in turn affects other data. We propose DTA++, an enhancement to dynamic taint analysis that additionally propagates taint along a targeted subset of control-flow dependencies. Our technique first diagnoses implicit flows within information-preserving transformations, where they are most likely to cause undertainting. Then it generates rules to add additional taint only for those control dependencies, avoiding the explosion of tainting that can occur when propagating taint along all control dependencies indiscriminately. We implement DTA++ using the BitBlaze platform for binary analysis, and apply it to off-the-shelf Windows/x86 applications. In a case study of 8 applications such as Microsoft Word, DTA++ efficiently locates just a few implicit flows that could otherwise lead to under-tainting, and resolves them by propagating taint while introducing little over-tainting.",
"title": ""
},
{
"docid": "3c4e3d86df819aea592282b171191d0d",
"text": "Memory forensic analysis collects evidence for digital crimes and malware attacks from the memory of a live system. It is increasingly valuable, especially in cloud computing. However, memory analysis on on commodity operating systems (such as Microsoft Windows) faces the following key challenges: (1) a partial knowledge of kernel data structures; (2) difficulty in handling ambiguous pointers; and (3) lack of robustness by relying on soft constraints that can be easily violated by kernel attacks. To address these challenges, we present MACE, a memory analysis system that can extract a more complete view of the kernel data structures for closed-source operating systems and significantly improve the robustness by only leveraging pointer constraints (which are hard to manipulate) and evaluating these constraint globally (to even tolerate certain amount of pointer attacks). We have evaluated MACE on 100 memory images for Windows XP SP3 and Windows 7 SP0. Overall, MACE can construct a kernel object graph from a memory image in just a few minutes, and achieves over 95% recall and over 96% precision. Our experiments on real-world rootkit samples and synthetic attacks further demonstrate that MACE outperforms other external memory analysis tools with respect to wider coverage and better robustness.",
"title": ""
}
] |
[
{
"docid": "69c65ed870be8074d21ed1cfd0a42a2f",
"text": "With the popularity of online multimedia videos, there has been much interest in recent years in acoustic event detection and classification for the improvement of online video search. The audio component of a video has the potential to contribute significantly to multimedia event classification. Recent research in audio document classification has drawn parallels to text and image document retrieval by employing what is referred to as the bag-of-audio words (BoAW) method. Compared to supervised approaches where audio concept detectors are trained using annotated data and extracted labels are used as lowlevel features for multimedia event classification. The BoAW approach extracts audio concepts in an unsupervised fashion. Hence this method has the advantage that it can be employed easily for a new set of audio concepts in multimedia videos without going through a laborious annotation effort. In this paper, we explore variations of the BoAW method and present results on NIST 2011 multimedia event detection (MED) dataset.",
"title": ""
},
{
"docid": "c979c978f1b8c82c2b0b8235464e2bf1",
"text": "Cloud Computing is one of the biggest buzzwords in the computer world these days. It allows resource sharing that includes software, platform and infrastructure by means of virtualization. Virtualization is the core technology behind cloud resource sharing. This environment strives to be dynamic, reliable, and customizable with a guaranteed quality of service. Security is as much of an issue in the cloud as it is anywhere else. Different people share different point of view on cloud computing. Some believe it is unsafe to use cloud. Cloud vendors go out of their way to ensure security. This paper investigates few major security issues with cloud computing and the existing counter measures to those security challenges in the world of cloud computing..",
"title": ""
},
{
"docid": "55f95c7b59f17fb210ebae97dbd96d72",
"text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.",
"title": ""
},
{
"docid": "ab51b39647784a4788c705e2fb6b3a20",
"text": "We propose a light-weight, yet effective, technique for fuzz-testing security protocols. Our technique is modular, it exercises (stateful) protocol implementations in depth, and handles encrypted traffic. We use a concrete implementation of the protocol to generate valid inputs, and mutate the inputs using a set of fuzz operators. A dynamic memory analysis tool monitors the execution as an oracle to detect the vulnerabilities exposed by fuzz-testing. We provide the fuzzer with the necessary keys and cryptographic algorithms in order to properly mutate encrypted messages. We present a case study on two widely used, mature implementations of the Internet Key Exchange (IKE) protocol and report on two new vulnerabilities discovered by our fuzz-testing tool. We also compare the effectiveness of our technique to two existing model-based fuzz-testing tools for IKE.",
"title": ""
},
{
"docid": "e6a5ce99e55594cd945a57f801bd2d35",
"text": "Cloud Computing is a powerful, flexible, cost efficient platform for providing consumer IT services over the Internet. However Cloud Computing has various levels of risk factors because most important information is outsourced by third party vendors, which means harder to maintain the level of security for data. Steganography is art of hiding information in an image. In this most of the techniques are based on the Least Significant Bit(LSB) bit ,but the hackers easily detect as it embed data sequentially in all pixels .Instead of embedding data sequentially some of the techniques choose randomly. A better approach for this chooses edge pixels for embedding data. So we propose novel technique to hide the data in the Fibonacci edge pixels of an image by extending previous edge based algorithms. This algorithm hides the data in the Fibonacci edge pixels of an image and thus ensures better security against attackers.",
"title": ""
},
{
"docid": "4d136b60209ef625c09a15e3e5abb7f7",
"text": "Alterations in the bidirectional interactions between the intestine and the nervous system have important roles in the pathogenesis of irritable bowel syndrome (IBS). A body of largely preclinical evidence suggests that the gut microbiota can modulate these interactions. A small and poorly defined role for dysbiosis in the development of IBS symptoms has been established through characterization of altered intestinal microbiota in IBS patients and reported improvement of subjective symptoms after its manipulation with prebiotics, probiotics, or antibiotics. It remains to be determined whether IBS symptoms are caused by alterations in brain signaling from the intestine to the microbiota or primary disruption of the microbiota, and whether they are involved in altered interactions between the brain and intestine during development. We review the potential mechanisms involved in the pathogenesis of IBS in different groups of patients. Studies are needed to better characterize alterations to the intestinal microbiome in large cohorts of well-phenotyped patients, and to correlate intestinal metabolites with specific abnormalities in gut-brain interactions.",
"title": ""
},
{
"docid": "94a59a5cbcffb2e33732533477bb51c1",
"text": "Personas are a critical method for orienting design and development teams to user experience. Prior work has noted challenges in justifying them to developers. In contrast, it has been assumed that designers and user experience professionals - whose goal is to focus designs on targeted users - will readily exploit personas. This paper examines that assumption. We present the first study of how experienced user-centered design (UCD) practitioners with prior experience deploying personas, use and perceive personas in industrial software design. We identify limits to the persona approach in the context studied. Practitioners used personas almost exclusively for communication, but not for design. Participants identified four problems with personas, finding them abstract, impersonal, misleading and distracting. Our findings argue for a new approach to persona deployment and construction. Personas cannot replace immersion in actual user data. And rather than focusing on creating engaging personas, it is critical to avoid persona attributes that mislead or distract.",
"title": ""
},
{
"docid": "2e00e8ee2e5661ca17c621adcea99cb7",
"text": "SCRUM poses key challenges for usability (Baxter et al., 2008). First, product goals are set without an adequate study of the userpsilas needs and context. The user stories selected may not be good enough from the usability perspective. Second, user stories of usability import may not be prioritized high enough. Third, given the fact that a product owner thinks in terms of the minimal marketable set of features in a just-in-time process, it is difficult for the development team to get a holistic view of the desired product or features. This experience report proposes U-SCRUM as a variant of the SCRUM methodology. Unlike typical SCRUM, where at best a team member is responsible for usability, U-SCRUM is based on our experience with having two product owners, one focused on usability and the other on the more conventional functions. Our preliminary result is that U-SCRUM yields improved usability than SCRUM.",
"title": ""
},
{
"docid": "3e9f338da297c5173cf075fa15cd0a2e",
"text": "Recent years have witnessed a surge of publications aimed at tracing temporal changes in lexical semantics using distributional methods, particularly prediction-based word embedding models. However, this vein of research lacks the cohesion, common terminology and shared practices of more established areas of natural language processing. In this paper, we survey the current state of academic research related to diachronic word embeddings and semantic shifts detection. We start with discussing the notion of semantic shifts, and then continue with an overview of the existing methods for tracing such time-related shifts with word embedding models. We propose several axes along which these methods can be compared, and outline the main challenges before this emerging subfield of NLP, as well as prospects and possible applications.",
"title": ""
},
{
"docid": "6f518559d8c99ea1e6368ec8c108cabe",
"text": "This paper introduces an integrated Local Interconnect Network (LIN) transceiver which sets a new performance benchmark in terms of electromagnetic compatibility (EMC). The proposed topology succeeds in an extraordinary high robustness against RF disturbances which are injected into the BUS and in very low electromagnetic emissions (EMEs) radiated by the LIN network without adding any external components for filtering. In order to evaluate the circuits superior EMC performance, it was designed using a HV-BiCMOS technology for automotive applications, the EMC behavior was measured and the results were compared with a state of the art topology.",
"title": ""
},
{
"docid": "777e3818dfeb25358dedd6f740e20411",
"text": "Chronic obstructive pulmonary, pneumonia, asthma, tuberculosis, lung cancer diseases are the most important chest diseases. These chest diseases are important health problems in the world. In this study, a comparative chest diseases diagnosis was realized by using multilayer, probabilistic, learning vector quantization, and generalized regression neural networks. The chest diseases dataset were prepared by using patient’s epicrisis reports from a chest diseases hospital’s database. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "01e35a79d042275d7935e7531b4c7fde",
"text": "Biometrics technologies are gaining popularity today since they provide more reliable and efficient means of authentication and verification. Keystroke Dynamics is one of the famous biometric technologies, which will try to identify the authenticity of a user when the user is working via a keyboard. The authentication process is done by observing the change in the typing pattern of the user. A comprehensive survey of the existing keystroke dynamics methods, metrics, different approaches are given in this study. This paper also discusses about the various security issues and challenges faced by keystroke dynamics. KeywordsBiometris; Keystroke Dynamics; computer Security; Information Security; User Authentication.",
"title": ""
},
{
"docid": "6708846369ea2f352ac8784c75e4652d",
"text": "This work presents simple and fast structured Bayesian learning for matrix and tensor factorization models. An unblocked Gibbs sampler is proposed for factorization machines (FM) which are a general class of latent variable models subsuming matrix, tensor and many other factorization models. We empirically show on the large Netflix challenge dataset that Bayesian FM are fast, scalable and more accurate than state-of-the-art factorization models.",
"title": ""
},
{
"docid": "189709296668a8dd6f7be8e1b2f2e40f",
"text": "Uncertain data management, querying and mining have become important because the majority of real world data is accompanied with uncertainty these days. Uncertainty in data is often caused by the deficiency in underlying data collecting equipments or sometimes manually introduced to preserve data privacy. This work discusses the problem of distance-based outlier detection on uncertain datasets of Gaussian distribution. The Naive approach of distance-based outlier on uncertain data is usually infeasible due to expensive distance function. Therefore a cell-based approach is proposed in this work to quickly identify the outliers. The infinite nature of Gaussian distribution prevents to devise effective pruning techniques. Therefore an approximate approach using bounded Gaussian distribution is also proposed. Approximating Gaussian distribution by bounded Gaussian distribution enables an approximate but more efficient cell-based outlier detection approach. An extensive empirical study on synthetic and real datasets show that our proposed approaches are effective, efficient and scalable.",
"title": ""
},
{
"docid": "694a4039dba2354177ecda2e01c027c1",
"text": "This paper presents two pole shoe shapes used in interior permanent magnet (IPM) machines to produce a sinusoidal air-gap field. The first shape has an air-gap length which varies with the inverse cosine of the angle from the pole shoe middle. The second shape uses an arc with its centre offset from the origin. Although both designs are documented in the literature, no design rules exist regarding their optimum geometry for use in IPM machines. This paper corrects this by developing optimum ratios of the q-axis to d-axis air-gap lengths. The centre-offset arc design is improved by introducing flux barriers into its geometry through developing an optimum ratio of pole shoe width to permanent magnet (PM) width. Consequent pole rotors are also investigated and a third optimum ratio, the soft magnetic pole shoe angle to pole pitch, is developed. The three ratios will aid machine designers in their design work.",
"title": ""
},
{
"docid": "a2247241882074e5d27a3c3bbbde5936",
"text": "As scientific computation continues to scale, it is crucial to use floating-point arithmetic processors as efficiently as possible. Lower precision allows streaming architectures to perform more operations per second and can reduce memory bandwidth pressure on all architectures. However, using a precision that is too low for a given algorithm and data set will result in inaccurate results. Thus, developers must balance speed and accuracy when choosing the floating-point precision of their subroutines and data structures. I am investigating techniques to help developers learn about the runtime floating-point behavior of their programs, and to help them make decisions concerning the choice of precision in implementation. I propose to develop methods that will generate floating-point precision configurations, automatically testing and validating them using binary instrumentation. The goal is ultimately to make a recommendation to the developer regarding which parts of the program can be reduced to single-precision. The central thesis is that automated analysis techniques can make recommendations regarding the precision levels that each part of a computer program must use to maintain overall accuracy, with the goal of improving performance on scientific codes.",
"title": ""
},
{
"docid": "38037437ce3e86cda024f81cbd81cd6f",
"text": "BACKGROUND\nIt is widely known that more boys are born during and immediately after wars, but there has not been any ultimate (evolutionary) explanation for this 'returning soldier effect'. Here, I suggest that the higher sex ratios during and immediately after wars might be a byproduct of the fact that taller soldiers are more likely to survive battle and that taller parents are more likely to have sons.\n\n\nMETHODS\nI analyze a large sample of British Army service records during World War I.\n\n\nRESULTS\nSurviving soldiers were on average more than one inch (3.33 cm) taller than fallen soldiers.\n\n\nCONCLUSIONS\nConservative estimates suggest that the one-inch height advantage alone is more than twice as sufficient to account for all the excess boys born in the UK during and after World War I. While it remains unclear why taller soldiers are more likely to survive battle, I predict that the returning soldier effect will not happen in more recent and future wars.",
"title": ""
},
{
"docid": "b6b58b7a1c5d9112ea24c74539c95950",
"text": "We describe a view-management component for interactive 3D user interfaces. By view management, we mean maintaining visual constraints on the projections of objects on the view plane, such as locating related objects near each other, or preventing objects from occluding each other. Our view-management component accomplishes this by modifying selected object properties, including position, size, and transparency, which are tagged to indicate their constraints. For example, some objects may have geometric properties that are determined entirely by a physical simulation and which cannot be modified, while other objects may be annotations whose position and size are flexible.We introduce algorithms that use upright rectangular extents to represent on the view plane a dynamic and efficient approximation of the occupied space containing the projections of visible portions of 3D objects, as well as the unoccupied space in which objects can be placed to avoid occlusion. Layout decisions from previous frames are taken into account to reduce visual discontinuities. We present augmented reality and virtual reality examples to which we have applied our approach, including a dynamically labeled and annotated environment.",
"title": ""
},
{
"docid": "8d56697ff01bba1023ee7a43da0d589a",
"text": "Autism spectrum disorder (ASD) is a neurodevelopmental disability with atypical traits in behavioral and physiological responses. These atypical traits in individuals with ASD may be too subtle and subjective to measure visually using tedious methods of scoring. Alternatively, the use of intrusive sensors in the measurement of psychophysical responses in individuals with ASD may likely cause inhibition and bias. This paper proposes a novel experimental protocol for non-intrusive sensing and analysis of facial expression, visual scanning, and eye-hand coordination to investigate behavioral markers for ASD. An institutional review board approved pilot study is conducted to collect the response data from two groups of subjects (ASD and control) while they engage in the tasks of visualization, recognition, and manipulation. For the first time in the ASD literature, the facial action coding system is used to classify spontaneous facial responses. Statistical analyses reveal significantly (p <0.01) higher prevalence of smile expression for the group with ASD with the eye-gaze significantly averted (p<0.05) from viewing the face in the visual stimuli. This uncontrolled manifestation of smile without proper visual engagement suggests impairment in reciprocal social communication, e.g., social smile. The group with ASD also reveals poor correlation in eye-gaze and hand movement data suggesting deficits in motor coordination while performing a dynamic manipulation task. The simultaneous sensing and analysis of multimodal response data may provide useful quantitative insights into ASD to facilitate early detection of symptoms for effective intervention planning.",
"title": ""
},
{
"docid": "1303f7a3ddec79951e1b0e7480cdc04e",
"text": "Despite the availability of many effective antihypertensive drugs, the drug therapy for resistant hypertension remains a prominent problem. Reviews offer only the general recommendations of increasing dosage and adding drugs, offering clinicians little guidance with respect to the specifics of selecting medications and dosages. A simplified decision tree for drug selection that would be effective in most cases is needed. This review proposes such an approach. The approach is mechanism-based, targeting treatment at three hypertensive mechanisms: (1) sodium/volume, (2) the renin-angiotensin system (RAS), and (3) the sympathetic nervous system (SNS). It assumes baseline treatment with a 2-drug combination directed at sodium/volume and the RAS and recommends proceeding with one or both of just two treatment options: (1) strengthening the diuretic regimen, possibly with the addition of spironolactone, and/or (2) adding agents directed at the SNS, usually a β-blocker or combination of an α- and a β-blocker. The review calls for greater research and clinical attention directed to: (1) assessment of clinical clues that can help direct treatment toward either sodium/volume or the SNS, (2) increased recognition of the role of neurogenic (SNS-mediated) hypertension in resistant hypertension, (3) increased recognition of the effective but underutilized combination of α- + β-blockade, and (4) drug pharmacokinetics and dosing.",
"title": ""
}
] |
scidocsrr
|
779a7a75bce231c820bba58bf178b303
|
PPSGen: Learning-Based Presentation Slides Generation for Academic Papers
|
[
{
"docid": "f0c1bfed4083e6f6e5748fdbe76bd42a",
"text": "Multidocument extractive summarization relies on the concept of sentence centrality to identify the most important sentences in a document. Centrality is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We are now considering an approach for computing sentence importance based on the concept of eigenvector centrality (prestige) that we call LexPageRank. In this model, a sentence connectivity matrix is constructed based on cosine similarity. If the cosine similarity between two sentences exceeds a particular predefined threshold, a corresponding edge is added to the connectivity matrix. We provide an evaluation of our method on DUC 2004 data. The results show that our approach outperforms centroid-based summarization and is quite successful compared to other summarization systems.",
"title": ""
},
{
"docid": "6d5429ddf4050724432da73af60274d6",
"text": "We present an Integer Linear Program for exact inference under a maximum coverage model for automatic summarization. We compare our model, which operates at the subsentence or “concept”-level, to a sentencelevel model, previously solved with an ILP. Our model scales more efficiently to larger problems because it does not require a quadratic number of variables to address redundancy in pairs of selected sentences. We also show how to include sentence compression in the ILP formulation, which has the desirable property of performing compression and sentence selection simultaneously. The resulting system performs at least as well as the best systems participating in the recent Text Analysis Conference, as judged by a variety of automatic and manual content-based metrics.",
"title": ""
},
{
"docid": "6c45d7b4a7732da4441261f7f1e9e42c",
"text": "In citation-based summarization, text written by several researchers is leveraged to identify the important aspects of a target paper. Previous work on this problem focused almost exclusively on its extraction aspect (i.e. selecting a representative set of citation sentences that highlight the contribution of the target paper). Meanwhile, the fluency of the produced summaries has been mostly ignored. For example, diversity, readability, cohesion, and ordering of the sentences included in the summary have not been thoroughly considered. This resulted in noisy and confusing summaries. In this work, we present an approach for producing readable and cohesive citation-based summaries. Our experiments show that the proposed approach outperforms several baselines in terms of both extraction quality and fluency.",
"title": ""
},
{
"docid": "0ac0f9965376f5547a2dabd3d06b6b96",
"text": "A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods.",
"title": ""
}
] |
[
{
"docid": "1d26fc3a5f07e7ea678753e7171846c4",
"text": "Data uncertainty is an inherent property in various applications due to reasons such as outdated sources or imprecise measurement. When data mining techniques are applied to these data, their uncertainty has to be considered to obtain high quality results. We present UK-means clustering, an algorithm that enhances the K-means algorithm to handle data uncertainty. We apply UKmeans to the particular pattern of moving-object uncertainty. Experimental results show that by considering uncertainty, a clustering algorithm can produce more accurate results.",
"title": ""
},
{
"docid": "6ea59490942d4748ce85c728573bdb9a",
"text": "We present an accurate, efficient, and robust pose estimation system based on infrared LEDs. They are mounted on a target object and are observed by a camera that is equipped with an infrared-pass filter. The correspondences between LEDs and image detections are first determined using a combinatorial approach and then tracked using a constant-velocity model. The pose of the target object is estimated with a P3P algorithm and optimized by minimizing the reprojection error. Since the system works in the infrared spectrum, it is robust to cluttered environments and illumination changes. In a variety of experiments, we show that our system outperforms state-of-the-art approaches. Furthermore, we successfully apply our system to stabilize a quadrotor both indoors and outdoors under challenging conditions. We release our implementation as open-source software.",
"title": ""
},
{
"docid": "3256ea4c5b8ca8a4061df16038b51146",
"text": "We present initial methods for incorporating unstructured external textual information into neural dialogue systems for predicting the next utterance of a user in a two-party chat conversation. The main objective is to leverage additional information about the topic of the conversation to improve the prediction accuracy. We propose a simple method for extracting this knowledge, using a combination of hashing and TF-IDF, and a way to use it for selecting the best next utterance of a conversation, by encoding a vector representation with a recurrent neural network (RNN). This is combined with an RNN encoding of the context and response of the conversation in order to make a prediction. We perform a case study using the recently released Ubuntu Dialogue Corpus, where the additional knowledge considered consists of the Ubuntu manpages. Preliminary results suggest that leveraging external knowledge sources in such a manner could lead to performance improvements for predicting the next utterance.",
"title": ""
},
{
"docid": "a54ac6991dce07d51ac028b8a249219e",
"text": "Rearrangement of immunoglobulin heavy-chain variable (VH) gene segments has been suggested to be regulated by interleukin 7 signaling in pro–B cells. However, the genetic evidence for this recombination pathway has been challenged. Furthermore, no molecular components that directly control VH gene rearrangement have been elucidated. Using mice deficient in the interleukin 7–activated transcription factor STAT5, we demonstrate here that STAT5 regulated germline transcription, histone acetylation and DNA recombination of distal VH gene segments. STAT5 associated with VH gene segments in vivo and was recruited as a coactivator with the transcription factor Oct-1. STAT5 did not affect the nuclear repositioning or compaction of the immunoglobulin heavy-chain locus. Therefore, STAT5 functions at a distinct step in regulating distal VH recombination in relation to the transcription factor Pax5 and histone methyltransferase Ezh2.",
"title": ""
},
{
"docid": "b6a8abc8946f8b13a22e3bacd2a6caa5",
"text": "The aim of this research was to determine the sun protection factor (SPF) of sunscreens emulsions containing chemical and physical sunscreens by ultraviolet spectrophotometry. Ten different commercially available samples of sunscreen emulsions of various manufactures were evaluated. The SPF labeled values were in the range of 8 to 30. The SPF values of the 30% of the analyzed samples are in close agreement with the labeled SPF, 30% presented SPF values above the labeled amount and 40% presented SPF values under the labeled amount. The proposed spectrophotometric method is simple and rapid for the in vitro determination of SPF values of sunscreens emulsions. *Correspondence:",
"title": ""
},
{
"docid": "ff0395e9146ab7a3416cf911f42fcf7f",
"text": "Financial Time Series analysis and prediction is one of the interesting areas in which past data could be used to anticipate and predict data and information about future. There are many artificial intelligence approaches used in the prediction of time series, such as Artificial Neural Networks (ANN) and Hidden Markov Models (HMM). In this paper HMM and HMM approaches for predicting financial time series are presented. ANN and HMM are used to predict time series that consists of highest and lowest Forex index series as input variable. Both of ANN and HMM are trained on the past dataset of the chosen currencies (such as EURO/ USD which is used in this paper). The trained ANN and HMM are used to search for the variable of interest behavioral data pattern from the past dataset. The obtained results was compared with real values from Forex (Foreign Exchange) market database [1]. The power and predictive ability of the two models are evaluated on the basis of Mean Square Error (MSE). The Experimental results obtained are encouraging, and it demonstrate that ANN and HMM can closely predict the currency market, with a small different in predicting performance.",
"title": ""
},
{
"docid": "db66428e21d473b7d77fde0c3ae6d6c3",
"text": "In order to improve electric vehicle lead-acid battery charging speed, analysis the feasibility of shortening the charging time used the charge method with negative pulse discharge, presenting the negative pulse parameters determined method for the fast charging with pulse discharge, determined the negative pulse amplitude and negative pulse duration in the pulse charge with negative pulse. Experiments show that the determined parameters with this method has some Advantages such as short charging time, small temperature rise etc, and the method of negative pulse parameters determined can used for different capacity of lead-acid batteries.",
"title": ""
},
{
"docid": "15e4cfb84801e86211709a8d24979eaa",
"text": "The English Lexicon Project is a multiuniversity effort to provide a standardized behavioral and descriptive data set for 40,481 words and 40,481 nonwords. It is available via the Internet at elexicon.wustl.edu. Data from 816 participants across six universities were collected in a lexical decision task (approximately 3400 responses per participant), and data from 444 participants were collected in a speeded naming task (approximately 2500 responses per participant). The present paper describes the motivation for this project, the methods used to collect the data, and the search engine that affords access to the behavioral measures and descriptive lexical statistics for these stimuli.",
"title": ""
},
{
"docid": "0fc08886411f225a3e5e767be3b6fd39",
"text": "To realize the promise of ubiquitous embedded deep network inference, it is essential to seek limits of energy and area efficiency. To this end, low-precision networks offer tremendous promise because both energy and area scale down quadratically with the reduction in precision. Here, for the first time, we demonstrate ResNet-18, ResNet-34, ResNet-50, ResNet152, Inception-v3, densenet-161, and VGG-16bn networks on the ImageNet classification benchmark that, at 8-bit precision exceed the accuracy of the full-precision baseline networks after one epoch of finetuning, thereby leveraging the availability of pretrained models. We also demonstrate for the first time ResNet-18, ResNet-34, and ResNet-50 4-bit models that match the accuracy of the full-precision baseline networks. Surprisingly, the weights of the low-precision networks are very close (in cosine similarity) to the weights of the corresponding baseline networks, making training from scratch unnecessary. The number of iterations required by stochastic gradient descent to achieve a given training error is related to the square of (a) the distance of the initial solution from the final plus (b) the maximum variance of the gradient estimates. By drawing inspiration from this observation, we (a) reduce solution distance by starting with pretrained fp32 precision baseline networks and fine-tuning, and (b) combat noise introduced by quantizing weights and activations during training, by using larger batches along with matched learning rate annealing. Together, these two techniques offer a promising heuristic to discover low-precision networks, if they exist, close to fp32 precision baseline networks.",
"title": ""
},
{
"docid": "08c26880862b09e81acc1cd99904fded",
"text": "Efficient use of high speed hardware requires operating system components be customized to the application workload. Our general purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable highperformance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates memcached, run within a VM, can outperform memcached run on an unvirtualized Linux. The prototype evaluation also demonstrates an 14% performance improvement of a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th percentile latency compared to it run on Linux.",
"title": ""
},
{
"docid": "df89dc3a36ac18fd880a7249022b4b2c",
"text": "ConvNets and Imagenet have driven the recent success of deep learning for image classification. However, the marked slowdown in performance improvement, the recent studies on the lack of robustness of neural networks to adversarial examples and their tendency to exhibit undesirable biases questioned the reliability and sustained development of these methods. This work investigates these questions from the perspective of the end-user by using human subject studies and explanations. We experimentally demonstrate that the accuracy and robustness of ConvNets measured on Imagenet are underestimated. We show that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end-user and we introduce a novel tool for uncovering the undesirable biases learned by a model. These contributions also show that explanations are a promising tool for improving our understanding of ConvNets’ predictions and for designing more reliable models1.",
"title": ""
},
{
"docid": "ed7a114d02244b7278c8872c567f1ba6",
"text": "We present a new visualization, called the Table Lens, for visualizing and making sense of large tables. The visualization uses a focus+context (fisheye) technique that works effectively on tabular information because it allows display of crucial label information and multiple distal focal areas. In addition, a graphical mapping scheme for depicting table contents has been developed for the most widespread kind of tables, the cases-by-variables table. The Table Lens fuses symbolic and graphical representations into a single coherent view that can be fluidly adjusted by the user. This fusion and interactivity enables an extremely rich and natural style of direct manipulation exploratory data analysis.",
"title": ""
},
{
"docid": "0f4750f3998766e8f2a506a2d432f3bf",
"text": "Presently sustainability of fashion in the worldwide is the major considerable issue. The much talked concern is for the favor of fashion’s sustainability around the world. Many organizations and fashion conscious personalities have come forward to uphold the further extension of the campaign of good environment for tomorrows. On the other hand, fashion for the morality or ethical issues is one of the key concepts for the humanity and sustainability point of view. Main objectives of this study to justify the sustainability concern of fashion companies and their policy. In this paper concerned brands are focused on the basis of their present activities related fashion from the manufacturing to the marketing process. Most of the cases celebrities are in the forwarded stages for the upheld of the fashion sustainability. For the conservation of the environment, sustainability of the fashion is the utmost need in the present fastest growing world. Nowadays, fashion is considered the vital issue for the ecological aspect with morality concern. The research is based on the rigorously study with the reading materials. The data have been gathered from various sources, mainly academic literature, research article, conference article, PhD thesis, under graduate & post graduate dissertation and a qualitative research method approach has been adopted for this research. For the convenience of the reader and future researchers, Analysis and Findings have done in the same time.",
"title": ""
},
{
"docid": "3cc74bce3c395b82dac437286aace591",
"text": "We present a technique for simulating plastic deformation in sheets of thin materials, such as crumpled paper, dented metal, and wrinkled cloth. Our simulation uses a framework of adaptive mesh refinement to dynamically align mesh edges with folds and creases. This framework allows efficient modeling of sharp features and avoids bend locking that would be otherwise caused by stiff in-plane behavior. By using an explicit plastic embedding space we prevent remeshing from causing shape diffusion. We include several examples demonstrating that the resulting method realistically simulates the behavior of thin sheets as they fold and crumple.",
"title": ""
},
{
"docid": "38419655a4a8fedfd9e0c3001741f165",
"text": "Convolutional Neural Networks (CNN) has achieved a great success in image recognition task by automatically learning a hierarchical feature representation from raw data. While the majority of Time-Series Classification (TSC) literature is focused on 1D signals, this paper uses Recurrence Plots (RP) to transform time-series into 2D texture images and then take advantage of the deep CNN classifier. Image representation of time-series introduces different feature types that are not available for 1D signals, and therefore TSC can be treated as texture image recognition task. CNN model also allows learning different levels of representations together with a classifier, jointly and automatically. Therefore, using RP and CNN in a unified framework is expected to boost the recognition rate of TSC. Experimental results on the UCR time-series classification archive demonstrate competitive accuracy of the proposed approach, compared not only to the existing deep architectures, but also to the state-of-the art TSC algorithms.",
"title": ""
},
{
"docid": "72160110892868fb941f40ce7ddfa82b",
"text": "Having gained momentum from its promise of centralized control over distributed network architectures at bargain costs, software-defined Networking (SDN) is an ever-increasing topic of research. SDN offers a simplified means to dynamically control multiple simple switches via a single controller program, which contrasts with current network infrastructures where individual network operators manage network devices individually. Already, SDN has realized some extraordinary use cases outside of academia with companies, such as Google, AT&T, Microsoft, and many others. However, SDN still presents many research and operational challenges for government, industry, and campus networks. Because of these challenges, many SDN solutions have developed in an ad hoc manner that are not easily adopted by other organizations. Hence, this paper seeks to identify some of the many challenges where new and current researchers can still contribute to the advancement of SDN and further hasten its broadening adoption by network operators.",
"title": ""
},
{
"docid": "0fc051613dd8ac7b555a85f0ed2cccbc",
"text": "BACKGROUND\nAtezolizumab is a humanised antiprogrammed death-ligand 1 (PD-L1) monoclonal antibody that inhibits PD-L1 and programmed death-1 (PD-1) and PD-L1 and B7-1 interactions, reinvigorating anticancer immunity. We assessed its efficacy and safety versus docetaxel in previously treated patients with non-small-cell lung cancer.\n\n\nMETHODS\nWe did a randomised, open-label, phase 3 trial (OAK) in 194 academic or community oncology centres in 31 countries. We enrolled patients who had squamous or non-squamous non-small-cell lung cancer, were 18 years or older, had measurable disease per Response Evaluation Criteria in Solid Tumors, and had an Eastern Cooperative Oncology Group performance status of 0 or 1. Patients had received one to two previous cytotoxic chemotherapy regimens (one or more platinum based combination therapies) for stage IIIB or IV non-small-cell lung cancer. Patients with a history of autoimmune disease and those who had received previous treatments with docetaxel, CD137 agonists, anti-CTLA4, or therapies targeting the PD-L1 and PD-1 pathway were excluded. Patients were randomly assigned (1:1) to intravenously receive either atezolizumab 1200 mg or docetaxel 75 mg/m2 every 3 weeks by permuted block randomisation (block size of eight) via an interactive voice or web response system. Coprimary endpoints were overall survival in the intention-to-treat (ITT) and PD-L1-expression population TC1/2/3 or IC1/2/3 (≥1% PD-L1 on tumour cells or tumour-infiltrating immune cells). The primary efficacy analysis was done in the first 850 of 1225 enrolled patients. This study is registered with ClinicalTrials.gov, number NCT02008227.\n\n\nFINDINGS\nBetween March 11, 2014, and April 29, 2015, 1225 patients were recruited. In the primary population, 425 patients were randomly assigned to receive atezolizumab and 425 patients were assigned to receive docetaxel. Overall survival was significantly longer with atezolizumab in the ITT and PD-L1-expression populations. In the ITT population, overall survival was improved with atezolizumab compared with docetaxel (median overall survival was 13·8 months [95% CI 11·8-15·7] vs 9·6 months [8·6-11·2]; hazard ratio [HR] 0·73 [95% CI 0·62-0·87], p=0·0003). Overall survival in the TC1/2/3 or IC1/2/3 population was improved with atezolizumab (n=241) compared with docetaxel (n=222; median overall survival was 15·7 months [95% CI 12·6-18·0] with atezolizumab vs 10·3 months [8·8-12·0] with docetaxel; HR 0·74 [95% CI 0·58-0·93]; p=0·0102). Patients in the PD-L1 low or undetectable subgroup (TC0 and IC0) also had improved survival with atezolizumab (median overall survival 12·6 months vs 8·9 months; HR 0·75 [95% CI 0·59-0·96]). Overall survival improvement was similar in patients with squamous (HR 0·73 [95% CI 0·54-0·98]; n=112 in the atezolizumab group and n=110 in the docetaxel group) or non-squamous (0·73 [0·60-0·89]; n=313 and n=315) histology. Fewer patients had treatment-related grade 3 or 4 adverse events with atezolizumab (90 [15%] of 609 patients) versus docetaxel (247 [43%] of 578 patients). 
One treatment-related death from a respiratory tract infection was reported in the docetaxel group.\n\n\nINTERPRETATION\nTo our knowledge, OAK is the first randomised phase 3 study to report results of a PD-L1-targeted therapy, with atezolizumab treatment resulting in a clinically relevant improvement of overall survival versus docetaxel in previously treated non-small-cell lung cancer, regardless of PD-L1 expression or histology, with a favourable safety profile.\n\n\nFUNDING\nF. Hoffmann-La Roche Ltd, Genentech, Inc.",
"title": ""
},
{
"docid": "e68aac3565df039aa431bf2a69e27964",
"text": "region, a five-year-old girl with mild asthma presented to the emergency department of a children’s hospital in acute respiratory distress. She had an 11-day history of cough, rhinorrhea and progressive chest discomfort. She was otherwise healthy, with no history of severe respiratory illness, prior hospital admissions or immu nocompromise. Outside of infrequent use of salbutamol, she was not taking any medications, and her routine childhood immunizations, in cluding conjugate pneumococcal vaccine, were up to date. She had not received the pandemic influenza vaccine because it was not yet available for her age group. The patient had been seen previously at a community health centre a week into her symptoms, and a chest radiograph had shown perihi lar and peribronchial thickening but no focal con solidation, atelectasis or pleural effusion. She had then been reassessed 24 hours later at an influenza assessment centre and empirically started on oseltamivir. Two days later, with the onset of vomiting, diarrhea, fever and progressive shortness of breath, she was brought to the emergency department of the children’s hospital. On examination, she was in considerable distress; her heart rate was 170 beats/min, her respiratory rate was 60 breaths/min and her blood pressure was 117/57 mm Hg. Her oxygen saturations on room air were consistently 70%. On auscultation, she had decreased air entry to the right side with bronchial breath sounds. Repeat chest radiography showed almost complete opacification of the right hemithorax, air bronchograms in the middle and lower lobes, and minimal aeration to the apex. This was felt to be in keeping with whole lung consolidation and parapneumonic effusion. The left lung appeared normal. Blood tests done on admission showed a hemoglobin level of 122 (normal 110–140) g/L, a leukocyte count of 1.5 (normal 5.5–15.5) × 10/L (neutrophils 11% [normal 47%] and bands 19% [normal 5%]) and a platelet count of 92 (normal 217–533) × 10/L. Results of blood tests were otherwise unremarkable. Venous blood gas had a pH level of 7.32 (normal 7.35–7.42), partial pressure of carbon dioxide of 43 (normal 32– 43) mm Hg, a base deficit of 3.6 (normal –2 to 3) mmol/L, and a bicarbonate level of 21.8 (normal 21–26) mmol/L. The initial serum creatinine level was 43.0 (normal < 36) μmol/L and the urea level was 6.5 (normal 2.0–7.0) mmol/L, with no clinical evidence of renal dysfunction. Given the patient’s profound increased work of breathing, she was admitted to the intensive care unit (ICU), where intubation was required because of her continued decline over the next 24 hours. Blood cultures taken on admission were negative. Nasopharyngeal aspirates were negative on rapid respiratory viral testing, but antiviral treatment for presumed pandemic (H1N1) influenza was continued given her clinical presentation, the prevalence of pandemic influenza in the community and the low sensitivity of the test in the range of only 62%. Viral cultures were not done. Empiric treatment with intravenous cefotaxime (200 mg/kg/d) and vancomycin (40 mg/kg/d) was started in the ICU for broad antimicrobial coverage, including possible Cases",
"title": ""
},
{
"docid": "7e33c62ce15c9eb7894a0feff3d2cfb4",
"text": "Revenue management has been used in a variety of industries and generally takes the form of managing demand by manipulating length of customer usage and price. Supply mix is rarely considered, although it can have considerable impact on revenue. In this research, we focused on developing an optimal supply mix, specifically on determining the supply mix that would maximize revenue. We used data from a Chevys restaurant, part of a large chain of Mexican restaurants, in conjunction with a simulation model to evaluate and enumerate all possible supply (table) mixes. Compared to the restaurant’s existing table mix, the optimal mix is capable of handling a 30% increase in customer volume without increasing waiting times beyond their original levels. While our study was in a restaurant context, the results of this research are applicable to other service businesses.",
"title": ""
}
] |
scidocsrr
|
152d1acb7b341db49c2ecc90854fea77
|
ObfusMem: A low-overhead access obfuscation for trusted memories
|
[
{
"docid": "33aa9af9a5f3d3f0b8bf21dca3b13d2f",
"text": "Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.",
"title": ""
}
] |
[
{
"docid": "866fd6d60fc835080dff69f6143348fd",
"text": "In this paper we consider the problem of classifying shapes within a given category (e.g., chairs) into finer-grained classes (e.g., chairs with arms, rocking chairs, swivel chairs). We introduce a multi-label (i.e., shapes can belong to multiple classes) semi-supervised approach that takes as input a large shape collection of a given category with associated sparse and noisy labels, and outputs cleaned and complete labels for each shape. The key idea of the proposed approach is to jointly learn a distance metric for each class which captures the underlying geometric similarity within that class, e.g., the distance metric for swivel chairs evaluates the global geometric resemblance of chair bases. We show how to achieve this objective by first geometrically aligning the input shapes, and then learning the class-specific distance metrics by exploiting the feature consistency provided by this alignment. The learning objectives consider both labeled data and the mutual relations between the distance metrics. Given the learned metrics, we apply a graph-based semi-supervised classification technique to generate the final classification results.\n In order to evaluate the performance of our approach, we have created a benchmark data set where each shape is provided with a set of ground truth labels generated by Amazon's Mechanical Turk users. The benchmark contains a rich variety of shapes in a number of categories. Experimental results show that despite this variety, given very sparse and noisy initial labels, the new method yields results that are superior to state-of-the-art semi-supervised learning techniques.",
"title": ""
},
{
"docid": "172f206c8b3b0bc0d75793a13fa9ef88",
"text": "Knowledge bases are important resources for a variety of natural language processing tasks but suffer from incompleteness. We propose a novel embedding model, ITransF, to perform knowledge base completion. Equipped with a sparse attention mechanism, ITransF discovers hidden concepts of relations and transfer statistical strength through the sharing of concepts. Moreover, the learned associations between relations and concepts, which are represented by sparse attention vectors, can be interpreted easily. We evaluate ITransF on two benchmark datasets— WN18 and FB15k for knowledge base completion and obtains improvements on both the mean rank and Hits@10 metrics, over all baselines that do not use additional information.",
"title": ""
},
{
"docid": "3fc74e621d0e485e1e706367d30e0bad",
"text": "Many commercial navigation aids suffer from a number of design flaws, the most important of which are related to the human interface that conveys information to the user. Aids for the visually impaired are lightweight electronic devices that are either incorporated into a long cane, hand-held or worn by the client, warning of hazards ahead. Most aids use vibrating buttons or sound alerts to warn of upcoming obstacles, a method which is only capable of conveying very crude information regarding direction and proximity to the nearest object. Some of the more sophisticated devices use a complex audio interface in order to deliver more detailed information, but this often compromises the user's hearing, a critical impairment for a blind user. The author has produced an original design and working prototype solution which is a major first step in addressing some of these faults found in current production models for the blind.",
"title": ""
},
{
"docid": "2a3b9c70dc8f80419ba4557752c4e603",
"text": "The proliferation of sensors and mobile devices and their connectedness to the network have given rise to numerous types of situation monitoring applications. Data Stream Management Systems (DSMSs) have been proposed to address the data processing needs of such applications that require collection of high-speed data, computing results on-the-fly, and taking actions in real-time. Although a lot of work appears in the area of DSMS, not much has been done in multilevel secure (MLS) DSMS making the technology unsuitable for highly sensitive applications, such as battlefield monitoring. An MLS–DSMS should ensure the absence of illegal information flow in a DSMS and more importantly provide the performance needed to handle continuous queries. We illustrate why the traditional DSMSs cannot be used for processing multilevel secure continuous queries and discuss various DSMS architectures for processing such queries. We implement one such architecture and demonstrate how it processes continuous queries. In order to provide better quality of service and memory usage in a DSMS, we show how continuous queries submitted by various users can be shared. We provide experimental evaluations to demonstrate the performance benefits achieved through query sharing.",
"title": ""
},
{
"docid": "4244db44909f759b2acdb1bd9d23632e",
"text": "This paper implements of a three phase grid synchronization for doubly-fed induction generators (DFIG) in wind generation system. A stator flux oriented vector is used to control the variable speed DFIG for the utility grid synchronization, active power and reactive power. Before synchronization, the stator voltage is adjusted equal to the amplitude of the grid voltage by controlling the d-axis rotor current. The frequency of stator voltage is synchronized with the grid by controlling the rotor flux angle equal to the difference between the rotor angle (mechanical speed in electrical degree) and the grid angle. The phase shift between stator voltage and the grid voltage is compensated by comparing the d-axis stator voltage and the grid voltage to generate a compensation angle. After the synchronization is achieved, the active power and reactive power are controlled to extract the optimum energy capture and fulfilled with the standard of utility grid requirements for wind turbine. The q-axis and d-axis rotor current are used to control the active and reactive power respectively. The implementation was conducted on a 1 kW conventional induction wound rotor controlled the digital signal controller board. The experimentation results confirm that the DFIG can be synchronized to the utility grid and the active power and the reactive power can be independently controlled.",
"title": ""
},
{
"docid": "9ca46f81c121866f6d8f3d9c8a102b64",
"text": "Assessment of age and size structure of marine populations is often used to detect and determine the effect of natural and anthropogenic factors, such as commercial fishing, upon marine communities. A primary tool in the characterisation of population structure is the distribution of the lengths or biomass of a large sample of individual specimens of a particular species. Rather than use relatively unreliable visual estimates by divers, an underwater stereo-video system has been developed to improve the accuracy of the measurement of lengths of highly indicative species such as reef fish. In common with any system used for accurate measurements, the design and calibration of the underwater stereo-video system are of paramount importance to realise the maximum possible accuracy from the system. Aspects of the design of the system, the calibration procedure and algorithm, the determination of the relative orientation of the two cameras, stereo-measurement and stereo-matching, and the tracking of individual specimens are discussed. Also addressed is the stability of the calibrations and relative orientation of the cameras during dives to capture video sequences of marine life.",
"title": ""
},
{
"docid": "6db4fd07b395b593d69bce020c741306",
"text": "Network security systems inspect packet payloads for signatures of attacks. These systems use regular expression matching at their core. Many techniques for implementing regular expression matching at line rate have been proposed. Solutions differ in the type of automaton used (i.e., deterministic vs. non-deterministic) and in the configuration of implementation-specific parameters. While each solution has been shown to perform well on specific rule sets and traffic patterns, there has been no systematic comparison across a large set of solutions, rule sets and traffic patterns. Thus, it is extremely challenging for a practitioner to make an informed decision within the plethora of existing algorithmic and architectural proposals. To address this problem, we present a comprehensive evaluation of a broad set of regular expression matching techniques. We consider both algorithmic and architectural aspects. Specifically, we explore the performance, area requirements, and power consumption of implementations targeting processors and field programmable gate arrays using rule sets of practical size and complexity. We present detailed performance results and specific guidelines for determining optimal configurations based on a simple evaluation of the rule set. These guidelines can help significantly when implementing regular expression matching systems in practice.",
"title": ""
},
{
"docid": "288c9106eef92c4da63de68b0921cfd0",
"text": "Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positives (FP) per patient rates. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities ~ 100% of but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and function as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process to reject difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. Sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes and colonic polyps, respectively.",
"title": ""
},
{
"docid": "84b8e98e143c0bfba79506c44ea12e6d",
"text": "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented. _What is The Singularity?_ The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur): o The development of computers that are \"awake\" and superhumanly intelligent. (To date, most controversy in the area of AI relates to whether we can create human equivalence in a machine. But if the answer is \"yes, we can\", then there is little doubt that beings more intelligent can be constructed shortly thereafter. o Large computer networks (and their associated users) may \"wake up\" as a superhumanly intelligent entity. o Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent. o Biological science may find ways to improve upon the natural human intellect. The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [16]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [19] has pointed out the AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me more specific: I'll be surprised if this event occurs before 2005 or after 2030.) What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work -the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct \"what if's\" in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals. From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in \"a million years\" (if ever) will likely happen in the next century. (In [4], Greg Bear paints a picture of the major changes happening in a matter of hours.) I think it's fair to call this event a singularity (\"the Singularity\" for the purposes of this paper). It is a point where our models must be discarded and a new reality rules. 
As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam [27] paraphrased John von Neumann as saying: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. Von Neumann even uses the term singularity, though it appears he is still thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed (see [24]).) In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote [10]: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an \"intelligence explosion,\" and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make. Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's \"tool\" -any more than humans are the tools of rabbits or robins or chimpanzees. Through the '60s and '70s and '80s, recognition of the cataclysm spread [28] [1] [30] [4]. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the \"hard\" science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future [23]. Now they saw that their most diligent extrapolations resulted in the unknowable ... soon. Once, galactic empires might have seemed a Post-Human domain. Now, sadly, even interplanetary ones are. What about the '90s and the '00s and the '10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we'll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if were willing to give up speed, if we were willing to settle for an artificial being who was literally slow [29]. But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. 
If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment.) But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of _true_ technological unemployment finally come true. Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing, it seemed very easy to come up with ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed. And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected -perhaps even to the researchers involved. (\"But all our previous models were catatonic! We were just tweaking some parameters....\") If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened. And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty. _Can the Singularity be Avoided?_ Well, maybe it won't happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose [18] and Searle [21] against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question \"How We Will Build a Machine that Thinks\" [Thearling]. As you might guess from the workshop's title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power that is present in organic brains.",
"title": ""
},
{
"docid": "17f82f248ef8e0f8b8d7d733d93e6bed",
"text": "Syslogs on switches are a rich source of information for both post-mortem diagnosis and proactive prediction of switch failures in a datacenter network. However, such information can be effectively extracted only through proper processing of syslogs, e.g., using suitable machine learning techniques. A common approach to syslog processing is to extract (i.e., build) templates from historical syslog messages and then match syslog messages to these templates. However, existing template extraction techniques either have low accuracies in learning the “correct” set of templates, or does not support incremental learning in the sense the entire set of templates has to be rebuilt (from processing all historical syslog messages again) when a new template is to be added, which is prohibitively expensive computationally if used for a large datacenter network. To address these two problems, we propose a frequent template tree (FT-tree) model in which frequent combinations of (syslog) words are identified and then used as message templates. FT-tree empirically extracts message templates more accurately than existing approaches, and naturally supports incremental learning. To compare the performance of FT-tree and three other template learning techniques, we experimented them on two-years' worth of failure tickets and syslogs collected from switches deployed across 10+ datacenters of a tier-1 cloud service provider. The experiments demonstrated that FT-tree improved the estimation/prediction accuracy (as measured by F1) by 155% to 188%, and the computational efficiency by 117 to 730 times.",
"title": ""
},
{
"docid": "10117f9d3b8b4720ea37cbf36073c130",
"text": "This biomechanical study was performed to measure tissue pressure in the infrapatellar fat pad and the volume changes of the anterior knee compartment during knee flexion–extension motion. Knee motion from 120° of flexion to full extension was simulated on ten fresh frozen human knee specimens (six from males, four from females, average age 44 years) using a hydraulic kinematic simulator (30, 40, and 50 Nm extension moment). Infrapatellar tissue pressure was measured using a closed cell sensor. Infrapatellar volume change in the anterior knee compartment was evaluated subsequent to removal of the fat pad using a water-filled bladder. We found a significant increase of the infrapatellar tissue pressure during knee flexion, at flexion angles of <20° and >100°. The average tissue pressure ranged from 343 (±223) mbar at 0° to 60 (±64) mbar at 60° of flexion. The smallest volume in the anterior knee compartment was measured at full extension and 120° of flexion, whereas the maximum volume was observed at 50° of flexion. In conclusion, the data suggest a biomechanical function of the infrapatellar fat pad at flexion angles of <20° and >100°, which suggests a role of the infrapatellar fat pad in stabilizing the patella in the extremes of knee motion.",
"title": ""
},
{
"docid": "827ecd05ff323a45bf880a65f34494e9",
"text": "BACKGROUND\nSocial support can be a critical component of how a woman adjusts to infertility, yet few studies have investigated its impact on infertility-related coping and stress. We examined relationships between social support contexts and infertility stress domains, and tested if they were mediated by infertility-related coping strategies in a sample of infertile women.\n\n\nMETHODS\nThe Multidimensional Scale of Perceived Social Support, the Copenhagen Multi-centre Psychosocial Infertility coping scales and the Fertility Problem Inventory were completed by 252 women seeking treatment. Structural equation modeling analysis was used to test the hypothesized multiple mediation model.\n\n\nRESULTS\nThe final model revealed negative effects from perceived partner support to relationship concern (β = -0.47), sexual concern (β = -0.20) and rejection of childfree lifestyle through meaning-based coping (β = -0.04). Perceived friend support had a negative effect on social concern through active-confronting coping (β = -0.04). Finally, besides a direct negative association with social concern (β = -0.30), perceived family support was indirectly and negatively related with all infertility stress domains (β from -0.04 to -0.13) through a positive effect of active-avoidance coping. The model explained between 12 and 66% of the variance of outcomes.\n\n\nCONCLUSIONS\nDespite being limited by a convenience sampling and cross-sectional design, results highlight the importance of social support contexts in helping women deal with infertility treatment. Health professionals should explore the quality of social networks and encourage seeking positive support from family and partners. Findings suggest it might prove useful for counselors to use coping skills training interventions, by retraining active-avoidance coping into meaning-based and active-confronting strategies.",
"title": ""
},
{
"docid": "3fc5b1ccb0a72f10ece8a452132aee7d",
"text": "Correspondence: [email protected] Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany Full list of author information is available at the end of the article Abstract How has digital transformation changed airport ground operations? Although the relevant peer-reviewed literature emphasizes the role of cost savings as a key driver behind digitalization of airport ground operations, the focus is on data-driven, customer-centric innovations. This paper argues that ground handling agents are deploying new technologies mainly to boost process efficiency and to cut costs. Our research shows that ground handling agents are embracing current trends to craft new business models and develop new revenue streams. In this paper, we examine the ground handling agent’s value chain and identify areas that are strongly affected by digital transformation and those that are not. We discuss different business scenarios for digital technology and link them with relevant research, such as automated service data capturing, new digital services for passengers, big data, indoor navigation, and wearables in airport ground operations. We assess the maturity level of discussed technologies using NASA technology readiness levels.",
"title": ""
},
{
"docid": "f246d7722e38119292dea7b88549e5ee",
"text": "P is of prime importance to many individuals when they attempt to develop online social relationships. Nonetheless, it has been observed that individuals’ behavior is at times inconsistent with their privacy concerns, e.g., they disclose substantial private information in synchronous online social interactions, even though they are aware of the risks involved. Drawing on the hyperpersonal framework and the privacy calculus perspective, this paper elucidates the interesting roles of privacy concerns and social rewards in synchronous online social interactions by examining the causes and the behavioral strategies that individuals utilize to protect their privacy. An empirical study involving 251 respondents was conducted in online chat rooms. Our results indicate that individuals utilize both self-disclosure and misrepresentation to protect their privacy and that social rewards help explain why individuals may not behave in accordance with their privacy concerns. In addition, we find that perceived anonymity of others and perceived intrusiveness affect both privacy concerns and social rewards. Our findings also suggest that higher perceived anonymity of self decreases individuals’ privacy concerns, and higher perceived media richness increases social rewards. Generally, this study contributes to the information systems literature by integrating the hyperpersonal framework and the privacy calculus perspective to identify antecedents of privacy trade-off and predict individuals’ behavior in synchronous online social interactions.",
"title": ""
},
{
"docid": "489127100b00493d81dc7644648732ad",
"text": "This paper presents a software tool - called Fractal Nature - that provides a set of fractal and physical based methods for creating realistic terrains called Fractal Nature. The output of the program can be used for creating content for video games and serious games. The approach for generating the terrain is based on noise filters, such as Gaussian distribution, capable of rendering highly realistic environments. It is demonstrated how a random terrain can change its shape and visual appearance containing artefacts such as steep slopes and smooth riverbeds. Moreover, two interactive erosion systems, hydraulic and thermal, were implemented. An initial evaluation with 12 expert users provided useful feedback for the applicability of the algorithms in video games as well as insights for future improvements.",
"title": ""
},
{
"docid": "63ab6c486aa8025c38bd5b7eadb68cfa",
"text": "The demands on a natural language understanding system used for spoken language differ somewhat from the demands of text processing. For processing spoken language, there is a tension between the system being as robust as necessary, and as constrained as possible. The robust system will a t tempt to find as sensible an interpretation as possible, even in the presence of performance errors by the speaker, or recognition errors by the speech recognizer. In contrast, in order to provide language constraints to a speech recognizer, a system should be able to detect that a recognized string is not a sentence of English, and disprefer that recognition hypothesis from the speech recognizer. If the coupling is to be tight, with parsing and recognition interleaved, then the parser should be able to enforce as many constraints as possible for partial utterances. The approach taken in Gemini is to tightly constrain language recognition to limit overgeneration, but to extend the language analysis to recognize certain characteristic patterns of spoken utterances (but not generally thought of as part of grammar) and to recognize specific types of performance errors by the speaker.",
"title": ""
},
{
"docid": "0edf1a71c25b217f0c794b2a3e358496",
"text": "Eight species of the genus Asparagus, members of the group of European species closely related to A. officinalis, were analysed using internal transcribed spacer (ITS), and expressed sequence tag-derived simple sequence repeat (EST-SSR) markers, as well as cytological observations of their hybrids, to study their phylogenetic relationships and the possibility of broadening the narrow genetic base of cultivated varieties. Phylogenetic analysis using ITS data revealed two major clades: clade I consisting of A. acutifolius and clade II (referred to in this study as the ‘officinalis group’) comprised of sequences derived from species closely related to A. officinalis; but the different species within the ‘officinalis group’ could not be clearly separated. In contrast, cluster analysis of EST-SSR marker data showed six major clades and clearly separated each population, grouping most of the genotypes from each population together. That is, EST-SSR markers were found to be more informative than ITS markers about the relationships within the ‘officinalis group’, indicating that EST-SSR markers are more useful than ITS sequences for establishing phylogenetic relationships at the intrageneric level. All the crosses carried out at the same ploidy level were successful. The high crossability, together with the regular meiotic behaviour and high pollen and seed fertility observed in the interspecific hybrids analysed, suggest relatively close relationships between the species studied. We conclude that the group of species classified in the ‘officinalis group’ are in the primary gene pool, indicating that these species could be used to increase the genetic diversity of the cultivated species. In addition, the tetraploid landrace “Morado de Huétor” could be employed as a bridge to generate new cultivated germplasm.",
"title": ""
},
{
"docid": "c0f138d3bf0626100e0d1d702da90eac",
"text": "Building a theory on extant species, as Ackermann et al. do, is a useful contribution to the field of language evolution. Here, I add another living model that might be of interest: human language ontogeny in the first year of life. A better knowledge of this phase might help in understanding two more topics among the \"several building blocks of a comprehensive theory of the evolution of spoken language\" indicated in their conclusion by Ackermann et al., that is, the foundation of the co-evolution of linguistic motor skills with the auditory skills underlying speech perception, and the possible phylogenetic interactions of protospeech production with referential capabilities.",
"title": ""
},
{
"docid": "c32673f901f67389e5ac5d4b5d994617",
"text": "This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the set up of the one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of type I error probability under various sample size and parameter combinations. In fact, the type I errors can be highly inflated for some of the commonly used tests; a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests – the Welch test, James test and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best among the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.",
"title": ""
}
] |
scidocsrr
|
ba0b8aafb940690d12b70173f8fcc71b
|
Algorithm-Induced Prior for Image Restoration
|
[
{
"docid": "5ffa04d21eb0a118fae96df17b5520a5",
"text": "Most existing state-of-the-art image denoising algorithms are based on exploiting similarity between a relatively modest number of patches. These patch-based methods are strictly dependent on patch matching, and their performance is hamstrung by the ability to reliably find sufficiently similar patches. As the number of patches grows, a point of diminishing returns is reached where the performance improvement due to more patches is offset by the lower likelihood of finding sufficiently close matches. The net effect is that while patch-based methods, such as BM3D, are excellent overall, they are ultimately limited in how well they can do on (larger) images with increasing complexity. In this paper, we address these shortcomings by developing a paradigm for truly global filtering where each pixel is estimated from all pixels in the image. Our objectives in this paper are two-fold. First, we give a statistical analysis of our proposed global filter, based on a spectral decomposition of its corresponding operator, and we study the effect of truncation of this spectral decomposition. Second, we derive an approximation to the spectral (principal) components using the Nyström extension. Using these, we demonstrate that this global filter can be implemented efficiently by sampling a fairly small percentage of the pixels in the image. Experiments illustrate that our strategy can effectively globalize any existing denoising filters to estimate each pixel using all pixels in the image, hence improving upon the best patch-based methods.",
"title": ""
},
{
"docid": "e714a3add981415be1f48dfe12c245a6",
"text": "Many material and biological samples in scientific imaging are characterized by nonlocal repeating structures. These are studied using scanning electron microscopy and electron tomography. Sparse sampling of individual pixels in a two-dimensional image acquisition geometry, or sparse sampling of projection images with large tilt increments in a tomography experiment, can enable high speed data acquisition and minimize sample damage caused by the electron beam. In this paper, we present an algorithm for electron tomographic reconstruction and sparse image interpolation that exploits the nonlocal redundancy in images. We adapt a framework, termed plug-and-play priors, to solve these imaging problems in a regularized inversion setting. The power of the plug-and-play approach is that it allows a wide array of modern denoising algorithms to be used as a “prior model” for tomography and image interpolation. We also present sufficient mathematical conditions that ensure convergence of the plug-and-play approach, and we use these insights to design a new nonlocal means denoising algorithm. Finally, we demonstrate that the algorithm produces higher quality reconstructions on both simulated and real electron microscope data, along with improved convergence properties compared to other methods.",
"title": ""
}
] |
[
{
"docid": "bf6c25593274cebad438a3f44f31f44a",
"text": "It has been observed that there is a great growth of the market share of PLB in developed countries. Earlier most of the people were using branded clothes, but now a days the companies have introduced their own private brands to increase their popularity and more profit. Companies are providing more discounts on private brands to get more customers. Retailers have not only customized and localized the products as per customer’s tastes and preference but also created PLBs’. At present customers are more intelligent and smart, they look for the product which gives them value, so today’s private label brands are more competitive, reasonable price and of better quality. The consumers prefer private label brands heavily because they can save money. The apparels are second most demanded product after FMCG. So, here we are focusing on the private label apparels brands. Big companies like Pantaloons, Wills Lifestyle, Reliance and many more retailers having their own brands. The researcher aimed to find and analyze the effect of various brand related attributes (Brand knowledge (brand image, brand awareness) on consumer’s purchase intention towards private label brands in apparels. Most of the customer purchase depends upon its brand image. The study is carried at some reputed stores of Ahmedabad like Pantaloons, Westside and Will Lifestyle. It tries to establish the relationship between brands related factors and their impact on consumers purchase",
"title": ""
},
{
"docid": "991420a2abaf1907ab4f5a1c2dcf823d",
"text": "We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing – the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the how many? questions in the VQA and COCO-QA datasets.",
"title": ""
},
{
"docid": "04afc062996d9db91168116347819ddd",
"text": "BACKGROUND\nThis study investigated the role of Sirtuin 1 (SIRT1)/forkhead box O3 (FOXO3) pathway, and a possible protective function for Icariin (ICA), in intestinal ischemia-reperfusion (I/R) injury and hypoxia-reoxygenation (H/R) injury.\n\n\nMATERIALS AND METHODS\nMale Sprague-Dawley rats were pretreated with different doses of ICA (30 and 60 mg/kg) or olive oil as control 1 h before intestinal I/R. Caco-2 cells were pretreated with different concentrations of ICA (25, 50, and 100 μg/mL) and then subjected to H/R-induced injury.\n\n\nRESULTS\nThe in vivo results demonstrated that ICA pretreatment significantly improved I/R-induced tissue damage and decreased serum tumor necrosis factor α and interleukin-6 levels. Changes of manganese superoxide dismutase, Bcl-2, and Bim were also reversed by ICA, and apoptosis was reduced. Importantly, the protective effects of ICA were positively associated with SIRT1 activation. Increased SIRT1 expression, as well as decreased acetylated FOXO3 expression, was observed in Caco-2 cells pretreated with ICA. Additionally, the protective effects of ICA were abrogated in the presence of SIRT1 inhibitor nicotinamide. This suggests that ICA exerts a protective effect upon H/R injury through activation of SIRT1/FOXO3 signaling pathway. Accordingly, the SIRT1 activator resveratrol achieved a similar protective effect as ICA on H/R injury, whereas cellular damage resulting from H/R was exacerbated by SIRT1 knockdown and nicotinamide.\n\n\nCONCLUSIONS\nSIRT1, activated by ICA, protects intestinal epithelial cells from I/R injury by inducing FOXO3 deacetylation both in vivo and in vitro These findings suggest that the SIRT1/FOXO3 pathway can be a target for therapeutic approaches intended to minimize injury resulting from intestinal dysfunction.",
"title": ""
},
{
"docid": "4e73acdb2458cbcb30b5ec173d88a1f9",
"text": "The research objective of this work was to understand pedestrians’ behavior and interaction with vehicles during pre-crash scenarios that provides critical information on how to improve pedestrian safety. In this study, we recruited 110 cars and their drivers in the greater Indianapolis area for a one year naturalistic driving study starting in March 2012. The drivers were selected based on their geographic, demographic, and driving route representativeness. We used off-the-shelf vehicle black boxes for data recording, which are installed at the front windshield behind the rear-view mirrors. It records highresolution forward-view videos (recording driving views outside of front windshield), GPS information, and G-sensor information. We developed category-based multi-stage pedestrian detection and behavior analysis tools to efficiently process this large scale driving dataset. To ensure the accuracy, we incorporated the human-in-loop process to verify the automatic pedestrian detection results. For each pedestrian event, we generate a 5-second video to further study potential conflicts between pedestrians and vehicle. For each detected potential conflict event, we generate a 15second video to analyze pedestrian behavior. We conduct in-depth analysis of pedestrian behavior in regular and near-miss scenarios using the naturalistic data. We observed pedestrian and vehicle interaction videos and studied what scenarios might be more dangerous and could more likely to result in potential conflicts. We observed: 1) Children alone as pedestrians is an elevated risk; 2) three or more adults may be more likely to result in potential conflicts with vehicles than one or two adults; 3) parking lots, communities, school areas, shopping malls, etc. could have more potential conflicts than regular urban/rural driving environments; 4) when pedestrian is crossing road, there is much higher potential conflict than pedestrian walking along/against traffic; 5) There is an elevated risk for pedestrians walking in road (where vehicles can drive by); 6) when pedestrians are jogging, it is much more likely to have potential conflict than walking or standing.; and 7) it is more likely to have potential conflict at cross walk and junction than other road types. Furthermore, we estimated the pedestrian appearance points of all potential conflict events and time to collision (TTC). Most potential conflict events have a TTC value ranging from 1 second to 6 seconds, with the range of 2 seconds to 4 seconds being associated with highest percentages of all the cases. The mean value of TTC is 3.84 seconds with standard deviation of 1.74 seconds. To date, we have collected about 65TB of driving data with about 1.1 million miles. We have processed about 50% of the data. We are continuously working on the data collection and processing. There could be some changes in our observation results after including all data. But the existing analysis is based on a quite large-scale data and would provide a good estimation.",
"title": ""
},
{
"docid": "fb4d7b1ef7b5163b319550fdf3926a8c",
"text": "Speaker identification refers to the task of localizing the face of a person who has the same identity as the ongoing voice in a video. This task not only requires collective perception over both visual and auditory signals, the robustness to handle severe quality degradations and unconstrained content variations are also indispensable. In this paper, we describe a novel multimodal Long Short-Term Memory (LSTM) architecture which seamlessly unifies both visual and auditory modalities from the beginning of each sequence input. The key idea is to extend the conventional LSTM by not only sharing weights across time steps, but also sharing weights across modalities. We show that modeling the temporal dependency across face and voice can significantly improve the robustness to content quality degradations and variations. We also found that our multimodal LSTM is robustness to distractors, namely the non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory dataset and showed that our system outperforms the state-of-the-art systems in speaker identification with lower false alarm rate and higher recogni-",
"title": ""
},
{
"docid": "a4969e82e3cccf5c9ca7177d4ca5007c",
"text": "Traditional views of automaticity are in need of revision. For example, automaticity often has been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. A model of attention is presented to address these issues. Within a parallel distributed processing framework, it is proposed that the attributes of automaticity depend on the strength of a processing pathway and that strength increases with training. With the Stroop effect as an example, automatic processes are shown to be continuous and to emerge gradually with practice. Specifically, a computational model of the Stroop task simulates the time course of processing as well as the effects of learning. This was accomplished by combining the cascade mechanism described by McClelland (1979) with the backpropagation learning algorithm (Rumelhart, Hinton, & Williams, 1986). The model can simulate performance in the standard Stroop task, as well as aspects of performance in variants of this task that manipulate stimulus-onset asynchrony, response set, and degree of practice. The model presented is contrasted against other models, and its relation to many of the central issues in the literature on attention, automaticity, and interference is discussed.",
"title": ""
},
{
"docid": "fba5b69c3b0afe9f39422db8c18dba06",
"text": "It is well known that stressful experiences may affect learning and memory processes. Less clear is the exact nature of these stress effects on memory: both enhancing and impairing effects have been reported. These opposite effects may be explained if the different time courses of stress hormone, in particular catecholamine and glucocorticoid, actions are taken into account. Integrating two popular models, we argue here that rapid catecholamine and non-genomic glucocorticoid actions interact in the basolateral amygdala to shift the organism into a 'memory formation mode' that facilitates the consolidation of stressful experiences into long-term memory. The undisturbed consolidation of these experiences is then promoted by genomic glucocorticoid actions that induce a 'memory storage mode', which suppresses competing cognitive processes and thus reduces interference by unrelated material. Highlighting some current trends in the field, we further argue that stress affects learning and memory processes beyond the basolateral amygdala and hippocampus and that stress may pre-program subsequent memory performance when it is experienced during critical periods of brain development.",
"title": ""
},
{
"docid": "d141bc7c69d7c3133770a9dd2f61536e",
"text": "Customer relationship management (CRM) has become one of the most influential technologies in the world, and companies are increasingly implementing it to create value. However, despite significant investment in CRM technology infrastructure, empirical research offers inconsistent support for its positive impact on performance. This study develops and tests a research model analyzing the process through which CRM technology infrastructure translates into organizational performance, drawing on the resource-based view (RBV) and the knowledge-based view (KBV) of the firm. Based on an international sample of 125 hotels, the results suggest that organizational commitment and knowledge management fully mediate this process. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e0140fa65c44d867a1d128d45fdc40e3",
"text": "Recursion is an important topic in computer science curricula. It is related to the acquisition of competences regarding problem decomposition, functional abstraction and the concept of induction. In comparison with direct recursion, mutual recursion is considered to be more complex. Consequently, it is generally addressed superficially in CS1/2 programming courses and textbooks. We show that, when a problem is approached appropriately, not only can mutual recursion be a powerful tool, but it can also be easy to understand and fun. This paper provides several intuitive and attractive algorithms that rely on mutual recursion, and which have been designed to help strengthen students' ability to decompose problems and apply induction. Furthermore, we show that a solution based on mutual recursion may be easier to design, prove and comprehend than other solutions based on direct recursion. We have evaluated the use of these algorithms while teaching recursion concepts. Results suggest that mutual recursion, in comparison with other types of recursion, is not as hard as it seems when: (1) determining the result of a (mathematical) function call, and, most importantly, (2) designing algorithms for solving simple problems.",
"title": ""
},
{
"docid": "ac1302f482309273d9e61fdf0f093e01",
"text": "Retinal vessel segmentation is an indispensable step for automatic detection of retinal diseases with fundoscopic images. Though many approaches have been proposed, existing methods tend to miss fine vessels or allow false positives at terminal branches. Let alone undersegmentation, over-segmentation is also problematic when quantitative studies need to measure the precise width of vessels. In this paper, we present a method that generates the precise map of retinal vessels using generative adversarial training. Our methods achieve dice coefficient of 0.829 on DRIVE dataset and 0.834 on STARE dataset which is the state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "4ab88350149ad4c21a1111b17b76fb27",
"text": "Landmark image classification attracts increasing research attention due to its great importance in real applications, ranging from travel guide recommendation to 3-D modelling and visualization of geolocation. While large amount of efforts have been invested, it still remains unsolved by academia and industry. One of the key reasons is the large intra-class variance rooted from the diverse visual appearance of landmark images. Distinguished from most existing methods based on scalable image search, we approach the problem from a new perspective and model landmark classification as multi-modal categorization , which enjoys advantages of low storage overhead and high classification efficiency. Toward this goal, a novel and effective feature representation, called hierarchical multi-modal exemplar (HMME) feature, is proposed to characterize landmark images. In order to compute HMME, training images are first partitioned into the regions with hierarchical grids to generate candidate images and regions. Then, at the stage of exemplar selection, hierarchical discriminative exemplars in multiple modalities are discovered automatically via iterative boosting and latent region label mining. Finally, HMME is generated via a region-based locality-constrained linear coding (RLLC), which effectively encodes semantics of the discovered exemplars into HMME. Meanwhile, dimension reduction is applied to reduce redundant information by projecting the raw HMME into lower-dimensional space. The final HMME enjoys advantages of discriminative and linearly separable. Experimental study has been carried out on real world landmark datasets, and the results demonstrate the superior performance of the proposed approach over several state-of-the-art techniques.",
"title": ""
},
{
"docid": "c0a1992713bb680138c9c49b4c1a6d1c",
"text": "In this paper, a new data-driven information hiding scheme called generative steganography by sampling (GSS) is proposed. The stego is directly sampled by a powerful generator without an explicit cover. Secret key shared by both parties is used for message embedding and extraction, respectively. Jensen-Shannon Divergence is introduced as new criteria for evaluation of the security of the generative steganography. Based on these principles, a simple practical generative steganography method is proposed using semantic image inpainting. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated stego images.",
"title": ""
},
{
"docid": "49c1ec052694c19d7971f75ae7fde74a",
"text": "This paper quantifies the value of pronunciation lexicons in large vocabulary continuous speech recognition (LVCSR) systems that support keyword search (KWS) in low resource languages. State-of-the-art LVCSR and KWS systems are developed for conversational telephone speech in Tagalog, and the baseline lexicon is augmented via three different grapheme-to-phoneme models that yield increasing coverage of a large Tagalog word-list. It is demonstrated that while the increased lexical coverage - or reduced out-of-vocabulary (OOV) rate - leads to only modest (ca 1%-4%) improvements in word error rate, the concomitant improvements in actual term weighted value are as much as 60%. It is also shown that incorporating the augmented lexicons into the LVCSR system before indexing speech is superior to using them post facto, e.g., for approximate phonetic matching of OOV keywords in pre-indexed lattices. These results underscore the disproportionate importance of automatic lexicon augmentation for KWS in morphologically rich languages, and advocate for using them early in the LVCSR stage.",
"title": ""
},
{
"docid": "2ff08c8505e7d68304b63c6942feb837",
"text": "This paper presents a Retrospective Event Detection algorithm, called Eventy-Topic Detection (ETD), which automatically generates topics that describe events in a large, temporal text corpus. Our approach leverages the structure of the topic modeling framework, specifically the Latent Dirichlet Allocation (LDA), to generate topics which are then later labeled as Eventy-Topics or non-Eventy-Topics. The system first runs daily LDA topic models, then calculates the cosine similarity between the topics of the daily topic models, and then runs our novel Bump-Detection algorithm. Similar topics labeled as an Eventy-Topic are then grouped together. The algorithm is demonstrated on two Terabyte sized corpuses a Reuters News corpus and a Twitter corpus. Our method is evaluated on a human annotated test set. Our algorithm demonstrates its ability to accurately describe and label events in a temporal text corpus.",
"title": ""
},
{
"docid": "1ef814163a5c91155a2d7e1b4b19f4d7",
"text": "In this article, a frequency reconfigurable fractal patch antenna using pin diodes is proposed and studied. The antenna structure has been designed on FR-4 low-cost substrate material of relative permittivity εr = 4.4, with a compact volume of 30×30×0.8 mm3. The bandwidth and resonance frequency of the antenna design will be increased when we exploit the fractal iteration on the patch antenna. This antenna covers some service bands such as: WiMAX, m-WiMAX, WLAN, C-band and X band applications. The simulation of the proposed antenna is carried out using CST microwave studio. The radiation pattern and S parameter are further presented and discussed.",
"title": ""
},
{
"docid": "8f750438e7d78873fd33174d2e347ea5",
"text": "This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on the Allen's temporal relations. The experimental results show that our combined method provides the higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.",
"title": ""
},
{
"docid": "84cb130679353dbdeff24100409f57fe",
"text": "Cloud computing has become another buzzword after Web 2.0. However, there are dozens of different definitions for cloud computing and there seems to be no consensus on what a cloud is. On the other hand, cloud computing is not a completely new concept; it has intricate connection to the relatively new but thirteen-year established grid computing paradigm, and other relevant technologies such as utility computing, cluster computing, and distributed systems in general. This paper strives to compare and contrast cloud computing with grid computing from various angles and give insights into the essential characteristics of both.",
"title": ""
},
{
"docid": "75d209a8baed5fabbda46cba44bd5867",
"text": "The Internet of Things (IoT) idea, explored across the globe, brings about an important issue: how to achieve interoperability among multiple existing (and constantly created) IoT platforms. In this context, in January 2016, the European Commission has funded seven projects that are to deal with various aspects of interoperability in the Internet of Things. Among them, the INTERIoT project is aiming at the design and implementation of, and experimentation with, an open cross-layer framework and associated methodology to provide voluntary interoperability among heterogeneous IoT platforms. While the project considers interoperability across all layers of the software stack, we are particularly interested in answering the question: how ontologies and semantic data processing can be harnessed to facilitate interoperability across the IoT landscape. Henceforth, we have engaged in a fact nding mission to establish what is currently at our disposal when semantic interoperability is concerned. Since the INTER-IoT project is initially driven by two use cases originating from (i) (e/m)Health and (ii) transportation and logistics, these two application domains were used to provide context for our search. The paper summarizes our ndings and provides foundation for developing methods and tools for supporting semantic interoperability in the INTER-IoT project (and beyond).",
"title": ""
},
{
"docid": "36e8ecc13c1f92ca3b056359e2d803f0",
"text": "We propose a novel module, the reviewer module, to improve the encoder-decoder learning framework. The reviewer module is generic, and can be plugged into an existing encoder-decoder model. The reviewer module performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a fact vector after each review step; the fact vectors are used as the input of the attention mechanism in the decoder. We show that the conventional encoderdecoders are a special case of our framework. Empirically, we show that our framework can improve over state-of-the-art encoder-decoder systems on the tasks of image captioning and source code captioning.",
"title": ""
},
{
"docid": "0d1193978e4f8be0b78c6184d7ece3fe",
"text": "Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks [1]. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that have been used throughout social network analysis and network science [2, 3] and then classifying with methods such as random forests [4] that are of special utility in the presence of feature collinearity, we find that we achieve higher accuracy, in shorter computation time, with greater interpretability of the network classification results. Past work in the area of network classification has primarily focused on distinguishing networks from different categories using two different broad classes of approaches. In the first approach , network classification is carried out by examining certain specific structural features and investigating whether networks belonging to the same category are similar across one or more dimensions as defined by these features [5, 6, 7, 8]. In other words, in this approach the investigator manually chooses the structural characteristics of interest and more or less manually (informally) determines the regions of the feature space that correspond to different classes. These methods are scalable to large networks and yield results that are easily interpreted in terms of the characteristics of interest, but in practice they tend to lead to suboptimal classification accuracy. In the second approach, network classification is done by using very flexible machine learning classi-fiers that, when presented with a network as an input, classify its category or class as an output To somewhat oversimplify, the first approach relies on manual feature specification followed by manual selection of a classification system, whereas the second approach is its opposite, relying on automated feature detection followed by automated classification. While …",
"title": ""
}
] |
scidocsrr
|
46d1e816e5b1399722dc641ed4c0a897
|
A Single-Stage Single-Phase Transformer-Less Doubly Grounded Grid-Connected PV Interface
|
[
{
"docid": "819f6b62eb3f8f9d60437af28c657935",
"text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.",
"title": ""
}
] |
[
{
"docid": "f1b02ac1e1e88ad0e47046ddc03ac50d",
"text": "References In the real world, some large KBs are lack of type information. Fine-grained entity typing aims at identifying semantic types of an entity in KB. Existing methods suffer from two main problems: Ignoring rich structural and partial-labeled information in KB. Requiring large scale corpus in which entity mentions are annotated. We propose APE model, which can fully utilize various kinds of information comprehensively. Our work benefits many real applications, such as relation extraction, entity linking and QA system.",
"title": ""
},
{
"docid": "1c16eec32b941af1646843bb81d16b5f",
"text": "Facebook is rapidly gaining recognition as a powerful research tool for the social sciences. It constitutes a large and diverse pool of participants, who can be selectively recruited for both online and offline studies. Additionally, it facilitates data collection by storing detailed records of its users' demographic profiles, social interactions, and behaviors. With participants' consent, these data can be recorded retrospectively in a convenient, accurate, and inexpensive way. Based on our experience in designing, implementing, and maintaining multiple Facebook-based psychological studies that attracted over 10 million participants, we demonstrate how to recruit participants using Facebook, incentivize them effectively, and maximize their engagement. We also outline the most important opportunities and challenges associated with using Facebook for research, provide several practical guidelines on how to successfully implement studies on Facebook, and finally, discuss ethical considerations.",
"title": ""
},
{
"docid": "2a30aa44df358be7bb27afd0014a07ff",
"text": "The adoption of Smart Grid devices throughout utility networks will effect tremendous change in grid operations and usage of electricity over the next two decades. The changes in ways to control loads, coupled with increased penetration of renewable energy sources, offer a new set of challenges in balancing consumption and generation. Increased deployment of energy storage devices in the distribution grid will help make this process happen more effectively and improve system performance. This paper addresses the new types of storage being utilized for grid support and the ways they are integrated into the grid.",
"title": ""
},
{
"docid": "b9a6803c0525c41291a575715a604b0f",
"text": "The Internet-of-Things (IoT) has quickly moved from the realm of hype to reality with estimates of over 25 billion devices deployed by 2020. While IoT has huge potential for societal impact, it comes with a number of key security challenges---IoT devices can become the entry points into critical infrastructures and can be exploited to leak sensitive information. Traditional host-centric security solutions in today's IT ecosystems (e.g., antivirus, software patches) are fundamentally at odds with the realities of IoT (e.g., poor vendor security practices and constrained hardware). We argue that the network will have to play a critical role in securing IoT deployments. However, the scale, diversity, cyberphysical coupling, and cross-device use cases inherent to IoT require us to rethink network security along three key dimensions: (1) abstractions for security policies; (2) mechanisms to learn attack and normal profiles; and (3) dynamic and context-aware enforcement capabilities. Our goal in this paper is to highlight these challenges and sketch a roadmap to avoid this impending security disaster.",
"title": ""
},
{
"docid": "444472f7c11a35a747b50bc9ffc7fea7",
"text": "Deep Neural Networks (DNNs) are very popular these days, and are the subject of a very intense investigation. A DNN is made by layers of internal units (or neurons), each of which computes an affine combination of the output of the units in the previous layer, applies a nonlinear operator, and outputs the corresponding value (also known as activation). A commonly-used nonlinear operator is the so-called rectified linear unit (ReLU), whose output is just the maximum between its input value and zero. In this (and other similar cases like max pooling, where the max operation involves more than one input value), one can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where the continuous variables correspond to the output values of each unit, and a binary variable is associated with each ReLU to model its yes/no nature. In this paper we discuss the peculiarity of this kind of 0-1 MILP models, and describe an effective bound-tightening technique intended to ease its solution. We also present possible applications of the 0-1 MILP model arising in feature visualization and in the construction of adversarial examples. Preliminary computational results are reported, aimed at investigating (on small DNNs) the computational performance of a state-of-the-art MILP solver when applied to a known test case, namely, hand-written digit recognition.",
"title": ""
},
{
"docid": "ee862e43dc73654abe1616858d8cd9d8",
"text": "From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve this task in a similar fashion, but often end up with priors that are highly biased by training classes. Here we present an algorithm, Generalizable Reconstruction (GenRe), designed to capture more generic, class-agnostic shape priors. We achieve this with an inference network and training procedure that combine 2.5D representations of visible surfaces (depth and silhouette), spherical shape representations of both visible and non-visible surfaces, and 3D voxel-based representations, in a principled manner that exploits the causal structure of how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe performs well on single-view shape reconstruction, and generalizes to diverse novel objects from categories not seen during training.",
"title": ""
},
{
"docid": "c619d692d9e8a262f85324f6e35471e6",
"text": "Affect conveys important implicit information in human communication. Having the capability to correctly express affect during human-machine conversations is one of the major milestones in artificial intelligence. In recent years, extensive research on open-domain neural conversational models has been conducted. However, embedding affect into such models is still under explored. In this paper, we propose an endto-end affect-rich open-domain neural conversational model that produces responses not only appropriate in syntax and semantics, but also with rich affect. Our model extends the Seq2Seq model and adopts VAD (Valence, Arousal and Dominance) affective notations to embed each word with affects. In addition, our model considers the effect of negators and intensifiers via a novel affective attention mechanism, which biases attention towards affect-rich words in input sentences. Lastly, we train our model with an affect-incorporated objective function to encourage the generation of affect-rich words in the output responses. Evaluations based on both perplexity and human evaluations show that our model outperforms the state-of-the-art baseline model of comparable size in producing natural and affect-rich responses. Introduction Affect is a psychological experience of feeling or emotion. As a vital part of human intelligence, having the capability to recognize, understand and express affect and emotions like human has been arguably one of the major milestones in artificial intelligence (Picard 1997). Open-domain conversational models aim to generate coherent and meaningful responses when given user input sentences. In recent years, neural network based generative conversational models relying on Sequence-to-Sequence network (Seq2Seq) (Sutskever, Vinyals, and Le 2014) have been widely adopted due to its success in neural machine translation. Seq2Seq based conversational models have the advantages of end-to-end training paradigm and unrestricted response space over conventional retrieval-based models. To make neural conversational models more engaging, various techniques have been proposed, such as using stochastic latent variable (Serban et al. 2017) to promote response diversity and encoding topic (Xing et al. 2017) into conversational models to produce more coherent responses. Copyright c © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. However, embedding affect into neural conversational models has been seldom explored, despite that it has many benefits such as improving user satisfaction (Callejas, Griol, and López-Cózar 2011), fewer breakdowns (Martinovski and Traum 2003), and more engaged conversations (Robison, McQuiggan, and Lester 2009). For real-world applications, Fitzpatrick, Darcy, and Vierhile (2017) developed a rule-based empathic chatbot to deliver cognitive behavior therapy to young adults with depression and anxiety, and obtained significant results on depression reduction. Despite of these benefits, there are a few challenges in the affect embedding in neural conversational models that existing approaches fail to address: (i) It is difficult to capture the emotion of a sentence, partly because negators and intensifiers often change its polarity and strength. Handling negators and intensifiers properly still remains as a challenge in sentiment analysis. (ii) It is difficult to embed emotions naturally in responses with correct grammar and semantics (Ghosh et al. 2017). 
In this paper, we propose an end-to-end single-turn open-domain neural conversational model to address the aforementioned challenges and produce responses that are natural and affect-rich. Our model extends the Seq2Seq model with attention (Luong, Pham, and Manning 2015). We leverage an external corpus (Warriner, Kuperman, and Brysbaert 2013) to provide affect knowledge for each word in the Valence, Arousal and Dominance (VAD) dimensions (Mehrabian 1996). We then incorporate the affect knowledge into the embedding layer of our model. VAD notation has been widely used as a dimensional representation of human emotions in psychology and various computational models, e.g., (Wang, Tan, and Miao 2016; Tang et al. 2017). 2D plots of selected words with extreme VAD values are shown in Figure 1. To capture the effect of negators and intensifiers, we propose a novel biased attention mechanism that explicitly considers negators and intensifiers in attention computation. To maintain correct grammar and semantics, we train our Seq2Seq model with a weighted cross-entropy loss that encourages the generation of affect-rich words without degrading language fluency. Our main contributions are summarized as follows: • For the first time, we propose a novel affective attention mechanism to incorporate the effect of negators and intensifiers in conversation modeling.",
"title": ""
},
{
"docid": "2c251c8f1fcf15510a5c82de33daced3",
"text": "BACKGROUND\nOverall diet quality measurements have been suggested as a useful tool to assess diet-disease relationships. Oxidative stress has been related to the development of obesity and other chronic diseases. Furthermore, antioxidant intake is being considered as protective against cell oxidative damage and related metabolic complications.\n\n\nOBJECTIVE\nTo evaluate potential associations between the dietary total antioxidant capacity of foods (TAC), the energy density of the diet, and other relevant nutritional quality indexes in healthy young adults.\n\n\nMETHODS\nSeveral anthropometric variables from 153 healthy participants (20.8 +/- 2.7 years) included in this study were measured. Dietary intake was assessed by a validated food-frequency questionnaire, which was also used to calculate the dietary TAC and for daily energy intake adjustment.\n\n\nRESULTS\nPositive significant associations were found between dietary TAC and Mediterranean energy density hypothesis-oriented dietary scores (Mediterranean Diet Score, Alternate Mediterranean Diet Score, Modified Mediterranean Diet Score), non-Mediterranean hypothesis-oriented dietary scores (Healthy Eating Index, Alternate Healthy Eating Index, Diet Quality Index-International, Diet Quality Index-Revised), and diversity of food intake indicators (Recommended Food Score, Quantitative Index for Dietary Diversity in terms of total energy intake). The Mediterranean Diet Quality Index and Diet Quality Index scores (a Mediterranean and a non-Mediterranean hypothesis-oriented dietary score, respectively), whose lower values refer to a higher diet quality, decreased with higher values of dietary TAC. Energy density was also inversely associated with dietary TAC.\n\n\nCONCLUSION\nThese data suggest that dietary TAC, as a measure of antioxidant intake, may also be a potential marker of diet quality in healthy subjects, providing a novel approach to assess the role of antioxidant intake on health promotion and diet-based therapies.",
"title": ""
},
{
"docid": "b638259c8fd52b17af0c03f54447da80",
"text": "The LoRa technology has emerged as an interesting solution for low power, long range loT applications by proposing multiple “degrees of freedom” at the physical layer. This flexibility provides either a long range at the cost of a lower data rate or higher throughput at the cost of low sensitivity, so a shorter range. In this paper, we analyze the flexibility of LoRa and propose various strategies to adapt its radio parameters (such as the spreading factor, bandwidth, and transmission power) to different deployment scenarios. We compute the energy consumption of LoRa transceivers using various radio configurations in both star and mesh topologies. Our simulation results show that in a star topology, we can achieve the optimal scaling-up/down strategy of LoRa radio parameters to obtain either a high data rate or a long range while respecting low energy consumption. In mesh networks, energy consumption is optimized by exploiting various radio configurations and the network topology (e.g., the number of hops, the network density, the cell coverage). Finally, we propose a strategy to take advantage of both star and mesh topologies.",
"title": ""
},
{
"docid": "dfa611e19a3827c66ea863041a3ef1e2",
"text": "We study the problem of malleability of Bitcoin transactions. Our first two contributions can be summarized as follows: (i) we perform practical experiments on Bitcoin that show that it is very easy to maul Bitcoin transactions with high probability, and (ii) we analyze the behavior of the popular Bitcoin wallets in the situation when their transactions are mauled; we conclude that most of them are to some extend not able to handle this situation correctly. The contributions in points (i) and (ii) are experimental. We also address a more theoretical problem of protecting the Bitcoin distributed contracts against the “malleability” attacks. It is well-known that malleability can pose serious problems in some of those contracts. It concerns mostly the protocols which use a “refund” transaction to withdraw a financial deposit in case the other party interrupts the protocol. Our third contribution is as follows: (iii) we show a general method for dealing with the transaction malleability in Bitcoin contracts. In short: this is achieved by creating a malleability-resilient “refund” transaction which does not require any modification of the Bitcoin protocol.",
"title": ""
},
{
"docid": "3ad19b3710faeda90db45e2f7cebebe8",
"text": "Motion planning is a fundamental problem in robotics. It comes in a variety of forms, but the simplest version is as follows. We are given a robot system B, which may consist of several rigid objects attached to each other through various joints, hinges, and links, or moving independently, and a 2D or 3D environment V cluttered with obstacles. We assume that the shape and location of the obstacles and the shape of B are known to the planning system. Given an initial placement Z1 and a final placement Z2 of B, we wish to determine whether there exists a collisionavoiding motion of B from Z1 to Z2, and, if so, to plan such a motion. In this simplified and purely geometric setup, we ignore issues such as incomplete information, nonholonomic constraints, control issues related to inaccuracies in sensing and motion, nonstationary obstacles, optimality of the planned motion, and so on. Since the early 1980s, motion planning has been an intensive area of study in robotics and computational geometry. In this chapter we will focus on algorithmic motion planning, emphasizing theoretical algorithmic analysis of the problem and seeking worst-case asymptotic bounds, and only mention briefly practical heuristic approaches to the problem. The majority of this chapter is devoted to the simplified version of motion planning, as stated above. Section 51.1 presents general techniques and lower bounds. Section 51.2 considers efficient solutions to a variety of specific moving systems with a small number of degrees of freedom. These efficient solutions exploit various sophisticated methods in computational and combinatorial geometry related to arrangements of curves and surfaces (Chapter 30). Section 51.3 then briefly discusses various extensions of the motion planning problem such as computing optimal paths with respect to various quality measures, computing the path of a tethered robot, incorporating uncertainty, moving obstacles, and more.",
"title": ""
},
{
"docid": "f795576f7927f8c0b4543d31c43c0675",
"text": "Computer scientists, linguists, stylometricians, and cognitive scientists have successfully divided corpora into modes, domains, genres, registers, and authors. The limitations for these successes, however, often result from insufficient indices with which their corpora are analyzed. In this paper, we use Coh-Metrix, a computational tool that analyzes text on over 200 indices of cohesion and difficulty. We demonstrate how, with the benefit of statistical analysis, texts can be analyzed for subtle, yet meaningful differences. In this paper, we report evidence that authors within the same register can be computationally distinguished despite evidence that stylistic markers can also shift significantly over time.",
"title": ""
},
{
"docid": "c17b748624d6abc22c39c48b1f374603",
"text": "This paper addresses the problem of motion estimation in 3D point cloud sequences that are characterized by moving 3D positions and color attributes. Motion estimation is key to effective compression of these sequences, but it remains a challenging problem as the temporally successive frames have varying sizes without explicit correspondence information. We represent the time-varying geometry of these sequences with a set of graphs, and consider 3D positions and color attributes of the points clouds as signals on the vertices of the graph. We then cast motion estimation as a feature matching problem between successive graphs. The motion is estimated on a sparse set of representative vertices using new spectral graph wavelet descriptors. A dense motion field is eventually interpolated by solving a graph-based regularization problem. The estimated motion is finally used for color compensation in the compression of 3D point cloud sequences. Experimental results demonstrate that our method is able to accurately estimate the motion and to bring significant improvement in terms of color compression performance.",
"title": ""
},
{
"docid": "c1aa687c4a48cfbe037fe87ed4062dab",
"text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.",
"title": ""
},
{
"docid": "9eeb3ce9d963bc3bab6c32e651c34772",
"text": "In bioequivalence assessment, the consumer risk of erroneously accepting bioequivalence is of primary concern. In order to control the consumer risk, the decision problem is formulated with bioinequivalence as hypothesis and bioequivalence as alternative. In the parametric approach, a split into two one-sided test problems and application of two-sample t-tests have been suggested. Rejection of both hypotheses at nominal alpha-level is equivalent to the inclusion of the classical (shortest) (1-2 alpha) 100%-confidence interval in the bioequivalence range. This paper demonstrates that the rejection of the two one-sided hypotheses at nominal alpha-level by means of nonparametric Mann-Whitney-Wilcoxon tests is equivalent to the inclusion of the corresponding distribution-free (1-2 alpha) 100%-confidence interval in the bioequivalence range. This distribution-free (nonparametric) approach needs weaker model assumptions and hence presents an alternative to the parametric approach.",
"title": ""
},
{
"docid": "72bb768adc44f6b9c5c6ac08161c93c2",
"text": "A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points. First-order methods often get stuck at saddle points, greatly deteriorating their performance. Typically, to escape from saddles one has to use secondorder methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them impractical in large-scale settings. To tackle this challenge, we introduce a generic framework that minimizes Hessianbased computations while at the same time provably converging to secondorder critical points. Our framework carefully alternates between a first-order and a second-order subroutine, using the latter only close to saddle points, and yields convergence results competitive to the state-of-the-art. Empirical results suggest that our strategy also enjoys a good practical performance.",
"title": ""
},
{
"docid": "57d854e0b082ccb773cdc2a4bab0f6f3",
"text": "The use of Recurrent Neural Networks for video captioning has recently gained a lot of attention, since they can be used both to encode the input video and to generate the corresponding description. In this paper, we present a recurrent video encoding scheme which can discover and leverage the hierarchical structure of the video. Unlike the classical encoder-decoder approach, in which a video is encoded continuously by a recurrent layer, we propose a novel LSTM cell which can identify discontinuity points between frames or segments and modify the temporal connections of the encoding layer accordingly. We evaluate our approach on three large-scale datasets: the Montreal Video Annotation dataset, the MPII Movie Description dataset and the Microsoft Video Description Corpus. Experiments show that our approach can discover appropriate hierarchical representations of input videos and improve the state of the art results on movie description datasets.",
"title": ""
},
{
"docid": "6cb2004d77c5a0ccb4f0cbab3058b2bc",
"text": "the field of optical character recognition.",
"title": ""
},
{
"docid": "39682fc0385d7bc85267479bf20326b3",
"text": "This study assessed how problem video game playing (PVP) varies with game type, or \"genre,\" among adult video gamers. Participants (n=3,380) were adults (18+) who reported playing video games for 1 hour or more during the past week and completed a nationally representative online survey. The survey asked about characteristics of video game use, including titles played in the past year and patterns of (problematic) use. Participants self-reported the extent to which characteristics of PVP (e.g., playing longer than intended) described their game play. Five percent of our sample reported moderate to extreme problems. PVP was concentrated among persons who reported playing first-person shooter, action adventure, role-playing, and gambling games most during the past year. The identification of a subset of game types most associated with problem use suggests new directions for research into the specific design elements and reward mechanics of \"addictive\" video games and those populations at greatest risk of PVP with the ultimate goal of better understanding, preventing, and treating this contemporary mental health problem.",
"title": ""
},
{
"docid": "7d5ea1222f893c974300bdd10163ac0f",
"text": "Potato (Solanum tuberosum L.) is the world’s most important non-grain food crop and is central to global food security. It is clonally propagated, highly heterozygous, autotetraploid, and suffers acute inbreeding depression. Here we use a homozygous doubled-monoploid potato clone to sequence and assemble 86% of the 844-megabase genome. We predict 39,031 protein-coding genes and present evidence for at least two genome duplication events indicative of a palaeopolyploid origin. As the first genome sequence of an asterid, the potato genome reveals 2,642 genes specific to this large angiosperm clade. We also sequenced a heterozygous diploid clone and show that gene presence/absence variants and other potentially deleterious mutations occur frequently and are a likely cause of inbreeding depression. Gene family expansion, tissue-specific expression and recruitment of genes to new pathways contributed to the evolution of tuber development. The potato genome sequence provides a platform for genetic improvement of this vital crop.",
"title": ""
}
] |
scidocsrr
|
a28c31a746d8e1114155893dd13b3d81
|
How to Architect a Query Compiler, Revisited
|
[
{
"docid": "7de050ef4260ad858a620f9aa773b5a7",
"text": "We present DBToaster, a novel query compilation framework for producing high performance compiled query executors that incrementally and continuously answer standing aggregate queries using in-memory views. DBToaster targets applications that require efficient main-memory processing of standing queries (views) fed by high-volume data streams, recursively compiling view maintenance (VM) queries into simple C++ functions for evaluating database updates (deltas). While today’s VM algorithms consider the impact of single deltas on view queries to produce maintenance queries, we recursively consider deltas of maintenance queries and compile to thoroughly transform queries into code. Recursive compilation successively elides certain scans and joins, and eliminates significant query plan interpreter overheads. In this demonstration, we walk through our compilation algorithm, and show the significant performance advantages of our compiled executors over other query processors. We are able to demonstrate 1-3 orders of magnitude improvements in processing times for a financial application and a data warehouse loading application, both implemented across a wide range of database systems, including PostgreSQL, HSQLDB, a commercial DBMS ’A’, the Stanford STREAM engine, and a commercial stream processor ’B’.",
"title": ""
},
{
"docid": "2a7406d0b2ce795bb09b042497680b33",
"text": "In-memory databases require careful tuning and many engineering tricks to achieve good performance. Such database performance engineering is hard: a plethora of data and hardware-dependent optimization techniques form a design space that is difficult to navigate for a skilled engineer – even more so for a query compiler. To facilitate performanceoriented design exploration and query plan compilation, we present Voodoo, a declarative intermediate algebra that abstracts the detailed architectural properties of the hardware, such as multior many-core architectures, caches and SIMD registers, without losing the ability to generate highly tuned code. Because it consists of a collection of declarative, vector-oriented operations, Voodoo is easier to reason about and tune than low-level C and related hardware-focused extensions (Intrinsics, OpenCL, CUDA, etc.). This enables our Voodoo compiler to produce (OpenCL) code that rivals and even outperforms the fastest state-of-the-art in memory databases for both GPUs and CPUs. In addition, Voodoo makes it possible to express techniques as diverse as cacheconscious processing, predication and vectorization (again on both GPUs and CPUs) with just a few lines of code. Central to our approach is a novel idea we termed control vectors, which allows a code generating frontend to expose parallelism to the Voodoo compiler in a abstract manner, enabling portable performance across hardware platforms. We used Voodoo to build an alternative backend for MonetDB, a popular open-source in-memory database. Our backend allows MonetDB to perform at the same level as highly tuned in-memory databases, including HyPeR and Ocelot. We also demonstrate Voodoo’s usefulness when investigating hardware conscious tuning techniques, assessing their performance on different queries, devices and data.",
"title": ""
}
] |
[
{
"docid": "2195b13944fe4449104b506cd1625e60",
"text": "Gamification represents an effective way to incentivize user behavior across a number of computing applications. However, despite the fact that physical activity is essential for a healthy lifestyle, surprisingly little is known about how gamification and in particular competitions shape human physical activity. Here we study how competitions affect physical activity. We focus on walking challenges in a mobile activity tracking application where multiple users compete over who takes the most steps over a predefined number of days. We synthesize our findings in a series of game and app design implications. In particular, we analyze nearly 2,500 physical activity competitions over a period of one year capturing more than 800,000 person days of activity tracking. We observe that during walking competitions, the average user increases physical activity by 23%. Furthermore, there are large increases in activity for both men and women across all ages, and weight status, and even for users that were previously fairly inactive. We also find that the composition of participants greatly affects the dynamics of the game. In particular, if highly unequal participants get matched to each other, then competition suffers and the overall effect on the physical activity drops significantly. Furthermore, competitions with an equal mix of both men and women are more effective in increasing the level of activities. We leverage these insights to develop a statistical model to predict whether or not a competition will be particularly engaging with significant accuracy. Our models can serve as a guideline to help design more engaging competitions that lead to most beneficial behavioral changes.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "18252c7ff1b73eba07a35a68e4bcffd7",
"text": "This paper addresses the phenomenon of event composition: t he derivation of a single event description expressed in one clause from two le xical heads which could have been used in the description of independent events, eac h expressed in a distinct clause. In English, this phenomenon is well attested with re spect to sentences whose verb is found in combination with an XP describing a result no t strictly lexically entailed by this verb, as in (1).",
"title": ""
},
{
"docid": "53a962f9ab6dacf876e129ce3039fd69",
"text": "9 Background. Office of Academic Affairs (OAA), Office of Student Life (OSL) and Information Technology Helpdesk (ITD) are support functions within a university which receives hundreds of email messages on the daily basis. A large percentage of emails received by these departments are frequent and commonly used queries or request for information. Responding to every query by manually typing is a tedious and time consuming task and an automated approach for email response suggestion can save lot of time. 10",
"title": ""
},
{
"docid": "2f0e34b8956cb1fe998e740f77f55e84",
"text": "We present an approach to grammar induction that utilizes syntactic universals to improve dependency parsing across a range of languages. Our method uses a single set of manually-specified language-independent rules that identify syntactic dependencies between pairs of syntactic categories that commonly occur across languages. During inference of the probabilistic model, we use posterior expectation constraints to require that a minimum proportion of the dependencies we infer be instances of these rules. We also automatically refine the syntactic categories given in our coarsely tagged input. Across six languages our approach outperforms state-of-theart unsupervised methods by a significant margin.1",
"title": ""
},
{
"docid": "3f90af944ed7603fa7bbe8780239116a",
"text": "Display advertising has been a significant source of revenue for publishers and ad networks in online advertising ecosystem. One important business model in online display advertising is Ad Exchange marketplace, also called non-guaranteed delivery (NGD), in which advertisers buy targeted page views and audiences on a spot market through real-time auction. In this paper, we describe a bid landscape forecasting system in NGD marketplace for any advertiser campaign specified by a variety of targeting attributes. In the system, the impressions that satisfy the campaign targeting attributes are partitioned into multiple mutually exclusive samples. Each sample is one unique combination of quantified attribute values. We develop a divide-and-conquer approach that breaks down the campaign-level forecasting problem. First, utilizing a novel star-tree data structure, we forecast the bid for each sample using non-linear regression by gradient boosting decision trees. Then we employ a mixture-of-log-normal model to generate campaign-level bid distribution based on the sample-level forecasted distributions. The experiment results of a system developed with our approach show that it can accurately forecast the bid distributions for various campaigns running on the world's largest NGD advertising exchange system, outperforming two baseline methods in term of forecasting errors.",
"title": ""
},
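To make the campaign-level aggregation step above concrete, the following is a minimal sketch of a volume-weighted mixture of per-sample log-normal bid forecasts. All numbers, parameter values, and the helper names (lognormal_cdf, campaign_win_rate) are illustrative assumptions, not taken from the described system.

import math

def lognormal_cdf(x, mu, sigma):
    # P(bid <= x) for a log-normal with log-scale mean mu and std sigma.
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def campaign_win_rate(x, volumes, mus, sigmas):
    # Campaign-level bid distribution: a volume-weighted mixture of the
    # per-sample log-normal forecasts.
    total = float(sum(volumes))
    return sum(v / total * lognormal_cdf(x, m, s)
               for v, m, s in zip(volumes, mus, sigmas))

# Three targeting samples with forecast impression volumes and (mu, sigma).
volumes = [1_000_000, 250_000, 50_000]
mus = [0.2, 0.8, 1.5]
sigmas = [0.5, 0.4, 0.6]
# Fraction of campaign impressions winnable with a bid of 2.0 (illustrative units).
print(campaign_win_rate(2.0, volumes, mus, sigmas))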
{
"docid": "6470c8a921a9095adb96afccaa0bf97b",
"text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert’s eye movements recorded, while he performed the task. Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.",
"title": ""
},
{
"docid": "e3566963e4307c15086a54afe7661f32",
"text": "Next-generation wireless networks must support ultra-reliable, low-latency communication and intelligently manage a massive number of Internet of Things (IoT) devices in real-time, within a highly dynamic environment. This need for stringent communication quality-of-service (QoS) requirements as well as mobile edge and core intelligence can only be realized by integrating fundamental notions of artificial intelligence (AI) and machine learning across the wireless infrastructure and end-user devices. In this context, this paper provides a comprehensive tutorial that introduces the main concepts of machine learning, in general, and artificial neural networks (ANNs), in particular, and their potential applications in wireless communications. For this purpose, we present a comprehensive overview on a number of key types of neural networks that include feed-forward, recurrent, spiking, and deep neural networks. For each type of neural network, we present the basic architecture and training procedure, as well as the associated challenges and opportunities. Then, we provide an in-depth overview on the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality and edge caching.For each individual application, we present the main motivation for using ANNs along with the associated challenges while also providing a detailed example for a use case scenario and outlining future works that can be addressed using ANNs. In a nutshell, this article constitutes one of the first holistic tutorials on the development of machine learning techniques tailored to the needs of future wireless networks. This research was supported by the U.S. National Science Foundation under Grants CNS-1460316 and IIS-1633363. ar X iv :1 71 0. 02 91 3v 1 [ cs .I T ] 9 O ct 2 01 7",
"title": ""
},
{
"docid": "559893f48207bc694259712d4a607bad",
"text": "The purpose of this conceptual paper is to discuss four main different tools which are: mobile marketing, E-mail marketing, web marketing and marketing through social networking sites, which use to distribute e-marketing promotion and understanding their different influence on consumers` perception. This study also highlighted the E-marketing, marketing through internet, mobile marketing, web marketing and role of social networks and their component in term of perceptual differences and features which are important to them according to the literatures. The review of the research contains some aspect of mobile marketing, terms like adaption, role of trust, and customers’ satisfaction. Moreover some attributes of marketing through E-mail like Permission issue in Email in aim of using for marketing activity and key success factor base on previous literatures.",
"title": ""
},
{
"docid": "2e6af4ea3a375f67ce5df110a31aeb85",
"text": "Controlled power system separation, which separates the transmission system into islands in a controlled manner, is considered the final resort against a blackout under severe disturbances, e.g., cascading events. Three critical problems of controlled separation are where and when to separate and what to do after separation, which are rarely studied together. They are addressed in this paper by a proposed unified controlled separation scheme based on synchrophasors. The scheme decouples the three problems by partitioning them into sub-problems handled strategically in three time stages: the Offline Analysis stage determines elementary generator groups, optimizes potential separation points in between, and designs post-separation control strategies; the Online Monitoring stage predicts separation boundaries by modal analysis on synchrophasor data; the Real-time Control stage calculates a synchrophasor-based separation risk index for each boundary to predict the time to perform separation. The proposed scheme is demonstrated on a 179-bus power system by case studies.",
"title": ""
},
{
"docid": "4dc50a9c0665b5e2a7dcbc369acefdb0",
"text": "Nature is the principal source for proposing new optimization methods such as genetic algorithms (GA) and simulated annealing (SA) methods. All traditional evolutionary algorithms are heuristic population-based search procedures that incorporate random variation and selection. The main contribution of this study is that it proposes a novel optimization method that relies on one of the theories of the evolution of the universe; namely, the Big Bang and Big Crunch Theory. In the Big Bang phase, energy dissipation produces disorder and randomness is the main feature of this phase; whereas, in the Big Crunch phase, randomly distributed particles are drawn into an order. Inspired by this theory, an optimization algorithm is constructed, which will be called the Big Bang–Big Crunch (BB–BC) method that generates random points in the Big Bang phase and shrinks those points to a single representative point via a center of mass or minimal cost approach in the Big Crunch phase. It is shown that the performance of the new (BB–BC) method demonstrates superiority over an improved and enhanced genetic search algorithm also developed by the authors of this study, and outperforms the classical genetic algorithm (GA) for many benchmark test functions. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
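A minimal sketch of the Big Bang-Big Crunch loop described above, assuming a minimization objective with non-negative fitness values; the population size, shrinking schedule, and 1/fitness weighting are illustrative choices rather than the authors' exact formulation.

import numpy as np

def big_bang_big_crunch(objective, bounds, pop_size=50, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    center = rng.uniform(lo, hi)                      # initial representative point
    best_x, best_f = center.copy(), objective(center)
    for k in range(1, iters + 1):
        # Big Bang: scatter random points around the center, radius shrinking as 1/k.
        spread = (hi - lo) / k
        pop = np.clip(center + rng.normal(0.0, 1.0, (pop_size, lo.size)) * spread, lo, hi)
        fitness = np.array([objective(x) for x in pop])
        # Big Crunch: collapse the points to a center of mass weighted by 1/fitness
        # (assumes minimization with non-negative fitness values).
        weights = 1.0 / (fitness + 1e-12)
        center = (weights[:, None] * pop).sum(axis=0) / weights.sum()
        i = fitness.argmin()
        if fitness[i] < best_f:
            best_x, best_f = pop[i].copy(), fitness[i]
    return best_x, best_f

# Example: minimize the sphere function in 5 dimensions.
sphere = lambda x: float(np.sum(x ** 2))
print(big_bang_big_crunch(sphere, bounds=([-5] * 5, [5] * 5)))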
{
"docid": "528812aa635d6b9f0b65cc784fb256e1",
"text": "Pointing tasks are commonly studied in HCI research, for example to evaluate and compare different interaction techniques or devices. A recent line of work has modelled user-specific touch behaviour with machine learning methods to reveal spatial targeting error patterns across the screen. These models can also be applied to improve accuracy of touchscreens and keyboards, and to recognise users and hand postures. However, no implementation of these techniques has been made publicly available yet, hindering broader use in research and practical deployments. Therefore, this paper presents a toolkit which implements such touch models for data analysis (Python), mobile applications (Java/Android), and the web (JavaScript). We demonstrate several applications, including hand posture recognition, on touch targeting data collected in a study with 24 participants. We consider different target types and hand postures, changing behaviour over time, and the influence of hand sizes.",
"title": ""
},
{
"docid": "ad31171c2c3c08a33dedd0ecdc01c8f7",
"text": "With the appearance of many devices that everyday captured a large number of images. The rapid access to these huge collections of images and automatically characterizing an activity or an experience from this huge collection of unlabeled and unstructured egocentric data presents major challenges and requires novel and efficient algorithmic solutions. One of the big challenges of egocentric vision and lifelogging is to develop automatic algorithms to automatically characterize everyday activities. Such information is of high interest to predict migraines attacks or assure healthy behavior of patients and individuals of high healthy risk. In this work, we first conduct a comprehensive survey of existing egocentric datasets and we will present our future contribution to automatically characterize everyday activities.",
"title": ""
},
{
"docid": "54b4726650b3afcddafb120ff99c9951",
"text": "Online harassment has been a problem to a greater or lesser extent since the early days of the internet. Previous work has applied anti-spam techniques like machine-learning based text classification (Reynolds, 2011) to detecting harassing messages. However, existing public datasets are limited in size, with labels of varying quality. The #HackHarassment initiative (an alliance of 1 tech companies and NGOs devoted to fighting bullying on the internet) has begun to address this issue by creating a new dataset superior to its predecssors in terms of both size and quality. As we (#HackHarassment) complete further rounds of labelling, later iterations of this dataset will increase the available samples by at least an order of magnitude, enabling corresponding improvements in the quality of machine learning models for harassment detection. In this paper, we introduce the first models built on the #HackHarassment dataset v1.0 (a new open dataset, which we are delighted to share with any interested researcherss) as a benchmark for future research.",
"title": ""
},
{
"docid": "bd320ffcd9c28e2c3ea2d69039bfdbe9",
"text": "3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To our best knowledge, this is the first publication on LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, and use the falsifying examples to make the neural network more robust through retraining. In addition, the scene images can be captured simultaneously in order for sensor fusion tasks, with a method proposed to do automatic registration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show by testing and retraining the network using point clouds from user-configured scenes, the weakness/blind spots of the neural network can be fixed.",
"title": ""
},
{
"docid": "dfa29dceea4d755137dacbbe07db25d8",
"text": "BACKGROUND\nThe Parent Report of Children's Abilities-Revised (PARCA-R) is a questionnaire for assessing cognitive and language development in very preterm infants. Given the increased risk of developmental delay in infants born late and moderately preterm (LMPT; 32-36 weeks), this study aimed to validate this questionnaire as a screening tool in this population.\n\n\nMETHODS\nParents of 219 children born LMPT completed the PARCA-R questionnaire and the Brief Infant Toddler Social and Emotional Assessment when children were 24 months corrected age (range, 24 months-27 months). The children were subsequently assessed by using the cognitive and language scales of the Bayley Scales of Infant and Toddler Development, Third Edition (Bayley-III).\n\n\nRESULTS\nAn average Bayley-III, cognitive and language (CB-III) score and a total PARCA-R Parent Report Composite (PRC) score were computed. There was a large association between PRC and CB-III scores (r = 0.66, P < .001) indicating good concurrent validity. Using Youden index, the optimum PARCA-R cutoff for identifying children with moderate/severe developmental delay (CB-III scores < 80) was PRC scores < 73. This gave sensitivity 0.90 (95% confidence interval: 0.75-1.00) and specificity 0.76 (95% confidence interval: 0.70-0.82), indicating good diagnostic utility. Approximately two-thirds of the children who had a PRC score < 73 had false-positive screens. However, these children had significantly poorer cognitive and behavioral outcomes than children with true negative screens.\n\n\nCONCLUSIONS\nThe PARCA-R has good concurrent validity with a gold standard developmental test and can be used to identify LMPT infants who may benefit from a clinical assessment. The PARCA-R has potential for clinical use as a first-line cognitive screening tool for this sizeable population of infants in whom follow-up may be beneficial.",
"title": ""
},
{
"docid": "8ed247a04a8e5ab201807e0d300135a3",
"text": "We reproduce the Structurally Constrained Recurrent Network (SCRN) model, and then regularize it using the existing widespread techniques, such as naïve dropout, variational dropout, and weight tying. We show that when regularized and optimized appropriately the SCRN model can achieve performance comparable with the ubiquitous LSTMmodel in language modeling task on English data, while outperforming it on non-English data. Title and Abstract in Russian Воспроизведение и регуляризация SCRN модели Мы воспроизводим структурно ограниченную рекуррентную сеть (SCRN), а затем добавляем регуляризацию, используя существующие широко распространенные методы, такие как исключение (дропаут), вариационное исключение и связка параметров. Мы показываем, что при правильной регуляризации и оптимизации показатели SCRN сопоставимы с показателями вездесущей LSTM в задаче языкового моделирования на английских текстах, а также превосходят их на неанглийских данных.",
"title": ""
},
{
"docid": "5d3a0b1dfdbffbd4465ad7a9bb2f6878",
"text": "The Cancer Genome Atlas (TCGA) is a public funded project that aims to catalogue and discover major cancer-causing genomic alterations to create a comprehensive \"atlas\" of cancer genomic profiles. So far, TCGA researchers have analysed large cohorts of over 30 human tumours through large-scale genome sequencing and integrated multi-dimensional analyses. Studies of individual cancer types, as well as comprehensive pan-cancer analyses have extended current knowledge of tumorigenesis. A major goal of the project was to provide publicly available datasets to help improve diagnostic methods, treatment standards, and finally to prevent cancer. This review discusses the current status of TCGA Research Network structure, purpose, and achievements.",
"title": ""
},
{
"docid": "e3da610a131922990edaa6216ff4a025",
"text": "Learning high-level image representations using object proposals has achieved remarkable success in multi-label image recognition. However, most object proposals provide merely coarse information about the objects, and only carefully selected proposals can be helpful for boosting the performance of multi-label image recognition. In this paper, we propose an object-proposal-free framework for multi-label image recognition: random crop pooling (RCP). Basically, RCP performs stochastic scaling and cropping over images before feeding them to a standard convolutional neural network, which works quite well with a max-pooling operation for recognizing the complex contents of multi-label images. To better fit the multi-label image recognition task, we further develop a new loss function-the dynamic weighted Euclidean loss-for the training of the deep network. Our RCP approach is amazingly simple yet effective. It can achieve significantly better image recognition performance than the approaches using object proposals. Moreover, our adapted network can be easily trained in an end-to-end manner. Extensive experiments are conducted on two representative multi-label image recognition data sets (i.e., PASCAL VOC 2007 and PASCAL VOC 2012), and the results clearly demonstrate the superiority of our approach.",
"title": ""
}
] |
scidocsrr
|
e6144cbeb803c69e2c24ac1a76e46af8
|
When Money Learns to Fly: Towards Sensing as a Service Applications Using Bitcoin
|
[
{
"docid": "cf90703045e958c48282d758f84f2568",
"text": "One expectation about the future Internet is the participation of billions of sensor nodes, integrating the physical with the digital world. This Internet of Things can offer new and enhanced services and applications based on knowledge about the environment and the entities within. Millions of micro-providers could come into existence, forming a highly fragmented market place with new business opportunities to offer commercial services. In the related field of Internet and Telecommunication services, the design of markets and pricing schemes has been a vital research area in itself. We discuss how these findings can be transferred to the Internet of Things. Both the appropriate market structure and corresponding pricing schemes need to be well understood to enable a commercial success of sensor-based services. We show some steps that an evolutionary establishment of this market might have to take.",
"title": ""
}
] |
[
{
"docid": "cc64adfeed5dcc457e03bd03efcd03ba",
"text": "This work presents methods for path planning and obstacle avoidance for the humanoid robot QRIO, allowing the robot to autonomously walk around in a home environment. For an autonomous robot, obstacle detection and localization as well as representing them in a map are crucial tasks for the success of the robot. Our approach is based on plane extraction from data captured by a stereo-vision system that has been developed specifically for QRIO. We briefly overview the general software architecture composed of perception, short and long term memory, behavior control, and motion control, and emphasize on our methods for obstacle detection by plane extraction, occupancy grid mapping, and path planning. Experimental results complete the description of our system.",
"title": ""
},
{
"docid": "8d469e95232a8c4c8dce9aa8aee2f357",
"text": "In this paper, a wearable hand exoskeleton with force-controllable and compact actuator modules is proposed. In order to apply force feedback accurately while allowing natural finger motions, the exoskeleton linkage structure with three degrees of freedom (DOFs) was designed, which was inspired by the muscular skeletal structure of the finger. As an actuating system, a series elastic actuator (SEA) mechanism, which consisted of a small linear motor, a manually designed motor driver, a spring and potentiometers, was applied. The friction of the motor was identified and compensated for obtaining a linearized model of the actuating system. Using a LQ (linear quadratic) tuned PD (proportional and derivative) controller and a disturbance observer (DOB), the proposed actuator module could generate the desired force accurately with actual finger movements. By integrating together the proposed exoskeleton structure, actuator modules and control algorithms, a wearable hand exoskeleton with force-controllable and compact actuator modules was developed to deliver accurate force to the fingertips for flexion/extension motions.",
"title": ""
},
{
"docid": "acaf692dc8abca626c51c65e79982a35",
"text": "In this paper an impulse-radio ultra-wideband (IR-UWB) hardware demonstrator is presented, which can be used as a radar sensor for highly precise object tracking and breath-rate sensing. The hardware consists of an impulse generator integrated circuit (IC) in the transmitter and a correlator IC with an integrating baseband circuit as correlation receiver. The radiated impulse is close to a fifth Gaussian derivative impulse with σ = 51 ps, efficiently using the Federal Communications Commission indoor mask. A detailed evaluation of the hardware is given. For the tracking, an impulse train is radiated by the transmitter, and the reflections of objects in front of the sensor are collected by the receiver. With the reflected signals, a continuous hardware correlation is computed by a sweeping impulse correlation. The correlation is applied to avoid sampling of the RF impulse with picosecond precision. To localize objects precisely in front of the sensor, three impulse tracking methods are compared: Tracking of the maximum impulse peak, tracking of the impulse slope, and a slope-to-slope tracking of the object's reflection and the signal of the static direct coupling between transmit and receive antenna; the slope-to-slope tracking showing the best performance. The precision of the sensor is shown by a measurement with a metal plate of 1-mm sinusoidal deviation, which is clearly resolved. Further measurements verify the use of the demonstrated principle as a breathing sensor. The breathing signals of male humans and a seven-week-old infant are presented, qualifying the IR-UWB radar principle as a useful tool for breath-rate determination.",
"title": ""
},
{
"docid": "34858704b21e0665b4774802f4e66958",
"text": "Though based on abstractions of nature, current evolutionary algorithms and artificial life models lack the drive to complexity characteristic of natural evolution. Thus this paper argues that the prevalent fitness-pressure-based abstraction does not capture how natural evolution discovers complexity. Alternatively, this paper proposes that natural evolution can be abstracted as a process that discovers many ways to express the same functionality. That is, all successful organisms must meet the same minimal criteria of survival and reproduction. This abstraction leads to the key idea in this paper: Searching for novel ways of meeting the same minimal criteria, which is an accelerated model of this new abstraction, may be an effective search algorithm. Thus the existing novelty search method, which rewards any new behavior, is extended to enforce minimal criteria. Such minimal criteria novelty search prunes the space of viable behaviors and may often be more efficient than the search for novelty alone. In fact, when compared to the raw search for novelty and traditional fitness-based search in the two maze navigation experiments in this paper, minimal criteria novelty search evolves solutions more consistently. It is possible that refining the evolutionary computation abstraction in this way may lead to solving more ambitious problems and evolving more complex artificial organisms.",
"title": ""
},
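The following is a hedged sketch of the minimal criteria novelty search idea described above: behaviors failing the minimal criteria receive zero novelty, while viable behaviors are scored by their mean distance to the k nearest behaviors in an archive. The toy domain, mutation scheme, and function names are assumptions for illustration, not the paper's maze experiments.

import numpy as np

def novelty(behavior, archive, k=15):
    if len(archive) == 0:
        return float("inf")
    dists = np.linalg.norm(np.asarray(archive) - behavior, axis=1)
    return float(np.sort(dists)[:k].mean())

def mc_novelty_search(sample_genome, evaluate, meets_minimal_criteria,
                      generations=100, pop_size=50, seed=0):
    rng = np.random.default_rng(seed)
    population = [sample_genome(rng) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        behaviors = [evaluate(g) for g in population]
        scores = []
        for b in behaviors:
            if meets_minimal_criteria(b):
                scores.append(novelty(b, archive))
                archive.append(b)            # grow the archive with viable behaviors
            else:
                scores.append(0.0)           # non-viable: no novelty reward
        # Select the most novel viable individuals and mutate them.
        order = np.argsort(scores)[::-1][: pop_size // 2]
        parents = [population[i] for i in order]
        population = [p + rng.normal(0, 0.1, size=len(p)) for p in parents for _ in (0, 1)]
    return archive

# Toy usage: genomes are 2-D points, behavior = genome, criterion = inside the unit disk.
archive = mc_novelty_search(
    sample_genome=lambda rng: rng.uniform(-1, 1, size=2),
    evaluate=lambda g: g,
    meets_minimal_criteria=lambda b: np.linalg.norm(b) <= 1.0,
)
print(len(archive))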
{
"docid": "646b594b713a92a5a0ab6b97ee91d927",
"text": "We aim to constrain the evolution of active galactic nuclei (AGNs) as a function of obscuration using an X-ray-selected sample of ∼2000 AGNs from a multi-tiered survey including the CDFS, AEGIS-XD, COSMOS, and XMM-XXL fields. The spectra of individual X-ray sources are analyzed using a Bayesian methodology with a physically realistic model to infer the posterior distribution of the hydrogen column density and intrinsic X-ray luminosity. We develop a novel non-parametric method that allows us to robustly infer the distribution of the AGN population in X-ray luminosity, redshift, and obscuring column density, relying only on minimal smoothness assumptions. Our analysis properly incorporates uncertainties from low count spectra, photometric redshift measurements, association incompleteness, and the limited sample size. We find that obscured AGNs with NH > 1022 cm−2 account for 77 −5% of the number density and luminosity density of the accretion supermassive black hole population with LX > 1043 erg s−1, averaged over cosmic time. Compton-thick AGNs account for approximately half the number and luminosity density of the obscured population, and 38 −7% of the total. We also find evidence that the evolution is obscuration dependent, with the strongest evolution around NH ≈ 1023 cm−2. We highlight this by measuring the obscured fraction in Compton-thin AGNs, which increases toward z ∼ 3, where it is 25% higher than the local value. In contrast, the fraction of Compton-thick AGNs is consistent with being constant at ≈35%, independent of redshift and accretion luminosity. We discuss our findings in the context of existing models and conclude that the observed evolution is, to first order, a side effect of anti-hierarchical growth.",
"title": ""
},
{
"docid": "94f1de78a229dc542a67ea564a0b259f",
"text": "Voice enabled personal assistants like Microsoft Cortana are becoming better every day. As a result more users are relying on such software to accomplish more tasks. While these applications are significantly improving due to great advancements in the underlying technologies, there are still shortcomings in their performance resulting in a class of user queries that such assistants cannot yet handle with satisfactory results. We analyze the data from millions of user queries, and build a machine learning system capable of classifying user queries into two classes; a class of queries that are addressable by Cortana with high user satisfaction, and a class of queries that are not. We then use unsupervised learning to cluster similar queries and assign them to human assistants who can complement Cortana functionality.",
"title": ""
},
{
"docid": "fc9193f15f6e96043271302be917f2c7",
"text": "In this article we introduce the main notions of our core ontology for the robotics and automation field, one of first results of the newly formed IEEE-RAS Working Group, named Ontologies for Robotics and Automation. It aims to provide a common ground for further ontology development in Robotics and Automation. Furthermore, we will discuss the main core ontology definitions as well as the ontology development process employed.",
"title": ""
},
{
"docid": "8ff846e3747029549185f3c8df7d100e",
"text": "In this paper, we describe a phenomenon, which we named “super-convergence”, where neural networks can be trained an order of magnitude faster than with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with one learning rate cycle and a large maximum learning rate. A primary insight that allows super-convergence training is that large learning rates regularize the training, hence requiring a reduction of all other forms of regularization in order to preserve an optimal regularization balance. We also derive a simplification of the Hessian Free optimization method to compute an estimate of the optimal learning rate. Experiments demonstrate super-convergence for Cifar-10/100, MNIST and Imagenet datasets, and resnet, wide-resnet, densenet, and inception architectures. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. The architectures to replicate this work will be made available upon publication.",
"title": ""
},
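A minimal sketch of a one-cycle learning-rate schedule of the kind associated with the super-convergence observation above: a linear ramp up to a large maximum rate and back, followed by a short final decay. The specific fractions and rates are illustrative assumptions, not the paper's settings.

def one_cycle_lr(step, total_steps, lr_min=0.01, lr_max=1.0, final_frac=0.1):
    """Piecewise-linear learning rate for a single cycle over `total_steps`."""
    ramp_steps = int(total_steps * (1.0 - final_frac)) // 2
    if step < ramp_steps:                      # linear warm-up to lr_max
        return lr_min + (lr_max - lr_min) * step / ramp_steps
    if step < 2 * ramp_steps:                  # linear cool-down back to lr_min
        return lr_max - (lr_max - lr_min) * (step - ramp_steps) / ramp_steps
    # final phase: decay below lr_min to settle into a minimum
    tail = (step - 2 * ramp_steps) / max(1, total_steps - 2 * ramp_steps)
    return lr_min * (1.0 - 0.9 * tail)

if __name__ == "__main__":
    total = 1000
    for s in (0, 250, 450, 700, 999):
        print(s, round(one_cycle_lr(s, total), 4))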
{
"docid": "289b1eaf4535374f339b683983a655f9",
"text": "© The Author(s) 2017. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/ publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. P156 Multiscale modeling of ischemic stroke with the NEURON reaction‐diffusion module Adam J. H. Newton, Alexandra H. Seidenstein, Robert A. McDougal, William W. Lytton Department of Neuroscience, Yale University, New Haven, CT 06520, USA; Department Physiology & Pharmacology, SUNY Downstate, Brooklyn, NY 11203, USA; NYU School of Engineering, 6 MetroTech Center, Brooklyn, NY 11201, USA; Kings County Hospital Center, Brooklyn, NY 11203, USA Correspondence: Adam J. H. Newton ([email protected]) BMC Neuroscience 2017, 18 (Suppl 1):P156",
"title": ""
},
{
"docid": "331c9dfa628f2bd045b6e0ad643a4d33",
"text": "What is most evident in the recent debate concerning new wetland regulations drafted by the U.S. Army Corps of Engineers is that small, isolated wetlands will likely continue to be lost. The critical biological question is whether small wetlands are expendable, and the fundamental issue is the lack of biologically relevant data on the value of wetlands, especially so-called “isolated” wetlands of small size. We used data from a geographic information system for natural-depression wetlands on the southeastern Atlantic coastal plain (U.S.A.) to examine the frequency distribution of wetland sizes and their nearest-wetland distances. Our results indicate that the majority of natural wetlands are small and that these small wetlands are rich in amphibian species and serve as an important source of juvenile recruits. Analyses simulating the loss of small wetlands indicate a large increase in the nearest-wetland distance that could impede “rescue” effects at the metapopulation level. We argue that small wetlands are extremely valuable for maintaining biodiversity, that the loss of small wetlands will cause a direct reduction in the connectance among remaining species populations, and that both existing and recently proposed legislation are inadequate for maintaining the biodiversity of wetland flora and fauna. Small wetlands are not expendable if our goal is to maintain present levels of species biodiversity. At the very least, based on these data, regulations should protect wetlands as small as 0.2 ha until additional data are available to compare diversity directly across a range of wetland sizes. Furthermore, we strongly advocate that wetland legislation focus not only on size but also on local and regional wetland distribution in order to protect ecological connectance and the source-sink dynamics of species populations. Son los Humedales Pequeños Prescindibles? Resumen: Algo muy evidente en el reciente debate sobre las nuevas regulaciones de humedales elaboradas por el cuerpo de ingenieros de la armada de los Estados Unidos es que los humedales aislados pequeños seguramente se continuarán perdiendo. La pregunta biológica crítica es si los humedales pequeños son prescindibles y e asunto fundamental es la falta de datos biológicos relevantes sobre el valor de los humedales, especialmente los llamados humedales “aislados” de tamaño pequeño. Utilizamos datos de GIS para humedales de depresiones naturales en la planicie del sureste de la costa Atlántica (U.S.A.) para examinar la distribución de frecuencias de los tamaños de humedales y las distancias a los humedales mas cercanos. Nuestros resultados indican que la mayoría de los humedales naturales son pequeños y que estos humedales pequeños son ricos en especies de anfibios y sirven como una fuente importante de reclutas juveniles. Análisis simulando la pérdida de humedales pequeños indican un gran incremento en la distancia al humedal mas cercano lo cual impediría efectos de “rescate” a nivel de metapoblación. Argumentamos que los humedales pequeños son extremadamente valiosos para el mantenimiento de la biodiversidad, que la pérdida de humedales pequeños causará una reducción directa en la conexión entre poblaciones de especies remanentes y que tanto la legislación propuesta como la existente son inadecuadas para mantener la biodiversidad de la flora y fauna de los humedales. Si nuestra meta es mantener los niveles actuales de biodiversidad de especies, los humedales pequeños no son prescindibles. 
En base en estos datos, las regulaciones deberían por lo menos proteger humedales tan pequeños como 0.2 ha hasta que se tengan a la mano datos adicionales para comPaper submitted April 1, 1998; revised manuscript accepted June 24, 1998. 1130 Expendability of Small Wetlands Semlitsch & Bodie Conservation Biology Volume 12, No. 5, October 1998 parar directamente la diversidad a lo largo de un rango de humedales de diferentes tamaños. Mas aún, abogamos fuertemente por que la regulación de los pantanos se enfoque no solo en el tamaño, sino también en la distribución local y regional de los humedales para poder proteger la conexión ecológica y las dinámicas fuente y sumidero de poblaciones de especies.",
"title": ""
},
{
"docid": "a5a53221aa9ccda3258223b9ed4e2110",
"text": "Accurate and reliable inventory forecasting can save an organization from overstock, under-stock and no stock/stock-out situation of inventory. Overstocking leads to high cost of storage and its maintenance, whereas under-stocking leads to failure to meet the demand and losing profit and customers, similarly stock-out leads to complete halt of production or sale activities. Inventory transactions generate data, which is a time-series data having characteristic volume, speed, range and regularity. The inventory level of an item depends on many factors namely, current stock, stock-on-order, lead-time, annual/monthly target. In this paper, we present a perspective of treating Inventory management as a problem of Genetic Programming based on inventory transactions data. A Genetic Programming — Symbolic Regression (GP-SR) based mathematical model is developed and subsequently used to make forecasts using Holt-Winters Exponential Smoothing method for time-series modeling. The GP-SR model evolves based on RMSE as the fitness function. The performance of the model is measured in terms of RMSE and MAE. The estimated values of item demand from the GP-SR model is finally used to simulate a time-series and forecasts are generated for inventory required on a monthly time horizon.",
"title": ""
},
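To illustrate the time-series step mentioned above, here is a small sketch of additive Holt-Winters (triple exponential smoothing); the GP-SR demand model itself is not reproduced, and the smoothing constants and demand series are made-up examples.

def holt_winters_additive(series, season_len, alpha=0.3, beta=0.1, gamma=0.2, horizon=6):
    # Initialize level, trend, and one season of additive seasonal components.
    level = sum(series[:season_len]) / season_len
    trend = (sum(series[season_len:2 * season_len]) - sum(series[:season_len])) / season_len ** 2
    seasonal = [series[i] - level for i in range(season_len)]

    for t in range(len(series)):
        s = seasonal[t % season_len]
        last_level = level
        level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % season_len] = gamma * (series[t] - level) + (1 - gamma) * s

    # Forecast `horizon` periods ahead.
    return [level + (h + 1) * trend + seasonal[(len(series) + h) % season_len]
            for h in range(horizon)]

# Two years of monthly demand with a mild trend and yearly seasonality.
demand = [120, 130, 150, 170, 160, 140, 135, 145, 165, 180, 200, 220,
          130, 142, 160, 185, 172, 150, 148, 158, 178, 196, 215, 240]
print([round(x, 1) for x in holt_winters_additive(demand, season_len=12)])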
{
"docid": "b1f3c073ec058b0b73c524aa2d381e5f",
"text": "A PCR-based assay was developed for more accurate identification of Vibrio parahaemolyticus through targeting the bla CARB-17 like element, an intrinsic β-lactamase gene that may also be regarded as a novel species-specific genetic marker of this organism. Homologous analysis showed that bla CARB-17 like genes were more conservative than the tlh, toxR and atpA genes, the genetic markers commonly used as detection targets in identification of V. parahaemolyticus. Our data showed that this bla CARB-17-specific PCR-based detection approach consistently achieved 100% specificity, whereas PCR targeting the tlh and atpA genes occasionally produced false positive results. Furthermore, a positive result of this test is consistently associated with an intrinsic ampicillin resistance phenotype of the test organism, presumably conferred by the products of bla CARB-17 like genes. We envision that combined analysis of the unique genetic and phenotypic characteristics conferred by bla CARB-17 shall further enhance the detection specificity of this novel yet easy-to-use detection approach to a level superior to the conventional methods used in V. parahaemolyticus detection and identification.",
"title": ""
},
{
"docid": "86c3582e15e37a3fd088ca5f3ed107b3",
"text": "Symbiotic interactions have been shown to facilitate shifts in the structure and function of host plant communities. For example, parasitic plants can induce changes in plant diversity through the suppression of competitive community dominants. Arbuscular mycorrhizal (AM) fungi have also be shown to induce shifts in host communities by increasing host plant nutrient uptake and growth while suppressing non-mycorrhizal species. AM fungi can therefore function as ecosystem engineers facilitating shifts in host plant communities though the presumed physiological suppression of non-contributing or non-mycorrhizal plant species. This dichotomy in plant response to AM fungi has been suggested as a tool to suppress weed species (many of which are non-mycorrhizal) in agro-ecosystems where mycorrhizal crop species are cultivated. Rinaudo et al. (2010), this issue, have demonstrated that AM fungi can suppress pernicious non-mycorrhizal weed species including Chenopodium album (fat hen) while benefiting the crop plant Helianthus annuus (sunflower). These findings now suggest a future for harnessing AM fungi as agro-ecosystem engineers representing potential alternatives to costly and environmentally damaging herbicides.",
"title": ""
},
{
"docid": "7ab7a2270c364bfad24ea155f003a032",
"text": "In this letter, we present a method of two-dimensional canonical correlation analysis (2D-CCA) where we extend the standard CCA in such a way that relations between two different sets of image data are directly sought without reshaping images into vectors. We stress that 2D-CCA dramatically reduces the computational complexity, compared to the standard CCA. We show the useful behavior of 2D-CCA through numerical examples of correspondence learning between face images in different poses and illumination conditions.",
"title": ""
},
{
"docid": "8a7ca3dabe17e3e3b9b94a4348d362ff",
"text": "Contrast enhancement of an image can efficiently performed by Histogram Equalization. However, this method tends to introduce unnecessary visual deterioration such as saturation effect. One of the solutions to overcome this weakness is by preserving the mean brightness of the input image inside the output image. This paper proposes a new histogram equalization method called Contrast Stretching Recursively Separated Histogram Equalization (CSRSHE), for brightness preservation and image contrast enhancement. This algorithm applies a two stage approach: 1) A new intensity is assigned to each pixel according to an adaptive transfer function that is designed on the basis of the global and local statistics of the input image. 2) Performing recursive mean separate histogram equalization based on a modified local contrast stretching manipulation. We show that compared to other existent methods, CSRSHE preserves the image brightness more accurately and produces images with better contrast enhancement.",
"title": ""
},
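As a rough illustration of the recursive mean-separation idea above, the sketch below splits the histogram at its mean, equalizes each segment within its own intensity range, and applies the split recursively. The adaptive contrast-stretching transfer function of the proposed CSRSHE method is not reproduced here, so this is only a baseline-style sketch with assumed helper names.

import numpy as np

def equalize_range(hist, lo, hi):
    """Map intensities in [lo, hi] onto [lo, hi] according to their cumulative distribution."""
    h = hist[lo:hi + 1].astype(float)
    cdf = np.cumsum(h) / max(h.sum(), 1.0)
    return {lo + i: int(round(lo + cdf[i] * (hi - lo))) for i in range(hi - lo + 1)}

def recursive_mean_separate_he(image, depth=2):
    hist = np.bincount(image.ravel(), minlength=256)
    segments = [(0, 255)]
    for _ in range(depth):                      # split each segment at its mean intensity
        new_segments = []
        for lo, hi in segments:
            weights = hist[lo:hi + 1]
            if weights.sum() == 0:
                continue
            mean = int(np.average(np.arange(lo, hi + 1), weights=weights))
            new_segments += [(lo, mean), (min(mean + 1, hi), hi)]
        segments = new_segments
    mapping = {}
    for lo, hi in segments:                     # equalize each segment separately
        mapping.update(equalize_range(hist, lo, hi))
    lut = np.array([mapping.get(v, v) for v in range(256)], dtype=np.uint8)
    return lut[image]

# Toy usage on a random 8-bit image.
img = np.random.default_rng(0).normal(128, 20, (64, 64)).clip(0, 255).astype(np.uint8)
print(recursive_mean_separate_he(img).mean())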
{
"docid": "b94c18b8d3915709d03b94cffe979363",
"text": "We apply the weight of evidence reformulation of AdaBoosted naive Bayes scoring due to Ridgeway et al. (1998) to the problem of diagnosing insurance claim fraud. The method effectively combines the advantages of boosting and the explanatory power of the weight of evidence scoring framework. We present the results of an experimental evaluation with an emphasis on discriminatory power, ranking ability, and calibration of probability estimates. The data to which we apply the method consists of closed personal injury protection (PIP) automobile insurance claims from accidents that occurred in Massachusetts (USA) during 1993 and were previously investigated for suspicion of fraud by domain experts. The data mimic the most commonly occurring data configuration, that is, claim records consisting of information pertaining to several binary fraud indicators. The findings of the study reveal the method to be a valuable contribution to the design of intelligible, accountable, and efficient fraud detection support.",
"title": ""
},
{
"docid": "fff9e38c618a6a644e3795bdefd74801",
"text": "Several code smell detection tools have been developed providing different results, because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform the largest experiment of applying machine learning algorithms to code smells to the best of our knowledge. We experiment 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performances in the cross-validation data set, yet the highest performances were obtained by J48 and Random Forest, while the worst performance were achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performances that need to be addressed in the future studies. We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and only a hundred training examples are needed to reach at least 95 % accuracy.",
"title": ""
},
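A minimal sketch of the kind of experimental setup described above: tree-based classifiers trained on class-level code metrics to flag a smell, evaluated with cross-validation. The metric names, the synthetic labels, and the tiny dataset are placeholders, not the paper's 74-system corpus.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 400
# Features: [lines_of_code, n_methods, n_attributes, coupling] (illustrative metrics)
X = np.column_stack([
    rng.integers(20, 2000, n),
    rng.integers(1, 60, n),
    rng.integers(0, 40, n),
    rng.integers(0, 25, n),
]).astype(float)
# Synthetic "Large Class" label: big, method-heavy classes are smelly.
y = ((X[:, 0] > 800) & (X[:, 1] > 25)).astype(int)

for name, clf in [("J48-like decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("Random forest", RandomForestClassifier(n_estimators=100))]:
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")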
{
"docid": "ff0c3f9fa9033be78b107c2f052203fa",
"text": "Complex networks, such as biological, social, and communication networks, often entail uncertainty, and thus, can be modeled as probabilistic graphs. Similar to the problem of similarity search in standard graphs, a fundamental problem for probabilistic graphs is to efficiently answer k-nearest neighbor queries (k-NN), which is the problem of computing the k closest nodes to some specific node. In this paper we introduce a framework for processing k-NN queries in probabilistic graphs. We propose novel distance functions that extend well-known graph concepts, such as shortest paths. In order to compute them in probabilistic graphs, we design algorithms based on sampling. During k-NN query processing we efficiently prune the search space using novel techniques. Our experiments indicate that our distance functions outperform previously used alternatives in identifying true neighbors in real-world biological data. We also demonstrate that our algorithms scale for graphs with tens of millions of edges.",
"title": ""
},
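A hedged sketch of the sampling idea above: the uncertain graph is instantiated many times by keeping each edge with its probability, shortest-path distances are computed per possible world, and nodes are ranked by an aggregate distance. The aggregate used here (the median) and the toy graph are illustrative choices, not one of the paper's proposed distance functions.

import heapq
import random
import statistics

def dijkstra(adj, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def knn_probabilistic(edges, source, k=2, samples=500, seed=1):
    random.seed(seed)
    nodes = {u for u, v, *_ in edges} | {v for _, v, *_ in edges}
    per_node = {v: [] for v in nodes if v != source}
    for _ in range(samples):
        adj = {}
        for u, v, w, p in edges:                      # sample one possible world
            if random.random() < p:
                adj.setdefault(u, []).append((v, w))
                adj.setdefault(v, []).append((u, w))
        dist = dijkstra(adj, source)
        for v in per_node:
            per_node[v].append(dist.get(v, float("inf")))
    ranked = sorted(per_node, key=lambda v: statistics.median(per_node[v]))
    return ranked[:k]

# Edges are (u, v, weight, existence probability).
edges = [("a", "b", 1.0, 0.9), ("b", "c", 1.0, 0.5),
         ("a", "c", 3.0, 0.8), ("c", "d", 1.0, 0.95)]
print(knn_probabilistic(edges, source="a"))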
{
"docid": "c4e463cc685abe62d11b7ea73ed85693",
"text": "Distributed generator (DG) resources are small scale electric power generating plants that can provide power to homes, businesses or industrial facilities in distribution systems. Power loss reductions, voltage profile improvement and increasing reliability are some advantages of DG units. The above benefits can be achieved by optimal placement of DGs. Whale optimization algorithm (WOA), a novel metaheuristic algorithm, is used to determine the optimal DG size. WOA is modeled based on the unique hunting behavior of humpback whales. The WOA is evaluated on IEEE 15, 33, 69 and 85-bus test systems. WOA was compared with different types of DGs and other evolutionary algorithms. When compared with voltage sensitivity index method, WOA and index vector methods gives better results. From the analysis best results have been achieved from type III DG operating at 0.9 pf.",
"title": ""
},
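For reference, a minimal sketch of the whale optimization algorithm's position updates (shrinking encirclement, random exploration, and the logarithmic spiral), applied to a generic test function. The distribution-system power-flow objective used for DG sizing is not modeled here, so this only illustrates the optimizer itself, with illustrative parameter choices.

import numpy as np

def whale_optimization(objective, bounds, n_whales=30, iters=200, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    whales = rng.uniform(lo, hi, (n_whales, lo.size))
    fitness = np.array([objective(x) for x in whales])
    best = whales[fitness.argmin()].copy()

    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                       # decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(lo.size), rng.random(lo.size)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):               # encircle the best whale
                    D = np.abs(C * best - whales[i])
                    whales[i] = best - A * D
                else:                                   # explore around a random whale
                    rand = whales[rng.integers(n_whales)]
                    D = np.abs(C * rand - whales[i])
                    whales[i] = rand - A * D
            else:                                       # spiral bubble-net move
                l = rng.uniform(-1, 1)
                D = np.abs(best - whales[i])
                whales[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            whales[i] = np.clip(whales[i], lo, hi)
        fitness = np.array([objective(x) for x in whales])
        if fitness.min() < objective(best):
            best = whales[fitness.argmin()].copy()
    return best, objective(best)

print(whale_optimization(lambda x: float(np.sum(x ** 2)), ([-10] * 3, [10] * 3)))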
{
"docid": "369c90f09cb52b7cff76f03ae99861f1",
"text": "The paper proposes a classification scheme for the roles of citations in empirical studies from the social sciences and related fields. The use of the classification, which has eight categories, is illustrated in sociology, education, demography, epidemiology and librarianship; its association with the citations' location within the paper is presented. The question of repeated citations of the same document is discussed. Several research questions to which this classification is relevant are proposed. The need for further critique, validation and experimentation is pointed out.",
"title": ""
}
] |
scidocsrr
|
4818cefbc4f1a2fdf9ab282e3c51ed19
|
Amazon Aurora: On Avoiding Distributed Consensus for I/Os, Commits, and Membership Changes
|
[
{
"docid": "0105247ab487c2d06f3ffa0d00d4b4f9",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
},
{
"docid": "04b66d9285404e7fb14fcec3cd66316a",
"text": "Amazon Aurora is a relational database service for OLTP workloads offered as part of Amazon Web Services (AWS). In this paper, we describe the architecture of Aurora and the design considerations leading to that architecture. We believe the central constraint in high throughput data processing has moved from compute and storage to the network. Aurora brings a novel architecture to the relational database to address this constraint, most notably by pushing redo processing to a multi-tenant scale-out storage service, purpose-built for Aurora. We describe how doing so not only reduces network traffic, but also allows for fast crash recovery, failovers to replicas without loss of data, and fault-tolerant, self-healing storage. We then describe how Aurora achieves consensus on durable state across numerous storage nodes using an efficient asynchronous scheme, avoiding expensive and chatty recovery protocols. Finally, having operated Aurora as a production service for over 18 months, we share the lessons we have learnt from our customers on what modern cloud applications expect from databases.",
"title": ""
},
{
"docid": "558082c8d15613164d586cab0ba04d9c",
"text": "One of the potential benefits of distributed systems is their use in providing highly-available services that are likely to be usable when needed. Availabilay is achieved through replication. By having inore than one copy of information, a service continues to be usable even when some copies are inaccessible, for example, because of a crash of the computer where a copy was stored. This paper presents a new replication algorithm that has desirable performance properties. Our approach is based on the primary copy technique. Computations run at a primary. which notifies its backups of what it has done. If the primary crashes, the backups are reorganized, and one of the backups becomes the new primary. Our method works in a general network with both node crashes and partitions. Replication causes little delay in user computations and little information is lost in a reorganization; we use a special kind of timestamp called a viewstamp to detect lost information.",
"title": ""
}
] |
[
{
"docid": "89dc55f20b4cfcb63d55b8b9ead8611b",
"text": "2018 How Does Batch Normalization Help Optimization? S. Santurkar*, D. Tsipras*, A. Ilyas*, & A. Mądry NIPS 2018 (Oral presentation) 2018 Adversarially Robust Generalization Requires More Data L. Schmidt, S. Santurkar, D. Tsipras, K. Talwar, & A. Mądry NIPS 2018 (Spotlight presentation) 2018 A Classification–Based Study of Covariate Shift in GAN Distributions S. Santurkar, L. Schmidt, & A. Mądry ICML 2018 2018 Generative Compression S. Santurkar, D. Budden, & N. Shavit PCS 2018 2017 Deep Tensor Convolution on Multicores D. Budden, A. Matveev, S. Santurkar, S. R. Chaudhuri, & N. Shavit ICML 2017",
"title": ""
},
{
"docid": "f575b371d01ad0af38ca83d4adde1eb5",
"text": "Multiple-antenna systems, also known as multiple-input multiple-output radio, can improve the capacity and reliability of radio communication. However, the multiple RF chains associated with multiple antennas are costly in terms of size, power, and hardware. Antenna selection is a low-cost low-complexity alternative to capture many of the advantages of MIMO systems. This article reviews classic results on selection diversity, followed by a discussion of antenna selection algorithms at the transmit and receive sides. Extensions of classical results to antenna subset selection are presented. Finally, several open problems in this area are pointed out.",
"title": ""
},
{
"docid": "9983efe8998d0fc13b1bfe82add11e29",
"text": "Research suggests that spaced learning, compared with massed learning, results in superior long-term retention (the spacing effect). Son (2010) identified a potentially important moderator of the spacing effect: metacognitive control. Specifically, when participants chose massed restudy but were instead forced to space the restudy, the spacing effect disappeared in adults (or was reduced in children). This suggests spacing is less effective (or possibly ineffective) if implemented against the wishes of the learner. A closer examination of this paradigm, however, reveals that item-selection issues might alternatively explain the disappearance of the spacing effect. In the current experiments, we replicated the original design demonstrating that an item-selection confound is operating. Furthermore, relative to a more appropriate baseline, the spacing effect was significant and of the same size whether participants' restudy choices were honored or violated. In this paradigm, metacognitive control does not appear to moderate the spacing effect.",
"title": ""
},
{
"docid": "ef1d28df2575c2c844ca2fa109893d92",
"text": "Measurement of the quantum-mechanical phase in quantum matter provides the most direct manifestation of the underlying abstract physics. We used resonant x-ray scattering to probe the relative phases of constituent atomic orbitals in an electronic wave function, which uncovers the unconventional Mott insulating state induced by relativistic spin-orbit coupling in the layered 5d transition metal oxide Sr2IrO4. A selection rule based on intra-atomic interference effects establishes a complex spin-orbital state represented by an effective total angular momentum = 1/2 quantum number, the phase of which can lead to a quantum topological state of matter.",
"title": ""
},
{
"docid": "5e5c03220940a7c771ec19fe45f71eae",
"text": "A developmental trajectory describes the course of a behavior over age or time. A group-based method for identifying distinctive groups of individual trajectories within the population and for profiling the characteristics of group members is demonstrated. Such clusters might include groups of \"increasers.\" \"decreasers,\" and \"no changers.\" Suitably defined probability distributions are used to handle 3 data types—count, binary, and psychometric scale data. Four capabilities are demonstrated: (a) the capability to identify rather than assume distinctive groups of trajectories, (b) the capability to estimate the proportion of the population following each such trajectory group, (c) the capability to relate group membership probability to individual characteristics and circumstances, and (d) the capability to use the group membership probabilities for various other purposes such as creating profiles of group members.",
"title": ""
},
{
"docid": "ebb024bbd923d35fd86adc2351073a48",
"text": "Background: Depression is a chronic condition that results in considerable disability, and particularly in later life, severely impacts the life quality of the individual with this condition. The first aim of this review article was to summarize, synthesize, and evaluate the research base concerning the use of dance-based exercises on health status, in general, and secondly, specifically for reducing depressive symptoms, in older adults. A third was to provide directives for professionals who work or are likely to work with this population in the future. Methods: All English language peer reviewed publications detailing the efficacy of dance therapy as an intervention strategy for older people in general, and specifically for minimizing depression and dependence among the elderly were analyzed.",
"title": ""
},
{
"docid": "654d078b7aa669ea730630e7e37b64b5",
"text": "Cancers are believed to arise from cancer stem cells (CSCs), but it is not known if these cells remain dependent upon the niche microenvironments that regulate normal stem cells. We show that endothelial cells interact closely with self-renewing brain tumor cells and secrete factors that maintain these cells in a stem cell-like state. Increasing the number of endothelial cells or blood vessels in orthotopic brain tumor xenografts expanded the fraction of self-renewing cells and accelerated the initiation and growth of tumors. Conversely, depletion of blood vessels from xenografts ablated self-renewing cells from tumors and arrested tumor growth. We propose that brain CSCs are maintained within vascular niches that are important targets for therapeutic approaches.",
"title": ""
},
{
"docid": "ec5eedf75536fb0581ed876afb9d8502",
"text": "Recent works have been applying self-attention to various fields in computer vision and natural language processing. However, the memory and computational demands of existing self-attention operations grow quadratically with the spatiotemporal size of the input. This prohibits the application of self-attention on large inputs, e.g., long sequences, high-definition images, or large videos. To remedy this, this paper proposes a novel factorized attention (FA) module, which achieves the same expressive power as previous approaches with substantially less memory and computational consumption. The resource-efficiency allows more widespread and flexible application of it. Empirical evaluations on object recognition demonstrate the effectiveness of these advantages. FA-augmented models achieved state-ofthe-art performance for object detection and instance segmentation on MS-COCO. Further, the resource-efficiency of FA democratizes self-attention to fields where the prohibitively high costs currently prevent its application. The state-of-the-art result for stereo depth estimation on the Scene Flow dataset exemplifies this.",
"title": ""
},
{
"docid": "c8e446ab0dbdaf910b5fb98f672a35dc",
"text": "MinHash and SimHash are the two widely adopted Locality Sensitive Hashing (LSH) algorithms for large-scale data processing applications. Deciding which LSH to use for a particular problem at hand is an important question, which has no clear answer in the existing literature. In this study, we provide a theoretical answer (validated by experiments) that MinHash virtually always outperforms SimHash when the data are binary, as common in practice such as search. The collision probability of MinHash is a function of resemblance similarity (R), while the collision probability of SimHash is a function of cosine similarity (S). To provide a common basis for comparison, we evaluate retrieval results in terms of S for both MinHash and SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH with respect to S, by using a general inequality S ≤ R ≤ S 2−S . Our worst case analysis can show that MinHash significantly outperforms SimHash in high similarity region. Interestingly, our intensive experiments reveal that MinHash is also substantially better than SimHash even in datasets where most of the data points are not too similar to each other. This is partly because, in practical data, often R ≥ S z−S holds where z is only slightly larger than 2 (e.g., z ≤ 2.1). Our restricted worst case analysis by assuming S z−S ≤ R ≤ S 2−S shows that MinHash indeed significantly outperforms SimHash even in low similarity region. We believe the results in this paper will provide valuable guidelines for search in practice, especially when the data are sparse. Appearing in Proceedings of the 17 International Conference on Artificial Intelligence and Statistics (AISTATS) 2014, Reykjavik, Iceland. JMLR: W&CP volume 33. Copyright 2014 by the authors.",
"title": ""
},
{
"docid": "85d4675562eb87550c3aebf0017e7243",
"text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.",
"title": ""
},
{
"docid": "9fb9664eea84d3bc0f59f7c4714debc1",
"text": "International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the `smart' devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security.",
"title": ""
},
{
"docid": "c0a1b48688cd0269b787a17fa5d15eda",
"text": "Animating human character has become an active research area in computer graphics. It is really important for development of virtual environment applications such as computer games and virtual reality. One of the popular methods to animate the character is by using motion graph. Since motion graph is the main focus of this research, we investigate the preliminary work of motion graph and discuss about the main components of motion graph like distance metrics and motion transition. These two components will be taken into consideration during the process of development of motion graph. In this paper, we will also present a general framework and future plan of this study.",
"title": ""
},
{
"docid": "820727a0489e2d865288a7b5444eaa62",
"text": "For the networked control system (NCSs) with short network-induced time delay, the online fault detection method is proposed. The Markov jumping model for NCSs is established under the condition that the network-induced time delay can be governed by the Markov chain. The feasible solution of the reduced order fault detection filter is achieved based on the robust filtering method, and the non-convex optimization problem with the constraint of matrix rank is obtained. The local optimal solution of the optimization problem is considered based on the alternating projection method and the parameters of the fault detection filter are presented. Finally, the numerical examples show that the proposed approach can restrain the impact of the delays, and detect the faults quickly and effectively.",
"title": ""
},
{
"docid": "2aa4d40a0fb07996701c0148266ddc1b",
"text": "BACKGROUND/AIMS\nNeurodegenerative disorders (ND) have a major impact on quality of life (QoL) and place a substantial burden on patients, their families and carers; they are the second leading cause of disability. The objective of this study was to examine QoL in persons with ND.\n\n\nMETHODS\nA battery of subjective assessments was used, including the World Health Organization Quality of Life Questionnaire (WHOQOL-BREF) and the World Health Organization Quality of Life - Disability (WHOQOL-DIS). Psychometric properties of the WHOQOL-BREF and WHOQOL-DIS were investigated using classical psychometric methods.\n\n\nRESULTS\nParticipants (n = 149) were recruited and interviewed at two specialized centers to obtain information on health and disability perceptions, depressive symptoms (Hospital Anxiety and Depression Scale - Depression, HADS-D), Fatigue Assessment Scale (FAS), Satisfaction with Life (SWL), generic QoL (WHOQOL-BREF, WHOQOL-DIS), specific QoL (Multiple Sclerosis Impact Scale, MSIS-29; Parkinson's Disease Questionnaire, PDQ-39) and sociodemographics. Internal consistency was acceptable, except for the WHOQOL-BREF social (0.67). Associations, using Pearson's and Spearman's rho correlations, were confirmed between WHOQOL-BREF and WHOQOL-DIS with MSIS-29, PDQ-39, HADS-D, FAS and SWL. Regarding 'known group' differences, Student's t tests showed that WHOQOL-BREF and WHOQOL-DIS scores significantly discriminated between depressed and nondepressed and those perceiving a more severe impact of the disability on their lives.\n\n\nCONCLUSION\nThis study is the first to report on use of the WHOQOL-BREF and WHOQOL-DIS in Spanish persons with ND; they are promising useful tools in assessing persons with ND through the continuum of care, as they include important dimensions commonly omitted from other QoL measures.",
"title": ""
},
{
"docid": "3267c5a5f4ab9602d6f69c3d9d137c96",
"text": "This paper briefly discusses the measurement on soil moisture distribution using Electrical Capacitance Tomography (ECT) technique. ECT sensor with 12 electrodes was used for visualization measurement of permittivity distribution. ECT sensor was calibrated using low and high permittivity material i.e. dry sand and saturated soils (sand and clay) respectively. The measurements obtained were recorded and further analyzed by using Linear Back Projection (LBP) image reconstruction. Preliminary result shows that there is a positive correlation with increasing water volume.",
"title": ""
},
{
"docid": "caac45f02e29295d592ee784697c6210",
"text": "The studies included in this PhD thesis examined the interactions of syphilis, which is caused by Treponema pallidum, and HIV. Syphilis reemerged worldwide in the late 1990s and hereafter increasing rates of early syphilis were also reported in Denmark. The proportion of patients with concurrent HIV has been substantial, ranging from one third to almost two thirds of patients diagnosed with syphilis some years. Given that syphilis facilitates transmission and acquisition of HIV the two sexually transmitted diseases are of major public health concern. Further, syphilis has a negative impact on HIV infection, resulting in increasing viral loads and decreasing CD4 cell counts during syphilis infection. Likewise, HIV has an impact on the clinical course of syphilis; patients with concurrent HIV are thought to be at increased risk of neurological complications and treatment failure. Almost ten per cent of Danish men with syphilis acquired HIV infection within five years after they were diagnosed with syphilis during an 11-year study period. Interestingly, the risk of HIV declined during the later part of the period. Moreover, HIV-infected men had a substantial increased risk of re-infection with syphilis compared to HIV-uninfected men. As one third of the HIV-infected patients had viral loads >1,000 copies/ml, our conclusion supported the initiation of cART in more HIV-infected MSM to reduce HIV transmission. During a five-year study period, including the majority of HIV-infected patients from the Copenhagen area, we observed that syphilis was diagnosed in the primary, secondary, early and late latent stage. These patients were treated with either doxycycline or penicillin and the rate of treatment failure was similar in the two groups, indicating that doxycycline can be used as a treatment alternative - at least in an HIV-infected population. During a four-year study period, the T. pallidum strain type distribution was investigated among patients diagnosed by PCR testing of material from genital lesions. In total, 22 strain types were identified. HIV-infected patients were diagnosed with nine different strains types and a difference by HIV status was not observed indicating that HIV-infected patients did not belong to separate sexual networks. In conclusion, concurrent HIV remains common in patients diagnosed with syphilis in Denmark, both in those diagnosed by serological testing and PCR testing. Although the rate of syphilis has stabilized in recent years, a spread to low-risk groups is of concern, especially due to the complex symptomatology of syphilis. However, given the efficient treatment options and the targeted screening of pregnant women and persons at higher risk of syphilis, control of the infection seems within reach. Avoiding new HIV infections is the major challenge and here cART may play a prominent role.",
"title": ""
},
{
"docid": "4a84f6400edf8cf0d3a7245efae6e5f7",
"text": "The explosive use of social media also makes it a popular platform for malicious users, known as social spammers, to overwhelm normal users with unwanted content. One effective way for social spammer detection is to build a classifier based on content and social network information. However, social spammers are sophisticated and adaptable to game the system with fast evolving content and network patterns. First, social spammers continually change their spamming content patterns to avoid being detected. Second, reflexive reciprocity makes it easier for social spammers to establish social influence and pretend to be normal users by quickly accumulating a large number of “human” friends. It is challenging for existing anti-spamming systems based on batch-mode learning to quickly respond to newly emerging patterns for effective social spammer detection. In this paper, we present a general optimization framework to collectively use content and network information for social spammer detection, and provide the solution for efficient online processing. Experimental results on Twitter datasets confirm the effectiveness and efficiency of the proposed framework. Introduction Social media services, like Facebook and Twitter, are increasingly used in various scenarios such as marketing, journalism and public relations. While social media services have emerged as important platforms for information dissemination and communication, it has also become infamous for spammers who overwhelm other users with unwanted content. The (fake) accounts, known as social spammers (Webb et al. 2008; Lee et al. 2010), are a special type of spammers who coordinate among themselves to launch various attacks such as spreading ads to generate sales, disseminating pornography, viruses, phishing, befriending victims and then surreptitiously grabbing their personal information (Bilge et al. 2009), or simply sabotaging a system’s reputation (Lee et al. 2010). The problem of social spamming is a serious issue prevalent in social media sites. Characterizing and detecting social spammers can significantly improve the quality of user experience, and promote the healthy use and development of a social networking system. Following spammer detection in traditional platforms like Email and the Web (Chen et al. 2012), some efforts have Copyright c 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. been devoted to detect spammers in various social networking sites, including Twitter (Lee et al. 2010), Renren (Yang et al. 2011), Blogosphere (Lin et al. 2007), etc. Existing methods can generally be divided into two categories. First category is to employ content analysis for detecting spammers in social media. Profile-based features (Lee et al. 2010) such as content and posting patterns are extracted to build an effective supervised learning model, and the model is applied on unseen data to filter social spammers. Another category of methods is to detect spammers via social network analysis (Ghosh et al. 2012). A widely used assumption in the methods is that spammers cannot establish an arbitrarily large number of social trust relations with legitimate users. The users with relatively low social influence or social status in the network will be determined as spammers. Traditional spammer detection methods become less effective due to the fast evolution of social spammers. First, social spammers show dynamic content patterns in social media. 
Spammers’ content information changes too fast to be detected by a static anti-spamming system based on offline modeling (Zhu et al. 2012). Spammers continue to change their spamming strategies and pretend to be normal users to fool the system. A built system may become less effective when the spammers create many new, evasive accounts. Second, many social media sites like Twitter have become a target of link farming (Ghosh et al. 2012). The reflexive reciprocity (Weng et al. 2010; Hu et al. 2013b) indicates that many users simply follow back when they are followed by someone for the sake of courtesy. It is easier for spammers to acquire a large number of follower links in social media. Thus, with the perceived social influence, they can avoid being detected by network-based methods. Similar results targeting other platforms such as Renren (Yang et al. 2011) have been reported in the literature as well. Existing systems rely on building a new model to capture newly emerging content-based and network-based patterns of social spammers. Given the rapidly evolving nature, it is necessary to have a framework that efficiently reflects the effect of newly emerging data. One efficient approach to incrementally update an existing model in large-scale data analysis is online learning. Online learning has been studied for years and has shown its effectiveness in many applications such as image and video processing (Mairal et al. 2009) and human computer interaction.",
"title": ""
},
{
"docid": "49445cfa92b95045d23a54eca9f9a592",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. In the past decade, several data mining techniques have been proposed in the literature for predicting the churners using heterogeneous customer records. This paper reviews the different categories of customer data available in open datasets, predictive models and performance metrics used in the literature for churn prediction in telecom industry.",
"title": ""
},
{
"docid": "0108144cd6a40b8a6cd66517db3bad5e",
"text": "Level designers create gameplay through geometry, AI scripting, and item placement. There is little formal understanding of this process, but rather a large body of design lore and rules of thumb. As a result, there is no accepted common language for describing the building blocks of level design and the gameplay they create. This paper presents level design patterns for first-person shooter (FPS) games, providing cause-effect relationships between level design elements and gameplay. These patterns allow designers to create more interesting and varied levels.",
"title": ""
},
{
"docid": "19550da9d3aac45f228c814efdb19f2e",
"text": "M. B. SHORT∗, M. R. D’ORSOGNA†, V. B. PASOUR∗, G. E. TITA‡, P. J. BRANTINGHAM§, A. L. BERTOZZI∗ and L. B. CHAYES∗ ∗Department of Mathematics, University of California — Los Angeles, Los Angeles, CA 90095, USA †Department of Mathematics, California State University — Northridge, Los Angeles, CA 91330, USA ‡Department of Criminology, Law & Society, University of California — Irvine, Irvine, CA 92697, USA §Department of Anthropology, University of California — Los Angeles, Los Angeles, CA 90095, USA",
"title": ""
}
] |
scidocsrr
|
3ab9770cd1d0b44dea7df39c094a1e85
|
A procedural texture generation framework based on semantic descriptions
|
[
{
"docid": "60f94e4336d8e406097dd880f8054089",
"text": "In order to improve the retrieval accuracy of content-based image retrieval systems, research focus has been shifted from designing sophisticated low-level feature extraction algorithms to reducing the ‘semantic gap’ between the visual features and the richness of human semantics. This paper attempts to provide a comprehensive survey of the recent technical achievements in high-level semantic-based image retrieval. Major recent publications are included in this survey covering different aspects of the research in this area, including low-level image feature extraction, similarity measurement, and deriving high-level semantic features. We identify five major categories of the state-of-the-art techniques in narrowing down the ‘semantic gap’: (1) using object ontology to define high-level concepts; (2) using machine learning methods to associate low-level features with query concepts; (3) using relevance feedback to learn users’ intention; (4) generating semantic template to support high-level image retrieval; (5) fusing the evidences from HTML text and the visual content of images for WWW image retrieval. In addition, some other related issues such as image test bed and retrieval performance evaluation are also discussed. Finally, based on existing technology and the demand from real-world applications, a few promising future research directions are suggested. 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a0ca7d86ae79c263644c8cd5ae4c0aed",
"text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.",
"title": ""
},
{
"docid": "394ba7f036d578def70082b8c31f315f",
"text": "With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multi-level semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets.",
"title": ""
}
] |
[
{
"docid": "dd95ccbc58a3a084eb0ca353f5a5b976",
"text": "This paper presents a 0 mW two-channel 28 GHz bi-directional phased-array chip packaged using flip-chip interconnects in 45nm CMOS SOI. The design alternates switched-LC phase shifters with switched attenuators to result in 5-bit phase control with an rms gain and phase error <0.8 dB and 5°, respectively at 25–33 GHz. In the RX mode, the measured gain is −10 dB and the NF is 10 dB with an input P1dB of 5 dBm. In the TX mode, the measured output P1dB is −2 dBm. This work presents an efficient solution for the construction of high-linearity and high-power phased-array base-stations by combining GaAs front-ends with a passive silicon core chip.",
"title": ""
},
{
"docid": "19b283a1438058088f9f9e337dd5aac7",
"text": "Analysis on Web search query logs has revealed that there is a large portion of entity-bearing queries, reflecting the increasing demand of users on retrieving relevant information about entities such as persons, organizations, products, etc. In the meantime, significant progress has been made in Web-scale information extraction, which enables efficient entity extraction from free text. Since an entity is expected to capture the semantic content of documents and queries more accurately than a term, it would be interesting to study whether leveraging the information about entities can improve the retrieval accuracy for entity-bearing queries. In this paper, we propose a novel retrieval approach, i.e., latent entity space (LES), which models the relevance by leveraging entity profiles to represent semantic content of documents and queries. In the LES, each entity corresponds to one dimension, representing one semantic relevance aspect. We propose a formal probabilistic framework to model the relevance in the high-dimensional entity space. Experimental results over TREC collections show that the proposed LES approach is effective in capturing latent semantic content and can significantly improve the search accuracy of several state-of-the-art retrieval models for entity-bearing queries.",
"title": ""
},
{
"docid": "408122795467ff0247f95a997a1ed90a",
"text": "With the popularity of mobile devices, photo retargeting has become a useful technique that adapts a high-resolution photo onto a low-resolution screen. Conventional approaches are limited in two aspects. The first factor is the de-emphasized role of semantic content that is many times more important than low-level features in photo aesthetics. Second is the importance of image spatial modeling: toward a semantically reasonable retargeted photo, the spatial distribution of objects within an image should be accurately learned. To solve these two problems, we propose a new semantically aware photo retargeting that shrinks a photo according to region semantics. The key technique is a mechanism transferring semantics of noisy image labels (inaccurate labels predicted by a learner like an SVM) into different image regions. In particular, we first project the local aesthetic features (graphlets in this work) onto a semantic space, wherein image labels are selectively encoded according to their noise level. Then, a category-sharing model is proposed to robustly discover the semantics of each image region. The model is motivated by the observation that the semantic distribution of graphlets from images tagged by a common label remains stable in the presence of noisy labels. Thereafter, a spatial pyramid is constructed to hierarchically encode the spatial layout of graphlet semantics. Based on this, a probabilistic model is proposed to enforce the spatial layout of a retargeted photo to be maximally similar to those from the training photos. Experimental results show that (1) noisy image labels predicted by different learners can improve the retargeting performance, according to both qualitative and quantitative analysis, and (2) the category-sharing model stays stable even when 32.36% of image labels are incorrectly predicted.",
"title": ""
},
{
"docid": "6351eeab8ac61cc08115f067b69399a3",
"text": "Electrocution is one of the rarest modes of suicide. In this case, one school going adolescent committed suicide by electrocution using bare electric wire. This is a rare case of suicidal death by applying live wires around the wrists, simulating the act of judicial electrocution. He positioned himself on armed chair and placed the nude wire loops from a cable around both wrists and switched on the current by plugging in to nearest socket by foot. There were linear electric contact wounds completely encircling around the both wrists. In addition to these linear electric burns all around wrists, there were electrical burns over both hands. This death highlights the need of supervision and close watch on children for self-destructing activities and behavior. This case also highlights unusual method adopted by adolescent to end his life.",
"title": ""
},
{
"docid": "406fab96a8fd49f4d898a9735ee1512f",
"text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.",
"title": ""
},
{
"docid": "c9ea42872164e65424498c6a5c5e0c6d",
"text": "Inverse problems appear in many applications, such as image deblurring and inpainting. The common approach to address them is to design a specific algorithm for each problem. The Plug-and-Play (P&P) framework, which has been recently introduced, allows solving general inverse problems by leveraging the impressive capabilities of existing denoising algorithms. While this fresh strategy has found many applications, a burdensome parameter tuning is often required in order to obtain high-quality results. In this paper, we propose an alternative method for solving inverse problems using off-the-shelf denoisers, which requires less parameter tuning. First, we transform a typical cost function, composed of fidelity and prior terms, into a closely related, novel optimization problem. Then, we propose an efficient minimization scheme with a P&P property, i.e., the prior term is handled solely by a denoising operation. Finally, we present an automatic tuning mechanism to set the method’s parameters. We provide a theoretical analysis of the method and empirically demonstrate its competitiveness with task-specific techniques and the P&P approach for image inpainting and deblurring.",
"title": ""
},
{
"docid": "37a7de366210c2c56ec0f64992b71bef",
"text": "In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesauri-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We experiment with our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers and MSCOCO. Evaluation results demonstrate that our model outperforms sequence to sequence, attention-based and bi-directional LSTM models on BLEU, METEOR, TER and an embedding-based sentence similarity metric.",
"title": ""
},
{
"docid": "f83a16d393c78d6ba0e65a4659446e7e",
"text": "Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https://bitbucket.org/columbiadvmm/cdc.",
"title": ""
},
{
"docid": "3a502851ee6df1d210d709d8e8d4b831",
"text": "CREATION onsumers today have more choices of products and services than ever before, but they seem dissatisfied. Firms invest in greater product variety but are less able to differentiate themselves. Growth and value creation have become the dominant themes for managers. In this paper, we explain this paradox. The meaning of value and the process of value creation are rapidly shifting from a product-and firm-centric view to personalized consumer experiences. Informed, networked, empowered, and active consumers are increasingly co-creating value with the firm. The interaction between the firm and the consumer is becoming the locus of value creation and value extraction. As value shifts to experiences, the market is becoming a forum for conversation and interactions between consumers, consumer communities, and firms. It is this dialogue, access, transparency, and understanding of risk-benefits that is central to the next practice in value creation.",
"title": ""
},
{
"docid": "8bc0f3e8b2ab07a61b8d44f2873fea65",
"text": "Detection of arbitrarily rotated objects is a challenging task due to the difficulties of locating the multi-angle objects and separating them effectively from the background. The existing methods are not robust to angle varies of the objects because of the use of traditional bounding box, which is a rotation variant structure for locating rotated objects. In this article, a new detection method is proposed which applies the newly defined rotatable bounding box (RBox). The proposed detector (DRBox) can effectively handle the situation where the orientation angles of the objects are arbitrary. The training of DRBox forces the detection networks to learn the correct orientation angle of the objects, so that the rotation invariant property can be achieved. DRBox is tested to detect vehicles, ships and airplanes on satellite images, compared with Faster R-CNN and SSD, which are chosen as the benchmark of the traditional bounding box based methods. The results shows that DRBox performs much better than traditional bounding box based methods do on the given tasks, and is more robust against rotation of input image and target objects. Besides, results show that DRBox correctly outputs the orientation angles of the objects, which is very useful for locating multi-angle objects efficiently. The code and models are available at https://github.com/liulei01/DRBox.",
"title": ""
},
{
"docid": "b8bcd83f033587533d7502c54a2b67da",
"text": "The development of structural health monitoring (SHM) technology has evolved for over fifteen years in Hong Kong since the implementation of the “Wind And Structural Health Monitoring System (WASHMS)” on the suspension Tsing Ma Bridge in 1997. Five cable-supported bridges in Hong Kong, namely the Tsing Ma (suspension) Bridge, the Kap Shui Mun (cable-stayed) Bridge, the Ting Kau (cable-stayed) Bridge, the Western Corridor (cable-stayed) Bridge, and the Stonecutters (cable-stayed) Bridge, have been instrumented with sophisticated long-term SHM systems. These SHM systems mainly focus on the tracing of structural behavior and condition of the long-span bridges over their lifetime. Recently, a structural health monitoring and maintenance management system (SHM&MMS) has been designed and will be implemented on twenty-one sea-crossing viaduct bridges with a total length of 9,283 km in the Hong Kong Link Road (HKLR) of the Hong Kong – Zhuhai – Macao Bridge of which the construction commenced in mid-2012. The SHM&MMS gives more emphasis on durability monitoring of the reinforced concrete viaduct bridges in marine environment and integration of the SHM system and bridge maintenance management system. It is targeted to realize the transition from traditional corrective and preventive maintenance to condition-based maintenance (CBM) of in-service bridges. The CBM uses real-time and continuous monitoring data and monitoring-derived information on the condition of bridges (including structural performance and deterioration mechanisms) to identify when the actual maintenance is necessary and how cost-effective maintenance can be conducted. This paper outlines how to incorporate SHM technology into bridge maintenance strategy to realize CBM management of bridges.",
"title": ""
},
{
"docid": "f3fd1337e6c9eaef2e8345582afe706b",
"text": "Although the mammalian basal ganglia have long been implicated in motor behavior, it is generally recognized that the behavioral functions of this subcortical group of structures are not exclusively motoric in nature. Extensive evidence now indicates a role for the basal ganglia, in particular the dorsal striatum, in learning and memory. One prominent hypothesis is that this brain region mediates a form of learning in which stimulus-response (S-R) associations or habits are incrementally acquired. Support for this hypothesis is provided by numerous neurobehavioral studies in different mammalian species, including rats, monkeys, and humans. In rats and monkeys, localized brain lesion and pharmacological approaches have been used to examine the role of the basal ganglia in S-R learning. In humans, study of patients with neurodegenerative diseases that compromise the basal ganglia, as well as research using brain neuroimaging techniques, also provide evidence of a role for the basal ganglia in habit learning. Several of these studies have dissociated the role of the basal ganglia in S-R learning from those of a cognitive or declarative medial temporal lobe memory system that includes the hippocampus as a primary component. Evidence suggests that during learning, basal ganglia and medial temporal lobe memory systems are activated simultaneously and that in some learning situations competitive interference exists between these two systems.",
"title": ""
},
{
"docid": "bef64076bf62d9e8fbb6fbaf5534fdc6",
"text": "This paper presents an application of PageRank, a random-walk model originally devised for ranking Web search results, to ranking WordNet synsets in terms of how strongly they possess a given semantic property. The semantic properties we use for exemplifying the approach are positivity and negativity, two properties of central importance in sentiment analysis. The idea derives from the observation that WordNet may be seen as a graph in which synsets are connected through the binary relation “a term belonging to synset sk occurs in the gloss of synset si”, and on the hypothesis that this relation may be viewed as a transmitter of such semantic properties. The data for this relation can be obtained from eXtended WordNet, a publicly available sensedisambiguated version of WordNet. We argue that this relation is structurally akin to the relation between hyperlinked Web pages, and thus lends itself to PageRank analysis. We report experimental results supporting our intuitions.",
"title": ""
},
{
"docid": "42dfa7988f31403dba1c390741aa164c",
"text": "This study explored friendship variables in relation to body image, dietary restraint, extreme weight-loss behaviors (EWEBs), and binge eating in adolescent girls. From 523 girls, 79 friendship cliques were identified using social network analysis. Participants completed questionnaires that assessed body image concerns, eating, friendship relations, and psychological family, and media variables. Similarity was greater for within than for between friendship cliques for body image concerns, dietary restraint, and EWLBs, but not for binge eating. Cliques high in body image concerns and dieting manifested these concerns in ways consistent with a high weight/shape-preoccupied subculture. Friendship attitudes contributed significantly to the prediction of individual body image concern and eating behaviors. Use of EWLBs by friends predicted an individual's own level of use.",
"title": ""
},
{
"docid": "0b1b4c8d501c3b1ab350efe4f2249978",
"text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. The trajectory tracking control scheme is validated using an iRobot Packbot's parameteric model estimated from experimental data.",
"title": ""
},
{
"docid": "e2fb4ed617cffabba2f28b95b80a30b3",
"text": "The importance of information security education, information security training, and information security awareness in organisations cannot be overemphasised. This paper presents working definitions for information security education, information security training and information security awareness. An investigation to determine if any differences exist between information security education, information security training and information security awareness was conducted. This was done to help institutions understand when they need to train or educate employees and when to introduce information security awareness programmes. A conceptual analysis based on the existing literature was used for proposing working definitions, which can be used as a reference point for future information security researchers. Three important attributes (namely focus, purpose and method) were identified as the distinguishing characteristics of information security education, information security training and information security awareness. It was found that these information security concepts are different in terms of their focus, purpose and methods of delivery.",
"title": ""
},
{
"docid": "914f9bf7d24d0a0ee8c42e1263a04646",
"text": "With the rapid growth in the usage of social networks worldwide, uploading and sharing of user-generated content, both text and visual, has become increasingly prevalent. An analysis of the content a user shares and engages with can provide valuable insights into an individual's preferences and lifestyle. In this paper, we present a system to automatically infer a user's interests by analysing the content of the photos they share online. We propose a way to leverage web image search engines for detecting high-level semantic concepts, such as interests, in images, without relying on a large set of labeled images. We demonstrate the effectiveness of our system through quantitative and qualitative results on data collected from Instagram.",
"title": ""
},
{
"docid": "2c7fe5484b2184564d71a03f19188251",
"text": "This paper focuses on running scans in a main memory data processing system at \"bare metal\" speed. Essentially, this means that the system must aim to process data at or near the speed of the processor (the fastest component in most system configurations). Scans are common in main memory data processing environments, and with the state-of-the-art techniques it still takes many cycles per input tuple to apply simple predicates on a single column of a table. In this paper, we propose a technique called BitWeaving that exploits the parallelism available at the bit level in modern processors. BitWeaving operates on multiple bits of data in a single cycle, processing bits from different columns in each cycle. Thus, bits from a batch of tuples are processed in each cycle, allowing BitWeaving to drop the cycles per column to below one in some case. BitWeaving comes in two flavors: BitWeaving/V which looks like a columnar organization but at the bit level, and BitWeaving/H which packs bits horizontally. In this paper we also develop the arithmetic framework that is needed to evaluate predicates using these BitWeaving organizations. Our experimental results show that both these methods produce significant performance benefits over the existing state-of-the-art methods, and in some cases produce over an order of magnitude in performance improvement.",
"title": ""
},
{
"docid": "36bf7c66b222006e1c286450595be824",
"text": "Recent terrorist attacks carried out on behalf of ISIS on American and European soil by lone wolf attackers or sleeper cells remind us of the importance of understanding the dynamics of radicalization mediated by social media communication channels. In this paper, we shed light on the social media activity of a group of twenty-five thousand users whose association with ISIS online radical propaganda has been manually verified. By using a computational tool known as dynamic activity-connectivity maps, based on network and temporal activity patterns, we investigate the dynamics of social influence within ISIS supporters. We finally quantify the effectiveness of ISIS propaganda by determining the adoption of extremist content in the general population and draw a parallel between radical propaganda and epidemics spreading, highlighting that information broadcasters and influential ISIS supporters generate highly-infectious cascades of information contagion. Our findings will help generate effective countermeasures to combat the group and other forms of online extremism.",
"title": ""
},
{
"docid": "249367e508f61804642ae37e27d70901",
"text": "For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.",
"title": ""
}
] |
scidocsrr
|
1c7c07922c4c58c0bac2412a701fac81
|
Metatrace: Online Step-size Tuning by Meta-gradient Descent for Reinforcement Learning Control
|
[
{
"docid": "cdc3a11e556cb73f5629135cbb5f0527",
"text": "Reinforcement learning methods are often considered as a potential solution to enable a robot to adapt to changes in real time to an unpredictable environment. However, with continuous action, only a few existing algorithms are practical for real-time learning. In such a setting, most effective methods have used a parameterized policy structure, often with a separate parameterized value function. The goal of this paper is to assess such actor-critic methods to form a fully specified practical algorithm. Our specific contributions include 1) developing the extension of existing incremental policy-gradient algorithms to use eligibility traces, 2) an empirical comparison of the resulting algorithms using continuous actions, 3) the evaluation of a gradient-scaling technique that can significantly improve performance. Finally, we apply our actor-critic algorithm to learn on a robotic platform with a fast sensorimotor cycle (10ms). Overall, these results constitute an important step towards practical real-time learning control with continuous action.",
"title": ""
},
{
"docid": "4a6523b16ebe8bfa04421530f6252bd5",
"text": "Representations are fundamental to artificial intelligence. The performance of a learning system depends on how the data is represented. Typically, these representations are hand-engineered using domain knowledge. Recently, the trend is to learn these representations through stochastic gradient descent in multi-layer neural networks, which is called backprop. Learning representations directly from the incoming data stream reduces human labour involved in designing a learning system. More importantly, this allows in scaling up a learning system to difficult tasks. In this paper, we introduce a new incremental learning algorithm called crossprop, that learns incoming weights of hidden units based on the meta-gradient descent approach. This meta-gradient descent approach was previously introduced by Sutton (1992) and Schraudolph (1999) for learning stepsizes. The final update equation introduces an additional memory parameter for each of these weights and generalizes the backprop update equation. From our empirical experiments, we show that crossprop learns and reuses its feature representation while tackling new and unseen tasks whereas backprop relearns a new feature representation.",
"title": ""
}
] |
[
{
"docid": "d20068a72753d8c7238b1c0734ed5b2e",
"text": "Left atrial ablation is increasingly used to treat patients with symptomatic atrial fibrillation (AF). Prior to ablation, exclusion of left atrial appendage (LAA) thrombus is important. Whether ECG-gated dual-source computed tomography (DSCT) provides a sensitive means of detecting LAA thrombus in patients undergoing percutaneous AF ablation is unknown. Thus, we sought to determine the utility of ECG-gated DSCT in detecting LAA thrombus in patients with AF. A total of 255 patients (age 58 ± 11 years, 78% male, ejection fraction 58 ± 9%) who underwent ECG-gated DSCT and transesophageal echocardiography (TEE) prior to AF ablation between February 2006 and October 2007 were included. CHADS2 score and demographic data were obtained prospectively. Gated DSCT images were independently reviewed by two cardiac imagers blinded to TEE findings. The LAA was either defined as normal (fully opacified) or abnormal (under-filled) by DSCT. An under-filled LAA was identified in 33 patients (12.9%), of whom four had thrombus confirmed by TEE. All patients diagnosed with LAA thrombus using TEE also had an abnormal LAA by gated DSCT. Thus, sensitivity and specificity for gated DSCT were 100% and 88%, respectively. No cases of LAA filling defects were observed in patients <51 years old with a CHADS2 of 0. In patients referred for AF ablation, thrombus is uncommon in the absence of additional risk factors. Gated DSCT provides excellent sensitivity for the detection of thrombus. Thus, in AF patients with a CHADS2 of 0, gated DSCT may provide a useful stand-alone imaging modality.",
"title": ""
},
{
"docid": "4abceedb1f6c735a8bc91bc811ce4438",
"text": "The study of school bullying has recently assumed an international dimension, but is faced with difficulties in finding terms in different languages to correspond to the English word bullying. To investigate the meanings given to various terms, a set of 25 stick-figure cartoons was devised, covering a range of social situations between peers. These cartoons were shown to samples of 8- and 14-year-old pupils (N = 1,245; n = 604 at 8 years, n = 641 at 14 years) in schools in 14 different countries, who judged whether various native terms cognate to bullying, applied to them. Terms from 10 Indo-European languages and three Asian languages were sampled. Multidimensional scaling showed that 8-year-olds primarily discriminated nonaggressive and aggressive cartoon situations; however, 14-year-olds discriminated fighting from physical bullying, and also discriminated verbal bullying and social exclusion. Gender differences were less appreciable than age differences. Based on the 14-year-old data, profiles of 67 words were then constructed across the five major cartoon clusters. The main types of terms used fell into six groups: bullying (of all kinds), verbal plus physical bullying, solely verbal bullying, social exclusion, solely physical aggression, and mainly physical aggression. The findings are discussed in relation to developmental trends in how children understand bullying, the inferences that can be made from cross-national studies, and the design of such studies.",
"title": ""
},
{
"docid": "d2421e2458f6f2ce55cb9664542a7ea8",
"text": "Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operating the sensor network for a long period of time. In [12], a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. There is some fixed amount of energy cost in the electronics when transmitting or receiving a packet and a variable cost when transmitting a packet which depends on the distance of transmission. If each node transmits its sensed data directly to the base station, then it will deplete its power quickly. The LEACH protocol presented in [12] is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster-heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. An improved version of LEACH, called LEACH-C, is presented in [14], where the central base station performs the clustering to improve energy efficiency. In this paper, we present an improved scheme, called PEGASIS (Power-Efficient GAthering in Sensor Information Systems), which is a near-optimal chain-based protocol that minimizes energy. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 200 percent when 1 percent, 25 percent, 50 percent, and 100 percent of nodes die for different network sizes and topologies. For many applications, in addition to minimizing energy, it is also important to consider the delay incurred in gathering sensed data. We capture this with the energy delay metric and present schemes that attempt to balance the energy and delay cost for data gathering from sensor networks. Since most of the delay factor is in the transmission time, we measure delay in terms of number of transmissions to accomplish a round of data gathering. Therefore, delay can be reduced by allowing simultaneous transmissions when possible in the network. With CDMA capable sensor nodes [11], simultaneous data transmissions are possible with little interference. In this paper, we present two new schemes to minimize energy delay using CDMA and non-CDMA sensor nodes. If the goal is to minimize only the delay cost, then a binary combining scheme can be used to accomplish this task in about logN units of delay with parallel communications and incurring a slight increase in energy cost. With CDMA capable sensor nodes, a chain-based binary scheme performs best in terms of energy delay. If the sensor nodes are not CDMA capable, then parallel communications are possible only among spatially separated nodes and a chain-based 3-level hierarchy scheme performs well. We compared the performance of direct, LEACH, and our schemes with respect to energy delay using extensive simulations for different network sizes. Results show that our schemes perform 80 or more times better than the direct scheme and also outperform the LEACH protocol.",
"title": ""
},
{
"docid": "1d162153d7bbaf63991f79aa92eeae6e",
"text": "We describe a contextual parser for the Robot Commands Treebank, a new crowdsourced resource. In contrast to previous semantic parsers that select the most-probable parse, we consider the different problem of parsing using additional situational context to disambiguate between different readings of a sentence. We show that multiple semantic analyses can be searched using dynamic programming via interaction with a spatial planner, to guide the parsing process. We are able to parse sentences in near linear-time by ruling out analyses early on that are incompatible with spatial context. We report a 34% upper bound on accuracy, as our planner correctly processes spatial context for 3,394 out of 10,000 sentences. However, our parser achieves a 96.53% exactmatch score for parsing within the subset of sentences recognized by the planner, compared to 82.14% for a non-contextual parser.",
"title": ""
},
{
"docid": "4f43c8ba81a8b828f225923690e9f7dd",
"text": "Melody extraction algorithms aim to produce a sequence of frequency values corresponding to the pitch of the dominant melody from a musical recording. Over the past decade, melody extraction has emerged as an active research topic, comprising a large variety of proposed algorithms spanning a wide range of techniques. This article provides an overview of these techniques, the applications for which melody extraction is useful, and the challenges that remain. We start with a discussion of ?melody? from both musical and signal processing perspectives and provide a case study that interprets the output of a melody extraction algorithm for specific excerpts. We then provide a comprehensive comparative analysis of melody extraction algorithms based on the results of an international evaluation campaign. We discuss issues of algorithm design, evaluation, and applications that build upon melody extraction. Finally, we discuss some of the remaining challenges in melody extraction research in terms of algorithmic performance, development, and evaluation methodology.",
"title": ""
},
{
"docid": "563f331d3ab4ae7e7f6282276a792b88",
"text": "The exponential growth of the data may lead us to the information explosion era, an era where most of the data cannot be managed easily. Text mining study is believed to prevent the world from entering that era. One of the text mining studies that may prevent the explosion era is text classification. It is a way to classify articles into several predefined categories. In this research, the classifier implements TF-IDF algorithm. TF-IDF is an algorithm that counts the word weight by considering frequency of the word (TF) and in how many files the word can be found (IDF). Since the IDF could see the in how many files a term can be found, it can control the weight of each word. When a word can be found in so many files, it will be considered as an unimportant word. TF-IDF has been proven to create a classifier that could classify news articles in Bahasa Indonesia in a high accuracy; 98.3%.",
"title": ""
},
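The TF-IDF record above weighs a term by its in-document frequency and penalizes terms that occur in many files. A minimal sketch of that weighting, assuming the standard log-scaled IDF (the paper does not spell out its exact TF/IDF variant, and the toy tokens below are made up):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for a list of tokenized documents.
    TF = term count / document length; IDF = log(N / document frequency)."""
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency: one count per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        length = len(doc)
        weights.append({
            term: (count / length) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["ekonomi", "pasar", "saham"],
        ["sepakbola", "pasar", "transfer"],
        ["ekonomi", "inflasi", "pasar"]]
for w in tfidf(docs):
    print(w)
```

Note how a term occurring in every document (here "pasar") receives an IDF of zero, which is exactly the "unimportant word" behavior the abstract describes.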
{
"docid": "f66c9aa537630fdbff62d8d49205123b",
"text": "This workshop will explore community based repositories for educational data and analytic tools that are used to connect researchers and reduce the barriers to data sharing. Leading innovators in the field, as well as attendees, will identify and report on bottlenecks that remain toward our goal of a unified repository. We will discuss these as well as possible solutions. We will present LearnSphere, an NSF funded system that supports collaborating on and sharing a wide variety of educational data, learning analytics methods, and visualizations while maintaining confidentiality. We will then have hands-on sessions in which attendees have the opportunity to apply existing learning analytics workflows to their choice of educational datasets in the repository (using a simple drag-and-drop interface), add their own learning analytics workflows (requires very basic coding experience), or both. Leaders and attendees will then jointly discuss the unique benefits as well as the limitations of these solutions. Our goal is to create building blocks to allow researchers to integrate their data and analysis methods with others, in order to advance the future of learning science.",
"title": ""
},
{
"docid": "2e5a3cd852a53b018032804f77088d03",
"text": "A general method for text localization and recognition in real-world images is presented. The proposed method is novel, as it (i) departs from a strict feed-forward pipeline and replaces it by a hypothesesverification framework simultaneously processing multiple text line hypotheses, (ii) uses synthetic fonts to train the algorithm eliminating the need for time-consuming acquisition and labeling of real-world training data and (iii) exploits Maximally Stable Extremal Regions (MSERs) which provides robustness to geometric and illumination conditions. The performance of the method is evaluated on two standard datasets. On the Char74k dataset, a recognition rate of 72% is achieved, 18% higher than the state-of-the-art. The paper is first to report both text detection and recognition results on the standard and rather challenging ICDAR 2003 dataset. The text localization works for number of alphabets and the method is easily adapted to recognition of other scripts, e.g. cyrillics.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "4936f1c5dfa5da581c4bcaf147050041",
"text": "With the popularity of social networks, such as mi-croblogs and Twitter, topic inference for short text is increasingly significant and essential for many content analysis tasks. Biterm topic model (BTM) is superior to conventional topic models in uncovering latent semantic relevance for short text. However, Gibbs sampling employed by BTM is very time consuming when inferring topics, especially for large-scale datasets. It requires O{K) operations per sample for K topics, where K denotes the number of topics in the corpus. In this paper, we propose an acceleration algorithm of BTM, FastBTM, using an efficient sampling method for BTM which only requires O(1) amortized time while the traditional ones scale linearly with the number of topics. FastBTM is based on Metropolis-Hastings and alias method, both of which have been widely adopted in latent Dirichlet allocation (LDA) model and achieved outstanding speedup. We carry out a number of experiments on Tweets2011 Collection dataset and Enron dataset, indicating that our method is robust enough for both short texts and normal documents. Our work can be approximately 9 times faster than traditional Gibbs sampling method per iteration, when setting K = 1000. The source code of FastBTM can be obtained from https://github.com/paperstudy/FastBTM.",
"title": ""
},
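The FastBTM record above credits its O(1) amortized sampling to combining Metropolis-Hastings with the alias method. The alias-table construction and constant-time draw can be sketched generically as follows; this is not the authors' code, and the topic distribution is illustrative:

```python
import random

def build_alias_table(probs):
    """Walker's alias method: O(K) setup, O(1) sampling from a discrete distribution."""
    k = len(probs)
    scaled = [p * k for p in probs]
    alias, prob = [0] * k, [0.0] * k
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l          # column s keeps mass prob[s], rest goes to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:                       # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    """Draw one index in O(1): pick a column uniformly, then flip a biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

topic_probs = [0.5, 0.2, 0.2, 0.1]
prob, alias = build_alias_table(topic_probs)
draws = [alias_sample(prob, alias) for _ in range(10000)]
print([draws.count(t) / 10000 for t in range(len(topic_probs))])
```

In FastBTM-style samplers the table is rebuilt only occasionally, so the O(K) setup cost is amortized over many O(1) draws.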
{
"docid": "4e7443088eedf5e6199959a06ebc420c",
"text": "The development of computational-intelligence based strategies for electronic markets has been the focus of intense research. In order to be able to design efficient and effective automated trading strategies, one first needs to understand the workings of the market, the strategies that traders use and their interactions as well as the patterns emerging as a result of these interactions. In this paper, we develop an agent-based model of the FX market which is the market for the buying and selling of currencies. Our agent-based model of the FX market (ABFXM) comprises heterogeneous trading agents which employ a strategy that identifies and responds to periodic patterns in the price time series. We use the ABFXM to undertake a systematic exploration of its constituent elements and their impact on the stylized facts (statistical patterns) of transactions data. This enables us to identify a set of sufficient conditions which result in the emergence of the stylized facts similarly to the real market data, and formulate a model which closely approximates the stylized facts. We use a unique high frequency dataset of historical transactions data which enables us to run multiple simulation runs and validate our approach and draw comparisons and conclusions for each market setting.",
"title": ""
},
{
"docid": "9e32c4fed9c9aecfba909fd82287336b",
"text": "StructuredQueryLanguage injection (SQLi) attack is a code injection techniquewherehackers injectSQLcommandsintoadatabaseviaavulnerablewebapplication.InjectedSQLcommandscan modifytheback-endSQLdatabaseandthuscompromisethesecurityofawebapplication.Inthe previouspublications,theauthorhasproposedaNeuralNetwork(NN)-basedmodelfordetections andclassificationsof theSQLiattacks.Theproposedmodelwasbuiltfromthreeelements:1)a UniformResourceLocator(URL)generator,2)aURLclassifier,and3)aNNmodel.Theproposed modelwas successful to:1)detect eachgeneratedURLaseitherabenignURLoramalicious, and2)identifythetypeofSQLiattackforeachmaliciousURL.Thepublishedresultsprovedthe effectivenessoftheproposal.Inthispaper,theauthorre-evaluatestheperformanceoftheproposal throughtwoscenariosusingcontroversialdatasets.Theresultsoftheexperimentsarepresentedin ordertodemonstratetheeffectivenessoftheproposedmodelintermsofaccuracy,true-positiverate aswellasfalse-positiverate. KeyWoRDS Artificial Intelligence, Databases, Intrusion Detection, Machine Learning, Neural Networks, SQL Injection Attacks, Web Attacks",
"title": ""
},
{
"docid": "37f21df75f695dc6161272ca4c184849",
"text": "Recent genetic studies of the Tuareg have begun to uncover the origin of this semi-nomadic northwest African people and their relationship with African populations. For centuries they were caravan traders plying the trade routes between the Mediterranean coast and south-Saharan Africa. Their origin most likely coincides with the fall of the Garamantes who inhabited the Fezzan (Libya) between the 1st millennium BC and the 5th century AD. In this study we report novel data on the Y-chromosome variation in the Libyan Tuareg from Al Awaynat and Tahala, two villages in Fezzan, whose maternal genetic pool was previously characterized. High-resolution investigation of 37 Y-chromosome STR loci and analysis of 35 bi-allelic markers in 47 individuals revealed a predominant northwest African component (E-M81, haplogroup E1b1b1b) which likely originated in the second half of the Holocene in the same ancestral population that contributed to the maternal pool of the Libyan Tuareg. A significant paternal contribution from south-Saharan Africa (E-U175, haplogroup E1b1a8) was also detected, which may likely be due to recent secondary introduction, possibly through slavery practices or fusion between different tribal groups. The difference in haplogroup composition between the villages of Al Awaynat and Tahala suggests that founder effects and drift played a significant role in shaping the genetic pool of the Libyan Tuareg.",
"title": ""
},
{
"docid": "ab0994331a2074fe9b635342fed7331c",
"text": "This paper investigates to identify the requirement and the development of machine learning-based mobile big data analysis through discussing the insights of challenges in the mobile big data (MBD). Furthermore, it reviews the state-of-the-art applications of data analysis in the area of MBD. Firstly, we introduce the development of MBD. Secondly, the frequently adopted methods of data analysis are reviewed. Three typical applications of MBD analysis, namely wireless channel modeling, human online and offline behavior analysis, and speech recognition in the internet of vehicles, are introduced respectively. Finally, we summarize the main challenges and future development directions of mobile big data analysis.",
"title": ""
},
{
"docid": "192e124432b9ba8dfbb9b8e8b1c42a76",
"text": "Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network recognize a formal language or predict the next symbol of a sequence, the next logical step is to understand the information processing carried out by the network. Some researchers have begun to extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes how sensitivity to initial conditions and discrete measurements can trick these extraction methods to return illusory finite state descriptions. INTRODUCTION Formal language learning (Gold, 1969) has been a topic of concern for cognitive science and artificial intelligence. It is the task of inducing a computational description of a formal language from a sequence of positive and negative examples of strings in the target language. Neural information processing approaches to this problem involve the use of recurrent networks that embody the internal state mechanisms underlying automata models (Cleeremans et al., 1989; Elman, 1990; Pollack, 1991; Giles et al, 1992; Watrous & Kuhn, 1992). Unlike traditional automata-based approaches, learning systems relying on recurrent networks have an additional burden: we are still unsure as to what these networks are doing.Some researchers have assumed that the networks are learning to simulate finite state Fool’s Gold: Extracting Finite State Machines From Recurrent Network Dynamics machines (FSMs) in their state dynamics and have begun to extract FSMs from the networks' state transition dynamics (Cleeremans et al., 1989; Giles et al., 1992; Watrous & Kuhn, 1992). These extraction methods employ various clustering techniques to partition the internal state space of the recurrent network into a finite number of regions corresponding to the states of a finite state automaton. This assumption of finite state behavior is dangerous on two accounts. First, these extraction techniques are based on a discretization of the state space which ignores the basic definition of information processing state. Second, discretization can give rise to incomplete computational explanations of systems operating over a continuous state space. SENSITIVITY TO INITIAL CONDITIONS In this section, I will demonstrate how sensitivity to initial conditions can confuse an FSM extraction system. The basis of this claim rests upon the definition of information processing state. Information processing (IP) state is the foundation underlying automata theory. Two IP states are the same if and only if they generate the same output responses for all possible future inputs (Hopcroft & Ullman, 1979). This definition is the fulcrum for many proofs and techniques, including finite state machine minimization. Any FSM extraction technique should embrace this definition, in fact it grounds the standard FSM minimization methods and the physical system modelling of Crutchfield and Young (Crutchfield & Young, 1989). Some dynamical systems exhibit exponential divergence for nearby state vectors, yet remain confined within an attractor. This is known as sensitivity to initial conditions. If this divergent behavior is quantized, it appears as nondeterministic symbol sequences (Crutchfield & Young, 1989) even though the underlying dynamical system is completely deterministic (Figure 1). Consider a recurrent network with one output and three recurrent state units. The output unit performs a threshold at zero activation for state unit one. 
That is, when the activation of the first state unit of the current state is less than zero then the output is A. Otherwise, the output is B. Equation 1 presents a mathematical description. is the current state of the system is the current output.",
"title": ""
},
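The abstract above describes a three-state-unit recurrent network whose output is A when the first state unit is negative and B otherwise, and argues that quantizing the continuous state trajectory can suggest an illusory finite-state machine. A toy sketch of that setup (the weights, the tanh update, and the two-bin partition are assumptions made for illustration, not the paper's network):

```python
import math

def step(state, weights):
    """One recurrent update: tanh of a linear map of the previous state."""
    return [math.tanh(sum(w * s for w, s in zip(row, state))) for row in weights]

def output(state):
    """Output unit: threshold at zero on the first state unit."""
    return "A" if state[0] < 0 else "B"

def quantize(state, bins=2):
    """Coarse partition of the state space, as used by FSM-extraction methods."""
    return tuple(min(int((s + 1) / 2 * bins), bins - 1) for s in state)

weights = [[1.8, -1.2, 0.4],
           [0.9,  1.5, -1.1],
           [-0.7, 1.3,  1.6]]
state = [0.1, -0.2, 0.3]
for t in range(10):
    state = step(state, weights)
    print(t, output(state), quantize(state))
```

The point of the paper is that the sequence of quantized cells can look like transitions of a small automaton even when the underlying continuous dynamics are not finite-state, so the extracted machine may be an artifact of the discretization.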
{
"docid": "72c6a7a2d64c266a7555b373b21dcba0",
"text": "Antivirus companies, mobile application marketplaces, and the security research community, employ techniques based on dynamic code analysis to detect and analyze mobile malware. In this paper, we present a broad range of anti-analysis techniques that malware can employ to evade dynamic analysis in emulated Android environments. Our detection heuristics span three different categories based on (i) static properties, (ii) dynamic sensor information, and (iii) VM-related intricacies of the Android Emulator. To assess the effectiveness of our techniques, we incorporated them in real malware samples and submitted them to publicly available Android dynamic analysis systems, with alarming results. We found all tools and services to be vulnerable to most of our evasion techniques. Even trivial techniques, such as checking the value of the IMEI, are enough to evade some of the existing dynamic analysis frameworks. We propose possible countermeasures to improve the resistance of current dynamic analysis tools against evasion attempts.",
"title": ""
},
{
"docid": "ed0310fca65866e4d0a5efeb9107cc32",
"text": "Plastics are key resources in circular economy and recycling after the end of useful life with economic value creation and minimal damage to environment is the key to their sustainable management. Studies in a large stream of researches have explored impregnating waste plastics in concrete and reported encouraging results with multiple benefits. The present study makes a critical review of some of these findings and gleans some common useful trends in the properties reported in these studies. The study also presents results of experimental work on bricks made of: non-recyclable waste thermoplastic granules constituting 0 to 10% by weight, fly ash 15%, cement 15% and sand making up the remainder. The bricks were cured under water for 28 days and baked at temperature ranging from 90 o C to 110 o C for 2 hours. The key characteristics of these bricks are found to be lightweight, porous, of low thermal conductivity, and of appreciable mechanical strengths. Though such bricks hold promise, no similar study appears to have been reported so far. Unlike other processes of making porous bricks, which usually involve incineration to burn combustible materials in order to form pores with implication of high carbon emission, the proposed process is non-destructive in that the bricks are merely baked at low temperature, sufficient to melt the waste plastic that gets diffused within the body of the bricks. The compressive strengths after addition of waste plastic to the extent of 10% by weight is about 17MPa that is in conformity with the minimum specified in the ASTM standards. The bricks are likely to add energy efficiency in buildings and help create economic value to manufacturers, thereby, encouraging the ecosystem of plastic waste management involving all actors in the value chain. A mathematical model is developed to predict compressive strength of bricks at varying plastic contents. The study introduces a new strand of research on sustainable thermoplastic waste management.",
"title": ""
},
{
"docid": "eb31d3d6264e3a6aba0753b5ba14f572",
"text": "Using aggregate product search data from Amazon.com, we jointly estimate consumer information search and online demand for consumer durable goods. To estimate the demand and search primitives, we introduce an optimal sequential search process into a model of choice and treat the observed marketlevel product search data as aggregations of individual-level optimal search sequences. The model builds on the dynamic programming framework by Weitzman (1979) and combines it with a choice model. It can accommodate highly complex demand patterns at the market level. At the individual level, the model has a number of attractive properties in estimation, including closed-form expressions for the probability distribution of alternative sets of searched goods and breaking the curse of dimensionality. Using numerical experiments, we verify the model's ability to identify the heterogeneous consumer tastes and search costs from product search data. Empirically, the model is applied to the online market for camcorders and is used to answer manufacturer questions about market structure and competition, and to address policy maker issues about the e ect of selectively lowered search costs on consumer surplus outcomes. We nd that consumer search for camcorders at Amazon.com is typically limited to little over 10 choice options, and that this a ects the estimates of own and cross elasticities. In a policy simulation, we also nd that the vast majority of the households bene t from the Amazon.com's product recommendations via lower search costs.",
"title": ""
}
] |
scidocsrr
|
ac2ebaef3cebd6ba3b7e0580fb660f39
|
Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters
|
[
{
"docid": "6e0442a73d3201bafc7fa33e5f5fd7b8",
"text": "Gyroscope is playing a key role in helping estimate camera rotation during mobile video capture. The fusion of gyroscope and visual measurements needs the knowledge of camera projection parameters, the gyroscope bias and the relative orientation between gyroscope and camera. Moreover, the timestamps of gyroscope and video frames are usually not well synchronized. In this paper, we propose an online method that estimates all the necessary parameters while capturing videos. Our contributions are (1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter, and (2) generalization of coplanarity constraint of camera rotation in a rolling shutter camera model for cellphones. The proposed method is able to accurately estimate the needed parameters online with all kinds of camera motion, and can be embedded in gyro-aided applications such as video stabilization and feature tracking.",
"title": ""
}
] |
[
{
"docid": "484f6a3bd0679db1bf00fd9d53b53b74",
"text": "The paper presents the Intelligent Control System Laboratory's (ICSL) Cooperative Autonomous Mobile Robot technologies and their application to intelligent vehicles for cities. The deployed decision and control algorithms made the road-scaled vehicles capable of undertaking cooperative autonomous maneuvers. Because the focus of ICSL's research is in decision and control algorithms, it is therefore reasonable to consider replacing or upgrading the sensors used with more recent road sensory concepts as produced by other research groups. While substantial progress has been made, there are still some issues that need to be addressed such as: decision and control algorithms for navigating roundabouts, real-time integration of all data, and decision-making algorithms to enable intelligent vehicles to choose the driving maneuver as they go. With continued research, it is feasible that cooperative autonomous vehicles will coexist alongside human drivers in the not-too-distant future.",
"title": ""
},
{
"docid": "d502ae7ef127cbad62050b109e304fe4",
"text": "Increasingly, data is published in the form of semantic graphs. The most notable example is the Linked Open Data (LOD) initiative where an increasing number of data sources are published in the Semantic Web’s Resource Description Framework and where the various data sources are linked to reference one another. In this paper we apply machine learning to semantic graph data and argue that scalability and robustness can be achieved via an urn-based statistical sampling scheme. We apply the urn model to the SUNS framework which is based on multivariate prediction. We argue that multivariate prediction approaches are most suitable for dealing with the resulting high-dimensional sparse data matrix. Within the statistical framework, the approach scales up to large domains and is able to deal with highly sparse relationship data. We summarize experimental results using a friend-of-a-friend data set and a data set derived from DBpedia. In more detail, we describe novel experiments on disease gene prioritization using LOD data sources. The experiments confirm the ease-of-use, the scalability and the good performance of the approach.",
"title": ""
},
{
"docid": "aaf075f849b4e61f57aa2451cdccad70",
"text": "The spatial relation between mitochondria and endoplasmic reticulum (ER) in living HeLa cells was analyzed at high resolution in three dimensions with two differently colored, specifically targeted green fluorescent proteins. Numerous close contacts were observed between these organelles, and mitochondria in situ formed a largely interconnected, dynamic network. A Ca2+-sensitive photoprotein targeted to the outer face of the inner mitochondrial membrane showed that, upon opening of the inositol 1,4,5-triphosphate (IP3)-gated channels of the ER, the mitochondrial surface was exposed to a higher concentration of Ca2+ than was the bulk cytosol. These results emphasize the importance of cell architecture and the distribution of organelles in regulation of Ca2+ signaling.",
"title": ""
},
{
"docid": "0ff159433ed8958109ba8006822a2d67",
"text": "In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text summaries written by humans. We show that our technique has higher agreement with human judgment than pixel-based distance metrics. We also release text annotations and ground-truth text summaries for a number of publicly available video datasets, for use by the computer vision community.",
"title": ""
},
{
"docid": "ca8bb290339946e2d3d3e14c01023aa5",
"text": "OBJECTIVE\nTo establish a centile chart of cervical length between 18 and 32 weeks of gestation in a low-risk population of women.\n\n\nMETHODS\nA prospective longitudinal cohort study of women with a low risk, singleton pregnancy using public healthcare facilities in Cape Town, South Africa. Transvaginal measurement of cervical length was performed between 16 and 32 weeks of gestation and used to construct centile charts. The distribution of cervical length was determined for gestational ages and was used to establish estimates of longitudinal percentiles. Centile charts were constructed for nulliparous and multiparous women together and separately.\n\n\nRESULTS\nCentile estimation was based on data from 344 women. Percentiles showed progressive cervical shortening with increasing gestational age. Averaged over the entire follow-up period, mean cervical length was 1.5 mm shorter in nulliparous women compared with multiparous women (95% CI, 0.4-2.6).\n\n\nCONCLUSIONS\nEstablishment of longitudinal reference values of cervical length in a low-risk population will contribute toward a better understanding of cervical length in women at risk for preterm labor.",
"title": ""
},
{
"docid": "ceb6d99e16e2e93e57e65bf1ca89b44c",
"text": "The common use of smart devices encourages potential attackers to violate privacy. Sometimes taking control of one device allows the attacker to obtain secret data (such as password for home WiFi network) or tools to carry out DoS attack, and this, despite the limited resources of such devices. One of the solutions for gaining users’ confidence is to assign responsibility for detecting attacks to the service provider, particularly Internet Service Provider (ISP). It is possible, since ISP often provides also the Home Gateway (HG)—device that has multiple roles: residential router, entertainment center, and home’s “command and control” center which allows to manage the Smart Home entities. The ISP may extend this set of functionalities by implementing an intrusion detection software in HG provisioned to their customers. In this article we propose an Intrusion Detection System (IDS) distributed between devices residing at user’s and ISP’s premises. The Home Gateway IDS and the ISP’s IDS constitute together a distributed structure which allows spreading computations related to attacks against Smart Home ecosystem. On the other hand, it also leverages the operator’s knowledge of security incidents across the customer premises. This distributed structure is supported by the ISP’s expert system that helps to detect distributed attacks i.e., using botnets.",
"title": ""
},
{
"docid": "8a243d17a61f75ef9a881af120014963",
"text": "This paper presents a Deep Mayo Predictor model for predicting the outcomes of the matches in IPL 9 being played in April – May, 2016. The model has three components which are based on multifarious considerations emerging out of a deeper analysis of T20 cricket. The models are created using Data Analytics methods from machine learning domain. The prediction accuracy obtained is high as the Mayo Predictor Model is able to correctly predict the outcomes of 39 matches out of the 56 matches played in the league stage of the IPL IX tournament. Further improvement in the model can be attempted by using a larger training data set than the one that has been utilized in this work. No such effort at creating predictor models for cricket matches has been reported in the literature.",
"title": ""
},
{
"docid": "ac9fe51482b2cf6bba00d08dd2c228fe",
"text": "By focusing on the decoding process to improve coding efficiency, we propose an effective video coding method by employing parameter estimation in the decoding process. A bitrate reduction can be achieved when the proposed method is applied to the DC transform coefficient and the motion vector of H.264. 0.25%-0.84%.",
"title": ""
},
{
"docid": "f32ff72da2f90ed0e5279815b0fb10e0",
"text": "We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with an objective to maximizing the overall cell capacity. An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.",
"title": ""
},
{
"docid": "e04f1d787b897fb941e147dcf253ce4f",
"text": "1. Developmental Perspective on the Evolution of Behavioral Strategies: Approach 1 2. Evidence of Ontogenetic Behavioral Linkages and Dependencies 3 2.1 Early Developmental Origins of Behavioral Variation 3 2.2 Trade-offs in Neural Processes and Personality 5 2.3 Maintenance of Variation in Behavioral Expression Along Trade-off Axes 8 3. Developmental Origins of Behavioral Variation 10 3.1 Design Principles of the Brain and Mechanisms Underlying Neural Tradeoffs 10 3.2 Developmental Channeling: Mechanism for Separating Individuals Along Trade-off Axes 12 4. Applying the Concept of Developmental Channeling: Dispersal Strategies as an Example 17 4.1 Evolution of Dispersal Strategies 17 4.2 Maternally Induced Dispersal Behavior 21 5. Conclusion and Future Directions 25 Acknowledgments 27 References 27",
"title": ""
},
{
"docid": "e0f797ff66a81b88bbc452e86864d7bc",
"text": "A key challenge in radar micro-Doppler classification is the difficulty in obtaining a large amount of training data due to costs in time and human resources. Small training datasets limit the depth of deep neural networks (DNNs), and, hence, attainable classification accuracy. In this work, a novel method for diversifying Kinect-based motion capture (MOCAP) simulations of human micro-Doppler to span a wider range of potential observations, e.g. speed, body size, and style, is proposed. By applying three transformations, a small set of MOCAP measurements is expanded to generate a large training dataset for network initialization of a 30-layer deep residual neural network. Results show that the proposed training methodology and residual DNN yield improved bottleneck feature performance and the highest overall classification accuracy among other DNN architectures, including transfer learning from the 1.5 million sample ImageNet database.",
"title": ""
},
{
"docid": "f6d81abce568dd297f0bf0f0b6fff837",
"text": "Recently, there emerged revived interests of designing automatic programs (e.g., using genetic/evolutionary algorithms) to optimize the structure of Convolutional Neural Networks (CNNs) [1] for a specific task. The challenge in designing such programs lies in how to balance between large search space of the network structures and high computational costs. Existing works either impose strong restrictions on the search space or use enormous computing resources. In this paper, we study how to design a genetic programming approach for optimizing the structure of a CNN for a given task under limited computational resources yet without imposing strong restrictions on the search space. To reduce the computational costs, we propose two general strategies that are observed to be helpful: (i) aggressively selecting strongest individuals for survival and reproduction, and killing weaker individuals at a very early age; (ii) increasing mutation frequency to encourage diversity and faster evolution. The combined strategy with additional optimization techniques allows us to explore a large search space but with affordable computational costs. Our results on standard benchmark datasets (MNIST [1], SVHN [2], CIFAR-10 [3], CIFAR-100 [3]) are competitive to similar approaches with significantly reduced computational costs.",
"title": ""
},
{
"docid": "4725b14e7c336c720ce4eb7747fa3ad9",
"text": "The support vector machine (SVM) has provided higher performance than traditional learning machines and has been widely applied in real-world classification problems and nonlinear function estimation problems. Unfortunately, the training process of the SVM is sensitive to the outliers or noises in the training set. In this paper, a common misunderstanding of Gaussian-function-based kernel fuzzy clustering is corrected, and a kernel fuzzy c-means clustering-based fuzzy SVM algorithm (KFCM-FSVM) is developed to deal with the classification problems with outliers or noises. In the KFCM-FSVM algorithm, we first use the FCM clustering to cluster each of two classes from the training set in the high-dimensional feature space. The farthest pair of clusters, where one cluster comes from the positive class and the other from the negative class, is then searched and forms one new training set with membership degrees. Finally, we adopt FSVM to induce the final classification results on this new training set. The computational complexity of the KFCM-FSVM algorithm is analyzed. A set of experiments is conducted on six benchmarking datasets and four artificial datasets for testing the generalization performance of the KFCM-FSVM algorithm. The results indicate that the KFCM-FSVM algorithm is robust for classification problems with outliers or noises.",
"title": ""
},
{
"docid": "0cb0d05320a9de415b51c99e4766bbeb",
"text": "We propose a novel approach for developing a two-stage document-level discourse parser. Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intrasentential parsing and the other for multisentential parsing. We present two approaches to combine these two stages of discourse parsing effectively. A set of empirical evaluations over two different datasets demonstrates that our discourse parser significantly outperforms the stateof-the-art, often by a wide margin.",
"title": ""
},
{
"docid": "137449952a30730185552ed6fca4d8ba",
"text": "BACKGROUND\nPoor sleep quality and depression negatively impact the health-related quality of life of patients with type 2 diabetes, but the combined effect of the two factors is unknown. This study aimed to assess the interactive effects of poor sleep quality and depression on the quality of life in patients with type 2 diabetes.\n\n\nMETHODS\nPatients with type 2 diabetes (n = 944) completed the Diabetes Specificity Quality of Life scale (DSQL) and questionnaires on sleep quality and depression. The products of poor sleep quality and depression were added to the logistic regression model to evaluate their multiplicative interactions, which were expressed as the relative excess risk of interaction (RERI), the attributable proportion (AP) of interaction, and the synergy index (S).\n\n\nRESULTS\nPoor sleep quality and depressive symptoms both increased DSQL scores. The co-presence of poor sleep quality and depressive symptoms significantly reduced DSQL scores by a factor of 3.96 on biological interaction measures. The relative excess risk of interaction was 1.08. The combined effect of poor sleep quality and depressive symptoms was observed only in women.\n\n\nCONCLUSIONS\nPatients with both depressive symptoms and poor sleep quality are at an increased risk of reduction in diabetes-related quality of life, and this risk is particularly high for women due to the interaction effect. Clinicians should screen for and treat sleep difficulties and depressive symptoms in patients with type 2 diabetes.",
"title": ""
},
{
"docid": "1f9615010f9b44f05cbaa77a443f4c35",
"text": "OBJECTIVE\nTo evaluate the safety, tolerability and efficacy of a probiotic supplementation for Helicobacter pylori (H. pylori) eradication therapy.\n\n\nDESIGN\nConsecutive adult naive patients with a diagnosis of H. pylori infection who were prescribed eradication therapy according to clinical practice (10-day triple or nonbismuth quadruple concomitant therapy) randomly received probiotics (1 × 109 colony-forming units each strain, Lactobacillus plantarum and Pediococcus acidilactici) or matching placebo. Side effects at the end of the treatment, measured through a modified De Boer Scale, were the primary outcome. Secondary outcomes were compliance with therapy and eradication rates.\n\n\nRESULTS\nA total of 209 patients (33% triple therapy, 66% non-bismuth quadruple therapy) were included [placebo (n = 106) or probiotic (n = 103)]. No differences were observed regarding side effects at the end of the treatment between groups (β -0.023, P 0.738). Female gender (P < 0.001) and quadruple therapy (P 0.007) were independent predictors of side effects. No differences in compliance were observed, regardless of the study group or eradication therapy. Eradication rates were similar between groups [placebo 95% (95% confidence interval (CI), 89% to 98%) vs probiotic 97% (95% CI, 92% to 99%), P 0.721]. There were no relevant differences in cure rates (>90% in all cases) between triple and quadruple concomitant therapy.\n\n\nCONCLUSION\nProbiotic supplementation containing Lactobacillus Plantarum and Pediococcus acidilactici to H. pylori treatment neither decreased side effects nor improved compliance with therapy or eradication rates.",
"title": ""
},
{
"docid": "4aa8316315617aec4c076a7679482fa9",
"text": "Continuous integration (CI) systems automate the compilation, building, and testing of software. Despite CI rising as a big success story in automated software engineering, it has received almost no attention from the research community. For example, how widely is CI used in practice, and what are some costs and benefits associated with CI? Without answering such questions, developers, tool builders, and researchers make decisions based on folklore instead of data. In this paper, we use three complementary methods to study the usage of CI in open-source projects. To understand which CI systems developers use, we analyzed 34,544 open-source projects from GitHub. To understand how developers use CI, we analyzed 1,529,291 builds from the most commonly used CI system. To understand why projects use or do not use CI, we surveyed 442 developers. With this data, we answered several key questions related to the usage, costs, and benefits of CI. Among our results, we show evidence that supports the claim that CI helps projects release more often, that CI is widely adopted by the most popular projects, as well as finding that the overall percentage of projects using CI continues to grow, making it important and timely to focus more research on CI.",
"title": ""
},
{
"docid": "3412d99c29f7672fe3846173c9a4d734",
"text": "In the last decade, the ease of online payment has opened up many new opportunities for e-commerce, lowering the geographical boundaries for retail. While e-commerce is still gaining popularity, it is also the playground of fraudsters who try to misuse the transparency of online purchases and the transfer of credit card records. This paper proposes APATE, a novel approach to detect fraudulent credit card ∗NOTICE: this is the author’s version of a work that was accepted for publication in Decision Support Systems in May 8, 2015, published online as a self-archive copy after the 24 month embargo period. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Please cite this paper as follows: Van Vlasselaer, V., Bravo, C., Caelen, O., Eliassi-Rad, T., Akoglu, L., Snoeck, M., Baesens, B. (2015). APATE: A novel approach for automated credit card transaction fraud detection using network-based extensions. Decision Support Systems, 75, 38-48. Available Online: http://www.sciencedirect.com/science/article/pii/S0167923615000846",
"title": ""
},
{
"docid": "a81e4b95dfaa7887f66066343506d35f",
"text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.",
"title": ""
},
{
"docid": "1a4d07d9a48668f7fa3bcf301c25f7f2",
"text": "A novel low-loss planar dielectric waveguide is proposed. It is based on a high-permittivity dielectric slab parallel to a metal ground. The guiding channel is limited at the sides by a number of air holes which are lowering the effective permittivity. A mode with the electric field primarily parallel to the ground plane is used, similar to the E11x mode of an insulated image guide. A rather thick gap layer between the ground and the high-permittivity slab makes this mode to show the highest effective permittivity. The paper discusses the mode dispersion behaviour and presents measured characteristics of a power divider circuit operating at a frequency of about 8 GHz. Low leakage of about 14% is observed at the discontinuities forming the power divider. Using a compact dipole antenna structure, excitation efficiency of more than 90% is obtained.",
"title": ""
}
] |
scidocsrr
|
3e76a4192bab7ac7b3396e0fe067870c
|
A humanitarian supply chain process reference model
|
[
{
"docid": "95196bd9be49b426217b7d81fc51a04b",
"text": "This paper builds on the idea that private sector logistics can and should be applied to improve the performance of disaster logistics but that before embarking on this the private sector needs to understand the core capabilities of humanitarian logistics. With this in mind, the paper walks us through the complexities of managing supply chains in humanitarian settings. It pinpoints the cross learning potential for both the humanitarian and private sectors in emergency relief operations as well as possibilities of getting involved through corporate social responsibility. It also outlines strategies for better preparedness and the need for supply chains to be agile, adaptable and aligned—a core competency of many humanitarian organizations involved in disaster relief and an area which the private sector could draw on to improve their own competitive edge. Finally, the article states the case for closer collaboration between humanitarians, businesses and academics to achieve better and more effective supply chains to respond to the complexities of today’s logistics be it the private sector or relieving the lives of those blighted by disaster. Journal of the Operational Research Society (2006) 57, 475–489. doi:10.1057/palgrave.jors.2602125 Published online 14 December 2005",
"title": ""
}
] |
[
{
"docid": "5b00cd4f4906f64e0d8d51444d255457",
"text": "33 I s Alice authorized to enter this facility? Is Bob entitled to access this Web site or privileged informa-tion? Are we administering our service exclusively to the enrolled users? Does Charlie have a criminal record? Every day, a variety of organizations pose questions such as these about personal recognition. One emerging technology that is becoming more widespread in such organizations is biometrics—automatic personal recognition based on physiological or behavioral characteristics. 1 The term comes from the Greek words bios (life) and metrikos (measure). To make a personal recognition, biometrics relies on who you are or what you do—as opposed to what you know (such as a password) or what you have (such as an ID card). Biometrics has several advantages compared with traditional recognition. In some applications, it can either replace or supplement existing technologies; in others, it is the only viable approach to personal recognition. With the increasing infrastructure for reliable automatic personal recognition and for associating an identity with other personal behavior, concern is naturally growing over whether this information might be abused to violate individuals' rights to anonymity. We argue here, however, that the accountable, responsible use of biometric systems can in fact protect individual privacy. What biological measurements qualify as biometrics? Any human physiological or behavioral trait can serve as a biometric characteristic as long as it satisfies the following requirements: • Universality. Each person should have the characteristic. • Distinctiveness. Any two persons should be different in terms of the characteristic. • Permanence. The characteristic should be sufficiently invariant (with respect to the matching criterion) over a period of time. • Collectibility. The characteristic should be quantitatively measurable. However, for a practical biometric system, we must also consider issues of performance, acceptability, and cir-cumvention. In other words, a practical system must meet accuracy, speed, and resource requirements, and it must be harmless to the users, accepted by the intended population, and sufficiently robust to various fraudulent methods and attacks. A biometric system is essentially a pattern-recognition system that recognizes a person based on a feature vector derived from a specific physiological or behavioral characteristic that the person possesses. Depending on the application context, a biometric system typically operates in one of two modes: verification or identification. (Throughout this article, we use the generic term \" recognition \" where we do not wish to distinguish between verification and identification.) In verification mode, the system validates …",
"title": ""
},
{
"docid": "da8e929b1599b3241e75e4a1ead06207",
"text": "The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. An earlier version of this paper presented a revised knowledge-KM pyramid that included processes such as filtering and sense making, reversed the pyramid by positing there was more knowledge than data, and showed knowledge management as an extraction of the pyramid. This paper expands the revised knowledge pyramid to include the Internet of Things and Big Data. The result is a revision of the data aspect of the knowledge pyramid. Previous thought was of data as reflections of reality as recorded by sensors. Big Data and the Internet of Things expand sensors and readings to create two layers of data. The top layer of data is the traditional transaction / operational data and the bottom layer of data is an expanded set of data reflecting massive data sets and sensors that are near mirrors of reality. The result is a knowledge pyramid that appears as an hourglass.",
"title": ""
},
{
"docid": "0898171c91896106b7c58d20e219ab23",
"text": "Blockchain may help solve several complex problems related to securing the integrity and trustworthiness of rapid, distributed, complex energy transactions and data exchanges. In a move towards grid resilience, blockchain commoditizes trust and enables automated smart contracts to support auditable multiparty transactions based on predefined rules between distributed energy providers and customers. Blockchain based smart contracts also help remove the need to interact with third-parties, facilitating the adoption and monetization of distributed energy transactions and exchanges, both energy flows as well as financial transactions. This may help reduce transactive energy costs and increase the security and sustainability of distributed energy resource (DER) integration, helping to remove barriers to a more decentralized and resilient power grid. This paper explores the application of blockchain and smart contracts to improve smart grid cyber resiliency and secure transactive energy applications.",
"title": ""
},
{
"docid": "0869fee5888a97f424856570f2b9dc2c",
"text": "This paper evaluates the four leading techniques proposed in the literature for construction of prediction intervals (PIs) for neural network point forecasts. The delta, Bayesian, bootstrap, and mean-variance estimation (MVE) methods are reviewed and their performance for generating high-quality PIs is compared. PI-based measures are proposed and applied for the objective and quantitative assessment of each method's performance. A selection of 12 synthetic and real-world case studies is used to examine each method's performance for PI construction. The comparison is performed on the basis of the quality of generated PIs, the repeatability of the results, the computational requirements and the PIs variability with regard to the data uncertainty. The obtained results in this paper indicate that: 1) the delta and Bayesian methods are the best in terms of quality and repeatability, and 2) the MVE and bootstrap methods are the best in terms of low computational load and the width variability of PIs. This paper also introduces the concept of combinations of PIs, and proposes a new method for generating combined PIs using the traditional PIs. Genetic algorithm is applied for adjusting the combiner parameters through minimization of a PI-based cost function subject to two sets of restrictions. It is shown that the quality of PIs produced by the combiners is dramatically better than the quality of PIs obtained from each individual method.",
"title": ""
},
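Of the four PI-construction methods compared in the record above, the bootstrap is the simplest to sketch. The following pairs-bootstrap illustration uses a linear model as a stand-in for the neural network point forecaster and omits the noise-variance term that a full prediction interval would add, so it should be read as an assumption-laden outline rather than the paper's exact procedure:

```python
import random

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b (stand-in for a neural network forecaster)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def bootstrap_pi(xs, ys, x_new, n_boot=500, alpha=0.05):
    """Pairs bootstrap: refit on resampled data, collect predictions, take percentiles."""
    preds = []
    for _ in range(n_boot):
        idx = [random.randrange(len(xs)) for _ in range(len(xs))]
        a, b = fit_linear([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    preds.sort()
    lo = preds[int(alpha / 2 * n_boot)]
    hi = preds[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

random.seed(1)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + random.gauss(0, 0.5) for x in xs]
print(bootstrap_pi(xs, ys, x_new=3.0))
```

The paper's point about the bootstrap is visible even in this sketch: the method is easy to implement and its interval width varies directly with the resampled fits, at the price of repeated model training.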
{
"docid": "c6242a396cae32fa4d2a740674dd26fd",
"text": "Automatic word segmentation is a basic requirement for unsupervised learning in morphological analysis. In this paper, we formulate a novel recursivemethod for minimum description length (MDL) word segmentation, whose basic operation isresegmenting the corpus on a prefix (equivalently, a suffix). We derive a local expression for the change in description length under resegmentation, i.e., one which depends only on properties of the specific prefix (not on the rest of the corpus). Such a formulation permits use of a new and efficient algorithm for greedy morphological segmentation of the corpus in a recursive manner. In particular, our method does not restrict words to be segmented only once, into a stem+affix form, as do many extant techniques. Early results for English and Turkish corpora are promising.",
"title": ""
},
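The MDL segmentation record above scores a candidate resegmentation on a prefix by the change in description length. A generic way to compute such a cost from segment counts is sketched below; the unigram corpus cost and the 8-bits-per-character lexicon cost are simplifying assumptions, not the authors' local expression:

```python
import math
from collections import Counter

def description_length(segmented_corpus):
    """Code length (in bits) of a corpus under a unigram model over segments,
    plus a simple lexicon cost of 8 bits per character per lexicon entry."""
    counts = Counter(seg for word in segmented_corpus for seg in word)
    total = sum(counts.values())
    corpus_bits = -sum(c * math.log2(c / total) for c in counts.values())
    lexicon_bits = sum(len(seg) * 8 for seg in counts)   # assumed per-character cost
    return corpus_bits + lexicon_bits

def resegment_on_prefix(corpus, prefix):
    """Split every segment that starts with `prefix` into prefix + remainder."""
    out = []
    for word in corpus:
        new = []
        for seg in word:
            if seg.startswith(prefix) and len(seg) > len(prefix):
                new.extend([prefix, seg[len(prefix):]])
            else:
                new.append(seg)
        out.append(new)
    return out

corpus = [["walking"], ["walked"], ["talking"], ["talked"], ["jumps"]]
before = description_length(corpus)
after = description_length(resegment_on_prefix(corpus, "walk"))
print(before, after, after - before)   # a negative change favors the resegmentation
```

A greedy segmenter in this spirit would repeatedly pick the prefix (or suffix) whose resegmentation gives the largest decrease, stopping when no candidate reduces the total description length.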
{
"docid": "93df3ce5213252f8ae7dbd396ebb71bd",
"text": "Role-Based Access Control (RBAC) has been the dominant access control model in industry since the 1990s. It is widely implemented in many applications, including major cloud platforms such as OpenStack, AWS, and Microsoft Azure. However, due to limitations of RBAC, there is a shift towards Attribute-Based Access Control (ABAC) models to enhance flexibility by using attributes beyond roles and groups. In practice, this shift has to be gradual since it is unrealistic for existing systems to abruptly adopt ABAC models, completely eliminating current RBAC implementations.In this paper, we propose an ABAC extension with user attributes for the OpenStack Access Control (OSAC) model and demonstrate its enforcement utilizing the Policy Machine (PM) developed by the National Institute of Standards and Technology. We utilize some of the PM's components along with a proof-of-concept implementation to enforce this ABAC extension for OpenStack, while keeping OpenStack's current RBAC architecture in place. This provides the benefits of enhancing access control flexibility with support of user attributes, while minimizing the overhead of altering the existing OpenStack access control framework. We present use cases to depict added benefits of our model and show enforcement results. We then evaluate the performance of our proposed ABAC extension, and discuss its applicability and possible performance enhancements.",
"title": ""
},
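The record above motivates moving from roles to attribute-based checks. A minimal sketch of an ABAC-style decision function is given below; the attribute names and policy rules are invented for illustration and do not reflect OpenStack's or the Policy Machine's actual interfaces:

```python
def abac_permit(user_attrs, resource_attrs, action, policies):
    """Grant access if any policy rule is satisfied by the user/resource attributes."""
    return any(rule(user_attrs, resource_attrs, action) for rule in policies)

# Example policy: project members may read; only users with clearance "high"
# belonging to the owning department may delete. (Both rules are hypothetical.)
policies = [
    lambda u, r, a: a == "read" and r["project"] in u["projects"],
    lambda u, r, a: a == "delete" and u["clearance"] == "high"
                    and u["department"] == r["department"],
]

alice = {"projects": ["vision"], "clearance": "high", "department": "research"}
volume = {"project": "vision", "department": "research"}
print(abac_permit(alice, volume, "read", policies))    # True
print(abac_permit(alice, volume, "delete", policies))  # True
```

The contrast with RBAC is that the decision depends on arbitrary attributes of the user and the resource rather than on role membership alone, which is what makes the gradual RBAC-to-ABAC migration described in the abstract attractive.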
{
"docid": "685c4c7d06a210e03cb211b356847b93",
"text": "This paper enhances the Q-Iearning algorithm for optimal asset allocation proposed in (Neuneier, 1996 [6]). The new formulation simplifies the approach by using only one value-function for many assets and allows model-free policy-iteration. After testing the new algorithm on real data, the possibility of risk management within the framework of Markov decision problems is analyzed. The proposed methods allows the construction of a multi-period portfolio management system which takes into account transaction costs, the risk preferences of the investor, and several constraints on the allocation.",
"title": ""
},
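The record above builds on tabular Q-learning with a state that includes the current holding, so transaction costs can enter the reward. The following toy sketch shows that structure; the two-regime market, cost level, action grid, and reward definition are assumptions made for illustration, not the paper's model:

```python
import random

ACTIONS = [0.0, 0.5, 1.0]   # fraction of wealth held in the risky asset (assumed grid)
COST = 0.002                # proportional transaction cost (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def market_step(regime):
    """Toy two-regime market: the bull regime favors the risky asset, the bear one does not."""
    ret = random.gauss(0.01, 0.02) if regime == "bull" else random.gauss(-0.005, 0.02)
    next_regime = regime if random.random() < 0.9 else ("bear" if regime == "bull" else "bull")
    return ret, next_regime

# State = (market regime, previous holding); action = new target holding.
Q = {(r, prev, a): 0.0 for r in ("bull", "bear") for prev in ACTIONS for a in ACTIONS}

regime, holding = "bull", 0.0
for t in range(50000):
    # epsilon-greedy action selection over target holdings
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(regime, holding, a)])
    ret, next_regime = market_step(regime)
    reward = action * ret - COST * abs(action - holding)   # return minus trading cost
    best_next = max(Q[(next_regime, action, a)] for a in ACTIONS)
    Q[(regime, holding, action)] += ALPHA * (reward + GAMMA * best_next - Q[(regime, holding, action)])
    regime, holding = next_regime, action

print({k: round(v, 4) for k, v in Q.items() if k[1] == 0.0})
```

Including the previous holding in the state is what lets the learned policy trade off expected return against the cost of rebalancing, which is the multi-period aspect the abstract emphasizes.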
{
"docid": "f6e73a3c09d7eac1ddd60843b82df7b0",
"text": "This paper presents text and data mining in tandem to detect the phishing email. The study employs Multilayer Perceptron (MLP), Decision Trees (DT), Support Vector Machine (SVM), Group Method of Data Handling (GMDH), Probabilistic Neural Net (PNN), Genetic Programming (GP) and Logistic Regression (LR) for classification. A dataset of 2500 phishing and non phishing emails is analyzed after extracting 23 keywords from the email bodies using text mining from the original dataset. Further, we selected 12 most important features using t-statistic based feature selection. Here, we did not find statistically significant difference in sensitivity as indicated by t-test at 1% level of significance, both with and without feature selection across all techniques except PNN. Since, the GP and DT are not statistically significantly different either with or without feature selection at 1% level of significance, DT should be preferred because it yields ‘if-then’ rules, thereby increasing the comprehensibility of the system.",
"title": ""
},
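The phishing record above extracts keyword counts from email bodies and feeds them to classifiers such as decision trees, preferred for their 'if-then' rules. A small sketch of that pipeline, assuming scikit-learn is available (the four keywords and toy emails below are made up; the paper uses 23 mined keywords and several classifiers):

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative keyword features; the actual keyword list is mined from the corpus.
KEYWORDS = ["verify", "account", "password", "click"]

def features(email_body):
    """Count each keyword in a (whitespace-tokenized, lowercased) email body."""
    words = email_body.lower().split()
    return [words.count(k) for k in KEYWORDS]

emails = [
    ("please verify your account password by click here", 1),   # 1 = phishing
    ("click to verify account details immediately", 1),
    ("meeting notes attached for tomorrow", 0),                  # 0 = legitimate
    ("lunch at noon? see the agenda attached", 0),
]
X = [features(body) for body, _ in emails]
y = [label for _, label in emails]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=KEYWORDS))   # the 'if-then' rules the abstract mentions
print(clf.predict([features("verify your password now")]))
```

The printed rule text is exactly the kind of interpretable output that makes the decision tree attractive in the study's conclusion.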
{
"docid": "e84699f276c807eb7fddb49d61bd8ae8",
"text": "Cyberbotics Ltd. develops Webots, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.",
"title": ""
},
{
"docid": "dd650ed7ef6771d82e2b8486c9a1e97b",
"text": "BACKGROUND\nBehçet's disease (BD) is a systemic vasculitis disease with oral and genital aphthous ulceration, uveitis, skin manifestations, arthritis and neurological involvement. Many investigators have published articles on BD in the last two decades since introduction of diagnosis criteria by the International Study Group for Behçet's Disease in 1990. However, there is no scientometric analysis available for this increasing amount of literature.\n\n\nMETHODS\nA scientometric analysis method was used to achieve a view of scientific articles about BD which were published between 1990 and 2010, by data retrieving from ISI Web of Science. The specific features such as publication year, language of article, geographical distribution, main journal in this field, institutional affiliation and citation characteristics were retrieved and analyzed. International collaboration was analyzed using Intcoll and Pajek softwares.\n\n\nRESULTS\nThere was a growing trend in the number of BD articles from 1990 to 2010. The number of citations to BD literature also increased around 5.5-fold in this period. The countries found to have the highest output were Turkey, Japan, the USA and England; the first two universities were from Turkey. Most of the top 10 journals publishing BD articles were in the field of rheumatology, consistent with the subject areas of the articles. There was a correlation between the citations per paper and the impact factor of the publishing journal.\n\n\nCONCLUSION\nThis is the first scientometric analysis of BD, showing the scientometric characteristics of ISI publications on BD.",
"title": ""
},
{
"docid": "7a17ff6cbc7fcbdb2c867a23dc1be591",
"text": "Particle swarm optimization has become a common heuristic technique in the optimization community, with many researchers exploring the concepts, issues, and applications of the algorithm. In spite of this attention, there has as yet been no standard definition representing exactly what is involved in modern implementations of the technique. A standard is defined here which is designed to be a straightforward extension of the original algorithm while taking into account more recent developments that can be expected to improve performance on standard measures. This standard algorithm is intended for use both as a baseline for performance testing of improvements to the technique, as well as to represent PSO to the wider optimization community",
"title": ""
},
{
"docid": "8c6ec02821d17fbcf79d1a42ed92a971",
"text": "OBJECTIVE\nTo explore whether an association exists between oocyte meiotic spindle morphology visualized by polarized light microscopy at the time of intracytoplasmic sperm injection and the ploidy of the resulting embryo.\n\n\nDESIGN\nProspective cohort study.\n\n\nSETTING\nPrivate IVF clinic.\n\n\nPATIENT(S)\nPatients undergoing preimplantation genetic screening/diagnosis (n = 113 patients).\n\n\nINTERVENTION(S)\nOocyte meiotic spindles were assessed by polarized light microscopy and classified at the time of intracytoplasmic sperm injection as normal, dysmorphic, translucent, telophase, or no visible spindle. Single blastomere biopsy was performed on day 3 of culture for analysis by array comparative genomic hybridization.\n\n\nMAIN OUTCOME MEASURE(S)\nSpindle morphology and embryo ploidy association was evaluated by regression methods accounting for non-independence of data.\n\n\nRESULT(S)\nThe frequency of euploidy in embryos derived from oocytes with normal spindle morphology was significantly higher than all other spindle classifications combined (odds ratio [OR] 1.93, 95% confidence interval [CI] 1.33-2.79). Oocytes with translucent (OR 0.25, 95% CI 0.13-0.46) and no visible spindle morphology (OR 0.35, 95% CI 0.19-0.63) were significantly less likely to result in euploid embryos when compared with oocytes with normal spindle morphology. There was no significant difference between normal and dysmorphic spindle morphology (OR 0.73, 95% CI 0.49-1.08), whereas no telophase spindles resulted in euploid embryos (n = 11). Assessment of spindle morphology was found to be independently associated with embryo euploidy after controlling for embryo quality (OR 1.73, 95% CI 1.16-2.60).\n\n\nCONCLUSION(S)\nOocyte spindle morphology is associated with the resulting embryo's ploidy. Oocytes with normal spindle morphology are significantly more likely to produce euploid embryos compared with oocytes with meiotic spindles that are translucent or not visible.",
"title": ""
},
{
"docid": "77ac1b0810b308cf9e957189c832f421",
"text": "We describe TensorFlow-Serving, a system to serve machine learning models inside Google which is also available in the cloud and via open-source. It is extremely flexible in terms of the types of ML platforms it supports, and ways to integrate with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS2.",
"title": ""
},
{
"docid": "853ac793e92b97d41e5ef6d1bc16d504",
"text": "We present a systematic study of parameters used in the construction of semantic vector space models. Evaluation is carried out on a variety of similarity tasks, including a compositionality dataset, using several source corpora. In addition to recommendations for optimal parameters, we present some novel findings, including a similarity metric that outperforms the alternatives on all tasks considered.",
"title": ""
},
{
"docid": "ca70bf377f8823c2ecb1cdd607c064ec",
"text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.",
"title": ""
},
{
"docid": "4e1534459e030c8b0f487fc018fd9e65",
"text": "People spend a considerable amount of their time mentally simulating experiences other than the one in which they are presently engaged, as a means of distraction, coping, or preparation for the future. In this integrative review, we examine four (non-exhaustive) cases in which mentally simulating an experience serves a different function, as a substitute for the corresponding experience. In each case, mentally simulating an experience evokes similar cognitive, physiological, and/or behavioral consequences as having the corresponding experience in reality: (i) imagined experiences are attributed evidentiary value like physical evidence, (ii) mental practice instantiates the same performance benefits as physical practice, (iii) imagined consumption of a food reduces its actual consumption, and (iv) imagined goal achievement reduces motivation for actual goal achievement.We organize these cases under a common superordinate category and discuss their different methodological approaches and explanatory accounts. Our integration yields theoretical and practical insights into when and why mentally simulating an experience serves as its substitute. Much of life is spent thinking about experiences other than what one is doing. People frequently mentally simulate experiences by recalling episodes from their past, contemplating alternatives to their present circumstances, and anticipating or fantasizing about their future. Indeed, Americans explicitly divert their thoughts to experiences other than their present for more than a tenth of their day by watching television (American Time Use Survey, 2014). For roughly a third of waking hours, the mind wanders away from the activity in which it is engaged (Killingsworth & Gilbert, 2010; Schooler et al., 2011). Much of this simulation is engaged in for its immediate hedonic, semantic, and functional benefits: to divert themind toward more pleasure than is afforded by the present circumstances, regulate emotions, solve problems, or prepare for and anticipate the future (e.g., Morewedge & Buechel, 2013; Gollwitzer & Oettingen, 2012; Kahneman & Tversky, 1982; Kumar, Killingsworth, & Gilovich, 2014; MacInnis & Price, 1987; Markman, Klein, & Suhr, 2009; Morewedge & Hershfield, 2015; Schacter, Addis, & Buckner, 2008; Taylor, Pham, Rivkin, & Armor, 1998; Taylor & Schneider, 1989). Simulations, however, do not only serve as mental representations of other past, present, and future experiences. The permeable boundary between thought and reality leads simulations to sometimes produce the same downstream consequences as the corresponding actual experiences. In this paper, we elucidate these effects by presenting four cases in which mental simulations act as substitutes for experience.",
"title": ""
},
{
"docid": "30799ad2796b9715fb70be87438edf64",
"text": "This study investigated the impact of introducing the Klein-Bell ADL Scale into a rehabilitation medicine service. A pretest and a posttest questionnaire of rehabilitation team members and a pretest and a posttest audit of occupational therapy documentation were completed. Results of the questionnaire suggested that the ADL scale influenced rehabilitation team members' observations in the combined area of occupational therapy involvement in self-care, improvement in the identification of treatment goals and plans, and communication between team members. Results of the audit suggested that the thoroughness and quantification of occupational therapy documentation improved. The clinical implications of these findings recommend the use of the Klein-Bell ADL Scale in rehabilitation services for improving occupational therapy documentation and for enhancing rehabilitation team effectiveness.",
"title": ""
},
{
"docid": "5244ba8e7eac98e8b5b6156812055914",
"text": "A number of review or survey articles have previously appeared on human action recognition where either vision sensors or inertial sensors are used individually. Considering that each sensor modality has its own limitations, in a number of previously published papers, it has been shown that the fusion of vision and inertial sensor data improves the accuracy of recognition. This survey article provides an overview of the recent investigations where both vision and inertial sensors are used together and simultaneously to perform human action recognition more effectively. The thrust of this survey is on the utilization of depth cameras and inertial sensors as these two types of sensors are cost-effective, commercially available, and more significantly they both provide 3D human action data. An overview of the components necessary to achieve fusion of data from depth and inertial sensors is provided. In addition, a review of the publicly available datasets that include depth and inertial data which are simultaneously captured via depth and inertial sensors is presented.",
"title": ""
},
{
"docid": "02291035a4fc3016a39a92ed30e5dfe5",
"text": "Ranking based on passages addresses some of the shortcomings ofwhole-document ranking. It provides convenient units of text toreturn to the user, avoids the difficulties of comparing documentsof different length, and enables identification of short blocks ofrelevant material amongst otherwise irrelevant text. In this paperwe explore the potential of passage retrieval, based on anexperimental evaluation of the ability of passages to identifyrelevant documents. We compare our scheme of arbitrary passageretrieval to several other document retrieval and passage retrievalmethods; we show experimentally that, compared to these methods,ranking via fixed-length passages is robust and effective. Ourexperiments also show that, compared to whole-document ranking,ranking via fixed-length arbitrary passages significantly improvesretrieval effectiveness, by 8% for TREC disks 2 and 4 and by18%-37% for the Federal Register collection.",
"title": ""
},
{
"docid": "2f741815d744b0af7112aa349fc7115d",
"text": "Monocular depth estimation aims at estimating a pixelwise depth map for a single image, which has wide applications in scene understanding and autonomous driving. Existing supervised and unsupervised methods face great challenges. Supervised methods require large amounts of depth measurement data, which are generally difficult to obtain, while unsupervised methods are usually limited in estimation accuracy. Synthetic data generated by graphics engines provide a possible solution for collecting large amounts of depth data. However, the large domain gaps between synthetic and realistic data make directly training with them challenging. In this paper, we propose to use the stereo matching network as a proxy to learn depth from synthetic data and use predicted stereo disparity maps for supervising the monocular depth estimation network. Cross-domain synthetic data could be fully utilized in this novel framework. Different strategies are proposed to ensure learned depth perception capability well transferred across different domains. Our extensive experiments show state-of-the-art results of monocular depth estimation on KITTI dataset.",
"title": ""
}
] |
scidocsrr
|
92cc6daac0cd91d8eae7a93ee237a137
|
2D View Aggregation for Lymph Node Detection Using a Shallow Hierarchy of Linear Classifiers
|
[
{
"docid": "3ddcf5f0e4697a0d43eff2cca77a1ab7",
"text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.",
"title": ""
}
] |
[
{
"docid": "3dce8f9acea488995450c99254b7ba7f",
"text": "In this paper, we present a generic Optical Character Recognition system for Arabic script languages called Nabocr. Nabocr uses OCR approaches specific for Arabic script recognition. Performing recognition on Arabic script text is relatively more difficult than Latin text due to the nature of Arabic script, which is cursive and context sensitive. Moreover, Arabic script has different writing styles that vary in complexity. Nabocr is initially trained to recognize both Urdu Nastaleeq and Arabic Naskh fonts. However, it can be trained by users to be used for other Arabic script languages. We have evaluated our system’s performance for both Urdu and Arabic. In order to evaluate Urdu recognition, we have generated a dataset of Urdu text called UPTI (Urdu Printed Text Image Database), which measures different aspects of a recognition system. The performance of our system for Urdu clean text is 91%. For Arabic clean text, the performance is 86%. Moreover, we have compared the performance of our system against Tesseract’s newly released Arabic recognition, and the performance of both systems on clean images is almost the same.",
"title": ""
},
{
"docid": "1b35e4be45e8e464b577e6bcd5c49342",
"text": "In this paper, we propose minimum-effort driven navigational techniques for enterprise database systems based on the faceted search paradigm. Our proposed techniques dynamically suggest facets for drilling down into the database such that the cost of navigation is minimized. At every step, the system asks the user a question or a set of questions on different facets and depending on the user response, dynamically fetches the next most promising set of facets, and the process repeats. Facets are selected based on their ability to rapidly drill down to the most promising tuples, as well as on the ability of the user to provide desired values for them. Our facet selection algorithms also work in conjunction with any ranked retrieval model where a ranking function imposes a bias over the user preferences for the selected tuples. Our methods are principled as well as efficient, and our experimental study validates their effectiveness on several application scenarios.",
"title": ""
},
{
"docid": "a0b9e873d406894eb1b411e808f0c3e6",
"text": "Pushing accuracy and reliability of radar systems to higher levels is a requirement to realize autonomous driving. To maximize its performance, the millimeter-wave radar has to be designed in consideration of its surroundings such as emblems, bumpers and so on, because the electric-field distortion will degrade the performance. We propose electro-optic (EO) measurement system to visualize amplitude and phase distribution of millimeter waves, aiming at the evaluation of the disturbance of car components with the radar module equipped inside a vehicle. Visualization of 76-GHz millimeter waves passing through plastic plates is presented to demonstrate our system's capability of diagnosing a local cause of the field disturbance.",
"title": ""
},
{
"docid": "59eb15885307870ee9270582f79b9cc0",
"text": "Vulnerable Android applications are traditionally exploited via malicious apps. In this paper, we study an underexplored class of Android attacks which do not require the user to install malicious apps, but merely to visit a malicious website in an Android browser. We call them web-to-app injection (or W2AI) attacks, and distinguish between different categories of W2AI sideeffects. To estimate their prevalence, we present an automated W2AIScanner to find and confirm W2AI vulnerabilities. Analyzing real apps from the official Google Play store – we found 286 confirmed vulnerabilities in 134 distinct applications. Our findings suggest that these attacks are pervasive and developers do not adequately protect apps against them. Our tool employs a novel combination of static analysis and symbolic execution with dynamic testing. We show through experiments that this design significantly enhances the detection accuracy compared with an existing state-of-the-art analysis.",
"title": ""
},
{
"docid": "3a787113aa597d6f4847c60b8c53da07",
"text": "Geospatially-oriented social media communications have emerged as a common information resource to support crisis management. Our research compares the capabilities of two popular systems used to collect and visualize such information Project Epic’s Tweak the Tweet (TtT) and Ushahidi. Our research uses geospatially-oriented social media gathered by both projects during recent disasters to compare and contrast the frequency, content, and location components of contributed information to both systems. We compare how data was gathered and filtered, how spatial information was extracted and mapped, and the mechanisms by which the resulting synthesized information was shared with response and recovery organizations. In addition, we categorize the degree to which each platform in each disaster led to actions by first responders and emergency managers. Based on the results of our comparisons we identify key design considerations for future social media mapping tools to support crisis management.",
"title": ""
},
{
"docid": "7662a9d5d31ed2307837a04ec7a4e27c",
"text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of training data. In this paper, we train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing in a photo-realistic simulation. Training is done through imitation learning with data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots. Additionally, we show that our optimized network architecture can run in real-time on embedded hardware, allowing for efficient onboard processing critical for real-world deployment. From a broader perspective, our results underline the importance of extensive data augmentation techniques to improve robustness in end-to-end learning setups.",
"title": ""
},
{
"docid": "9d8debb624d5981e16d39bae662449cc",
"text": "The use of reinforcement and rewards is known to enhance memory retention. However, the impact of reinforcement on higher-order forms of memory processing, such as integration and generalization, has not been directly manipulated in previous studies. Furthermore, there is evidence that sleep enhances the integration and generalization of memory, but these studies have only used reinforcement learning paradigms and have not examined whether reinforcement impacts or is critical for memory integration and generalization during sleep. Thus, the aims of the current study were to examine: (1) whether reinforcement during learning impacts the integration and generalization of memory; and (2) whether sleep and reinforcement interact to enhance memory integration and generalization. We investigated these questions using a transitive inference (TI) task, which is thought to require the integration and generalization of disparate relational memories in order to make novel inferences. To examine whether reinforcement influences or is required for the formation of inferences, we compared performance using a reinforcement or an observation based TI task. We examined the impact of sleep by comparing performance after a 12-h delay containing either wake or sleep. Our results showed that: (1) explicit reinforcement during learning is required to make transitive inferences and that sleep further enhances this effect; (2) sleep does not make up for the inability to make inferences when reinforcement does not occur during learning. These data expand upon previous findings and suggest intriguing possibilities for the mechanisms involved in sleep-dependent memory transformation.",
"title": ""
},
{
"docid": "38fe414175262260f705ce06bbfc1bc8",
"text": "Augmented reality, in which virtual content is seamlessly integrated with displays of real-world scenes, is a growing area of interactive design. With the rise of personal mobile devices capable of producing interesting augmented reality environments, the vast potential of AR has begun to be explored. This paper surveys the current state-of-the-art in augmented reality. It describes work performed in different application domains and explains the exiting issues encountered when building augmented reality applications considering the ergonomic and technical limitations of mobile devices. Future directions and areas requiring further research are introduced and discussed.",
"title": ""
},
{
"docid": "fd80f3e419d0db201ad0fb3bc4f24742",
"text": "Currently, Computer Vision (CV) is one of the most popular research topics in the world. This is because it can support the human daily life. Moreover, CV can also apply to various theories and researches. Human Detection is one of the most popular research topics in Computer Vision. In this paper, we present a study of technique for human detection from video, which is the Histograms of Oriented Gradients or HOG by developing a piece of application to import and detect the human from the video. We use the HOG Algorithm to analyze every frame from the video to find and count people. After analyzing video from starting to the end, the program generate histogram to show the number of detected people versus playing period of the video. As a result, the expected results are obtained, including the detection of people in the video and the histogram generation to show the appearance of human detected in the video file.",
"title": ""
},
{
"docid": "b68a62f6c4078e9666a8a3b9489fcf84",
"text": "Reviews the criticism on the 4P Marketing Mix framework as the basis of traditional and virtual marketing planning. Argues that the customary marketing management approach, based on the popular Marketing Mix 4Ps paradigm, is inadequate in the case of virtual marketing. Identifies two main limitations of the Marketing Mix when applied in online environments namely the role of the Ps in a virtual commercial setting and the lack of any strategic elements in the model. Identifies the critical factors of the Web marketing and argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The 4S elements of the Web Marketing Mix framework offer the basis for developing and commercialising Business to Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of three case studies.",
"title": ""
},
{
"docid": "158cf429820a3cdf8743c52ffd859878",
"text": "Vulnerabilities in browser extensions put users at risk by providing a way for website and network attackers to gain access to users’ private data and credentials. Extensions can also introduce vulnerabilities into the websites that they modify. In 2009, Google Chrome introduced a new extension platform with several features intended to prevent and mitigate extension vulnerabilities: strong isolation between websites and extensions, privilege separation within an extension, and an extension permission system. We performed a security review of 100 Chrome extensions and found 70 vulnerabilities across 40 extensions. Given these vulnerabilities, we evaluate how well each of the security mechanisms defends against extension vulnerabilities. We find that the mechanisms mostly succeed at preventing direct web attacks on extensions, but new security mechanisms are needed to protect users from network attacks on extensions, website metadata attacks on extensions, and vulnerabilities that extensions add to websites. We propose and evaluate additional defenses, and we conclude that banning HTTP scripts and inline scripts would prevent 47 of the 50 most severe vulnerabilities with only modest impact on developers.",
"title": ""
},
{
"docid": "d99d83f8fbd062ddae5a8ab2d5e19e6d",
"text": "A low-distortion super-GOhm subthreshold MOS resistor is designed, fabricated and experimentally validated. The circuit is utilized as a feedback element in the body of a two-stage neural recording amplifier. Linearity is experimentally validated for 0.5 Hz to 5 kHz input frequency and over 0.3 to 0.9 V output voltage dynamic range. The implemented pseudo resistor is also tunable, making the high-pass filter pole adjustable. The circuit is fabricated in 0.13-μm CMOS process and consumes 96 nW from a 1.2 V supply to realize an over 500 GΩ resistance.",
"title": ""
},
{
"docid": "68e4c1122a2339a89cb3873e1013a26e",
"text": "Although there is a voluminous literature on mass media effects on body image concerns of young adult women in the U.S., there has been relatively little theoretically-driven research on processes and effects of social media on young women’s body image and self-perceptions. Yet given the heavy online presence of young adults, particularly women, and their reliance on social media, it is important to appreciate ways that social media can influence perceptions of body image and body image disturbance. Drawing on communication and social psychological theories, the present article articulates a series of ideas and a framework to guide research on social media effects on body image concerns of young adult women. The interactive format and content features of social media, such as the strong peer presence and exchange of a multitude of visual images, suggest that social media, working via negative social comparisons, transportation, and peer normative processes, can significantly influence body image concerns. A model is proposed that emphasizes the impact of predisposing individual vulnerability characteristics, social media uses, and mediating psychological processes on body dissatisfaction and eating disorders. Research-based ideas about social media effects on male body image, intersections with ethnicity, and ameliorative strategies are also discussed.",
"title": ""
},
{
"docid": "3eff4654a3bbf9aa3fbfe15033383e67",
"text": "Pizza is a strict superset of Java that incorporates three ideas from the academic community: parametric polymorphism, higher-order functions, and algebraic data types. Pizza is defined by translation into Java and compiles into the Java Virtual Machine, requirements which strongly constrain the design space. Nonetheless, Pizza fits smoothly to Java, with only a few rough edges.",
"title": ""
},
{
"docid": "f48d02ff3661d3b91c68d6fcf750f83e",
"text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.",
"title": ""
},
{
"docid": "2587fd3fa405a8e0fcbfd78bb1201e6d",
"text": "After many years of development the active electronically scanned array (AESA) radar technology reached a mature technology level. Many of today's and future radar systems will be equipped with the ASEA technology. T/R-modules are key elements in active phased array antennas for radar and electronic warfare applications. Meanwhile T/R-modules using GaAs MMICs are in mass production with high quantities. Top priority is on continuous improvement of yield figures by optimizing the spread of key performance parameters to come down with cost. To fulfill future demands on power, bandwidth, robustness, weight, multifunctional sensor capability, and overall sensor cost, new emerging semiconductor and packaging technologies have to be implemented for the next generation T/R-modules. Using GaN MMICs as HPAs and also as robust LNAs is a promising approach. Higher integration at the amplitude and phase setting section of the T/R-module is realized with GaAs core chips or even with SiGe multifunction chips. With increasing digital signal processing capability the digital beam forming will get more importance with a high impact on the T/R-modules. For lower production costs but also for sensor integration new packaging concepts are necessary. This includes the transition towards organic packages or the transition from brick style T/R-module to a tile T/R-module.",
"title": ""
},
{
"docid": "737ed1714642046c67694f44f8c0cb3f",
"text": "In this paper, we review some fuzzy linear programming methods and techniques from a practical point of view. In the rst part, the general history and the approach of fuzzy mathematical programming are introduced. Using a numerical example, some models of fuzzy linear programming are described. In the second part of the paper, fuzzy mathematical programming approaches are compared to stochastic programming ones. The advantages and disadvantages of fuzzy mathematical programming approaches are exempli ed in the setting of an optimal portfolio selection problem. Finally, some newly developed ideas and techniques in fuzzy mathematical programming are brie y reviewed. c © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "93aaea4fc6c617c078a858baafd22d22",
"text": "Network system designers need to understand the error performance of wireless mobile channels in order to improve the quality of communications by deploying better modulation and coding schemes, and better network architectures. It is also desirable to have an accurate and thoroughly reproducible error model, which would allow network designers to evaluate a protocol or algorithm and its variations in a controlled and repeatable way. However, the physical properties of radio propagation, and the diversities of error environments in a wireless medium, lead to complexity in modeling the error performance of wireless channels. This article surveys the error modeling methods of fading channels in wireless communications, and provides a novel user-requirement (researchers and designers) based approach to classify the existing wireless error models.",
"title": ""
},
{
"docid": "18f9eb6f90eb09393395e4cb5a12ea01",
"text": " Topic modeling refers to the process of algorithmically sorting documents into categories based on some common relationship between the documents. This common relationship between the documents is considered the “topic” of the documents. Sentiment analysis refers to the process of algorithmically sorting a document into a positive or negative category depending whether this document expresses a positive or negative opinion on its respective topic. In this paper, I consider the open problem of document classification into a topic category, as well as a sentiment category. This has a direct application to the retail industry where companies may want to scour the web in order to find documents (blogs, Amazon reviews, etc.) which both speak about their product, and give an opinion on their product (positive, negative or neutral). My solution to this problem uses a Non-negative Matrix Factorization (NMF) technique in order to determine the topic classifications of a document set, and further factors the matrix in order to discover the sentiment behind this category of product. Introduction to Sentiment Analysis: In the United States, the incredible accessibility of the internet gives a voice to every consumer. Furthermore, internet blogs and common review sites are the first places the majority of consumers turn to when researching the pros and cons behind a product they are looking to purchase. Discovering sentiment and insights behind a company's products is hardly a new challenge, but a constantly evolving one given the complexity and sheer number of reviews buried inside a multitude of internet domains. Internet reviews, perhaps more frequently than a traditionally written and formatted magazine or newspaper review, are riddled with sarcasm, abbreviations, and slang. Simple text classification techniques based on analyzing the number of positive and negative words that occur in a document are error prone because of this. The internet requires a new solution to the trend of “reviews in 140 characters or less” which will necessitate unsupervised or semi-supervised machine learning and natural language processing techniques. Observed in [Ng et al., 2009] semi-supervised dictionary based approaches yield unsatisfactory results, with resulting lexicons of large coverage and low precision, or limited coverage and higher precision. In this paper, I will attempt to utilize these previously created dictionaries (of positive and negative words) and incorporate them into a machine learning approach to classify unlabeled documents. Introduction to Non-negative Matrix Factorization and Topic Modeling: Non-negative Matrix Factorization has applications to many fields such as computer vision, but we are interested in the specific application to topic modeling (often referred to as document clustering). NMF is the process of factoring a matrix into (usually) two parts where a [A x B] matrix is approximated by [A x r] x [r x B] where r is a chosen value, less than A and B. Every element in [A x r] and [r x B] must be non-negative throughout this process.",
"title": ""
},
{
"docid": "92c72aa180d3dccd5fcc5504832780e9",
"text": "The site of S1-S2 root activation following percutaneous high-voltage electrical (ES) and magnetic stimulation were located by analyzing the variations of the time interval from M to H soleus responses elicited by moving the stimulus point from lumbar to low thoracic levels. ES was effective in activating S1-S2 roots at their origin. However supramaximal motor root stimulation required a dorsoventral montage, the anode being a large, circular surface electrode placed ventrally, midline between the apex of the xiphoid process and the umbilicus. Responses to magnetic stimuli always resulted from the activation of a fraction of the fiber pool, sometimes limited to the low-thresholds afferent component, near its exit from the intervertebral foramina, or even more distally. Normal values for conduction velocity in motor and 1a afferent fibers in the proximal nerve tract are provided.",
"title": ""
}
] |
scidocsrr
|
74a649fcc220eaa80249e5f1bfbdbce7
|
Attribute Discovery via Predictable Discriminative Binary Codes
|
[
{
"docid": "0784d5907a8e5f1775ad98a25b1b0b31",
"text": "The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.",
"title": ""
}
] |
[
{
"docid": "38da04256f605a4900c30a90ce2a3e13",
"text": "ÐIn wireless sensor networks, energy efficiency is crucial to achieving satisfactory network lifetime. To reduce the energy consumption significantly, a node should turn off its radio most of the time, except when it has to participate in data forwarding. We propose a new technique, called Sparse Topology and Energy Management (STEM), which efficiently wakes up nodes from a deep sleep state without the need for an ultra low-power radio. The designer can trade the energy efficiency of this sleep state for the latency associated with waking up the node. In addition, we integrate STEM with approaches that also leverage excess network density. We show that our hybrid wakeup scheme results in energy savings of over two orders of magnitude compared to sensor networks without topology management. Furthermore, the network designer is offered full flexibility in exploiting the energy-latency-density design space by selecting the appropriate parameter settings of our protocol. Index TermsÐSensor networks, energy efficiency, wakeup, topology.",
"title": ""
},
{
"docid": "2841e277a0b3d79b161abcb181f48344",
"text": "Mobile crowdsourced sensing (MCS) is a new paradigm which takes advantage of pervasive smartphones to efficiently collect data, enabling numerous novel applications. To achieve good service quality for a MCS application, incentive mechanisms are necessary to attract more user participation. Most of existing mechanisms apply only for the offline scenario where all users' information are known a priori. On the contrary, we focus on a more realistic scenario where users arrive one by one online in a random order. Based on the online auction model, we investigate the problem that users submit their private types to the crowdsourcer when arrive, and the crowdsourcer aims at selecting a subset of users before a specified deadline for maximizing the value of services (assumed to be a non-negative monotone submodular function) provided by selected users under a budget constraint. We design two online mechanisms, OMZ and OMG, satisfying the computational efficiency, individual rationality, budget feasibility, truthfulness, consumer sovereignty and constant competitiveness under the zero arrival-departure interval case and a more general case, respectively. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our online mechanisms.",
"title": ""
},
{
"docid": "c507ce14998e9ef9e574b1b4cc021dec",
"text": "There are no scientific publications on a electric motor in Tesla cars, so let's try to deduce something. Tesla's induction motor is very enigmatic so the paper tries to introduce a basic model. This secrecy could be interesting for the engineering and physics students. Multidisciplinary problem is considered: kinematics, mechanics, electric motors, numerical methods, control of electric drives. Identification based on three points in the steady-state torque-speed curve of the induction motor is presented. The field weakening mode of operation of the motor is analyzed. The Kloss' formula is obtained. The main aim of the article is determination of a mathematical description of the torque vs. speed curve of induction motor and its application for vehicle motion modeling. Additionally, the moment of inertia of the motor rotor and the electric vehicle mass are considered in one equation as electromechanical system. Presented approach may seem like speculation, but it allows to understand the problem of a vehicle motion. The article composition is different from classical approach - studying should be intriguing.",
"title": ""
},
{
"docid": "4b284736c51435f9ab6f52f174dc7def",
"text": "Recognition of emotion draws on a distributed set of structures that include the occipitotemporal neocortex, amygdala, orbitofrontal cortex and right frontoparietal cortices. Recognition of fear may draw especially on the amygdala and the detection of disgust may rely on the insula and basal ganglia. Two important mechanisms for recognition of emotions are the construction of a simulation of the observed emotion in the perceiver, and the modulation of sensory cortices via top-down influences.",
"title": ""
},
{
"docid": "8c026a368fcf73d6f6bdac66e8f6a603",
"text": "In this paper, a novel reconfigurable open slot antenna has been proposed for LTE smartphone applications to cover a wide bandwidth of 698–960 and 1710–2690 MHz. The antenna is located at the bottom portion of the mobile phone and is integrated with metal rim, thereby occupying a small space and providing mechanical stability to the mobile phone. Varactor diode is used to cover the lower band frequencies, so as to achieve a good frequency coverage and antenna miniaturization. The operational principles of the antenna are studied and the final design is optimized, fabricated, and tested. It has achieved the desired impedance bandwidth and the total efficiency of minimum 50% in free space throughout the required bands. The antenna performance with mobile phone components and human hand is also been studied. Furthermore, the SAR in a human head is investigated and is found to be within allowable SAR limits. Finally a multiple-input multiple-output antenna configuration with high isolation is proposed; it has an identical reconfigurable open slot antenna integrated at the top edge of the mobile phone acting as the secondary antenna for 698–960 and 1710–2690 MHz. Thus the proposed antenna is an excellent candidate for LTE smartphones and mobile devices.",
"title": ""
},
{
"docid": "29e5d267bebdeb2aa22b137219b4407e",
"text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"title": ""
},
{
"docid": "0ee97a3afcc2471a05924a1171ac82cf",
"text": "A number of researchers around the world have built machines that recognize, express, model, communicate, and respond to emotional information, instances of ‘‘affective computing.’’ This article raises and responds to several criticisms of affective computing, articulating state-of-the art research challenges, especially with respect to affect in humancomputer interaction. r 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2f8439098872e3af2c8d0ade5fbb15e8",
"text": "Natural language explanations of deep neural network decisions provide an intuitive way for a AI agent to articulate a reasoning process. Current textual explanations learn to discuss class discriminative features in an image. However, it is also helpful to understand which attributes might change a classification decision if present in an image (e.g., “This is not a Scarlet Tanager because it does not have black wings.”) We call such textual explanations counterfactual explanations, and propose an intuitive method to generate counterfactual explanations by inspecting which evidence in an input is missing, but might contribute to a different classification decision if present in the image. To demonstrate our method we consider a fine-grained image classification task in which we take as input an image and a counterfactual class and output text which explains why the image does not belong to a counterfactual class. We then analyze our generated counterfactual explanations both qualitatively and quantitatively using proposed automatic metrics.",
"title": ""
},
{
"docid": "6650966d57965a626fd6f50afe6cd7a4",
"text": "This paper presents a generalized version of the linear threshold model for simulating multiple cascades on a network while allowing nodes to switch between them. The proposed model is shown to be a rapidly mixing Markov chain and the corresponding steady state distribution is used to estimate highly likely states of the cascades' spread in the network. Results on a variety of real world networks demonstrate the high quality of the estimated solution.",
"title": ""
},
{
"docid": "d0b8dc38b0a293e5442276676afc02c9",
"text": "A fundamental dilemma in reinforcement learning is the exploration-exploitation trade-off. Deep reinforcement learning enables agents to act and learn in complex environments, but also introduces new challenges to both exploration and exploitation. Concepts like intrinsic motivation, hierarchical learning or curriculum learning all inspire different methods for exploration, while other agents profit from better methods to exploit current knowledge. In this work a survey of a variety of different approaches to exploration and exploitation in deep reinforcement learning is presented.",
"title": ""
},
{
"docid": "734fc66c7c745498ca6b2b7fc6780919",
"text": "In this paper, we investigate the use of an unsupervised label clustering technique and demonstrate that it enables substantial improvements in visual relationship prediction accuracy on the Person in Context (PIC) dataset. We propose to group object labels with similar patterns of relationship distribution in the dataset into fewer categories. Label clustering not only mitigates both the large classification space and class imbalance issues, but also potentially increases data samples for each clustered category. We further propose to incorporate depth information as an additional feature into the instance segmentation model. The additional depth prediction path supplements the relationship prediction model in a way that bounding boxes or segmentation masks are unable to deliver. We have rigorously evaluated the proposed techniques and performed various ablation analysis to validate the benefits of them.",
"title": ""
},
{
"docid": "f5648e3bd38e876b53ee748021e165f2",
"text": "The existing image captioning approaches typically train a one-stage sentence decoder, which is difficult to generate rich fine-grained descriptions. On the other hand, multi-stage image caption model is hard to train due to the vanishing gradient problem. In this paper, we propose a coarse-to-fine multi-stage prediction framework for image captioning, composed of multiple decoders each of which operates on the output of the previous stage, producing increasingly refined image descriptions. Our proposed learning approach addresses the difficulty of vanishing gradients during training by providing a learning objective function that enforces intermediate supervisions. Particularly, we optimize our model with a reinforcement learning approach which utilizes the output of each intermediate decoder’s test-time inference algorithm as well as the output of its preceding decoder to normalize the rewards, which simultaneously solves the well-known exposure bias problem and the loss-evaluation mismatch problem. We extensively evaluate the proposed approach on MSCOCO and show that our approach can achieve the state-of-the-art performance.",
"title": ""
},
{
"docid": "ed509de8786ee7b4ba0febf32d0c87f7",
"text": "Threat detection and analysis are indispensable processes in today's cyberspace, but current state of the art threat detection is still limited to specific aspects of modern malicious activities due to the lack of information to analyze. By measuring and collecting various types of data, from traffic information to human behavior, at different vantage points for a long duration, the viewpoint seems to be helpful to deeply inspect threats, but faces scalability issues as the amount of collected data grows, since more computational resources are required for the analysis. In this paper, we report our experience from operating the Hadoop platform, called MATATABI, for threat detections, and present the micro-benchmarks with four different backends of data processing in typical use cases such as log data and packet trace analysis. The benchmarks demonstrate the advantages of distributed computation in terms of performance. Our extensive use cases of analysis modules showcase the potential benefit of deploying our threat analysis platform.",
"title": ""
},
{
"docid": "14dbf1851016161633e847e55e93cad3",
"text": "Direct drive permanent magnet generators(PMGs) are increasingly capturing the global wind market in large onshore and offshore applications. The aim of this paper is to provide a quick overview of permanent magnet generator design and related control issues for large wind turbines. Generator systems commonly used in wind turbines, the permanent magnet generator types, and control methods are reviewed in the paper. The current commercial PMG wind turbine on market is surveyed. The design of a 5 MW axial flux permanent magnet (AFPM) generator for large wind turbines is discussed and presented in detail.",
"title": ""
},
{
"docid": "580bdf8197e94c5bc82bc52bcc7cf6c7",
"text": "This article reports a theoretical and experimental attempt to relate and contrast 2 traditionally separate research programs: inattentional blindness and attention capture. Inattentional blindness refers to failures to notice unexpected objects and events when attention is otherwise engaged. Attention capture research has traditionally used implicit indices (e.g., response times) to investigate automatic shifts of attention. Because attention capture usually measures performance whereas inattentional blindness measures awareness, the 2 fields have existed side by side with no shared theoretical framework. Here, the authors propose a theoretical unification, adapting several important effects from the attention capture literature to the context of sustained inattentional blindness. Although some stimulus properties can influence noticing of unexpected objects, the most influential factor affecting noticing is a person's own attentional goals. The authors conclude that many--but not all--aspects of attention capture apply to inattentional blindness but that these 2 classes of phenomena remain importantly distinct.",
"title": ""
},
{
"docid": "6aee06316a24005ee2f8f4f1906e2692",
"text": "Sir, The origin of vestibular papillomatosis (VP) is controversial. VP describes the condition of multiple papillae that may cover the entire surface of the vestibule (1). Our literature search for vestibular papillomatosis revealed 13 reports in gynaecological journals and only one in a dermatological journal. Furthermore, searching for vulvar squamous papillomatosis revealed 6 reports in gynaecological journals and again only one in a dermatological journal. We therefore conclude that it is worthwhile drawing the attention of dermatologists to this entity.",
"title": ""
},
{
"docid": "193c60c3a14fe3d6a46b2624d45b70aa",
"text": "*Corresponding author: Shirin Sadat Ghiasi. Faculty of Medicine, Mashhad University of Medical Sciences, Mahshhad, Iran. E-mail: [email protected] Tel:+989156511388 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons. org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A Review Study on the Prenatal Diagnosis of Congenital Heart Disease Using Fetal Echocardiography",
"title": ""
},
{
"docid": "8669f1a511fab8d6a18b9905d6c6b630",
"text": "Consumer behavior study is a new, interdisciplinary nd emerging science, developed in the 1960s. Its main sources of information come from ec onomics, psychology, sociology, anthropology and artificial intelligence. If a cent ury ago, most people were living in small towns, with limited possibilities to leave their co mmunity, and few ways to satisfy their needs, now, due to the accelerated evolution of technology and the radical change of life style, consumers begin to have increasingly diverse needs. At the same time the instruments used to study their behavior have evolved, and today databa ses are included in consumer behavior research. Throughout time many models were develope d, first in order to analyze, and later in order to predict the consumer behavior. As a res ult, the concept of Big Data developed, and by applying it now, companies are trying to und erstand and predict the behavior of their consumers.",
"title": ""
},
{
"docid": "d4d9948e170edd124c57742d91a5d021",
"text": "The attribute set in an information system evolves in time when new information arrives. Both lower and upper approximations of a concept will change dynamically when attributes vary. Inspired by the former incremental algorithm in Pawlak rough sets, this paper focuses on new strategies of dynamically updating approximations in probabilistic rough sets and investigates four propositions of updating approximations under probabilistic rough sets. Two incremental algorithms based on adding attributes and deleting attributes under probabilistic rough sets are proposed, respectively. The experiments on five data sets from UCI and a genome data with thousand attributes validate the feasibility of the proposed incremental approaches. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "35756d57b4d322de9326aa0f71b49352",
"text": "A 32-Gb/s data-interpolator receiver for electrical chip-to-chip communications is introduced. The receiver front-end samples incoming data by using a blind clock signal, which has a plesiochronous frequency-phase relation with the data. Phase alignment between the data and decision timing is achieved by interpolating the input-signal samples in the analog domain. The receiver has a continuous-time linear equalizer and a two-tap loop unrolled DFE using adjustable-threshold comparators. The receiver occupies 0.24 mm2 and consumes 308.4 mW from a 0.9-V supply when it is implemented with a 28-nm CMOS process.",
"title": ""
}
] |
scidocsrr
|
5271205b861a43b88386f5966d2377b0
|
Big data analytics for transportation: Problems and prospects for its application in China
|
[
{
"docid": "77b4be1fb0b87eb1ee0399c073a7b78f",
"text": "In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.",
"title": ""
},
{
"docid": "b9b194410824bd769b708baef7953aaf",
"text": "Road and lane detection play an important role in autonomous driving and commercial driver-assistance systems. Vision-based road detection is an essential step towards autonomous driving, yet a challenging task due to illumination and complexity of the visual scenery. Urban scenes may present additional challenges such as intersections, multi-lane scenarios, or clutter due to heavy traffic. This paper presents an integrative approach to ego-lane detection that aims to be as simple as possible to enable real-time computation while being able to adapt to a variety of urban and rural traffic scenarios. The approach at hand combines and extends a road segmentation method in an illumination-invariant color image, lane markings detection using a ridge operator, and road geometry estimation using RANdom SAmple Consensus (RANSAC). Employing the segmented road region as a prior for lane markings extraction significantly improves the execution time and success rate of the RANSAC algorithm, and makes the detection of weakly pronounced ridge structures computationally tractable, thus enabling ego-lane detection even in the absence of lane markings. Segmentation performance is shown to increase when moving from a color-based to a histogram correlation-based model. The power and robustness of this algorithm has been demonstrated in a car simulation system as well as in the challenging KITTI data base of real-world urban traffic scenarios.",
"title": ""
}
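The lane-detection pipeline described above leans on RANSAC for road-geometry estimation. The sketch below is a generic, bare-bones RANSAC line fit on synthetic 2D points, meant only to illustrate the hypothesize-and-verify loop; the tolerances, iteration count, and data are arbitrary assumptions and do not reflect the authors' lane model.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Fit y = m*x + c to 2D points with RANSAC; returns (m, c, inlier_mask)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-9:
            continue                                  # skip (near-)vertical sample pairs
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set with ordinary least squares.
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    m, c = np.polyfit(x, y, 1)
    return m, c, best_inliers


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 200)
    y = 0.5 * x + 1.0 + rng.normal(0, 0.02, 200)      # lane-like linear structure
    y[:60] = rng.uniform(0, 10, 60)                   # 30% outliers standing in for clutter
    m, c, mask = ransac_line(np.column_stack([x, y]))
    print(f"slope={m:.2f} intercept={c:.2f} inliers={mask.sum()}")
```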
] |
[
{
"docid": "e946deae6e1d441c152dca6e52268258",
"text": "The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donder's law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error ~ 1deg), using more light sources and calibration points can result in lower average errors.",
"title": ""
},
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of",
"title": ""
},
{
"docid": "41a287c7ecc5921aedfa5b733a928178",
"text": "This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.",
"title": ""
},
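Since the abstract above centers on Cronbach's coefficient alpha, a minimal computation of the point estimate may help orient readers; the item-score matrix below is synthetic, the function name is my own, and the standard-error and confidence-interval machinery derived in the paper is deliberately not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, k_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))
    scores = latent + 0.8 * rng.normal(size=(200, 5))   # 5 noisy items sharing one factor
    print(f"alpha = {cronbach_alpha(scores):.3f}")
```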
{
"docid": "380f29e386a69bee3c4187950f41cfaf",
"text": "Human action recognition is one of the most active research areas in both computer vision and machine learning communities. Several methods for human action recognition have been proposed in the literature and promising results have been achieved on the popular datasets. However, the comparison of existing methods is often limited given the different datasets, experimental settings, feature representations, and so on. In particularly, there are no human action dataset that allow concurrent analysis on three popular scenarios, namely single view, cross view, and cross domain. In this paper, we introduce a Multi-modal & Multi-view & Interactive (M2I) dataset, which is designed for the evaluation of the performances of human action recognition under multi-view scenario. This dataset consists of 1760 action samples, including 9 person-person interaction actions and 13 person-object interaction actions. Moreover, we respectively evaluate three representative methods for the single-view, cross-view, and cross domain human action recognition on this dataset with the proposed evaluation protocol. It is experimentally demonstrated that this dataset is extremely challenging due to large intraclass variation, multiple similar actions, significant view difference. This benchmark can provide solid basis for the evaluation of this task and will benefit advancing related computer vision and machine learning research topics.",
"title": ""
},
{
"docid": "373d3549865647bd469b160d60db71c8",
"text": "The encoding of time and its binding to events are crucial for episodic memory, but how these processes are carried out in hippocampal–entorhinal circuits is unclear. Here we show in freely foraging rats that temporal information is robustly encoded across time scales from seconds to hours within the overall population state of the lateral entorhinal cortex. Similarly pronounced encoding of time was not present in the medial entorhinal cortex or in hippocampal areas CA3–CA1. When animals’ experiences were constrained by behavioural tasks to become similar across repeated trials, the encoding of temporal flow across trials was reduced, whereas the encoding of time relative to the start of trials was improved. The findings suggest that populations of lateral entorhinal cortex neurons represent time inherently through the encoding of experience. This representation of episodic time may be integrated with spatial inputs from the medial entorhinal cortex in the hippocampus, allowing the hippocampus to store a unified representation of what, where and when. Temporal information that is useful for episodic memory is encoded across a wide range of timescales in the lateral entorhinal cortex, arising inherently from its representation of ongoing experience.",
"title": ""
},
{
"docid": "147c1fb2c455325ff5e4e4e4659a0040",
"text": "A Ka-band 2D flat-profiled Luneburg lens antenna implemented with a glide-symmetric holey structure is presented. The required refractive index for the lens design has been investigated via an analysis of the hole depth and the gap between the two metallic layers constituting the lens. The final unit cell is described and applied to create the complete metasurface Luneburg lens showing that a plane wave is obtained when feeding at an opposite arbitrary point with a discrete source.",
"title": ""
},
{
"docid": "427d0d445985ac4eb31c7adbaf6f1e22",
"text": "In this work, we jointly address the problem of text detection and recognition in natural scene images based on convolutional recurrent neural networks. We propose a unified network that simultaneously localizes and recognizes text with a single forward pass, avoiding intermediate processes, such as image cropping, feature re-calculation, word separation, and character grouping. In contrast to existing approaches that consider text detection and recognition as two distinct tasks and tackle them one by one, the proposed framework settles these two tasks concurrently. The whole framework can be trained end-to-end, requiring only images, ground-truth bounding boxes and text labels. The convolutional features are calculated only once and shared by both detection and recognition, which saves processing time. Through multi-task training, the learned features become more informative and improves the overall performance. Our proposed method has achieved competitive performance on several benchmark datasets.",
"title": ""
},
{
"docid": "2c8b6d6e6b6c64d25fd885207eaa0327",
"text": "Many versions of Unix provide facilities for user-level packet capture, making possible the use of general purpose workstations for network monitoring. Because network monitors run as user-level processes, packets must be copied across the kernel/user-space protection boundary. This copying can be minimized by deploying a kernel agent called a packet filter , which discards unwanted packets as early as possible. The original Unix packet filter was designed around a stack-based filter evaluator that performs sub-optimally on current RISC CPUs. The BSD Packet Filter (BPF) uses a new, registerbased filter evaluator that is up to 20 times faster than the original design. BPF also uses a straightforward buffering strategy that makes its overall performance up to 100 times faster than Sun’s NIT running on the same hardware.",
"title": ""
},
{
"docid": "46ab119ffd9850fe1e5ff35b6cda267d",
"text": "Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.",
"title": ""
},
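The TEEN protocol summarized above gates transmissions with a hard threshold and a soft threshold. A minimal sketch of that reporting rule is given below; the class name, threshold values, and readings are illustrative assumptions rather than details from the paper.

```python
# Sketch of TEEN-style threshold-sensitive reporting (illustrative values only).
# A node transmits when a reading first reaches the hard threshold, and afterwards
# only when the reading changes from the last sent value by at least the soft threshold.

class TeenNode:
    def __init__(self, hard_threshold: float, soft_threshold: float):
        self.hard = hard_threshold      # absolute value of interest (e.g., degrees C)
        self.soft = soft_threshold      # minimum change worth spending radio energy on
        self.last_sent = None           # value sensed at the time of the last transmission

    def sense(self, reading: float) -> bool:
        """Return True if this reading should be transmitted to the cluster head."""
        if reading < self.hard:
            return False                # below the hard threshold: stay silent
        if self.last_sent is None or abs(reading - self.last_sent) >= self.soft:
            self.last_sent = reading    # transmit and remember what was sent
            return True
        return False                    # change too small to be worth the radio cost


if __name__ == "__main__":
    node = TeenNode(hard_threshold=50.0, soft_threshold=2.0)
    for t, temp in enumerate([42.0, 51.0, 51.5, 54.0, 49.0, 55.0]):
        print(t, temp, "transmit" if node.sense(temp) else "silent")
```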
{
"docid": "10e8f675665b77aa6ddafbb684602ff1",
"text": "quantum state in Hilbert space. Given a decomposition of Hilbert spaceH into a tensor product of factors, we consider a class of “redundancy-constrained states” in H that generalize the area-law behavior for entanglement entropy usually found in condensed-matter systems with gapped local Hamiltonians. Using mutual information to define a distance measure on the graph, we employ classical multidimensional scaling to extract the best-fit spatial dimensionality of the emergent geometry. We then show that entanglement perturbations on such emergent geometries naturally give rise to local modifications of spatial curvature which obey a (spatial) analog of Einstein’s equation. The Hilbert space corresponding to a region of flat space is finite-dimensional and scales as the volume, though the entropy (and the maximum change thereof) scales like the area of the boundary. Aversion of the ER 1⁄4 EPR conjecture is recovered, in that perturbations that entangle distant parts of the emergent geometry generate a configuration that may be considered as a highly quantum wormhole. DOI: 10.1103/PhysRevD.95.024031",
"title": ""
},
{
"docid": "ce12e1d38a2757c621a50209db5ce008",
"text": "Schloss Reisensburg. Physica-Verlag, 1994. Summary Traditional tests of the accuracy of statistical software have been based on a few limited paradigms for ordinary least squares regression. Test suites based on these criteria served the statistical computing community well when software was limited to a few simple procedures. Recent developments in statistical computing require both more and less sophisticated measures, however. We need tests for a broader variety of procedures and ones which are more likely to reveal incompetent programming. This paper summarizes these issues.",
"title": ""
},
{
"docid": "db897ae99b6e8d2fc72e7d230f36b661",
"text": "All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.",
"title": ""
},
{
"docid": "418ebc0424128ec1a89d5e5292872124",
"text": "Apocyni Veneti Folium (AVF) is a kind of staple traditional Chinese medicine with vast clinical consumption because of its positive effects. However, due to the habitats and adulterants, its quality is uneven. To control the quality of this medicinal herb, in this study, the quality of AVF was evaluated based on simultaneous determination of multiple bioactive constituents combined with multivariate statistical analysis. A reliable method based on ultra-fast liquid chromatography tandem triple quadrupole mass spectrometry (UFLC-QTRAP-MS/MS) was developed for the simultaneous determination of a total of 43 constituents, including 15 flavonoids, 6 organic acids, 13 amino acids, and 9 nucleosides in 41 Luobumaye samples from different habitats and commercial herbs. Furthermore, according to the contents of these 43 constituents, principal component analysis (PCA) was employed to classify and distinguish between AVF and its adulterants, leaves of Poacynum hendersonii (PHF), and gray relational analysis (GRA) was performed to evaluate the quality of the samples. The proposed method was successfully applied to the comprehensive quality evaluation of AVF, and all results demonstrated that the quality of AVF was higher than the PHF. This study will provide comprehensive information necessary for the quality control of AVF.",
"title": ""
},
{
"docid": "21716f5ceb3ba023f64b9fd7a4794b6f",
"text": "In this profile article, we report what we consider to be a rich learning experience which intertwines pedagogy and research: a process of community-based action research which has initiated a transition towards the sustainability of the University of British Columbia’s (UBC) food system. We call this initiative the UBC Food System Project (UBCFSP). The UBCFSP is a jointly initiated project between the Faculty of Land and Food Systems and the UBC Sustainability Office, and includes nine UBC organizational partners and one collaborator. The project emerged out of the recognition that our global, national, regional, and local food systems are increasingly characterized as socially, ecologically, and economically insecure and unsustainable. As a result, these food systems are experiencing an array of vulnerabilities, particularly those that are demonstrated by profound disruptions in our ecosystem and in a worldwide epidemic of malnutrition. The overall objective of the project is to conduct a campus-wide UBC food system sustainability assessment, where barriers that hinder and opportunities to make transitions towards food system sustainability are being collaboratively identified and implemented. This article is part of a series intending to share the experiences gathered so far through the project. The purpose of this profile is to provide a brief overview of the UBC Food System Project, including the context and significance, both the pedagogical approach and research methods, and some accomplishments to date.",
"title": ""
},
{
"docid": "248b68dbd8470cda6f804b99c343e12f",
"text": "This paper introduces a new technique for mapping Deep Recurrent Neural Networks (RNN) efficiently onto GPUs. We show how it is possible to achieve substantially higher computational throughput at low mini-batch sizes than direct implementations of RNNs based on matrix multiplications. The key to our approach is the use of persistent computational kernels that exploit the GPU’s inverted memory hierarchy to reuse network weights over multiple timesteps. Our initial implementation sustains 2.8 TFLOP/s at a minibatch size of 4 on an NVIDIA TitanX GPU. This provides a 16x reduction in activation memory footprint, enables model training with 12x more parameters on the same hardware, allows us to strongly scale RNN training to 128 GPUs, and allows us to efficiently explore end-to-end speech recognition models with over 100 layers.",
"title": ""
},
{
"docid": "3839daa795aaf81d202141fa3249e28a",
"text": "The design and implementation of software for extracting information from GIS files to a format appropriate for use in a spatial modeling software environment is described. This has resulted in publicly available c/c++ language programs for extracting polygons as well as database information from ArcView shape files into the Matlab software environment. In addition, a set of publicly available mapping functions that employ a graphical user interface (GUI) within Matlab are described. Particular attention is given to the interplay between spatial econometric/statistical modeling and the use of GIS information as well as mapping functions. In a recent survey of the interplay between GIS and regional modeling, Goodchild and Haining (2003) indicate the need for a convergence of these two dimensions of spatial modeling in regional science. Many of the design considerations discussed here would also apply to implementing similar functionality in other software environments for spatial statistical modeling such as R/Splus or Gauss. Toolboxes are the name given by the MathWorks to related sets of Matlab functions aimed at solving a particular class of problems. Toolboxes of functions useful in signal processing, optimization, statistics, finance and a host of other areas are available from the MathWorks as add-ons to the standard Matlab software distribution. We label the set of functions described here for extracting GIS file information as well as the GUI mapping functions the Arc Mat Toolbox.",
"title": ""
},
{
"docid": "c45faa60b1587fa5395163d4b365bc17",
"text": "Automatic facial expression analysis has received great attention in different applications over the last two decades. Facial Action Coding System (FACS), which describes all possible facial expressions based on a set of facial muscle movements called Action Unit (AU), has been used extensively to model and analyze facial expressions. FACS describes methods for coding the intensity of AUs, and AU intensity measurement is important in some studies in behavioral science and developmental psychology. However, in majority of the existing studies in the area of facial expression recognition, the focus has been on basic expression recognition or facial action unit detection. There are very few investigations on measuring the intensity of spontaneous facial actions. In addition, the few studies on AU intensity recognition usually try to measure the intensity of facial actions statically and individually, ignoring the dependencies among multilevel AU intensities as well as the temporal information. However, these spatiotemporal interactions among facial actions are crucial for understanding and analyzing spontaneous facial expressions, since these coherent, coordinated, and synchronized interactions are that produce a meaningful facial display. In this paper, we propose a framework based on Dynamic Bayesian Network (DBN) to systematically model the dynamic and semantic relationships among multilevel AU intensities. Given the extracted image observations, the AU intensity recognition is accomplished through probabilistic inference by systematically integrating the image observations with the proposed DBN model. Experiments on Denver Intensity of Spontaneous Facial Action (DISFA) database demonstrate the superiority of our method over single image-driven methods in AU intensity measurement. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "09d7bb1b4b976e6d398f20dc34fc7678",
"text": "A compact wideband quarter-wave transformer using microstrip lines is presented. The design relies on replacing a uniform microstrip line with a multi-stage equivalent circuit. The equivalent circuit is a cascade of either T or π networks. Design equations for both types of equivalent circuits have been derived. A quarter-wave transformer operating at 1 GHz is implemented. Simulation results indicate a −15 dB impedance bandwidth exceeding 64% for a 3-stage network with less than 0.25 dB of attenuation within the bandwidth. Both types of equivalent circuits provide more than 40% compaction with proper selection of components. Measured results for the fabricated unit deviate within acceptable limits. The designed quarter-wave transformer may be used to replace 90° transmission lines in various passive microwave components.",
"title": ""
},
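The abstract above replaces a uniform quarter-wave line with cascaded T or π equivalents. Assuming the standard single-frequency equivalence between a lossless line section and a π network (series L = Z0·sin θ / ω, shunt C = tan(θ/2) / (ω·Z0) on each side), the element values can be sketched as below; the 70.7 Ω impedance and 1 GHz frequency are example numbers, and the paper's actual multi-stage design equations are not reproduced.

```python
import math

def pi_equivalent(z0: float, theta_deg: float, f_hz: float):
    """Lumped pi network equal (at frequency f) to a lossless line of impedance Z0
    and electrical length theta.

    Series inductor:  L = Z0 * sin(theta) / omega
    Shunt capacitors: C = tan(theta / 2) / (omega * Z0)   (one on each side)
    """
    omega = 2 * math.pi * f_hz
    theta = math.radians(theta_deg)
    L = z0 * math.sin(theta) / omega
    C = math.tan(theta / 2) / (omega * z0)
    return L, C


if __name__ == "__main__":
    # Example numbers: a 70.7-ohm quarter-wave section (matching 50 to 100 ohms) at 1 GHz.
    L, C = pi_equivalent(z0=70.7, theta_deg=90.0, f_hz=1e9)
    print(f"L = {L * 1e9:.2f} nH series, C = {C * 1e12:.2f} pF per shunt arm")
```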
{
"docid": "1a095e16a26837e65a1c6692190b34c6",
"text": "Increasing documentation on the size and appearance of muscles in the lumbar spine of low back pain (LBP) patients is available in the literature. However, a comparative study between unoperated chronic low back pain (CLBP) patients and matched (age, gender, physical activity, height and weight) healthy controls with regard to muscle cross-sectional area (CSA) and the amount of fat deposits at different levels has never been undertaken. Moreover, since a recent focus in the physiotherapy management of patients with LBP has been the specific training of the stabilizing muscles, there is a need for quantifying and qualifying the multifidus. A comparative study between unoperated CLBP patients and matched control subjects was conducted. Twenty-three healthy volunteers and 32 patients were studied. The muscle and fat CSAs were derived from standard computed tomography (CT) images at three different levels, using computerized image analysis techniques. The muscles studied were: the total paraspinal muscle mass, the isolated multifidus and the psoas. The results showed that only the CSA of the multifidus and only at the lowest level (lower end-plate of L4) was found to be statistically smaller in LBP patients. As regards amount of fat, in none of the three studied muscles was a significant difference found between the two groups. An aetiological relationship between atrophy of the multifidus and the occurrence of LBP can not be ruled out as a possible explanation. Alternatively, atrophy may be the consequence of LBP: after the onset of pain and possible long-loop inhibition of the multifidus a combination of reflex inhibition and substitution patterns of the trunk muscles may work together and could cause a selective atrophy of the multifidus. Since this muscle is considered important for lumbar segmental stability, the phenomenon of atrophy may be a reason for the high recurrence rate of LBP.",
"title": ""
}
] |
scidocsrr
|
e59acf0fe7799a52b57cd0bc1e72b31b
|
Complex Word Identification: Convolutional Neural Network vs. Feature Engineering
|
[
{
"docid": "a027c9dd3b4522cdf09a2238bfa4c37e",
"text": "Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.",
"title": ""
},
{
"docid": "b206a5f5459924381ef6c46f692c7052",
"text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.",
"title": ""
}
] |
[
{
"docid": "531ac7d6500373005bae464c49715288",
"text": "We have used acceleration sensors to monitor the heart motion during surgery. A three-axis accelerometer was made from two commercially available two-axis sensors, and was used to measure the heart motion in anesthetized pigs. The heart moves due to both respiration and heart beating. The heart beating was isolated from respiration by high-pass filtering at 1.0 Hz, and heart wall velocity and position were calculated by numerically integrating the filtered acceleration traces. The resulting curves reproduced the heart motion in great detail, noise was hardly visible. Events that occurred during the measurements, e.g. arrhythmias and fibrillation, were recognized in the curves, and confirmed by comparison with synchronously recorded ECG data. We conclude that acceleration sensors are able to measure heart motion with good resolution, and that such measurements can reveal patterns that may be an indication of heart circulation failure.",
"title": ""
},
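The processing chain described above (high-pass filtering the acceleration at 1.0 Hz to separate beating from respiration, then numerically integrating to velocity and position) is straightforward to prototype. Below is a minimal SciPy sketch on synthetic data; the sampling rate, signal model, and filter order are assumptions rather than values from the study, and cumulative_trapezoid requires SciPy ≥ 1.6 (older versions call it cumtrapz).

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

fs = 250.0                                    # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
beat = 0.5 * np.sin(2 * np.pi * 1.5 * t)      # "heartbeat" component, 90 beats/min
breath = 2.0 * np.sin(2 * np.pi * 0.25 * t)   # respiration component, 15 breaths/min
accel = beat + breath + 0.02 * np.random.default_rng(0).normal(size=t.size)

# Isolate the beating component with a zero-phase 1.0 Hz high-pass filter.
b, a = butter(4, 1.0, btype="highpass", fs=fs)
beat_accel = filtfilt(b, a, accel)

# Numerically integrate acceleration -> velocity -> position.
velocity = cumulative_trapezoid(beat_accel, t, initial=0.0)
position = cumulative_trapezoid(velocity, t, initial=0.0)
print("peak-to-peak wall excursion:", position.max() - position.min())
```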
{
"docid": "1e2cd0ed3cf705de8c74452720b9e19a",
"text": "1389-1286/$ see front matter 2012 Elsevier B.V http://dx.doi.org/10.1016/j.comnet.2012.06.008 ⇑ Corresponding author. E-mail addresses: [email protected] (C. Huan (C.-T. Lea), [email protected] (A.K.-S. Wong). It is well-known that the Carrier Sense Multiple Access with Collision Avoidance (CSMA/ CA)-based wireless networks suffer seriously from the hidden terminal problem and the exposed terminal problem. So far, no satisfactory solutions that can resolve both problems simultaneously have been found. In this paper, we present a joint solution to the two problems. Our approach avoids the drawback of lessening one problem but aggregating the other. It is compatible with the IEEE 802.11 MAC and requires no protocol change. Analysis and simulations show that the proposed scheme can significantly reduce the hidden and exposed terminal problems. Not only it can significantly improve the throughput of the network and the fairness among different flows, it can also provide a much more stable link layer. In simulated scenarios under heavy traffic conditions, compared to the conventional IEEE 802.11 MAC, the new method can achieve up to 1.8 times gain in network throughput for single-hop flows and up to 2.6 times gain for multihop flows. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "167d4c17b456223e9f417ae972318415",
"text": "The current centrally controlled power grid is undergoing a drastic change in order to deal with increasingly diversified challenges, including environment and infrastructure. The next-generation power grid, known as the smart grid, will be realized with proactive usage of state-of-the-art technologies in the areas of sensing, communications, control, computing, and information technology. In a smart power grid, an efficient and reliable communication architecture plays a crucial role in improving efficiency, sustainability, and stability. In this article, we first identify the fundamental challenges in the data communications for the smart grid and introduce the ongoing standardization effort in the industry. Then we present an unprecedented cognitive radio based communications architecture for the smart grid, which is mainly motivated by the explosive data volume, diverse data traffic, and need for QoS support. The proposed architecture is decomposed into three subareas: cognitive home area network, cognitive neighborhood area network, and cognitive wide area network, depending on the service ranges and potential applications. Finally, we focus on dynamic spectrum access and sharing in each subarea. We also identify a very unique challenge in the smart grid, the necessity of joint resource management in the decomposed NAN and WAN geographic subareas in order to achieve network scale performance optimization. Illustrative results indicate that the joint NAN/WAN design is able to intelligently allocate spectra to support the communication requirements in the smart grid.",
"title": ""
},
{
"docid": "e09594fce400df1297c5c32afac85fee",
"text": "Results: Of the 74 ears tested, 45 (61%) had effusion on direct inspection. The effusion was purulent in 8 ears (18%), serous in 9 ears (20%), and mucoid in 28 ears (62%). Ultrasound identified the presence or absence of effusion in 71 cases (96%) (P=.04). Ultrasound distinguished between serous and mucoid effusion with 100% accuracy (P=.04). The probe did not distinguish between mucoid and purulent effusion.",
"title": ""
},
{
"docid": "8e8f3d504bdeb2b6c4b86999df3ece67",
"text": "Software released in binary form frequently uses third-party packages without respecting their licensing terms. For instance, many consumer devices have firmware containing the Linux kernel, without the suppliers following the requirements of the GNU General Public License. Such license violations are often accidental, e.g., when vendors receive binary code from their suppliers with no indication of its provenance. To help find such violations, we have developed the Binary Analysis Tool (BAT), a system for code clone detection in binaries. Given a binary, such as a firmware image, it attempts to detect cloning of code from repositories of packages in source and binary form. We evaluate and compare the effectiveness of three of BAT's clone detection techniques: scanning for string literals, detecting similarity through data compression, and detecting similarity by computing binary deltas.",
"title": ""
},
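One of the three BAT techniques compared above is "detecting similarity through data compression"; the usual formalization of that idea is the normalized compression distance (NCD). The zlib-based sketch below is a generic illustration of NCD on toy byte strings, not BAT's actual implementation, and the toy "binaries" are made up for the demo.

```python
import os
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for near-identical inputs, near 1 for unrelated.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),  C(.) = compressed length.
    """
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)


if __name__ == "__main__":
    # Toy stand-ins for binaries: one firmware image embeds a chunk "cloned" from the library.
    library = os.urandom(10000)
    firmware_with_clone = os.urandom(2000) + library[:5000] + os.urandom(2000)
    unrelated = os.urandom(9000)
    print("library vs firmware with clone:", round(ncd(library, firmware_with_clone), 3))
    print("library vs unrelated image    :", round(ncd(library, unrelated), 3))
```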
{
"docid": "7b043c7823ce7920848c06b3d339ba6c",
"text": "Compression artifacts arise in images whenever a lossy compression algorithm is applied. These artifacts eliminate details present in the original image, or add noise and small structures; because of these effects they make images less pleasant for the human eye, and may also lead to decreased performance of computer vision algorithms such as object detectors. To eliminate such artifacts, when decompressing an image, it is required to recover the original image from a disturbed version. To this end, we present a feed-forward fully convolutional residual network model trained using a generative adversarial framework. To provide a baseline, we show that our model can be also trained optimizing the Structural Similarity (SSIM), which is a better loss with respect to the simpler Mean Squared Error (MSE). Our GAN is able to produce images with more photorealistic details than MSE or SSIM based networks. Moreover we show that our approach can be used as a pre-processing step for object detection in case images are degraded by compression to a point that state-of-the art detectors fail. In this task, our GAN method obtains better performance than MSE or SSIM trained networks.",
"title": ""
},
{
"docid": "6888b5311d7246c5eb18142d2746ec68",
"text": "Forms of well-being vary in their activation as well as valence, differing in respect of energy-related arousal in addition to whether they are negative or positive. Those differences suggest the need to refine traditional assumptions that poor person-job fit causes lower well-being. More activated forms of well-being were proposed to be associated with poorer, rather than better, want-actual fit, since greater motivation raises wanted levels of job features and may thus reduce fit with actual levels. As predicted, activated well-being (illustrated by job engagement) and more quiescent well-being (here, job satisfaction) were found to be associated with poor fit in opposite directions--positively and negatively, respectively. Theories and organizational practices need to accommodate the partly contrasting implications of different forms of well-being.",
"title": ""
},
{
"docid": "ee6906550c2f9d294e411688bae5db71",
"text": "This position paper formalises an abstract model for complex negotiation dialogue. This model is to be used for the benchmark of optimisation algorithms ranging from Reinforcement Learning to Stochastic Games, through Transfer Learning, One-Shot Learning or others.",
"title": ""
},
{
"docid": "bcda77a0de7423a2a4331ff87ce9e969",
"text": "Because of the increasingly competitive nature of the computer manufacturing industry, Compaq Computer Corporation has made some trend-setting changes in the way it does business. One of these changes is the extension of Compaq's call-logging sy ste problem-resolution component that assists customer support personnel in determining the resolution to a customer's questions and problems. Recently, Compaq extended its customer service to provide not only dealer support but also direct end user support; it is also accepting ownership of any Compaq customer's problems in a Banyan, Mi-crosoft, Novell, or SCO UNIX operating environment. One of the tools that makes this feat possible is SMART (support management automated reasoning technology). SMART is part of a Compaq strategy to increase the effectiveness of the customer support staff and reduce overall cost to the organization by retaining problem-solving knowledge and making it available to the entire support staff at the point it is needed.",
"title": ""
},
{
"docid": "1b3afef7a857d436635a3de056559e1f",
"text": "This paper presents Haggle, an architecture for mobile devices that enables seamless network connectivity and application functionality in dynamic mobile environments. Current applications must contain significant network binding and protocol logic, which makes them inflexible to the dynamic networking environments facing mobile devices. Haggle allows separating application logic from transport bindings so that applications can be communication agnostic. Internally, the Haggle framework provides a mechanism for late-binding interfaces, names, protocols, and resources for network communication. This separation allows applications to easily utilize multiple communication modes and methods across infrastructure and infrastructure-less environments. We provide a prototype implementation of the Haggle framework and evaluate it by demonstrating support for two existing legacy applications, email and web browsing. Haggle makes it possible for these applications to seamlessly utilize mobile networking opportunities both with and without infrastructure.",
"title": ""
},
{
"docid": "b50918f904d08f678cb153b16b052344",
"text": "According to Earnshaw's theorem, the ratio between axial and radial stiffness is always -2 for pure permanent magnetic configurations with rotational symmetry. Using highly permeable material increases the force and stiffness of permanent magnetic bearings. However, the stiffness in the unstable direction increases more than the stiffness in the stable direction. This paper presents an analytical approach to calculating the axial force and the axial and radial stiffnesses of attractive passive magnetic bearings (PMBs) with back iron. The investigations are based on the method of image charges and show in which magnet geometries lead to reasonable axial to radial stiffness ratios. Furthermore, the magnet dimensions achieving maximum force and stiffness per magnet volume are outlined. Finally, the calculation method was applied to the PMB of a magnetically levitated fan, and the analytical results were compared with a finite element analysis.",
"title": ""
},
{
"docid": "36162ebd7d7c5418e4c78bad5bbba8ab",
"text": "In this paper we discuss the design of human-robot interaction focussing especially on social robot communication and multimodal information presentation. As a starting point we use the WikiTalk application, an open-domain conversational system which has been previously developed using a robotics simulator. We describe how it can be implemented on the Nao robot platform, enabling Nao to make informative spoken contributions on a wide range of topics during conversation. Spoken interaction is further combined with gesturing in order to support Nao’s presentation by natural multimodal capabilities, and to enhance and explore natural communication between human users and robots.",
"title": ""
},
{
"docid": "336d83fd5628d9325fed0d88c56bc617",
"text": "Influence of fruit development and ripening on the changes in physico-chemical properties, antiradical activity and the accumulation of polyphenolic compounds were investigated in Maoluang fruits. Total phenolics content (TP) was assayed according to the Folin-Ciocalteu method, and accounted for 19.60-8.66 mg GAE/g f.w. The TP gradually decreased from the immature to the over ripe stages. However, the total anthocyanin content (TA) showed the highest content at the over ripe stage, with an average value of 141.94 mg/100 g f.w. The antiradical activity (AA) of methanolic extracts from Maoluang fruits during development and ripening were determined with DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging. The highest AA was observed at the immature stage accompanied by the highest content of gallic acid and TP. Polyphenols were quantified by HPLC. The level of procyanidin B2, procyanidin B1, (+)-catechin, (–)-epicatechin, rutin and tran-resveratrol as the main polyphenol compounds, increased during fruit development and ripening. Other phenolic acids such as gallic, caffeic, and ellagic acids significantly decreased (p < 0.05) during fruit development and ripening. At over ripe stage, Maoluang possess the highest antioxidants. Thus, the over ripe stage would be the appropriate time to harvest when taking nutrition into consideration. This existing published information provides a helpful daily diet guide and useful guidance for industrial utilization of Maoluang fruits.",
"title": ""
},
{
"docid": "a8881a8635cf25c8b6fd5c67dc3488d5",
"text": "The paper presents a novel approach of measuring blood pressure using Photoplethysmography (PPG). It is a non-invasive, cuffless and painless technique that deploys infrared light to detect small variation in blood volume in the tissues with each cardiac cycle. Few specific features (viz. systolic upstroke time (ST), diastolic time (DT) and the time delay between the systolic and diastolic peak (T1)) of the waveform obtained via this technique were examined and correlated with the arterial blood pressure in 22 subjects of two age groups i) 18-25 years ii) 26-50 years. It was observed that there is a good correlation of blood pressure (both systolic blood pressure (SBP) and diastolic blood pressure (DBP)) with diastolic time and also with the time delay between systolic and diastolic peak.",
"title": ""
},
{
"docid": "6889f45db249d7054550ecb8df5ee822",
"text": "In this work a dynamic model of a planetary gear transmission is developed to study the sensitivity of the natural frequencies and vibration modes to system parameters in perturbed situation. Parameters under consideration include component masses ,moment of inertia , mesh and support stiff nesses .The model admits three planar degree of freedom for planets ,sun, ring, and carrier. Vibration modes are classified into translational, rotational and planet modes .Well-defined modal properties of tuned (cyclically symmetric) planetary gears for linear ,time-invariant case are used to calculate eigensensitivities and express them in simple formulae .These formulae provide efficient mean to determine the sensitivity to stiffness ,mass and inertia parameters in perturbed situation.",
"title": ""
},
{
"docid": "c0a75bf3a2d594fb87deb7b9f58a8080",
"text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.",
"title": ""
},
{
"docid": "1f7fb5da093f0f0b69b1cc368cea0701",
"text": "This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on \"what\" and \"where\" channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.",
"title": ""
},
{
"docid": "50f09f5b2e579e878f041f136bafe07e",
"text": "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.",
"title": ""
},
{
"docid": "440f26cae0bc71c17856c6075ccd67d0",
"text": "The Agile project management methodology has been widely used in recent years as a means to counter the dangers of traditional, front-end planning methods that often lead to downstream development pathologies. Although numerous authors have pointed to the advantages of Agile, with its emphasis on individuals and interactions over processes, customer collaboration over contracts and formal negotiations, and responsiveness over rigid planning, there are, to date, very few large-scale, empirical studies to support the contention that Agile methods can improve the likelihood of project success. Developed originally for software development, it is still predominantly an IT phenomenon. But due to its success it has now spread to non-IT projects. Using a data sample of 1002 projects across multiple industries and countries, we tested the effect of Agile use in organizations on two dimensions of project success: efficiency and overall stakeholder satisfaction against organizational goals. We further examined the moderating effects of variables such as perceived quality of the vision/goals of the project, project complexity, and project team experience. Our findings suggest that Agile methods do have a positive impact on both dimensions of project success. Further, the quality of the vision/goals is a marginally significant moderator of this effect. Implications of these findings and directions for future research are discussed. © 2015 Elsevier Ltd. APM and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "c0bd652e0a7d36f7901627782a5534e6",
"text": "The concept of community of practice was not born in the systems theory tradition. It has its roots in attempts to develop accounts of the social nature of human learning inspired by anthropology and social theory (Lave, 1988; Bourdieu, 1977; Giddens, 1984; Foucault, 1980; Vygostsky, 1978). But the concept of community of practice is well aligned with the perspective of the systems tradition. A community of practice itself can be viewed as a simple social system. And a complex social system can be viewed as constituted by interrelated communities of practice. In this essay I first explore the systemic nature of the concept at these two levels. Then I use this foundation to look at the applications of the concept, some of its main critiques, and its potential for developing a social discipline of learning.",
"title": ""
}
] |
scidocsrr
|
1a1e194489784b359a18e43385366a71
|
The effects of recommendations’ presentation on persuasion and satisfaction in a movie recommender system
|
[
{
"docid": "1e4f13016c846039f7bbed47810b8b3d",
"text": "This paper characterizes general properties of useful, or Effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.",
"title": ""
},
{
"docid": "0eb75b719f523ca4e9be7fca04892249",
"text": "In this study 2,684 people evaluated the credibility of two live Web sites on a similar topic (such as health sites). We gathered the comments people wrote about each siteís credibility and analyzed the comments to find out what features of a Web site get noticed when people evaluate credibility. We found that the ìdesign lookî of the site was mentioned most frequently, being present in 46.1% of the comments. Next most common were comments about information structure and information focus. In this paper we share sample participant comments in the top 18 areas that people noticed when evaluating Web site credibility. We discuss reasons for the prominence of design look, point out how future studies can build on what we have learned in this new line of research, and outline six design implications for human-computer interaction professionals.",
"title": ""
}
] |
[
{
"docid": "f7554a083f17473e51fa03c111d3c03a",
"text": "In order to improve the security and expand the work space of minimally invasive celiac surgical robot, a new remote center-of motion (RCM) mechanism is designed. The remote center-of motion (RCM) mechanism can be used to hold surgical instrument instead of doctors, and then help doctors complete surgical operation (such as holding、 stapling or resecting the patient's diseased tissue and organs). In this paper, the function of the remote center-of motion (RCM) mechanism of minimally invasive surgical ( MIS) robot in the process of surgery are analyzed in detail. the design requirements of the remote center-of motion (RCM) are described. And then, a detailed analysis of the kinematic principle and motion decouplability of the circular tracking arc and double parallelogram structure are gave. On these foundations, a novel parallel robotic remote center-of motion (RCM) mechanism which owing a circular track structure and some series bar is proposed. Through analyzing the degree of freedom of this institution, the kinematics principle and decoupling characteristics, we find that the new institution has a good performance on “Fixed point motion” and the three rotational degrees of freedom of this mechanism can be fully decoupled. with respect to the remote center-of motion (RCM) mechanism which was made by the circular tracking arc and double parallelogram structure, the novel remote center-of motion (RCM) mechanism can realize the robot’s lightweight, improve the structural rigidity, reduce motion inertia, and also expand the working space. As a result, the new remote center-of motion (RCM) mechanism would be a good choice for a minimally invasive surgical robot.",
"title": ""
},
{
"docid": "1de1324d0f10a0e58c2adccdd8cb2c21",
"text": "In keyword search advertising, many advertisers operate on a limited budget. Yet how limited budgets affect keyword search advertising has not been extensively studied. This paper offers an analysis of the generalized second-price auction with budget constraints. We find that the budget constraint may induce advertisers to raise their bids to the highest possible amount for two different motivations: to accelerate the elimination of the budget-constrained competitor as well as to reduce their own advertising cost. Thus, in contrast to the current literature, our analysis shows that both budget-constrained and unconstrained advertisers could bid more than their own valuation. We further extend the model to consider dynamic bidding and budget-setting decisions.",
"title": ""
},
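For readers unfamiliar with the auction format analyzed above, the sketch below implements the basic generalized second-price allocation and payment rule plus a naive per-round budget filter; the bids, click-through rates, budgets, and the eligibility rule itself are illustrative simplifications, and none of the paper's equilibrium analysis is encoded.

```python
def gsp_round(bids, ctrs, budgets):
    """One round of a generalized second-price auction with a naive budget filter.

    bids    : {advertiser: bid per click}
    ctrs    : click-through rates of the slots, best slot first
    budgets : {advertiser: remaining budget}; advertisers who cannot cover their own
              bid in the top slot are dropped before allocation (a simplification).
    Returns a list of (slot, advertiser, price_per_click).
    """
    eligible = [a for a in bids if budgets[a] >= bids[a] * ctrs[0]]
    ranked = sorted(eligible, key=lambda a: bids[a], reverse=True)
    allocation = []
    for slot, adv in enumerate(ranked[: len(ctrs)]):
        # GSP rule: pay the next-highest bid (or 0 if nobody is ranked below you).
        price = bids[ranked[slot + 1]] if slot + 1 < len(ranked) else 0.0
        allocation.append((slot, adv, price))
        budgets[adv] -= price * ctrs[slot]          # expected spend for this round
    return allocation


if __name__ == "__main__":
    bids = {"A": 4.0, "B": 3.0, "C": 1.5}
    budgets = {"A": 100.0, "B": 2.0, "C": 50.0}
    print(gsp_round(bids, ctrs=[0.10, 0.05], budgets=budgets))
```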
{
"docid": "50ec9d25a24e67481a4afc6a9519b83c",
"text": "Weakly supervised image segmentation is an important yet challenging task in image processing and pattern recognition fields. It is defined as: in the training stage, semantic labels are only at the image-level, without regard to their specific object/scene location within the image. Given a test image, the goal is to predict the semantics of every pixel/superpixel. In this paper, we propose a new weakly supervised image segmentation model, focusing on learning the semantic associations between superpixel sets (graphlets in this paper). In particular, we first extract graphlets from each image, where a graphlet is a small-sized graph measures the potential of multiple spatially neighboring superpixels (i.e., the probability of these superpixels sharing a common semantic label, such as the sky or the sea). To compare different-sized graphlets and to incorporate image-level labels, a manifold embedding algorithm is designed to transform all graphlets into equal-length feature vectors. Finally, we present a hierarchical Bayesian network to capture the semantic associations between postembedding graphlets, based on which the semantics of each superpixel is inferred accordingly. Experimental results demonstrate that: 1) our approach performs competitively compared with the state-of-the-art approaches on three public data sets and 2) considerable performance enhancement is achieved when using our approach on segmentation-based photo cropping and image categorization.",
"title": ""
},
{
"docid": "3f8f09645f4a5a922b8a82e3a54c613d",
"text": "Computers are ubiquitous and have been shown to be contaminated with potentially pathogenic bacteria in some communities. There is no economical way to test all the keyboards and mouse out there, but there are common-sense ways to prevent bacterial contamination or eliminate it if it exists. In this study, swabs specimens were collected from surfaces of 250 computer keyboards and mouse and plated on different bacteriological media. Organisms growing on the media were purified and identified using microbiological standards. It was found that all the tested computer keyboards and mouse devices, were positive for microbial contamination. The percentages of isolated bacteria (Staphylococcus spp., Escherichia spp., Pseudomonas spp. and Bacillus spp.) were 43.3, 40.9, 30.7, 34.1, 18.3, 18.2, 7.7 and 6.8% for computer keyboards and mouse respectively. The isolated bacteria were tested against the 6 different disinfectants (Dettol, Isol, Izal, JIK, Purit and Septol ® ). Antibacterial effects of the disinfectants were also concentration dependent. The agar well diffusion technique for determining Minimum Inhibitory Concentration (MIC) was employed. The Killing rate (K) and Decimal Reduction Time (DRT) of the disinfectants on the organism were also determined. The overall result of this study showed that Dettol ® , followed by JIK ® was highly effective against all the bacterial isolates tested while Septol and Izal ® were least effective. Isol and Purit ® showed moderate antibacterial effects. Keyboards and mouse should be disinfected daily. However, it is recommended that heightened surveillance of the microbial examination of computer keyboards should be undertaken at predetermined intervals.",
"title": ""
},
{
"docid": "0e218dd5654ae9125d40bdd5c0a326d6",
"text": "Dynamic data race detection incurs heavy runtime overheads. Recently, many sampling techniques have been proposed to detect data races. However, some sampling techniques (e.g., Pacer) are based on traditional happens-before relation and incur a large basic overhead. Others utilize hardware to reduce their sampling overhead (e.g., DataCollider) and they, however, detect a race only when the race really occurs by delaying program executions. In this paper, we study the limitations of existing techniques and propose a new data race definition, named as Clock Races, for low overhead sampling purpose. The innovation of clock races is that the detection of them does not rely on concrete locks and also avoids heavy basic overhead from tracking happens-before relation. We further propose CRSampler (Clock Race Sampler) to detect clock races via hardware based sampling without directly delaying program executions, to further reduce runtime overhead. We evaluated CRSampler on Dacapo benchmarks. The results show that CRSampler incurred less than 5% overhead on average at 1% sampling rate. Whereas, Pacer and DataCollider incurred larger than 25% and 96% overhead, respectively. Besides, at the same sampling rate, CRSampler detected significantly more data races than that by Pacer and DataCollider.",
"title": ""
},
{
"docid": "529a329c6d0cd82b7565426359bd04e0",
"text": "Despite the significant advancement in wireless technologies over the years, IEEE 802.11 still emerges as the de-facto standard to achieve the required short to medium range wireless device connectivity in anywhere from offices to homes. With it being ranked the highest among all deployed wireless technologies in terms of market adoption, vulnerability exploitation and attacks targeting it have also been commonly observed. IEEE 802.11 security has thus become a key concern over the years. In this paper, we analysed the threats and attacks targeting the IEEE 802.11 network and also identified the challenges of achieving accurate threat and attack classification, especially in situations where the attacks are novel and have never been encountered by the detection and classification system before. We then proposed a solution based on anomaly detection and classification using a deep learning approach. The deep learning approach self-learns the features necessary to detect network anomalies and is able to perform attack classification accurately. In our experiments, we considered the classification as a multi-class problem (that is, legitimate traffic, flooding type attacks, injection type attacks and impersonation type attacks), and achieved an overall accuracy of 98.6688% in classifying the attacks through the proposed solution.",
"title": ""
},
{
"docid": "7b7c418cefcd571b03e5c0a002a5e923",
"text": "A loop antenna having a gap has been investigated in the presence of a ground plane. The antenna configuration is optimized for the CP radiation, using the method of moments. It is found that, as the loop height above the ground plane is reduced, the optimized gap width approaches zero. Further antenna height reduction is found to be possible for an antenna whose wire radius is increased. On the basis of these results, we design an open-loop array antenna using a microstrip comb line as the feed network. It is demonstrated that an array antenna composed of eight open loop elements can radiate a CP wave with an axial ratio of 0.1 dB. The bandwidth for a 3-dB axial-ratio criterion is 4%, where the gain is almost constant at 15 dBi.",
"title": ""
},
{
"docid": "7b9194d0ad3832e9cff9387daeb7d560",
"text": "An emphasis has been placed on the use of ontologies for representing application domain knowledge. Determining a degree or measure of semantic similarity, semantic distance, or semantic relatedness between concepts from different systems or domains, is becoming an increasingly important task. This paper presents a brief overview of such measures between concepts within ontological representations and provides several examples of such measures found in the research literature. These measures are then examined within the framework of fuzzy set similarity measures. The use of a semantic similarity measure between elements that are part of a domain for which an ontological structure exists is explored in order to extend standard fuzzy set compatibility measures.",
"title": ""
},
{
"docid": "3f988178611f2d6f13d6fd72febf1542",
"text": "In today’s information-based society, there is abundant knowledge out there carried in the form of natural language texts (e.g., news articles, social media posts, scientific publications), which spans across various domains (e.g., corporate documents, advertisements, legal acts, medical reports), and grows at an astonishing rate. How to turn such massive and unstructured text data into structured, actionable knowledge for computational machines, and furthermore, how to teach machines learn to reason and complete the extracted knowledge is a grand challenge to the research community. Traditional IE systems assume abundant human annotations for training high quality machine learning models, which is impractical when trying to deploy IE systems to a broad range of domains, settings and languages. In the first part of the tutorial, we introduce how to extract structured facts (i.e., entities and their relations of different types) from text corpora to construct knowledge bases, with a focus on methods that are minimally-supervised and domain-independent for timely knowledge base construction across various application domains. In the second part, we introduce how to leverage other knowledge, such as the distributional statistics of characters and words, the annotations for other tasks and other domains, and the linguistics and problem structures, to combat the problem of inadequate supervision, and conduct low-resource information extraction. In the third part, we describe recent advances in knowledge base reasoning. We start with the gentle introduction to the literature, focusing on pathbased and embedding based methods. We then describe DeepPath, a recent attempt of using deep reinforcement learning to combine the best of both worlds for knowledge base reasoning.",
"title": ""
},
{
"docid": "d0c75242aad1230e168122930b078671",
"text": "Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focusses on possibly the simplest application of graph-cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cuts methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph cuts based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications.",
"title": ""
},
{
"docid": "d8100fa0292a1f54fbfd3f9a0ccc4a87",
"text": "Article history: Received 30 August 2016 Received in revised form 2 February 2017 Accepted 20 February 2017 Available online 24 February 2017 Communicated by D. Goss",
"title": ""
},
{
"docid": "0a80057b2c43648e668809e185a68fe6",
"text": "A seminar that surveys state-of-the-art microprocessors offers an excellent forum for students to see how computer architecture techniques are employed in practice and for them to gain a detailed knowledge of the state of the art in microprocessor design. Princeton and the University of Virginia have developed such a seminar, organized around student presentations and a substantial research project. The course can accommodate a range of students, from advanced undergraduates to senior graduate students. The course can also be easily adapted to a survey of embedded processors. This paper describes the version taught at the University of Virginia and lessons learned from the experience.",
"title": ""
},
{
"docid": "d50154e67b1ef7d0af5cccc6560306eb",
"text": "N-methyl-D-aspartate receptors (NMDARs) are present at many excitatory glutamate synapses in the central nervous system and display unique properties that depend on their subunit composition. Biophysical, pharmacological and molecular methods have been used to determine the key features conferred by the various NMDAR subunits, and have helped to establish which NMDAR subtypes are present at particular synapses. Recent studies are beginning to address the functional significance of NMDAR diversity under normal and pathological conditions.",
"title": ""
},
{
"docid": "33c6a2c96fcb8236c9ce40b2f1770d04",
"text": "Intelligent Personal Assistant (IPA) agents are software agents which assist users in performing specific tasks. They should be able to communicate, cooperate, discuss, and guide people. This paper presents a proposal to add Semantic Web Knowledge to IPA agents. In our solution, the IPA agent has a modular knowledge organization composed by four differentiated areas: (i) the rational area, which adds semantic web knowledge, (ii) the association area, which simplifies building appropriate responses, (iii) the commonsense area, which provides commonsense responses, and (iv) the behavioral area, which allows IPA agents to show empathy. Our main objective is to create more intelligent and more human alike IPA agents, enhancing the current abilities that these software agents provide.",
"title": ""
},
{
"docid": "3230fba68358a08ab9112887bdd73bb9",
"text": "The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.",
"title": ""
},
{
"docid": "2fcd7e151c658e29cacda5c4f5542142",
"text": "The connection between gut microbiota and energy homeostasis and inflammation and its role in the pathogenesis of obesity-related disorders are increasingly recognized. Animals models of obesity connect an altered microbiota composition to the development of obesity, insulin resistance, and diabetes in the host through several mechanisms: increased energy harvest from the diet, altered fatty acid metabolism and composition in adipose tissue and liver, modulation of gut peptide YY and glucagon-like peptide (GLP)-1 secretion, activation of the lipopolysaccharide toll-like receptor-4 axis, and modulation of intestinal barrier integrity by GLP-2. Instrumental for gut microbiota manipulation is the understanding of mechanisms regulating gut microbiota composition. Several factors shape the gut microflora during infancy: mode of delivery, type of infant feeding, hospitalization, and prematurity. Furthermore, the key importance of antibiotic use and dietary nutrient composition are increasingly recognized. The role of the Western diet in promoting an obesogenic gut microbiota is being confirmation in subjects. Following encouraging results in animals, several short-term randomized controlled trials showed the benefit of prebiotics and probiotics on insulin sensitivity, inflammatory markers, postprandial incretins, and glucose tolerance. Future research is needed to unravel the hormonal, immunomodulatory, and metabolic mechanisms underlying microbe-microbe and microbiota-host interactions and the specific genes that determine the health benefit derived from probiotics. While awaiting further randomized trials assessing long-term safety and benefits on clinical end points, a healthy lifestyle--including breast lactation, appropriate antibiotic use, and the avoidance of excessive dietary fat intake--may ensure a friendly gut microbiota and positively affect prevention and treatment of metabolic disorders.",
"title": ""
},
{
"docid": "40fd577cdff0e5c769127c91a3053fee",
"text": "Information Technology (IT) projects have a reputation of not delivering business requirements. Historical challenges like meeting cost, quality, and timeline targets remain despite the extensive experience most organizations have managing projects of all sizes. The profession continues to have high profile failures that make headlines, such as the recent healthcare.gov initiative. This research provides literary sources on agile methodology that can be used to help improve project processes and outcomes.",
"title": ""
},
{
"docid": "4028f1cd20127f3c6599e6073bb1974b",
"text": "This paper presents a power delivery monitor (PDM) peripheral integrated in a flip-chip packaged 28 nm system-on-chip (SoC) for mobile computing. The PDM is composed entirely of digital standard cells and consists of: 1) a fully integrated VCO-based digital sampling oscilloscope; 2) a synthetic current load; and 3) an event engine for triggering, analysis, and debug. Incorporated inside an SoC, it enables rapid, automated analysis of supply impedance, as well as monitoring supply voltage droop of multi-core CPUs running full software workloads and during scan-test operations. To demonstrate these capabilities, we describe a power integrity case study of a dual-core ARM Cortex-A57 cluster in a commercial 28 nm mobile SoC. Measurements are presented of power delivery network (PDN) electrical parameters, along with waveforms of the CPU cluster running test cases and benchmarks on bare metal and Linux OS. The effect of aggressive power management techniques, such as power gating on the dominant resonant frequency and peak impedance, is highlighted. Finally, we present measurements of supply voltage noise during various scan-test operations, an often-neglected aspect of SoC power integrity.",
"title": ""
},
{
"docid": "9ee1765f945c8164af6e09a836402e3e",
"text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.05.019 ⇑ Corresponding author at: Instituto Superior de E Portugal. E-mail address: [email protected] (A.J. Ferreira). Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, being able to act as pre-processors for computationally intensive methods to focus their attention on smaller subsets of promising features. The experimental results, with up to 10 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bb9f86e800e3f00bf7b34be85d846ff0",
"text": "This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.This paper presents a survey of the autopilot systems for small fixed-wing unmanned air vehicles (UAVs). The UAV flight control basics are introduced first. The radio control system and autopilot control system are then explained from both hardware and software viewpoints. Several typical commercial off-the-shelf autopilot packages are compared in detail. In addition, some research autopilot systems are introduced. Finally, conclusions are made with a summary of the current autopilot market and a remark on the future development.",
"title": ""
}
] |
scidocsrr
|
4ee370bafd36564e2f7b4b9f6f83e839
|
The potential of sustainable algal biofuel production using wastewater resources.
|
[
{
"docid": "a2f8cb66e02e87861a322ce50fef97af",
"text": "The conversion of biomass by gasification into a fuel suitable for use in a gas engine increases greatly the potential usefulness of biomass as a renewable resource. Gasification is a robust proven technology that can be operated either as a simple, low technology system based on a fixed-bed gasifier, or as a more sophisticated system using fluidized-bed technology. The properties of the biomass feedstock and its preparation are key design parameters when selecting the gasifier system. Electricity generation using a gas engine operating on gas produced by the gasification of biomass is applicable equally to both the developed world (as a means of reducing greenhouse gas emissions by replacing fossil fuel) and to the developing world (by providing electricity in rural areas derived from traditional biomass).",
"title": ""
},
{
"docid": "e3b3e4e75580f3dad0f2fb2b9e28fff4",
"text": "The present study introduced an integrated method for the production of biodiesel from microalgal oil. Heterotrophic growth of Chlorella protothecoides resulted in the accumulation of high lipid content (55%) in cells. Large amount of microalgal oil was efficiently extracted from these heterotrophic cells by using n-hexane. Biodiesel comparable to conventional diesel was obtained from heterotrophic microalgal oil by acidic transesterification. The best process combination was 100% catalyst quantity (based on oil weight) with 56:1 molar ratio of methanol to oil at temperature of 30 degrees C, which reduced product specific gravity from an initial value of 0.912 to a final value of 0.8637 in about 4h of reaction time. The results suggested that the new process, which combined bioengineering and transesterification, was a feasible and effective method for the production of high quality biodiesel from microalgal oil.",
"title": ""
}
] |
[
{
"docid": "7575e468e2ee37c9120efb5e73e4308a",
"text": "In this demo, we present Cleanix, a prototype system for cleaning relational Big Data. Cleanix takes data integrated from multiple data sources and cleans them on a shared-nothing machine cluster. The backend system is built on-top-of an extensible and flexible data-parallel substrate - the Hyracks framework. Cleanix supports various data cleaning tasks such as abnormal value detection and correction, incomplete data filling, de-duplication, and conflict resolution. We demonstrate that Cleanix is a practical tool that supports effective and efficient data cleaning at the large scale.",
"title": ""
},
{
"docid": "6c3be94fe73ef79d711ef5f8b9c789df",
"text": "• Belief update based on m last rewards • Gaussian belief model instead of Beta • Limited lookahead to h steps and a myopic function in the horizon. • Noisy rewards Motivation: Correct sequential decision-making is critical for life success, and optimal approaches require signi!cant computational look ahead. However, simple models seem to explain people’s behavior. Questions: (1) Why we seem so simple compared to a rational agent? (2) What is the built-in model that we use to sequentially choose between courses of actions?",
"title": ""
},
{
"docid": "eb56599f1c41563e7d8d9951f6dba061",
"text": "Tracking whole-body human pose in physical human-machine interactions is challenging because of highly dimensional human motions and lack of inexpensive, nonintrusive motion sensors in outdoor environment. In this paper, we present a computational scheme to estimate the human whole-body pose with application to bicycle riding using a small set of wearable sensors. The estimation scheme is built on the fusion of gyroscopes, accelerometers, force sensors, and physical rider-bicycle interaction constraints through an extended Kalman filter design. The use of physical rider-bicycle interaction constraints helps not only eliminate the integration drifts of inertial sensor measurements but also reduce the number of the needed wearable sensors for pose estimation. For each set of the upper and the lower limb, only one tri-axial gyroscope is needed to accurately obtain the 3-D pose information. The drift-free, reliable estimation performance is demonstrated through both indoor and outdoor riding experiments.",
"title": ""
},
{
"docid": "e93cda47089b4eeae1503bcc11275deb",
"text": "We study the problem of designing models for machine learning tasks defined on sets. In contrast to traditional approach of operating on fixed dimensional vectors, we consider objective functions defined on sets that are invariant to permutations. Such problems are widespread, ranging from estimation of population statistics [1], to anomaly detection in piezometer data of embankment dams [2], to cosmology [3, 4]. Our main theorem characterizes the permutation invariant functions and provides a family of functions to which any permutation invariant objective function must belong. This family of functions has a special structure which enables us to design a deep network architecture that can operate on sets and which can be deployed on a variety of scenarios including both unsupervised and supervised learning tasks. We also derive the necessary and sufficient conditions for permutation equivariance in deep models. We demonstrate the applicability of our method on population statistic estimation, point cloud classification, set expansion, and outlier detection.",
"title": ""
},
{
"docid": "63e715ae4f67f4c7261258531516deb3",
"text": "Query similarity calculation is an important problem and has a wide range of applications in IR, including query recommendation, query expansion, and even advertisement matching. Existing work on query similarity aims to provide a single similarity measure without considering the fact that queries are ambiguous and usually have multiple search intents. In this paper, we argue that query similarity should be defined upon search intents, so-called intent-aware query similarity. By introducing search intents into the calculation of query similarity, we can obtain more accurate and also informative similarity measures on queries and thus help a variety of applications, especially those related to diversification. Specifically, we first identify the potential search intents of queries, and then measure query similarity under different intents using intent-aware representations. A regularized topic model is employed to automatically learn the potential intents of queries by using both the words from search result snippets and the regularization from query co-clicks. Experimental results confirm the effectiveness of intent-aware query similarity on ambiguous queries which can provide significantly better similarity scores over the traditional approaches. We also experimentally verified the utility of intent-aware similarity in the application of query recommendation, which can suggest diverse queries in a structured way to search users.",
"title": ""
},
{
"docid": "fc3d4b4ac0d13b34aeadf5806013689d",
"text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.",
"title": ""
},
{
"docid": "c38a6685895c23620afb6570be4c646b",
"text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. And as the emerging field of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) which contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than ARM Cortex-A9 processor.",
"title": ""
},
{
"docid": "cb266f07461a58493d35f75949c4605e",
"text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"title": ""
},
{
"docid": "91a09184cc67d169d983e8a01e980d32",
"text": "Real-world knowledge is growing rapidly nowadays. New entities arise with time, resulting in large volumes of relations that do not exist in current knowledge graphs (KGs). These relations containing at least one new entity are called emerging relations. They often appear in news, and hence the latest information about new entities and relations can be learned from news timely. In this paper, we focus on the problem of discovering emerging relations from news. However, there are several challenges for this task: (1) at the beginning, there is little information for emerging relations, causing problems for traditional sentence-based models; (2) no negative relations exist in KGs, creating difficulties in utilizing only positive cases for emerging relation detection from news; and (3) new relations emerge rapidly, making it necessary to keep KGs up to date with the latest emerging relations. In order to address these issues, we start from a global graph perspective and propose a novel Heterogeneous graph Embedding framework for Emerging Relation detection (HEER) that learns a classifier from positive and unlabeled instances by utilizing information from both news and KGs. Furthermore, we implement HEER in an incremental manner to timely update KGs with the latest detected emerging relations. Extensive experiments on real-world news datasets demonstrate the effectiveness of the proposed HEER model.",
"title": ""
},
{
"docid": "19dd8a5dd93964db26a8b8e26285b996",
"text": "In this article we argue that self-deception evolved to facilitate interpersonal deception by allowing people to avoid the cues to conscious deception that might reveal deceptive intent. Self-deception has two additional advantages: It eliminates the costly cognitive load that is typically associated with deceiving, and it can minimize retribution if the deception is discovered. Beyond its role in specific acts of deception, self-deceptive self-enhancement also allows people to display more confidence than is warranted, which has a host of social advantages. The question then arises of how the self can be both deceiver and deceived. We propose that this is achieved through dissociations of mental processes, including conscious versus unconscious memories, conscious versus unconscious attitudes, and automatic versus controlled processes. Given the variety of methods for deceiving others, it should come as no surprise that self-deception manifests itself in a number of different psychological processes, and we discuss various types of self-deception. We then discuss the interpersonal versus intrapersonal nature of self-deception before considering the levels of consciousness at which the self can be deceived. Finally, we contrast our evolutionary approach to self-deception with current theories and debates in psychology and consider some of the costs associated with self-deception.",
"title": ""
},
{
"docid": "81e8df79014a284a7982337807cfbd99",
"text": "Volume raycasting techniques are important for both visual arts and visualization. They allow an efficient generation of visual effects and the visualization of scientific data obtained by tomography or numerical simulation. Thanks to their flexibility, experts agree that GPU-based raycasting is the state-of-the art technique for interactive volume rendering. It will most likely replace existing slice-based techniques in the near future. Volume rendering techniques are also effective for the direct rendering of implicit surfaces used for soft body animation and constructive solid geometry.\n The lecture starts off with an in-depth introduction to the concepts behind GPU-based ray-casting to provide a common base for the following parts. The focus of this course is on advanced illumination techniques which approximate the physically-based light transport more convincingly. Such techniques include interactive implementation of soft and hard shadows, ambient occlusion and simple Monte-Carlo based approaches to global illumination including translucency and scattering.\n With the proposed techniques, users are able to interactively create convincing images from volumetric data whose visual quality goes far beyond traditional approaches. The optical properties in participating media are defined using the phase function. Many approximations to the physically based light transport applied for rendering natural phenomena such as clouds or smoke assume a rather homogenous phase function model. For rendering volumetric scans on the other hand different phase function models are required to account for both surface-like structures and fuzzy boundaries in the data. Using volume rendering techniques, artists who create medical visualization for science magazines may now work on tomographic scans directly, without the necessity to fall back to creating polygonal models of anatomical structures.",
"title": ""
},
{
"docid": "23e08b1f6886d8171fe2f46c88ea6ee2",
"text": "In recent years, there has been a significant interest in integrating probability theory with first order logic and relational representations [see De Raedt and Kersting, 2003, for an overview]. Muggleton [1996] and Cussens [1999] have upgraded stochastic grammars towards Stochastic Logic Programs, Sato and Kameya [2001] have introduced Probabilistic Distributional Semantics for logic programs, and Domingos and Richardson [2004] have upgraded Markov networks towards Markov Logic Networks. Another research stream including Poole’s Independent Choice Logic [1993], Ngo and Haddawy’s Probabilistic-Logic Programs [1997], Jäger’s Relational Bayesian Networks [1997], and Pfeffer’s Probabilistic Relational Models [2000] concentrates on first order logical and relational extensions of Bayesian networks.",
"title": ""
},
{
"docid": "a31f26b4c937805a800e33e7986ee929",
"text": "In this paper, we propose a novel shape interpolation approach based on Poisson equation. We formulate the trajectory problem of shape interpolation as solving Poisson equations defined on a domain mesh. A non-linear gradient field interpolation method is proposed to take both vertex coordinates and surface orientation into account. With proper boundary conditions, the in-between shapes are reconstructed implicitly from the interpolated gradient fields, while traditional methods usually manipulate vertex coordinates directly. Besides of global shape interpolation, our method is also applicable to local shape interpolation, and can be further enhanced by incorporating with deformation. Our approach can generate visual pleasing and physical plausible morphing sequences with stable area and volume changes. Experimental results demonstrate that our technique can avoid the shrinkage problem appeared in linear shape interpolation.",
"title": ""
},
{
"docid": "8aefd572e089cb29c13cefc6e59bdda8",
"text": "Different linguistic perspectives causes many diverse segmentation criteria for Chinese word segmentation (CWS). Most existing methods focus on improve the performance for each single criterion. However, it is interesting to exploit these different criteria and mining their common underlying knowledge. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria. Experiments on eight corpora with heterogeneous segmentation criteria show that the performance of each corpus obtains a significant improvement, compared to single-criterion learning. Source codes of this paper are available on Github1.",
"title": ""
},
{
"docid": "5fe86529d14dec31d9d84c97e8b33481",
"text": "clayodor (\\klei-o-dor\\) is a clay-like malleable material that changes smell based on user manipulation of its shape. This work explores the tangibility of shape changing materials to capture smell, an ephemeral and intangible sensory input. We present the design of a proof-of-concept prototype, and discussions on the challenges of navigating smell though form.",
"title": ""
},
{
"docid": "8febd83ab32225be6a89b5f0236e01f6",
"text": "Tissue engineering can be used to restore, maintain, or enhance tissues and organs. The potential impact of this field, however, is far broader-in the future, engineered tissues could reduce the need for organ replacement, and could greatly accelerate the development of new drugs that may cure patients, eliminating the need for organ transplants altogether.",
"title": ""
},
{
"docid": "78ce4abc08e6c6a3ef0800accd0b8c4b",
"text": "For the first time, 20nm DRAM has been developed and fabricated successfully without extreme ultraviolet (EUV) lithography using the honeycomb structure (HCS) and the air-spacer technology. The cell capacitance (Cs) can be increased by 21% at the same cell size using a novel low-cost HCS technology with one argon fluoride immersion (ArF-i) lithography layer. The parasitic bit-line (BL) capacitance is reduced by 34% using an air-spacer technology whose breakdown voltage is 30% better than that of conventional technology.",
"title": ""
},
{
"docid": "f2742f6876bdede7a67f4ec63d73ead9",
"text": "Momentum methods play a central role in optimization. Several momentum methods are provably optimal, and all use a technique called estimate sequences to analyze their convergence properties. The technique of estimate sequences has long been considered difficult to understand, leading many researchers to generate alternative, “more intuitive” methods and analyses. In this paper we show there is an equivalence between the technique of estimate sequences and a family of Lyapunov functions in both continuous and discrete time. This framework allows us to develop a simple and unified analysis of many existing momentum algorithms, introduce several new algorithms, and most importantly, strengthen the connection between algorithms and continuous-time dynamical systems.",
"title": ""
},
{
"docid": "6d141d99945bfa55fe8cc187f8c1b864",
"text": "Many software development and maintenance tools involve matching between natural language words in different software artifacts (e.g., traceability) or between queries submitted by a user and software artifacts (e.g., code search). Because different people likely created the queries and various artifacts, the effectiveness of these tools is often improved by expanding queries and adding related words to textual artifact representations. Synonyms are particularly useful to overcome the mismatch in vocabularies, as well as other word relations that indicate semantic similarity. However, experience shows that many words are semantically similar in computer science situations, but not in typical natural language documents. In this paper, we present an automatic technique to mine semantically similar words, particularly in the software context. We leverage the role of leading comments for methods and programmer conventions in writing them. Our evaluation of our mined related comment-code word mappings that do not already occur in WordNet are indeed viewed as computer science, semantically-similar word pairs in high proportions.",
"title": ""
},
{
"docid": "f3002c5d152c8bf3d00473cbebdb6052",
"text": "I unstructured natural language — allow any statements, but make mistakes or failure. I controlled natural language — only allow unambiguous statements that can be interpreted (e.g., in supermarkets or for doctors). There is a vast amount of information in natural language. Understanding language to extract information or answering questions is more difficult than getting extracting gestalt properties such as topic, or choosing a help page. Many of the problems of AI are explicit in natural language understanding. “AI complete”.",
"title": ""
}
] |
scidocsrr
|
c48a11471e74026e572a4ced1f732d10
|
Performance analysis of active damped small DC-link capacitor based drive for unbalanced input voltage supply
|
[
{
"docid": "1969ac9972016f1644018eaea73cc330",
"text": "Previous results concerning instability of the dc link in inverter drives fed from a dc grid or via a rectifier are extended. It is shown that rectifier-inverter drives equipped with small (film) dc-link capacitors may need active stabilization. The impact of limited bandwidth and switching frequency in the inverter-motor current control loop is considered, and recommendations for selection of the dc-link capacitor, the switching frequency, and the dc-link stabilization control law in relation to each other are given. This control law is incorporated in a field-weakening (to enhance voltage sag ride-through) current controller for which design recommendations are presented",
"title": ""
}
] |
[
{
"docid": "499a4cdf37b1aac513496e630f54be7a",
"text": "Among thyroid papillary carcinomas (PTCs), the follicular variant is the most common and includes encapsulated forms (EFVPTCs). Noninvasive EFVPTCs have very low risk of recurrence or other adverse events and have been recently proposed to be designated as noninvasive follicular thyroid neoplasm with papillary-like nuclear features or NIFTP, thus eliminating the term carcinoma. This proposal is expected to significantly impact the risk of malignancy associated with the currently used diagnostic categories of thyroid cytology. In this study, we analyzed the fine needle aspiration biopsy (FNAB) cytology features of 96 histologically proven NIFTPs and determined how the main nuclear features of NIFTP correlate between cytological and histological samples. Blind review of FNAB cytology from NIFTP nodules yielded the diagnosis of \"follicular neoplasm\" (Bethesda category IV) in 56% of cases, \"suspicious for malignancy\" (category V) in 27%, \"atypia of undetermined significance/follicular lesion of undetermined significance\" (category III) in 15%, and \"malignant\" (category VI) in 2%. We found good correlation (κ=0.62) of nuclear features between histological and cytological specimens. NIFTP nuclear features (size, irregularities of contours, and chromatin clearing) were significantly different from those of benign nodules but not from those of invasive EFVPTC. Our data indicate that most of the NIFTP nodules yield an indeterminate cytological diagnosis in FNAB cytology and nuclear features found in cytology samples are reproducibly identified in corresponding histology samples. Because of the overlapping nuclear features with invasive EFVPTC, NIFTP cannot be reliably diagnosed preoperatively but should be listed in differential diagnosis of all indeterminate categories of thyroid cytology.",
"title": ""
},
{
"docid": "652e544ec32f5fde48d2435de81f5351",
"text": "As many as 50% of spontaneous preterm births are infection-associated. Intrauterine infection leads to a maternal and fetal inflammatory cascade, which produces uterine contractions and may also result in long-term adverse outcomes, such as cerebral palsy. This article addresses the prevalence, microbiology, and management of intrauterine infection in the setting of preterm labor with intact membranes. It also outlines antepartum treatment of infections for the purpose of preventing preterm birth.",
"title": ""
},
{
"docid": "ee03340751553afa79f6183a230f64f0",
"text": "We provide an overview of the recent trends toward digitalization and large-scale data analytics in healthcare. It is expected that these trends are instrumental in the dramatic changes in the way healthcare will be organized in the future. We discuss the recent political initiatives designed to shift care delivery processes from paper to electronic, with the goals of more effective treatments with better outcomes; cost pressure is a major driver of innovation. We describe newly developed networks of healthcare providers, research organizations, and commercial vendors to jointly analyze data for the development of decision support systems. We address the trend toward continuous healthcare where health is monitored by wearable and stationary devices; a related development is that patients increasingly assume responsibility for their own health data. Finally, we discuss recent initiatives toward a personalized medicine, based on advances in molecular medicine, data management, and data analytics.",
"title": ""
},
{
"docid": "37f55e03f4d1ff3b9311e537dc7122b5",
"text": "Extracting governing equations from data is a central challenge in many diverse areas of science and engineering. Data are abundant whereas models often remain elusive, as in climate science, neuroscience, ecology, finance, and epidemiology, to name only a few examples. In this work, we combine sparsity-promoting techniques and machine learning with nonlinear dynamical systems to discover governing equations from noisy measurement data. The only assumption about the structure of the model is that there are only a few important terms that govern the dynamics, so that the equations are sparse in the space of possible functions; this assumption holds for many physical systems in an appropriate basis. In particular, we use sparse regression to determine the fewest terms in the dynamic governing equations required to accurately represent the data. This results in parsimonious models that balance accuracy with model complexity to avoid overfitting. We demonstrate the algorithm on a wide range of problems, from simple canonical systems, including linear and nonlinear oscillators and the chaotic Lorenz system, to the fluid vortex shedding behind an obstacle. The fluid example illustrates the ability of this method to discover the underlying dynamics of a system that took experts in the community nearly 30 years to resolve. We also show that this method generalizes to parameterized systems and systems that are time-varying or have external forcing.",
"title": ""
},
{
"docid": "5d2190a63468e299bf755895488bd7ba",
"text": "We use logical inference techniques for recognising textual entailment, with theorem proving operating on deep semantic interpretations as the backbone of our system. However, the performance of theorem proving on its own turns out to be highly dependent on a wide range of background knowledge, which is not necessarily included in publically available knowledge sources. Therefore, we achieve robustness via two extensions. Firstly, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate entailment. Secondly, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap. The resulting hybrid model achieves high accuracy on the RTE testset, given the state of the art. Our results also show that the various techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful to use the nature of the dataset as a feature.",
"title": ""
},
{
"docid": "a0b8c5f8c9c8592a9d59502d0a4014d1",
"text": "OBJECTIVE\nPolymerase epsilon (POLE) is a DNA polymerase with a proofreading (exonuclease) domain, responsible for the recognition and excision of mispaired bases, thereby allowing high-fidelity DNA replication to occur. The Cancer Genome Atlas research network recently identified an ultramutated group of endometrial carcinomas, characterized by mutations in POLE, and exceptionally high substitution mutation rates. These POLE mutated endometrial tumors were almost exclusively of the endometrioid histotype. The prevalence and patterns of POLE mutated tumors in endometrioid carcinomas of the ovary, however, have not been studied in detail.\n\n\nMATERIALS AND METHODS\nIn this study, we investigate the frequency of POLE exonuclease domain mutations in a series of 89 ovarian endometrioid carcinomas.\n\n\nRESULTS\nWe found POLE mutations in 4 of 89 (4.5%) cases, occurring in 3 of 23 (13%) International Federation of Gynecology and Obstetrics (FIGO) grade 1, 1 of 43 (2%) FIGO grade 2, and 0 of 23 (0%) FIGO grade 3 tumors. All mutations were somatic missense point mutations, occurring at the commonly reported hotspots, P286R and V411L. All 3 POLE-mutated FIGO grade 1 tumors displayed prototypical histology, and the POLE-mutated FIGO grade 2 tumor displayed morphologic heterogeneity with focally high-grade features. All 4 patients with POLE-mutated tumors followed an uneventful clinical course with no disease recurrence; however, this finding was not statistically significant (P = 0.59).\n\n\nCONCLUSIONS\nThe low rate of POLE mutations in ovarian endometrioid carcinoma and their predominance within the low FIGO grade tumors are in contrast to the findings in the endometrium.",
"title": ""
},
{
"docid": "ea87bfc0d6086e367e8950b445529409",
"text": " Queue stability (Chapter 2.1) Scheduling for stability, capacity regions (Chapter 2.3) Linear programs (Chapter 2.3, Chapter 3) Energy optimality (Chapter 3.2) Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) Inequality constraints and virtual queues (Chapter 4.4) Drift-plus-penalty algorithm (Chapter 4.5) Performance and delay tradeoffs (Chapter 3.2, 4.5) Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)",
"title": ""
},
{
"docid": "00309e5119bb0de1d7b2a583b8487733",
"text": "In this paper, we propose a novel Deep Reinforcement Learning framework for news recommendation. Online personalized news recommendation is a highly challenging problem due to the dynamic nature of news features and user preferences. Although some online recommendation models have been proposed to address the dynamic nature of news recommendation, these methods have three major issues. First, they only try to model current reward (e.g., Click Through Rate). Second, very few studies consider to use user feedback other than click / no click labels (e.g., how frequent user returns) to help improve recommendation. Third, these methods tend to keep recommending similar news to users, which may cause users to get bored. Therefore, to address the aforementioned challenges, we propose a Deep Q-Learning based recommendation framework, which can model future reward explicitly. We further consider user return pattern as a supplement to click / no click label in order to capture more user feedback information. In addition, an effective exploration strategy is incorporated to find new attractive news for users. Extensive experiments are conducted on the offline dataset and online production environment of a commercial news recommendation application and have shown the superior performance of our methods.",
"title": ""
},
{
"docid": "6e22b591075d1344ae34716854d96272",
"text": "This paper demonstrates a new structure of dual band microstrip bandpass filter (BPF) by cascading an interdigital structure (IDS) and a hairpin line structure. The use of IDS improves the quality factor of the proposed filter. The size of the filter is very small and it is very compact and simple to design. To reduce size of the proposed filter there is no use of via or defected ground structure which makes its fabrication easier and cost effective. The first band of filter covers 2.4GHz, 2.5GHz and 3.5GHz and second band covers 5.8GHz of WLAN/WiMAX standards with good insertion loss. The proposed filter is designed on FR4 with dielectric constant of 4.4 and of thickness 1.6mm. Performance of proposed filter is compared with previously reported filters and found better with reduced size.",
"title": ""
},
{
"docid": "7da83f5d7bc383e5a2b791a2d45e6422",
"text": "Generating logical form equivalents of human language is a fresh way to employ neural architectures where long shortterm memory effectively captures dependencies in both encoder and decoder units. The logical form of the sequence usually preserves information from the natural language side in the form of similar tokens, and recently a copying mechanism has been proposed which increases the probability of outputting tokens from the source input through decoding. In this paper we propose a caching mechanism as a more general form of the copying mechanism which also weighs all the words from the source vocabulary according to their relation to the current decoding context. Our results confirm that the proposed method achieves improvements in sequence/token-level accuracy on sequence to logical form tasks. Further experiments on cross-domain adversarial attacks show substantial improvements when using the most influential examples of other domains for training.",
"title": ""
},
{
"docid": "e587b5954c957f268d21878ede3359f8",
"text": "ing audit logs",
"title": ""
},
{
"docid": "688b702425c53e844d28758182306ce1",
"text": "DRAM is a precious resource in extreme-scale machines and is increasingly becoming scarce, mainly due to the growing number of cores per node. On future multi-petaflop and exaflop machines, the memory pressure is likely to be so severe that we need to rethink our memory usage models. Fortunately, the advent of non-volatile memory (NVM) offers a unique opportunity in this space. Current NVM offerings possess several desirable properties, such as low cost and power efficiency, but suffer from high latency and lifetime issues. We need rich techniques to be able to use them alongside DRAM. In this paper, we propose a novel approach for exploiting NVM as a secondary memory partition so that applications can explicitly allocate and manipulate memory regions therein. More specifically, we propose an NVMalloc library with a suite of services that enables applications to access a distributed NVM storage system. We have devised ways within NVMalloc so that the storage system, built from compute node-local NVM devices, can be accessed in a byte-addressable fashion using the memory mapped I/O interface. Our approach has the potential to re-energize out-of-core computations on large-scale machines by having applications allocate certain variables through NVMalloc, thereby increasing the overall memory capacity available. Our evaluation on a 128-core cluster shows that NVMalloc enables applications to compute problem sizes larger than the physical memory in a cost-effective manner. It can bring more performance/efficiency gain with increased computation time between NVM memory accesses or increased data access locality. In addition, our results suggest that while NVMalloc enables transparent access to NVM-resident variables, the explicit control it provides is crucial to optimize application performance.",
"title": ""
},
{
"docid": "9809521909e01140c367dbfbf3a4aacd",
"text": "Understanding how housing values evolve over time is important to policy makers, consumers and real estate professionals. Existing methods for constructing housing indices are computed at a coarse spatial granularity, such as metropolitan regions, which can mask or distort price dynamics apparent in local markets, such as neighborhoods and census tracts. A challenge in moving to estimates at, for example, the census tract level is the scarcity of spatiotemporally localized house sales observations. Our work aims to address this challenge by leveraging observations from multiple census tracts discovered to have correlated valuation dynamics. Our proposed Bayesian nonparametric approach builds on the framework of latent factor models to enable a flexible, data-driven method for inferring the clustering of correlated census tracts. We explore methods for scalability and parallelizability of computations, yielding a housing valuation index at the level of census tract rather than zip code, and on a monthly basis rather than quarterly. Our analysis is provided on a large Seattle metropolitan housing dataset.",
"title": ""
},
{
"docid": "a306ea0a425a00819b81ea7f52544cfb",
"text": "Early research in electronic markets seemed to suggest that E-Commerce transactions would result in decreased costs for buyers and sellers alike, and would therefore ultimately lead to the elimination of intermediaries from electronic value chains. However, a careful analysis of the structure and functions of electronic marketplaces reveals a different picture. Intermediaries provide many value-adding functions that cannot be easily substituted or ‘internalised’ through direct supplier-buyer dealings, and hence mediating parties may continue to play a significant role in the E-Commerce world. In this paper we provide an analysis of the potential roles of intermediaries in electronic markets and we articulate a number of hypotheses for the future of intermediation in such markets. Three main scenarios are discussed: the disintermediation scenario where market dynamics will favour direct buyer-seller transactions, the reintermediation scenario where traditional intermediaries will be forced to differentiate themselves and reemerge in the electronic marketplace, and the cybermediation scenario where wholly new markets for intermediaries will be created. The analysis suggests that the likelihood of each scenario dominating a given market is primarily dependent on the exact functions that intermediaries play in each case. A detailed discussion of such functions is presented in the paper, together with an analysis of likely outcomes in the form of a contingency model for intermediation in electronic markets.",
"title": ""
},
{
"docid": "0e679dfd2ff8ced7c1391486d4329253",
"text": "A significant portion of information needs in web search target entities. These may come in different forms or flavours, ranging from short keyword queries to more verbose requests, expressed in natural language. We address the task of automatically annotating queries with target types from an ontology. The identified types can subsequently be used, e.g., for creating semantically more informed query and retrieval models, filtering results, or directing the requests to specific verticals. Our study makes the following contributions. First, we formalise the task of hierarchical target type identification, argue that it is best viewed as a ranking problem, and propose multiple evaluation metrics. Second, we develop a purpose-built test collection by hand-annotating over 300 queries, from various recent entity search benchmarking campaigns, with target types from the DBpedia ontology. Finally, we introduce and examine two baseline models, inspired by federated search techniques. We show that these methods perform surprisingly well when target types are limited to a flat list of top level categories; finding the right level of granularity in the hierarchy, however, is particularly challenging and requires further investigation.",
"title": ""
},
{
"docid": "a1d96f46cd4fa625da9e1bf2f6299c81",
"text": "The availability of increasingly higher power commercial microwave monolithic integrated circuit (MMIC) amplifiers enables the construction of solid state amplifiers achieving output powers and performance previously achievable only from traveling wave tube amplifiers (TWTAs). A high efficiency power amplifier incorporating an antipodal finline antenna array within a coaxial waveguide is investigated at Ka Band. The coaxial waveguide combiner structure is used to demonstrate a 120 Watt power amplifier from 27 to 31GHz by combining quantity (16), 10 Watt GaN MMIC devices; achieving typical PAE of 25% for the overall power amplifier assembly.",
"title": ""
},
{
"docid": "20746cd01ff3b67b204cd2453f1d8ecb",
"text": "Quantification of human group-behavior has so far defied an empirical, falsifiable approach. This is due to tremendous difficulties in data acquisition of social systems. Massive multiplayer online games (MMOG) provide a fascinating new way of observing hundreds of thousands of simultaneously socially interacting individuals engaged in virtual economic activities. We have compiled a data set consisting of practically all actions of all players over a period of 3 years from a MMOG played by 300,000 people. This largescale data set of a socio-economic unit contains all social and economic data from a single and coherent source. Players have to generate a virtual income through economic activities to ‘survive’ and are typically engaged in a multitude of social activities offered within the game. Our analysis of high-frequency log files focuses on three types of social networks, and tests a series of social-dynamics hypotheses. In particular we study the structure and dynamics of friend-, enemyand communication networks. We find striking differences in topological structure between positive (friend) and negative (enemy) tie networks. All networks confirm the recently observed phenomenon of network densification. We propose two approximate social laws in communication networks, the first expressing betweenness centrality as the inverse square of the overlap, the second relating communication strength to the cube of the overlap. These empirical laws provide strong quantitative evidence for the Weak ties hypothesis of Granovetter. Further, the analysis of triad significance profiles validates well-established assertions from social balance theory. We find overrepresentation (underrepresentation) of complete (incomplete) triads in networks of positive ties, and vice versa for networks of negative ties. Empirical transition probabilities between triad classes provide evidence for triadic closure with extraordinarily high precision. For the first time we provide empirical results for large-scale networks of negative social ties. Whenever possible we compare our findings with data from non-virtual human groups and provide further evidence that online game communities serve as a valid model for a wide class of human societies. With this setup we demonstrate the feasibility for establishing a ‘socio-economic laboratory’ which allows to operate at levels of precision approaching those of the natural sciences. All data used in this study is fully anonymized; the authors have the written consent to publish from the legal department of the Medical University of Vienna. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6a91c45e0cfac9dd472f68aec15889eb",
"text": "UNLABELLED\nThe Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or by slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos.\n\n\nAVAILABILITY AND IMPLEMENTATION\nXPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "d3f35e91d5d022de5fe816cf1234e415",
"text": "Rock mass description and characterisation is a basic task for exploration, mining work-flows and ground-water studies. Rock analysis can be performed using borehole logs that are created using a televiewer. Planar discontinuities in the rock appear as sinusoidal curves in borehole logs. The aim of this project is to develop a fast algorithm to analyse borehole imagery using image processing techniques, to identify and trace the discontinuities, and to perform quantitative analysis on their distribution.",
"title": ""
},
{
"docid": "3cb0232cd4b75a8691f9aa4f1d663e9a",
"text": "We introduce an approach for realtime segmentation of a scene into foreground objects, background, and object shadows superimposed on the background. To segment foreground objects, we use an adaptive thresholding method, which is able to deal with rapid changes of the overall brightness. The segmented image usually includes shadows cast by the objects onto the background. Our approach is able to robustly remove the shadow from the background while preserving the silhouette of the foreground object. We discuss a similarity measure for comparing color pixels, which improves the quality of shadow removal significantly. As the image segmentation is part of a real-time interaction environment, real-time processing is needed. Our implementation allows foreground segmentation and robust shadow removal with 15 Hz.",
"title": ""
}
] |
scidocsrr
|
57e3effce66e5ab749416390f41512c7
|
Towards Ontology-based Data Quality Inference in Large-Scale Sensor Networks
|
[
{
"docid": "74e15be321ec4e2d207f3331397f0399",
"text": "Interoperability has been a basic requirement for the modern information systems environment for over two decades. How have key requirements for interoperability changed over that time? How can we understand the full scope of interoperability issues? What has shaped research on information system interoperability? What key progress has been made? This chapter provides some of the answers to these questions. In particular, it looks at different levels of information system interoperability, while reviewing the changing focus of interoperability research themes, past achievements and new challenges in the emerging global information infrastructure (GII). It divides the research into three generations, and discusses some of achievements of the past. Finally, as we move from managing data to information, and in future knowledge, the need for achieving semantic interoperability is discussed and key components of solutions are introduced. Data and information interoperability has gained increasing attention for several reasons, including: • excellent progress in interconnection afforded by the Internet, Web and distributed computing infrastructures, leading to easy access to a large number of independently created and managed information sources of broad variety;",
"title": ""
}
] |
[
{
"docid": "227b995313994032ddeddc3cd4093790",
"text": "This paper describes and assesses underwater channel models for optical wireless communication. Models considered are: inherent optical properties; vector radiative transfer theory with the small-angle analytical solution and numerical solutions of the vector radiative transfer equation (Monte Carlo, discrete ordinates and invariant imbedding). Variable composition and refractive index, in addition to background light, are highlighted as aspects of the channel which advanced models must represent effectively. Models are assessed against these aspects in terms of their ability to predict transmitted power and spatial and temporal distributions of light a specified distance from a transmitter. Monte Carlo numerical methods are found to be the most versatile but are compromised by long computational time and greater errors than other methods.",
"title": ""
},
{
"docid": "283a1346f06fc8dead5911857da3e3d9",
"text": "The use of emoticons and emoji is increasingly popular across a variety of new platforms of online communication. They have also become popular as stimulus materials in scientific research. However, the assumption that emoji/emoticon users' interpretations always correspond to the developers'/researchers' intended meanings might be misleading. This article presents subjective norms of emoji and emoticons provided by everyday users. The Lisbon Emoji and Emoticon Database (LEED) comprises 238 stimuli: 85 emoticons and 153 emoji (collected from iOS, Android, Facebook, and Emojipedia). The sample included 505 Portuguese participants recruited online. Each participant evaluated a random subset of 20 stimuli for seven dimensions: aesthetic appeal, familiarity, visual complexity, concreteness, valence, arousal, and meaningfulness. Participants were additionally asked to attribute a meaning to each stimulus. The norms obtained include quantitative descriptive results (means, standard deviations, and confidence intervals) and a meaning analysis for each stimulus. We also examined the correlations between the dimensions and tested for differences between emoticons and emoji, as well as between the two major operating systems-Android and iOS. The LEED constitutes a readily available normative database (available at www.osf.io/nua4x ) with potential applications to different research domains.",
"title": ""
},
{
"docid": "dad138056c911c6a1d939747502799b2",
"text": "The brain interprets ambiguous sensory information faster and more reliably than modern computers, using neurons that are slower and less reliable than logic gates. But Bayesian inference, which underpins many computational models of perception and cognition, appears computationally challenging even given modern transistor speeds and energy budgets. The computational principles and structures needed to narrow this gap are unknown. Here we show how to build fast Bayesian computing machines using intentionally stochastic, digital parts, narrowing this efficiency gap by multiple orders of magnitude. We find that by connecting stochastic digital components according to simple mathematical rules, one can build massively parallel, low precision circuits that solve Bayesian inference problems and are compatible with the Poisson firing statistics of cortical neurons. We evaluate circuits for depth and motion perception, perceptual learning and causal reasoning, each performing inference over 10,000+ latent variables in real time — a 1,000x speed advantage over commodity microprocessors. These results suggest a new role for randomness in the engineering and reverse-engineering of intelligent computation.",
"title": ""
},
{
"docid": "7f2857c1bd23c7114d58c290f21bf7bd",
"text": "Many contemporary organizations are placing a greater emphasis on their performance management systems as a means of generating higher levels of job performance. We suggest that producing performance increments may be best achieved by orienting the performance management system to promote employee engagement. To this end, we describe a new approach to the performance management process that includes employee engagement and the key drivers of employee engagement at each stage. We present a model of engagement management that incorporates the main ideas of the paper and suggests a new perspective for thinking about how to foster and manage employee engagement to achieve high levels of job",
"title": ""
},
{
"docid": "d928f199fe3ececa09033ac636f5a147",
"text": "The paper reviews the development of the energy system simulation tool DNA (Dynamic Network Analysis). DNA has been developed since 1989 to be able to handle models of any kind of energy system based on the control volume approach, usually systems of lumped parameter components. DNA has proven to be a useful tool in the analysis and optimization of several types of thermal systems: Steam turbines, gas turbines, fuels cells, gasification, refrigeration and heat pumps for both conventional fossil fuels and different types of biomass. DNA is applicable for models of both steady state and dynamic operation. The program decides at runtime to apply the DAE solver if the system contains differential equations. This makes it easy to extend an existing steady state model to simulate dynamic operation of the plant. The use of the program is illustrated by examples of gas turbine models. The paper also gives an overview of the recent implementation of DNA as a Matlab extension (Mex).",
"title": ""
},
{
"docid": "408db96baaf513c65c66ced61e4d50a8",
"text": "This review highlights the use of bromelain in various applications with up-to-date literature on the purification of bromelain from pineapple fruit and waste such as peel, core, crown, and leaves. Bromelain, a cysteine protease, has been exploited commercially in many applications in the food, beverage, tenderization, cosmetic, pharmaceutical, and textile industries. Researchers worldwide have been directing their interest to purification strategies by applying conventional and modern approaches, such as manipulating the pH, affinity, hydrophobicity, and temperature conditions in accord with the unique properties of bromelain. The amount of downstream processing will depend on its intended application in industries. The breakthrough of recombinant DNA technology has facilitated the large-scale production and purification of recombinant bromelain for novel applications in the future.",
"title": ""
},
{
"docid": "dcedb6bee075c3b0b24bd1475cf5c536",
"text": "We study how to learn a semantic parser of state-of-the-art accuracy with less supervised training data. We conduct our study on WikiSQL, the largest hand-annotated semantic parsing dataset to date. First, we demonstrate that question generation is an effective method that empowers us to learn a state-ofthe-art neural network based semantic parser with thirty percent of the supervised training data. Second, we show that applying question generation to the full supervised training data further improves the state-of-the-art model. In addition, we observe that there is a logarithmic relationship between the accuracy of a semantic parser and the amount of training data.",
"title": ""
},
{
"docid": "5b3bbbdb963199058d3c899eea42879e",
"text": "This paper presents an integrated solution for a photovoltaic (PV)-fed water-pump drive system, which uses an open-end winding induction motor (OEWIM). The dual-inverter-fed OEWIM drive achieves the functionality of a three-level inverter and requires low value dc-bus voltage. This helps in an optimal arrangement of PV modules, which could avoid large strings and helps in improving the PV performance with wide bandwidth of operating voltage. It also reduces the voltage rating of the dc-link capacitors and switching devices used in the system. The proposed control strategy achieves an integration of both maximum power point tracking and V/f control for the efficient utilization of the PV panels and the motor. The proposed control scheme requires the sensing of PV voltage and current only. Thus, the system requires less number of sensors. All the analytical, simulation, and experimental results of this work under different environmental conditions are presented in this paper.",
"title": ""
},
{
"docid": "b160f27ffabbc1b046bf192f862526cb",
"text": "This paper proposes a notion of fuzzy graph database and describes a fuzzy query algebra that makes it possible to handle such database, which may be fuzzy or not, in a flexible way. The algebra, based on fuzzy set theory and the concept of a fuzzy graph, is composed of a set of operators that can be used to express preference queries on fuzzy graph databases. The preferences concern i) the content of the vertices of the graph and ii) the structure of the graph. In a similar way as relational algebra constitutes the basis of SQL, the fuzzy algebra proposed here underlies a user-oriented query language and an associated tool implementing this language that are also presented in the paper.",
"title": ""
},
{
"docid": "66d6f514c6bce09110780a1130b64dfe",
"text": "Today, with more competiveness of industries, markets, and working atmosphere in productive and service organizations what is very important for maintaining clients present, for attracting new clients and as a result increasing growth of success in organizations is having a suitable relation with clients. Bank is among organizations which are not an exception. Especially, at the moment according to increasing rate of banks` privatization, it can be argued that significance of attracting clients for banks is more than every time. The article tries to investigate effect of CRM on marketing performance in banking industry. The research method is applied and survey and descriptive. Statistical community of the research is 5 branches from Mellat Banks across Khoramabad Province and their clients. There are 45 personnel in this branch and according to Morgan Table the sample size was 40 people. Clients example was considered according to collected information, one questionnaire was designed for bank organization and another one was prepared for banks` clients in which reliability and validity are approved. The research result indicates that CRM is ineffective on marketing performance.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "0ffdbffcd47088afe07cbb7507b20853",
"text": "This paper presents an approach on recognising individuals based on 3D acceleration data from walking, which are collected using MEMS. Unlike most other gait recognition methods, which are based on video source, our approach uses walking acceleration in three directions: vertical, backward-forward and sideways. Using gait samples from 21 individuals and applying two methods, histogram similarity and cycle length, the equal error rates of 5% and 9% are achieved, respectively.",
"title": ""
},
{
"docid": "f82eb2d4cc45577f08c7e867bf012816",
"text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.",
"title": ""
},
{
"docid": "3f96133d43179a76156763caef09d5c6",
"text": "Psychological theories of racial bias assume a pervasive motivation to avoid appearing racist, yet researchers know little regarding laypeople’s theories about what constitutes racism. By investigating lay theories of White racism across both college and community samples, we seek to develop a more complete understanding of the nature of race-related norms, motivations, and processes of social perception in the contemporary United States. Factor analyses in Studies 1 and 1a indicated three factors underlying the traits laypeople associate with White racism: evaluative, psychological, and demographic. Studies 2 and 2a revealed a three-factor solution for behaviors associated with White racism: discomfort/unfamiliarity, overt racism, and denial of problem. For both traits and behaviors, lay theories varied by participants’ race and their race-related attitudes and motivations. Specifically, support emerged for the prediction that lay theories of racism reflect a desire to distance the self from any aspect of the category ‘racist’.",
"title": ""
},
{
"docid": "a95b95792bf27000b64a5ef6546806d6",
"text": "Overfitting is one of the most critical challenges in deep neural networks, and there are various types of regularization methods to improve generalization performance. Injecting noises to hidden units during training, e.g., dropout, is known as a successful regularizer, but it is still not clear enough why such training techniques work well in practice and how we can maximize their benefit in the presence of two conflicting objectives—optimizing to true data distribution and preventing overfitting by regularization. This paper addresses the above issues by 1) interpreting that the conventional training methods with regularization by noise injection optimize the lower bound of the true objective and 2) proposing a technique to achieve a tighter lower bound using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications.",
"title": ""
},
{
"docid": "91365154a173be8be29ef14a3a76b08e",
"text": "Fraud is a criminal practice for illegitimate gain of wealth or tampering information. Fraudulent activities are of critical concern because of their severe impact on organizations, communities as well as individuals. Over the last few years, various techniques from different areas such as data mining, machine learning, and statistics have been proposed to deal with fraudulent activities. Unfortunately, the conventional approaches display several limitations, which were addressed largely by advanced solutions proposed in the advent of Big Data. In this paper, we present fraud analysis approaches in the context of Big Data. Then, we study the approaches rigorously and identify their limits by exploiting Big Data analytics.",
"title": ""
},
{
"docid": "ea0f14098068dfe4e1d8919a0dd4dd5c",
"text": "Recently emerged app markets provide a centralized paradigm for software distribution in smartphones. The difficulty of massively collecting app data has led to a lack a good understanding of app market dynamics. In this paper we seek to address this problem, through a detailed temporal analysis of Google Play, Google's app market. We perform the analysis on data that we collected daily from 160,000 apps, over a period of six months in 2012. We report often surprising results. For instance, at most 50% of the apps are updated in all categories, which significantly impacts the median price. The average price does not exhibit seasonal monthly trends and a changing price does not show any observable correlation with the download count. In addition, productive developers are not creating many popular apps, but a few developers control apps which dominate the total number of downloads. We discuss the research implications of such analytics on improving developer and user experiences, and detecting emerging threat vectors.",
"title": ""
},
{
"docid": "55e977381cf25444be499ec0c320cef9",
"text": "Embedding network data into a low-dimensional vector space has shown promising performance for many real-world applications, such as node classification and entity retrieval. However, most existing methods focused only on leveraging network structure. For social networks, besides the network structure, there also exists rich information about social actors, such as user profiles of friendship networks and textual content of citation networks. These rich attribute information of social actors reveal the homophily effect, exerting huge impacts on the formation of social networks. In this paper, we explore the rich evidence source of attributes in social networks to improve network embedding. We propose a generic Attributed Social Network Embedding framework (ASNE), which learns representations for social actors (i.e., nodes) by preserving both the structural proximity and attribute proximity. While the structural proximity captures the global network structure, the attribute proximity accounts for the homophily effect. To justify our proposal, we conduct extensive experiments on four real-world social networks. Compared to the state-of-the-art network embedding approaches, ASNE can learn more informative representations, achieving substantial gains on the tasks of link prediction and node classification. Specifically, ASNE significantly outperforms node2vec with an 8.2 percent relative improvement on the link prediction task, and a 12.7 percent gain on the node classification task.",
"title": ""
},
{
"docid": "4a7bd38fcdcaa91cba875cecb8b7c7bd",
"text": "The aim of Search Based Software Engineering (SBSE) research is to move software engineering problems from human-based search to machine-based search, using a variety of techniques from the metaheuristic search, operations research and evolutionary computation paradigms. The idea is to exploit humans’ creativity and machines’ tenacity and reliability, rather than requiring humans to perform the more tedious, error prone and thereby costly aspects of the engineering process. SBSE can also provide insights and decision support. This tutorial will present the reader with a step-by-step guide to the application of SBSE techniques to Software Engineering. It assumes neither previous knowledge nor experience with Search Based Optimisation. The intention is that the tutorial will cover sufficient material to allow the reader to become productive in successfully applying search based optimisation to a chosen Software Engineering problem of interest.",
"title": ""
},
{
"docid": "d52a178526eac0438757c20c5a91e51e",
"text": "Recent convolutional neural networks, especially end-to-end disparity estimation models, achieve remarkable performance on stereo matching task. However, existed methods, even with the complicated cascade structure, may fail in the regions of non-textures, boundaries and tiny details. Focus on these problems, we propose a multi-task network EdgeStereo that is composed of a backbone disparity network and an edge sub-network. Given a binocular image pair, our model enables end-to-end prediction of both disparity map and edge map. Basically, we design a context pyramid to encode multi-scale context information in disparity branch, followed by a compact residual pyramid for cascaded refinement. To further preserve subtle details, our EdgeStereo model integrates edge cues by feature embedding and edge-aware smoothness loss regularization. Comparative results demonstrates that stereo matching and edge detection can help each other in the unified model. Furthermore, our method achieves state-of-art performance on both KITTI Stereo and Scene Flow benchmarks, which proves the effectiveness of our design.",
"title": ""
}
] |
scidocsrr
|
495d8e3d6ec1187afbfdc22d2e54167c
|
From Theories to Queries
|
[
{
"docid": "78e4a57eff6ffc7ad012639933f8ebcc",
"text": "In this paper, we describe active and semi-supervised learning methods for reducing the labeling effort for spoken language understanding. In a goal-oriented call routing system, understanding the intent of the user can be framed as a classification problem. State of the art statistical classification systems are trained using a large number of human-labeled utterances, preparation of which is labor intensive and time consuming. Active learning aims to minimize the number of labeled utterances by automatically selecting the utterances that are likely to be most informative for labeling. The method for active learning we propose, inspired by certainty-based active learning, selects the examples that the classifier is the least confident about. The examples that are classified with higher confidence scores (hence not selected by active learning) are exploited using two semi-supervised learning methods. The first method augments the training data by using the machine-labeled classes for the unlabeled utterances. The second method instead augments the classification model trained using the human-labeled utterances with the machine-labeled ones in a weighted manner. We then combine active and semi-supervised learning using selectively sampled and automatically labeled data. This enables us to exploit all collected data and alleviates the data imbalance problem caused by employing only active or semi-supervised learning. We have evaluated these active and semi-supervised learning methods with a call classification system used for AT&T customer care. Our results indicate that it is possible to reduce human labeling effort significantly. 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "6a3042419132c5bf19c5476b9e7e79fe",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii",
"title": ""
},
{
"docid": "e3ac61e2a8fe211124446c22f7f88b69",
"text": "Requirement elicitation is a critical activity in the requirement development process and it explores the requirements of stakeholders. The common challenges that analysts face during elicitation process are to ensure effective communication between analyst and the users. Mostly errors in the systems are due to poor communication between user and analyst. This paper proposes an improved approach for requirements elicitation using paper prototype. The paper progresses through an assessment of the new approach using student projects developed for various organizations. A case study project is explained in the paper.",
"title": ""
},
{
"docid": "97a13a2a11db1b67230ab1047a43e1d6",
"text": "Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods spring up for this task, because they can extract high-level local features to find road regions from raw RGB data, such as convolutional neural networks and fully convolutional networks (FCNs). However, how to detect the boundary of road accurately is still an intractable problem. In this paper, we propose siamesed FCNs (named “s-FCN-loc”), which is able to consider RGB-channel images, semantic contours, and location priors simultaneously to segment the road region elaborately. To be specific, the s-FCN-loc has two streams to process the original RGB images and contour maps, respectively. At the same time, the location prior is directly appended to the siamesed FCN to promote the final detection performance. Our contributions are threefold: 1) An s-FCN-loc is proposed that learns more discriminative features of road boundaries than the original FCN to detect more accurate road regions. 2) Location prior is viewed as a type of feature map and directly appended to the final feature map in s-FCN-loc to promote the detection performance effectively, which is easier than other traditional methods, namely, different priors for different inputs (image patches). 3) The convergent speed of training s-FCN-loc model is 30% faster than the original FCN because of the guidance of highly structured contours. The proposed approach is evaluated on the KITTI road detection benchmark and one-class road detection data set, and achieves a competitive result with the state of the arts.",
"title": ""
},
{
"docid": "6718aa3480c590af254a120376822d07",
"text": "This paper proposes a novel method for content-based watermarking based on feature points of an image. At each feature point, the watermark is embedded after scale normalization according to the local characteristic scale. Characteristic scale is the maximum scale of the scale-space representation of an image at the feature point. By binding watermarking with the local characteristics of an image, resilience against a5ne transformations can be obtained easily. Experimental results show that the proposed method is robust against various image processing steps including a5ne transformations, cropping, 7ltering and JPEG compression. ? 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "21122ab1659629627c46114cc5c3b838",
"text": "The introduction of more onboard autonomy in future single and multi-satellite missions is both a question of limited onboard resources and of how far can we actually thrust the autonomous functionalities deployed on board. In-flight experience with nasa's Deep Space 1 and Earth Observing 1 has shown how difficult it is to design, build and test reliable software for autonomy. The degree to which system-level onboard autonomy will be deployed in the single and multi satellite systems of tomorrow will depend, among other things, on the progress made in two key software technologies: autonomous onboard planning and robust execution. Parallel to the developments in these two areas, the actual integration of planning and execution engines is still nowadays a crucial issue in practical application. This paper presents an onboard autonomous model-based executive for execution of time-flexible plans. It describes its interface with an apsi-based timeline-based planner, its control approaches, architecture and its modelling language as an extension of apsl's ddl. In addition, it introduces a modified version of the classical blocks world toy planning problem which has been extended in scope and with a runtime environment for evaluation of integrated planning and executive engines.",
"title": ""
},
{
"docid": "538302f10d223613fd756b9b0e70b32b",
"text": "Generative Adversarial Networks (GANs) have been successfully used to synthesize realistically looking images of faces, scenery and even medical images. Unfortunately, they usually require large training datasets, which are often scarce in the medical field, and to the best of our knowledge GANs have been only applied for medical image synthesis at fairly low resolution. However, many state-of-theart machine learning models operate on high resolution data as such data carries indispensable, valuable information. In this work, we try to generate realistically looking high resolution images of skin lesions with GANs, using only a small training dataset of 2000 samples. The nature of the data allows us to do a direct comparison between the image statistics of the generated samples and the real dataset. We both quantitatively and qualitatively compare state-of-the-art GAN architectures such as DCGAN and LAPGAN against a modification of the latter for the task of image generation at a resolution of 256x256px. Our investigation shows that we can approximate the real data distribution with all of the models, but we notice major differences when visually rating sample realism, diversity and artifacts. In a set of use-case experiments on skin lesion classification, we further show that we can successfully tackle the problem of heavy class imbalance with the help of synthesized high resolution melanoma samples.",
"title": ""
},
{
"docid": "a9cafa9b8788e3fa8bcdec1a7be49582",
"text": "Ensuring the safety of fully autonomous vehicles requires a multi-disciplinary approach across all the levels of functional hierarchy, from hardware fault tolerance, to resilient machine learning, to cooperating with humans driving conventional vehicles, to validating systems for operation in highly unstructured environments, to appropriate regulatory approaches. Significant open technical challenges include validating inductive learning in the face of novel environmental inputs and achieving the very high levels of dependability required for full-scale fleet deployment. However, the biggest challenge may be in creating an end-to-end design and deployment process that integrates the safety concerns of a myriad of technical specialties into a unified approach.",
"title": ""
},
{
"docid": "4147094e444521bcca3b24eceeabf45f",
"text": "Application designers must decide whether to store large objects (BLOBs) in a filesystem or in a database. Generally, this decision is based on factors such as application simplicity or manageability. Often, system performance affects these factors. Folklore tells us that databases efficiently handle large numbers of small objects, while filesystems are more efficient for large objects. Where is the break-even point? When is accessing a BLOB stored as a file cheaper than accessing a BLOB stored as a database record? Of course, this depends on the particular filesystem, database system, and workload in question. This study shows that when comparing the NTFS file system and SQL Server 2005 database system on a create, {read, replace}* delete workload, BLOBs smaller than 256KB are more efficiently handled by SQL Server, while NTFS is more efficient BLOBS larger than 1MB. Of course, this break-even point will vary among different database systems, filesystems, and workloads. By measuring the performance of a storage server workload typical of web applications which use get/put protocols such as WebDAV [WebDAV], we found that the break-even point depends on many factors. However, our experiments suggest that storage age, the ratio of bytes in deleted or replaced objects to bytes in live objects, is dominant. As storage age increases, fragmentation tends to increase. The filesystem we study has better fragmentation control than the database we used, suggesting the database system would benefit from incorporating ideas from filesystem architecture. Conversely, filesystem performance may be improved by using database techniques to handle small files. Surprisingly, for these studies, when average object size is held constant, the distribution of object sizes did not significantly affect performance. We also found that, in addition to low percentage free space, a low ratio of free space to average object size leads to fragmentation and performance degradation.",
"title": ""
},
{
"docid": "75ccea636210f4b4df490a7babdf7790",
"text": "BACKGROUND\nSmartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory.\n\n\nMETHODS\nA sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use.\n\n\nRESULTS\nThe prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations).\n\n\nCONCLUSIONS\nPSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, opens up new avenues in terms of prevention and regulation policies.",
"title": ""
},
{
"docid": "1350f4e274947881f4562ab6596da6fd",
"text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.",
"title": ""
},
{
"docid": "6f39c364603cdbf35b0053fc1ed5366c",
"text": "A dual-band unidirectional antenna has been developed. The dual-band antenna consists of a long dipole for the lower frequency band and two short dipoles for the higher frequency band. All dipoles are printed coplanar on a thin substrate. The printed dipole antenna is excited by a microstrip line. The higher-order mode in the higher frequency band has been suppressed, leading to a good unidirectional pattern in the both frequency bands. This dual-band unidirectional antenna may find application in base stations and/or access points for 2.4/5- GHz wireless communications.",
"title": ""
},
{
"docid": "b205dd971c6fb240b5fc85e9c3ee80a9",
"text": "Network embedding leverages the node proximity manifested to learn a low-dimensional node vector representation for each node in the network. The learned embeddings could advance various learning tasks such as node classification, network clustering, and link prediction. Most, if not all, of the existing works, are overwhelmingly performed in the context of plain and static networks. Nonetheless, in reality, network structure often evolves over time with addition/deletion of links and nodes. Also, a vast majority of real-world networks are associated with a rich set of node attributes, and their attribute values are also naturally changing, with the emerging of new content patterns and the fading of old content patterns. These changing characteristics motivate us to seek an effective embedding representation to capture network and attribute evolving patterns, which is of fundamental importance for learning in a dynamic environment. To our best knowledge, we are the first to tackle this problem with the following two challenges: (1) the inherently correlated network and node attributes could be noisy and incomplete, it necessitates a robust consensus representation to capture their individual properties and correlations; (2) the embedding learning needs to be performed in an online fashion to adapt to the changes accordingly. In this paper, we tackle this problem by proposing a novel dynamic attributed network embedding framework - DANE. In particular, DANE first provides an offline method for a consensus embedding and then leverages matrix perturbation theory to maintain the freshness of the end embedding results in an online manner. We perform extensive experiments on both synthetic and real attributed networks to corroborate the effectiveness and efficiency of the proposed framework.",
"title": ""
},
{
"docid": "c8967be119df778e98954a7e94bee4ca",
"text": "We consider the problem of predicting real valued scores for reviews based on various categories of features of the review text, and other metadata associated with the review, with the purpose of generating a rank for a given list of reviews. For this task, we explore various machine learning models and evaluate the effectiveness of them through a well known measure for goodness of fit. We also explored regularization methods to reduce variance in the model. Random forests was the most effective regressor in the end, outperforming all the other models that we have tried.",
"title": ""
},
{
"docid": "82335fb368198a2cf7e3021627449058",
"text": "While cancer treatments are constantly advancing, there is still a real risk of relapse after potentially curative treatments. At the risk of adverse side effects, certain adjuvant treatments can be given to patients that are at high risk of recurrence. The challenge, however, is in finding the best tradeoff between these two extremes. Patients that are given more potent treatments, such as chemotherapy, radiation, or systemic treatment, can suffer unnecessary consequences, especially if the cancer does not return. Predictive modeling of recurrence can help inform patients and practitioners on a case-by-case basis, personalized for each patient. For large-scale predictive models to be built, structured data must be captured for a wide range of diverse patients. This paper explores current methods for building cancer recurrence risk models using structured clinical patient data.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "10a25736139db87efc5a3c2af6fa02fa",
"text": "Two main fields of interest form the background of actual demand for optimized levels of phenolic compounds in crop plants. These are human health and plant resistance to pathogens and to biotic and abiotic stress factors. A survey of agricultural technologies influencing the biosynthesis and accumulation of phenolic compounds in crop plants is presented, including observations on the effects of light, temperature, mineral nutrition, water management, grafting, elevated atmospheric CO(2), growth and differentiation of the plant and application of elicitors, stimulating agents and plant activators. The underlying mechanisms are discussed with respect to carbohydrate availability, trade-offs to competing demands as well as to regulatory elements. Outlines are given for genetic engineering and plant breeding. Constraints and possible physiological feedbacks are considered for successful and sustainable application of agricultural techniques with respect to management of plant phenol profiles and concentrations.",
"title": ""
},
{
"docid": "d4f7a87891fc1c626d033be09cdf45b7",
"text": "Type-2 fuzzy sets, which are characterized by membership functions (MFs) that are themselves fuzzy, have been attracting interest. This paper focuses on advancing the understanding of interval type-2 fuzzy logic controllers (FLCs). First, a type-2 FLC is evolved using Genetic Algorithms (GAs). The type-2 FLC is then compared with another three GA evolved type-1 FLCs that have different design parameters. The objective is to examine the amount by which the extra degrees of freedom provided by antecedent type-2 fuzzy sets is able to improve the control performance. Experimental results show that better control can be achieved using a type-2 FLC with fewer fuzzy sets/rules so one benefit of type-2 FLC is a lower trade-off between modeling accuracy and interpretability. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f0f4087fbf23c523fff9fcc415de2ef7",
"text": "Neural machine translation (NMT) is a deep learning based approach for machine translation, which yields the state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although the high-quality and domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation that leverages both out-of-domain parallel corpora as well as monolingual corpora for in-domain translation, is very important for domainspecific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.",
"title": ""
},
{
"docid": "26d5237c912977223e0ba45c0f949e3d",
"text": "Generally speaking, ‘Education’ is utilized in three senses: Knowledge, Subject and a Process. When a person achieves degree up to certain level we do not call it education .As for example if a person has secured Masters degree then we utilize education it a very narrower sense and call that the person has achieved education up to Masters Level. In the second sense, education is utilized in a sense of discipline. As for example if a person had taken education as a paper or as a discipline during his study in any institution then we utilize education as a subject. In the third sense, education is utilized as a process. In fact when we talk of education, we talk in the third sense i.e. education as a process. Thus, we talk what is education as a process? What are their importances etc.? The following debate on education will discuss education in this sense and we will talk education as a process.",
"title": ""
},
{
"docid": "7956124ea61dd0ddff827b769935287b",
"text": "This paper presents a fully automated atlas-based pancreas segmentation method from CT volumes utilizing 3D fully convolutional network (FCN) feature-based pancreas localization. Segmentation of the pancreas is difficult because it has larger inter-patient spatial variations than other organs. Previous pancreas segmentation methods failed to deal with such variations. We propose a fully automated pancreas segmentation method that contains novel localization and segmentation. Since the pancreas neighbors many other organs, its position and size are strongly related to the positions of the surrounding organs. We estimate the position and the size of the pancreas (localized) from global features by regression forests. As global features, we use intensity differences and 3D FCN deep learned features, which include automatically extracted essential features for segmentation. We chose 3D FCN features from a trained 3D U-Net, which is trained to perform multi-organ segmentation. The global features include both the pancreas and surrounding organ information. After localization, a patient-specific probabilistic atlas-based pancreas segmentation is performed. In evaluation results with 146 CT volumes, we achieved 60.6% of the Jaccard index and 73.9% of the Dice overlap.",
"title": ""
}
] |
scidocsrr
|
1f19534253894255374115fba86abb5d
|
Convolutional nets and watershed cuts for real-time semantic Labeling of RGBD videos
|
[
{
"docid": "e37b3a68c850d1fb54c9030c22b5792f",
"text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.",
"title": ""
}
] |
[
{
"docid": "cce75a31fde0740700087125c884e862",
"text": "Neural circuits of the basal ganglia are critical for motor planning and action selection. Two parallel basal ganglia pathways have been described, and have been proposed to exert opposing influences on motor function. According to this classical model, activation of the ‘direct’ pathway facilitates movement and activation of the ‘indirect’ pathway inhibits movement. However, more recent anatomical and functional evidence has called into question the validity of this hypothesis. Because this model has never been empirically tested, the specific function of these circuits in behaving animals remains unknown. Here we report direct activation of basal ganglia circuitry in vivo, using optogenetic control of direct- and indirect-pathway medium spiny projection neurons (MSNs), achieved through Cre-dependent viral expression of channelrhodopsin-2 in the striatum of bacterial artificial chromosome transgenic mice expressing Cre recombinase under control of regulatory elements for the dopamine D1 or D2 receptor. Bilateral excitation of indirect-pathway MSNs elicited a parkinsonian state, distinguished by increased freezing, bradykinesia and decreased locomotor initiations. In contrast, activation of direct-pathway MSNs reduced freezing and increased locomotion. In a mouse model of Parkinson’s disease, direct-pathway activation completely rescued deficits in freezing, bradykinesia and locomotor initiation. Taken together, our findings establish a critical role for basal ganglia circuitry in the bidirectional regulation of motor behaviour and indicate that modulation of direct-pathway circuitry may represent an effective therapeutic strategy for ameliorating parkinsonian motor deficits.",
"title": ""
},
{
"docid": "14863b1ca1d21c16319e40a34a0e3893",
"text": "Amyloid-beta peptide is central to the pathology of Alzheimer's disease, because it is neurotoxic--directly by inducing oxidant stress, and indirectly by activating microglia. A specific cell-surface acceptor site that could focus its effects on target cells has been postulated but not identified. Here we present evidence that the 'receptor for advanced glycation end products' (RAGE) is such a receptor, and that it mediates effects of the peptide on neurons and microglia. Increased expressing of RAGE in Alzheimer's disease brain indicates that it is relevant to the pathogenesis of neuronal dysfunction and death.",
"title": ""
},
{
"docid": "0a04562e76fd0f7b5743cd1491872853",
"text": "This paper describes a new Word Sense Disambiguation (WSD) algorithm which extends two well-known variations of the Lesk WSD method. Given a word and its context, Lesk algorithm exploits the idea of maximum number of shared words (maximum overlaps) between the context of a word and each definition of its senses (gloss) in order to select the proper meaning. The main contribution of our approach relies on the use of a word similarity function defined on a distributional semantic space to compute the gloss-context overlap. As sense inventory we adopt BabelNet, a large multilingual semantic network built exploiting both WordNet and Wikipedia. Besides linguistic knowledge, BabelNet also represents encyclopedic concepts coming from Wikipedia. The evaluation performed on SemEval-2013 Multilingual Word Sense Disambiguation shows that our algorithm goes beyond the most frequent sense baseline and the simplified version of the Lesk algorithm. Moreover, when compared with the other participants in SemEval-2013 task, our approach is able to outperform the best system for English.",
"title": ""
},
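The passage above describes a Lesk-style disambiguator whose gloss-context overlap is generalized with a word-similarity function. A minimal sketch of that idea follows; the toy sense inventory and the pluggable similarity function are illustrative assumptions, not the authors' BabelNet-based resources.

```python
# Simplified Lesk with an optional word-similarity generalization.
def gloss_context_score(gloss_words, context_words, similarity=None):
    """With similarity=None this reduces to the classic Lesk overlap count;
    otherwise each context word contributes its best similarity to any gloss word."""
    if similarity is None:
        return len(set(gloss_words) & set(context_words))
    return sum(max(similarity(c, g) for g in gloss_words) for c in context_words)

def lesk_disambiguate(target, context_words, sense_inventory, similarity=None):
    """Pick the sense whose gloss best matches the context.
    sense_inventory: dict mapping word -> {sense id -> list of gloss words} (hypothetical format)."""
    best_sense, best_score = None, float("-inf")
    for sense_id, gloss_words in sense_inventory.get(target, {}).items():
        score = gloss_context_score(gloss_words, context_words, similarity)
        if score > best_score:
            best_sense, best_score = sense_id, score
    return best_sense

# Toy usage with a made-up two-sense inventory for "bank".
inventory = {
    "bank": {
        "bank#finance": ["financial", "institution", "money", "deposit"],
        "bank#river": ["sloping", "land", "river", "water"],
    }
}
print(lesk_disambiguate("bank", ["deposit", "money", "account"], inventory))
```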
{
"docid": "c8c57c89f5bd92c726373f9cf77726e0",
"text": "Research of named entity recognition (NER) on electrical medical records (EMRs) focuses on verifying whether methods to NER in traditional texts are effective for that in EMRs, and there is no model proposed for enhancing performance of NER via deep learning from the perspective of multiclass classification. In this paper, we annotate a real EMR corpus to accomplish the model training and evaluation. And, then, we present a Convolutional Neural Network (CNN) based multiclass classification method for mining named entities from EMRs. The method consists of two phases. In the phase 1, EMRs are pre-processed for representing samples with word embedding. In the phase 2, the method is built by segmenting training data into many subsets and training a CNN binary classification model on each of subset. Experimental results showed the effectiveness of our method.",
"title": ""
},
{
"docid": "36fe6f57109e3a3ebc147ba06751774a",
"text": "Driven by the need for higher bandwidth and complexity reduction, off-chip interconnect has evolved from proprietary busses to networked architectures. A similar evolution is occurring in on-chip interconnect. This paper presents the design, implementation and evaluation of one such on-chip network, the TRIPS OCN. The OCN is a wormhole routed, 4x10, 2D mesh network with four virtual channels. It provides a high bandwidth, low latency interconnect between the TRIPS processors, L2 cache banks and I/O units. We discuss the tradeoffs made in the design of the OCN, in particular why area and complexity were traded off against latency. We then evaluate the OCN using synthetic as well as realistic loads. We found that synthetic benchmarks do not provide sufficient indication of the behavior of realistic loads on this network. Finally, we examine the effect of link bandwidth and router FIFO depth on overall performance.",
"title": ""
},
{
"docid": "a4ecdccf4370292a31fc38d6602b3f50",
"text": "Loop gain analysis for performance evaluation of current sensors for switching converters is presented. The MOS transistor scaling technique is reviewed and employed in developing high-speed and high-accuracy current-sensors with offset-current cancellation. Using a standard 0.35/spl mu/m CMOS process, and integrated full-range inductor current sensor for a boost converter is designed. It operated at a supply voltage of 1.5 V with a DC loop gain of 38 dB, and a unity gain frequency of 10 MHz. The sensor worked properly at a converter switching frequency of 500 kHz.",
"title": ""
},
{
"docid": "c2c056ae22c22e2a87b9eca39d125cc2",
"text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.",
"title": ""
},
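Among the ingredients the passage above lists for trustworthy experiments are statistical power and sample size. The sketch below computes the per-variant sample size for detecting a given shift in a metric's mean; the numbers used are illustrative assumptions, not values from the passage.

```python
# Minimal power/sample-size calculation for a two-sample comparison of means.
from statistics import NormalDist

def samples_per_variant(sigma, delta, alpha=0.05, power=0.8):
    """Samples needed in each variant to detect a mean shift of `delta`
    when the metric has standard deviation `sigma`, two-sided significance `alpha`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Example: detect a 0.01 absolute change in a metric with sigma = 0.30.
print(round(samples_per_variant(sigma=0.30, delta=0.01)))   # roughly 14,000 per variant
# The common rule of thumb n = 16 * sigma^2 / delta^2 gives a similar figure:
print(round(16 * 0.30 ** 2 / 0.01 ** 2))                    # 14,400
```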
{
"docid": "13be873fdb53f25d81e35c0ee245fc40",
"text": "Deep neural networks are learning models with a very high capacity and therefore prone to over- fitting. Many regularization techniques such as Dropout, DropConnect, and weight decay all attempt to solve the problem of over-fitting by reducing the capacity of their respective models (Srivastava et al., 2014), (Wan et al., 2013), (Krogh & Hertz, 1992). In this paper we introduce a new form of regularization that guides the learning problem in a way that reduces over- fitting without sacrificing the capacity of the model. The mistakes that models make in early stages of training carry information about the learning problem. By adjusting the labels of the current epoch of training through a weighted average of the real labels, and an exponential average of the past soft-targets we achieved a regularization scheme as powerful as Dropout without necessarily reducing the capacity of the model, and simplified the complexity of the learning problem. SoftTarget regularization proved to be an effective tool in various neural network architectures.",
"title": ""
},
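The SoftTarget idea above mixes the true labels with an exponential average of the model's past soft predictions. The sketch below shows one plausible form of that update; the hyperparameter names (beta, gamma) and the update order are assumptions for illustration, not the paper's exact scheme.

```python
# Soft-target label adjustment: labels for the next epoch blend hard labels
# with an exponential moving average of past model predictions.
import numpy as np

def update_soft_targets(running_soft, epoch_predictions, gamma=0.9):
    """Exponential moving average of per-example class probabilities."""
    if running_soft is None:
        return epoch_predictions.copy()
    return gamma * running_soft + (1.0 - gamma) * epoch_predictions

def training_labels(true_labels_onehot, running_soft, beta=0.7):
    """Labels used for the current epoch: mix of hard labels and soft history."""
    if running_soft is None:
        return true_labels_onehot
    return beta * true_labels_onehot + (1.0 - beta) * running_soft

# Toy usage with 3 examples and 2 classes; the "model output" is a noisy stand-in.
y_true = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)
running = None
for epoch in range(3):
    targets = training_labels(y_true, running)          # feed these to the loss
    preds = np.clip(targets + 0.1 * np.random.randn(*targets.shape), 0, 1)
    preds = preds / preds.sum(axis=1, keepdims=True)
    running = update_soft_targets(running, preds)
```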
{
"docid": "f20e0b50b72b4b2796b77757ff20210e",
"text": "The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation cost incurred by these complex mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures. We propose a simple but novel deep learning architecture for fast and efficient question-answer ranking and retrieval. More specifically, our proposed model, HyperQA, is a parameter efficient neural network that outperforms other parameter intensive models such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple QA benchmarks. The novelty behind HyperQA is a pairwise ranking objective that models the relationship between question and answer embeddings in Hyperbolic space instead of Euclidean space. This empowers our model with a self-organizing ability and enables automatic discovery of latent hierarchies while learning embeddings of questions and answers. Our model requires no feature engineering, no similarity matrix matching, no complicated attention mechanisms nor over-parameterized layers and yet outperforms and remains competitive to many models that have these functionalities on multiple benchmarks.",
"title": ""
},
{
"docid": "555afe09318573b475e96e72d2c7e54e",
"text": "A conflict-free replicated data type (CRDT) is an abstract data type, with a well defined interface, designed to be replicated at multiple processes and exhibiting the following properties: (i) any replica can be modified without coordinating with another replicas; (ii) when any two replicas have received the same set of updates, they reach the same state, deterministically, by adopting mathematically sound rules to guarantee state convergence.",
"title": ""
},
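A minimal state-based CRDT illustrating the two properties listed in the passage above: each replica is updated independently, and replicas that have seen the same updates converge after merging. The grow-only counter is a standard textbook example, not drawn from the passage itself.

```python
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}                      # replica id -> local increment count

    def increment(self, amount=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise maximum is commutative, associative and idempotent,
        # which guarantees deterministic convergence of replica states.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("A"), GCounter("B")
a.increment(3); b.increment(2)                # concurrent, uncoordinated updates
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5            # both replicas converge
```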
{
"docid": "a6fbd3f79105fd5c9edfc4a0292a3729",
"text": "The widespread use of templates on the Web is considered harmful for two main reasons. Not only do they compromise the relevance judgment of many web IR and web mining methods such as clustering and classification, but they also negatively impact the performance and resource usage of tools that process web pages. In this paper we present a new method that efficiently and accurately removes templates found in collections of web pages. Our method works in two steps. First, the costly process of template detection is performed over a small set of sample pages. Then, the derived template is removed from the remaining pages in the collection. This leads to substantial performance gains when compared to previous approaches that combine template detection and removal. We show, through an experimental evaluation, that our approach is effective for identifying terms occurring in templates - obtaining F-measure values around 0.9, and that it also boosts the accuracy of web page clustering and classification methods.",
"title": ""
},
{
"docid": "fb03d0abb1a69f53fd89d427af122162",
"text": "Numerous studies have suggested that biodiversity reduces variability in ecosystem productivity through compensatory effects; that is, a species increases in its abundance in response to the reduction of another in a fluctuating environment. But this view has been challenged on several grounds. Because most studies have been based on artificially constructed grasslands with short duration, long-term studies of natural ecosystems are needed. On the basis of a 24-year study of the Inner Mongolia grassland, here we present three key findings. First, that January–July precipitation is the primary climatic factor causing fluctuations in community biomass production; second, that ecosystem stability (conversely related to variability in community biomass production) increases progressively along the hierarchy of organizational levels (that is, from species to functional group to whole community); and finally, that the community-level stability seems to arise from compensatory interactions among major components at both species and functional group levels. From a hierarchical perspective, our results corroborate some previous findings of compensatory effects. Undisturbed mature steppe ecosystems seem to culminate with high biodiversity, productivity and ecosystem stability concurrently. Because these relationships are correlational, further studies are necessary to verify the causation among these factors. Our study provides new insights for better management and restoration of the rapidly degrading Inner Mongolia grassland.",
"title": ""
},
{
"docid": "97f0cb39907fea698b833e8c8d5feaa9",
"text": "This paper is a reaction to the poor performance of symmetry detection algorithms on real-world images, benchmarked since CVPR 2011. Our systematic study reveals significant difference between human labeled (reflection and rotation) symmetries on photos and the output of computer vision algorithms on the same photo set. We exploit this human-machine symmetry perception gap by proposing a novel symmetry-based Turing test. By leveraging a comprehensive user interface, we collected more than 78,000 symmetry labels from 400 Amazon Mechanical Turk raters on 1,200 photos from the Microsoft COCO dataset. Using a set of ground-truth symmetries automatically generated from noisy human labels, the effectiveness of our work is evidenced by a separate test where over 96% success rate is achieved. We demonstrate statistically significant outcomes for using symmetry perception as a powerful, alternative, image-based reCAPTCHA.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "c142826a8cacd553b3212a0359dcf3d7",
"text": "In the past few years, a lot of attention has been devoted to multimedia indexing by fusing multimodal informations. Two kinds of fusion schemes are generally considered: The early fusion and the late fusion. We focus on late classifier fusion, where one combines the scores of each modality at the decision level. To tackle this problem, we investigate a recent and elegant well-founded quadratic program named MinCq coming from the machine learning PAC-Bayesian theory. MinCq looks for the weighted combination, over a set of real-valued functions seen as voters, leading to the lowest misclassification rate, while maximizing the voters’ diversity. We propose an extension of MinCq tailored to multimedia indexing. Our method is based on an order-preserving pairwise loss adapted to ranking that allows us to improve Mean Averaged Precision measure while taking into account the diversity of the voters that we want to fuse. We provide evidence that this method is naturally adapted to late fusion procedures and confirm the good behavior of our approach on the challenging PASCAL VOC’07 benchmark.",
"title": ""
},
{
"docid": "4cfd4f09a88186cb7e5f200e340d1233",
"text": "Keyword spotting (KWS) aims to detect predefined keywords in continuous speech. Recently, direct deep learning approaches have been used for KWS and achieved great success. However, these approaches mostly assume fixed keyword vocabulary and require significant retraining efforts if new keywords are to be detected. For unrestricted vocabulary, HMM based keywordfiller framework is still the mainstream technique. In this paper, a novel deep learning approach is proposed for unrestricted vocabulary KWS based on Connectionist Temporal Classification (CTC) with Long Short-Term Memory (LSTM). Here, an LSTM is trained to discriminant phones with the CTC criterion. During KWS, an arbitrary keyword can be specified and it is represented by one or more phone sequences. Due to the property of peaky phone posteriors of CTC, the LSTM can produce a phone lattice. Then, a fast substring matching algorithm based on minimum edit distance is used to search the keyword phone sequence on the phone lattice. The approach is highly efficient and vocabulary independent. Experiments showed that the proposed approach can achieve significantly better results compared to a DNN-HMM based keyword-filler decoding system. In addition, the proposed approach is also more efficient than the DNN-HMM KWS baseline.",
"title": ""
},
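The matching step described above searches a keyword's phone sequence within decoded output by minimum edit distance. Below is a sketch of that substring-matching dynamic program, simplified to a flat 1-best phone sequence rather than a full lattice; the phone symbols and detection threshold are illustrative assumptions.

```python
def min_edit_substring(keyword, stream):
    """Minimum edit distance between `keyword` and any substring of `stream`."""
    m = len(keyword)
    prev = [0] * (len(stream) + 1)            # first row is zero: free start anywhere
    for i in range(1, m + 1):
        cur = [i] + [0] * len(stream)
        for j in range(1, len(stream) + 1):
            cost = 0 if keyword[i - 1] == stream[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # skip a keyword phone
                         cur[j - 1] + 1,      # absorb an extra stream phone
                         prev[j - 1] + cost)  # substitution / match
        prev = cur
    return min(prev)                          # free end as well

keyword = ["hh", "ax", "l", "ow"]             # hypothetical phone set
decoded = ["sil", "hh", "ax", "l", "l", "ow", "w", "er", "l", "d", "sil"]
distance = min_edit_substring(keyword, decoded)
print("keyword detected" if distance <= 1 else "not detected", distance)
```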
{
"docid": "b54045769ce80654400706a2489a2968",
"text": "This study aims to develop a methodology for predicting cycle time based on domain knowledge and data mining algorithms given production status including WIP, throughput. The proposed model and derived rules were validated with real data and demonstrated its practical viability for supporting production planning decisions",
"title": ""
},
{
"docid": "66b87120f9d908960131b0e059b0a526",
"text": "In this paper, we present the design and implementation of a wireless, wearable brain machine interface (BMI) system dedicated to signal sensing and processing for driver drowsiness detection (DDD). Owing to the importance of driver drowsiness and the possibility for brainwaves-based DDD, many electroencephalogram (EEG)-based approaches have been proposed. However, few studies focus on the early detection of driver drowsiness and on the early management of driver drowsiness using a closed-loop algorithm. The reported wireless and wearable BMI system is used for 1) simultaneous EEG and gyroscope-based head movement measurement for the early detection of driver drowsiness and 2) simultaneous EEG and transcranial direct current stimulation (tDCS) for the early management of driver drowsiness. To achieve the purposes of easy-to-use and distraction-free driving, a Bluetooth low-energy module is embedded in this BMI system and used to communicate with a fully wearable consumer device, a smartwatch, which coordinates the work of drowsiness monitoring and brain stimulation with its embedded closed-loop algorithm. The proposed system offers a 128 Hz sampling rate per channel, 12-bit and 16-bit resolution for a single-channel EEG and a three-channel gyroscope, and a maximum 2 mA current for the tDCS. The current consumption of the whole headset system is 56 mA. The battery life of the smartwatch is 9 h. The DDD experimental results show that the proposed system obtained a 93.67% five-level overall accuracy, a 96.15% two-level (alert versus slightly drowsy) accuracy, and maximum 16- to 23-min wakefulness maintenance.",
"title": ""
},
{
"docid": "ea984f909f77ae17de76b2160d5c404f",
"text": "Knowledge construction is expensive for Computer Assisted Assessment. When setting exercise questions, teachers use Test Makers to construct Question Banks. The addition of Automatic Generation to assessment applications decreases the time spent on constructing examination papers. In this article, we present ArikIturri, an Automatic Question Generator for Basque language test questions, which is independent from the test assessment application that uses it. The information source for this question generator consists of linguistically analysed real corpora, represented in XML markup language. ArikIturri makes use of NLP tools. The influence of the robustness of those tools and the used corpora is highlighted in the article. We have proved the viability of ArikIturri when constructing fill-in-the-blank, word formation, multiple choice, and error correction question types. In the evaluation of this automatic generator, we have obtained positive results as regards the generation process and its usefulness.",
"title": ""
}
] |
scidocsrr
|
459c8d99d0fde79af25991a62afb56c0
|
Value-Decomposition Networks For Cooperative Multi-Agent Learning
|
[
{
"docid": "577e7903eb355cbf790fb1c159a08e49",
"text": "We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms i a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate both their action selection activities and their parameter updates. Within the limits of our parametric representations, the agents will determine a jointly optimal action without explicitly considering every possible action in their exponentially large joint action space. Our methods iffer from many previous reinforcement learning approaches to multiagent coordination in that structured communication and coordination between agents appears at the core of both the learning algorithm and the execution architecture.",
"title": ""
},
{
"docid": "a9dfddc3812be19de67fc4ffbc2cad77",
"text": "Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents’ policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent’s action, while keeping the other agents’ actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actorcritic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.",
"title": ""
},
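The key computation in the passage above is a counterfactual baseline that marginalizes one agent's own action under its policy while fixing the other agents' actions. A small numeric sketch of that advantage follows; the toy Q-table and policies are illustrative assumptions, not the COMA critic network.

```python
import numpy as np

n_actions = 3
# Centralized Q for a fixed state, indexed by (action of agent 0, action of agent 1).
Q = np.array([[1.0, 0.2, 0.0],
              [0.5, 2.0, 0.3],
              [0.1, 0.4, 1.5]])
pi = [np.array([0.2, 0.5, 0.3]),     # agent 0's policy over its own actions
      np.array([0.3, 0.3, 0.4])]     # agent 1's policy

joint_action = (1, 1)                # actions actually taken

def counterfactual_advantage(agent, joint_action):
    q_taken = Q[joint_action]
    others = list(joint_action)
    # Baseline: expectation over this agent's actions with the others' actions fixed.
    baseline = 0.0
    for alt in range(n_actions):
        others[agent] = alt
        baseline += pi[agent][alt] * Q[tuple(others)]
    return q_taken - baseline

for a in range(2):
    print(f"agent {a} advantage:", counterfactual_advantage(a, joint_action))
```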
{
"docid": "10202f2c14808988ca74b7efe5079949",
"text": "Multiagent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. The complexity of many tasks arising in these domains makes them difficult to solve with preprogrammed agent behaviors. The agents must, instead, discover a solution on their own, using learning. A significant part of the research on multiagent learning concerns reinforcement learning techniques. This paper provides a comprehensive survey of multiagent reinforcement learning (MARL). A central issue in the field is the formal statement of the multiagent learning goal. Different viewpoints on this issue have led to the proposal of many different goals, among which two focal points can be distinguished: stability of the agents' learning dynamics, and adaptation to the changing behavior of the other agents. The MARL algorithms described in the literature aim---either explicitly or implicitly---at one of these two goals or at a combination of both, in a fully cooperative, fully competitive, or more general setting. A representative selection of these algorithms is discussed in detail in this paper, together with the specific issues that arise in each category. Additionally, the benefits and challenges of MARL are described along with some of the problem domains where the MARL techniques have been applied. Finally, an outlook for the field is provided.",
"title": ""
}
] |
[
{
"docid": "e7d415728a3bb0d015fca679299791e3",
"text": "Decentralized cryptocurrencies rely on participants to keep track of the state of the system in order to verify new transactions. As the number of users and transactions grows, this requirement places a significant burden on the users, as they need to download, verify, and store a large amount of data in order to participate. Vault is a new cryptocurrency designed to minimize these storage and bootstrapping costs for participants. Vault builds on Algorand’s proof-of-stake consensus protocol and uses several techniques to achieve its goals. First, Vault decouples the storage of recent transactions from the storage of account balances, which enables Vault to delete old account state. Second, Vault allows sharding state across participants in a way that preserves strong security guarantees. Finally, Vault introduces the notion of stamping certificates that allow a new client to catch up securely and efficiently in a proof-of-stake system without having to verify every single block. Experiments with a prototype implementation of Vault’s data structures shows that Vault reduces the bandwidth cost of joining the network as a full client by 99.7% compared to Bitcoin and 90.5% compared to Ethereum when downloading a ledger containing 500 million transactions.",
"title": ""
},
{
"docid": "c06c13af6d89c66e2fa065534bfc2975",
"text": "Complex foldings of the vaginal wall are unique to some cetaceans and artiodactyls and are of unknown function(s). The patterns of vaginal length and cumulative vaginal fold length were assessed in relation to body length and to each other in a phylogenetic context to derive insights into functionality. The reproductive tracts of 59 female cetaceans (20 species, 6 families) were dissected. Phylogenetically-controlled reduced major axis regressions were used to establish a scaling trend for the female genitalia of cetaceans. An unparalleled level of vaginal diversity within a mammalian order was found. Vaginal folds varied in number and size across species, and vaginal fold length was positively allometric with body length. Vaginal length was not a significant predictor of vaginal fold length. Functional hypotheses regarding the role of vaginal folds and the potential selection pressures that could lead to evolution of these structures are discussed. Vaginal folds may present physical barriers, which obscure the pathway of seawater and/or sperm travelling through the vagina. This study contributes broad insights to the evolution of reproductive morphology and aquatic adaptations and lays the foundation for future functional morphology analyses.",
"title": ""
},
{
"docid": "ddff0a3c6ed2dc036cf5d6b93d2da481",
"text": "Dense video captioning is a newly emerging task that aims at both localizing and describing all events in a video. We identify and tackle two challenges on this task, namely, (1) how to utilize both past and future contexts for accurate event proposal predictions, and (2) how to construct informative input to the decoder for generating natural event descriptions. First, previous works predominantly generate temporal event proposals in the forward direction, which neglects future video context. We propose a bidirectional proposal method that effectively exploits both past and future contexts to make proposal predictions. Second, different events ending at (nearly) the same time are indistinguishable in the previous works, resulting in the same captions. We solve this problem by representing each event with an attentive fusion of hidden states from the proposal module and video contents (e.g., C3D features). We further propose a novel context gating mechanism to balance the contributions from the current event and its surrounding contexts dynamically. We empirically show that our attentively fused event representation is superior to the proposal hidden states or video contents alone. By coupling proposal and captioning modules into one unified framework, our model outperforms the state-of-the-arts on the ActivityNet Captions dataset with a relative gain of over 100% (Meteor score increases from 4.82 to 9.65).",
"title": ""
},
{
"docid": "bffbecf26ca3a6e5586b240e0131f325",
"text": "The development of high-resolution neuroimaging and multielectrode electrophysiological recording provides neuroscientists with huge amounts of multivariate data. The complexity of the data creates a need for statistical summary, but the local averaging standardly applied to this end may obscure the effects of greatest neuroscientific interest. In neuroimaging, for example, brain mapping analysis has focused on the discovery of activation, i.e., of extended brain regions whose average activity changes across experimental conditions. Here we propose to ask a more general question of the data: Where in the brain does the activity pattern contain information about the experimental condition? To address this question, we propose scanning the imaged volume with a \"searchlight,\" whose contents are analyzed multivariately at each location in the brain.",
"title": ""
},
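The searchlight idea above slides a small spherical neighborhood over the imaged volume and computes a multivariate statistic at each location. The sketch below shows the scanning skeleton; the volume size, radius, and the simple pattern-difference statistic are toy assumptions standing in for a real classifier or distance measure.

```python
import numpy as np

def searchlight(volume_a, volume_b, radius=1):
    """For each voxel, compare local activity patterns of two conditions."""
    shape = volume_a.shape
    result = np.zeros(shape)
    offsets = [(dx, dy, dz)
               for dx in range(-radius, radius + 1)
               for dy in range(-radius, radius + 1)
               for dz in range(-radius, radius + 1)
               if dx * dx + dy * dy + dz * dz <= radius * radius]
    for x in range(radius, shape[0] - radius):
        for y in range(radius, shape[1] - radius):
            for z in range(radius, shape[2] - radius):
                pa = np.array([volume_a[x + dx, y + dy, z + dz] for dx, dy, dz in offsets])
                pb = np.array([volume_b[x + dx, y + dy, z + dz] for dx, dy, dz in offsets])
                # Toy statistic: mean absolute pattern difference within the sphere.
                result[x, y, z] = np.mean(np.abs(pa - pb))
    return result

vol_a, vol_b = np.random.rand(8, 8, 8), np.random.rand(8, 8, 8)
info_map = searchlight(vol_a, vol_b)
```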
{
"docid": "49e875364e2551dda40b682bd37d4ea6",
"text": "The short-circuit current calculation of any equipment in the power system is very important for selection of appropriate relay characteristics and circuit breaker for the protection of the system. The power system is undergoing changes because of large scale penetration of renewable energy sources in the conventional system. Major renewable sources which are included in the power system are wind energy and solar energy sources. The wind energy is supplied by wind turbine generators of various types. Type III generators i.e. Doubly Fed Induction Generator (DFIG) is the most common types of generator employed offering different behavior compared to conventionally employed synchronous generators. In this paper; the short circuit current contribution of DFIG is calculated analytically and the same is validated by PSCAD/EMTDC software under various wind speeds and by considering certain voltage drops of the generator output.",
"title": ""
},
{
"docid": "4e924d619325ca939955657db1280db1",
"text": "This paper presents the dynamic modeling of a nonholonomic mobile robot and the dynamic stabilization problem. The dynamic model is based on the kinematic one including nonholonomic constraints. The proposed control strategy allows to solve the control problem using linear controllers and only requires the robot localization coordinates. This strategy was tested by simulation using Matlab-Simulink. Key-words: Mobile robot, kinematic and dynamic modeling, simulation, point stabilization problem.",
"title": ""
},
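The kinematic model the passage above builds on is the standard unicycle with a nonholonomic (no sideways motion) constraint. A minimal forward-Euler integration of that model is sketched below; the control inputs are arbitrary illustrative values, not the paper's controller.

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """x' = v cos(theta), y' = v sin(theta), theta' = omega."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):                      # 1 s of simulation at dt = 0.01
    x, y, theta = unicycle_step(x, y, theta, v=0.5, omega=0.3, dt=0.01)
print(round(x, 3), round(y, 3), round(theta, 3))
```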
{
"docid": "ea4da468a0e7f84266340ba5566f4bdb",
"text": "We present a novel realtime algorithm to compute the trajectory of each pedestrian in a crowded scene. Our formulation is based on an adaptive scheme that uses a combination of deterministic and probabilistic trackers to achieve high accuracy and efficiency simultaneously. Furthermore, we integrate it with a multi-agent motion model and local interaction scheme to accurately compute the trajectory of each pedestrian. We highlight the performance and benefits of our algorithm on well-known datasets with tens of pedestrians.",
"title": ""
},
{
"docid": "1f0926abdff68050ef88eea49adaf382",
"text": "Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major source of disagreement among the different theories is whether children are equipped with special mechanisms and biases for word learning, or their general cognitive abilities are adequate for the task. We present a novel computational model of early word learning to shed light on the mechanisms that might be at work in this process. The model learns word meanings as probabilistic associations between words and semantic elements, using an incremental and probabilistic learning mechanism, and drawing only on general cognitive abilities. The results presented here demonstrate that much about word meanings can be learned from naturally occurring child-directed utterances (paired with meaning representations), without using any special biases or constraints, and without any explicit developmental changes in the underlying learning mechanism. Furthermore, our model provides explanations for the occasionally contradictory child experimental data, and offers predictions for the behavior of young word learners in novel situations.",
"title": ""
},
{
"docid": "d053f8b728f94679cd73bc91193f0ba6",
"text": "Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.",
"title": ""
},
{
"docid": "611eacd767f1ea709c1c4aca7acdfcdb",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "90ec92d3e8c6c149083cae03f2adccf8",
"text": "Primary cardiac tumours are rare, and metastases to the heart are much more frequent. Myxoma is the commonest benign primary tumour and sarcomas account for the majority of malignant lesions. Clinical manifestations are diverse, non-specific, and governed by the location, size, and aggressiveness. Imaging plays a central role in their evaluation, and familiarity with characteristic features is essential to generate a meaningful differential diagnosis. Cardiac magnetic resonance imaging (MRI) has become the reference technique for evaluation of a suspected cardiac mass. Computed tomography (CT) provides complementary information and, with the advent of electrocardiographic gating, has become a powerful tool in its own right for cardiac morphological assessment. This paper reviews the MRI and CT features of primary and secondary cardiac malignancy. Important differential considerations and potential diagnostic pitfalls are also highlighted.",
"title": ""
},
{
"docid": "4a9ad387ad16727d9ac15ac667d2b1c3",
"text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.",
"title": ""
},
{
"docid": "080a14f6eb96b04c11c0cb65897dadd2",
"text": "Enterococcus faecalis is a microorganism commonly detected in asymptomatic, persistent endodontic infections. Its prevalence in such infections ranges from 24% to 77%. This finding can be explained by various survival and virulence factors possessed by E. faecalis, including its ability to compete with other microorganisms, invade dentinal tubules, and resist nutritional deprivation. Use of good aseptic technique, increased apical preparation sizes, and inclusion of 2% chlorhexidine in combination with sodium hypochlorite are currently the most effective methods to combat E. faecalis within the root canal systems of teeth. In the changing face of dental care, continued research on E. faecalis and its elimination from the dental apparatus may well define the future of the endodontic specialty.",
"title": ""
},
{
"docid": "af679ae83d0995d70fd86cf3d65f3183",
"text": "Client-side JavaScript is being widely used in popular web applications to improve functionality, increase responsiveness, and decrease load times. However, it is challenging to build reliable applications using JavaScript. This paper presents an empirical characterization of the error messages printed by JavaScript code in web applications, and attempts to understand their root causes. We find that JavaScript errors occur in production web applications, and that the errors fall into a small number of categories. We further find that both non-deterministic and deterministic errors occur in the applications, and that the speed of testing plays an important role in exposing errors. Finally, we study the correlations among the static and dynamic properties of the application and the frequency of errors in it in order to understand the root causes of the errors.",
"title": ""
},
{
"docid": "fd208ec9a2d74306495ac8c6d454bfd6",
"text": "This qualitative study investigates the perceptions of suburban middle school students’ on academic motivation and student engagement. Ten students, grades 6-8, were randomly selected by the researcher from school counselors’ caseloads and the primary data collection techniques included two types of interviews; individual interviews and focus group interviews. Findings indicate students’ motivation and engagement in middle school is strongly influenced by the social relationships in their lives. The interpersonal factors identified by students were peer influence, teacher support and teacher characteristics, and parental behaviors. Each of these factors consisted of academic and social-emotional support which hindered and/or encouraged motivation and engagement. Students identified socializing with their friends as a means to want to be in school and to engage in learning. Also, students are more engaged and motivated if they believe their teachers care about their academic success and value their job. Lastly, parental involvement in academics appeared to be more crucial for younger students than older students in order to encourage motivation and engagement in school. MIDDLE SCHOOL STUDENTS’ PERCEPTIONS 5 Middle School Students’ Perceptions on Student Engagement and Academic Motivation Middle School Students’ Perceptions on Student Engagement and Academic Motivation Early adolescence marks a time for change for students academically and socially. Students are challenged academically in the sense that there is greater emphasis on developing specific intellectual and cognitive capabilities in school, while at the same time they are attempting to develop social skills and meaningful relationships. It is often easy to overlook the social and interpersonal challenges faced by students in the classroom when there is a large focus on grades in education, especially since teachers’ competencies are often assessed on their students’ academic performance. When schools do not consider psychosocial needs of students, there is a decrease in academic motivation and interest, lower levels of student engagement and poorer academic performance (i.e. grades) for middle school students (Wang & Eccles, 2013). In fact, students who report high levels of engagement in school are 75% more likely to have higher grades and higher attendance rates. Disengaged students tend to have lower grades and are more likely to drop out of school (Klem & Connell, 2004). Therefore, this research has focused on understanding the connections between certain interpersonal influences and academic motivation and engagement.",
"title": ""
},
{
"docid": "04b32423acd23c03188ca8bf208a24fd",
"text": "We extend the notion of memristive systems to capacitive and inductive elements, namely, capacitors and inductors whose properties depend on the state and history of the system. All these elements typically show pinched hysteretic loops in the two constitutive variables that define them: current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor. We argue that these devices are common at the nanoscale, where the dynamical properties of electrons and ions are likely to depend on the history of the system, at least within certain time scales. These elements and their combination in circuits open up new functionalities in electronics and are likely to find applications in neuromorphic devices to simulate learning, adaptive, and spontaneous behavior.",
"title": ""
},
{
"docid": "43a60c1509b37943860ceddf740bf604",
"text": "In this paper, a design method for the co-design and integration of a CMOS rectifier and small loop antenna is described. In order to improve the sensitivity, the antenna-rectifier interface is analyzed as it plays a crucial role in the co-design optimization. Subsequently, a 5-stage cross-connected differential rectifier with a 7-bit binary-weighted capacitor bank is designed and fabricated in standard 90 nm CMOS technology. The rectifier is brought at resonance with a high-Q loop antenna by means of a control loop that compensates for any variation at the antenna-rectifier interface and passively boosts the antenna voltage to enhance the sensitivity. A complementary MOS diode is proposed to improve the harvester's ability to store and hold energy over a long period of time during which there is insufficient power for rectification. The chip is ESD protected and integrated on a compact loop antenna. Measurements in an anechoic chamber at 868 MHz demonstrate a -27 dBm sensitivity for 1 V output across a capacitive load and 27 meter range for a 1.78 W RF source in an office corridor. The end-to-end power conversion efficiency equals 40% at -17 dBm.",
"title": ""
},
{
"docid": "8a08bb5a952589615c9054d4fc0e8c1f",
"text": "The classical plain-text representation of source code is c onvenient for programmers but requires parsing to uncover t he deep structure of the program. While sophisticated software too ls parse source code to gain access to the program’s structur e, many lightweight programming aids such as grep rely instead on only the lexical structure of source code. I d escribe a new XML application that provides an alternative representation o f Java source code. This XML-based representation, called J avaML, is more natural for tools and permits easy specification of nume rous software-engineering analyses by leveraging the abun dance of XML tools and techniques. A robust converter built with th e Jikes Java compiler framework translates from the classic l Java source code representation to JavaML, and an XSLT style sheet converts from JavaML back into the classical textual f orm.",
"title": ""
},
{
"docid": "77a09b094d4622d01d09f042f1ae3045",
"text": "Depth maps captured by consumer-level depth cameras such as Kinect are usually degraded by noise, missing values, and quantization. In this paper, we present a data-driven approach for refining degraded RAWdepth maps that are coupled with an RGB image. The key idea of our approach is to take advantage of a training set of high-quality depth data and transfer its information to the RAW depth map through multi-scale dictionary learning. Utilizing a sparse representation, our method learns a dictionary of geometric primitives which captures the correlation between high-quality mesh data, RAW depth maps and RGB images. The dictionary is learned and applied in a manner that accounts for various practical issues that arise in dictionary-based depth refinement. Compared to previous approaches that only utilize the correlation between RAW depth maps and RGB images, our method produces improved depth maps without over-smoothing. Since our approach is data driven, the refinement can be targeted to a specific class of objects by employing a corresponding training set. In our experiments, we show that this leads to additional improvements in recovering depth maps of human faces.",
"title": ""
},
{
"docid": "4e938aed527769ad65d85bba48151d21",
"text": "We provide a thorough description of all the artifacts that are generated by the messenger application Telegram on Android OS. We also provide interpretation of messages that are generated and how they relate to one another. Based on the results of digital forensics investigation and analysis in this paper, an analyst/investigator will be able to read, reconstruct and provide chronological explanations of messages which are generated by the user. Using three different smartphone device vendors and Android OS versions as the objects of our experiments, we conducted tests in a forensically sound manner.",
"title": ""
}
] |
scidocsrr
|
7a6c9b682afdc925efedc7f8d41dc75d
|
Visual Reinforcement Learning with Imagined Goals
|
[
{
"docid": "c0767c58b4a5e81ddc35d045ccaa137f",
"text": "A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.",
"title": ""
},
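The passage above rests on the hindsight idea: a trajectory collected while pursuing one goal is relabeled with a goal that was actually achieved, so a failed episode still yields rewarded experience. A minimal relabeling sketch follows; the environment interface (states double as achieved goals, sparse 0/1 reward) is a simplifying assumption for illustration.

```python
import random

def relabel_with_hindsight(episode, reward_fn):
    """episode: list of (state, action, goal, next_state) tuples.
    Returns extra transitions whose goal is a state reached later in the episode."""
    relabeled = []
    for t, (s, a, g, s_next) in enumerate(episode):
        # 'future' strategy: pick a state achieved later in the same episode as the new goal.
        future_goal = random.choice(episode[t:])[3]
        relabeled.append((s, a, future_goal, s_next, reward_fn(s_next, future_goal)))
    return relabeled

def sparse_reward(achieved, goal):
    return 1.0 if achieved == goal else 0.0

# Toy 1-D episode that never reaches its original goal (position 5).
episode = [(0, +1, 5, 1), (1, +1, 5, 2), (2, +1, 5, 3)]
extra = relabel_with_hindsight(episode, sparse_reward)
print(extra)   # some transitions now carry reward 1.0 for goals 1, 2 or 3
```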
{
"docid": "ddae1c6469769c2c7e683bfbc223ad1a",
"text": "Deep reinforcement learning has achieved many impressive results in recent years. However, tasks with sparse rewards or long horizons continue to pose significant challenges. To tackle these important problems, we propose a general framework that first learns useful skills in a pre-training environment, and then leverages the acquired skills for learning faster in downstream tasks. Our approach brings together some of the strengths of intrinsic motivation and hierarchical methods: the learning of useful skill is guided by a single proxy reward, the design of which requires very minimal domain knowledge about the downstream tasks. Then a high-level policy is trained on top of these skills, providing a significant improvement of the exploration and allowing to tackle sparse rewards in the downstream tasks. To efficiently pre-train a large span of skills, we use Stochastic Neural Networks combined with an information-theoretic regularizer. Our experiments1 show2 that this combination is effective in learning a wide span of interpretable skills in a sample-efficient way, and can significantly boost the learning performance uniformly across a wide range of downstream tasks.",
"title": ""
}
] |
[
{
"docid": "b7ae9cae900253f270d43c4b34e68c57",
"text": "In this paper, a complete voiceprint recognition based on Matlab was realized, including speech processing and feature extraction at early stage, and model training and recognition at later stage. For speech processing and feature extraction at early stage, Mel Frequency Cepstrum Coefficient (MFCC) was taken as feature parameter. For speaker model method, DTW model was adopted to reflect the voiceprint characteristics of speech, converting voiceprint recognition into speaker speech data evaluation, and breaking up complex speech training and matching into model parameter training and probability calculation. Simulation experiment results show that this system is effective to recognize voiceprint.",
"title": ""
},
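The matching step in the passage above compares MFCC feature sequences with dynamic time warping. Below is a minimal DTW sketch that aligns two sequences of feature vectors and returns an accumulated distance, which could then be thresholded or compared across enrolled speakers; the frame values are toy stand-ins for real MFCC vectors.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy "MFCC" sequences (2-dimensional frames); a real front end would produce ~13-dim frames.
template = [[0.1, 0.2], [0.4, 0.5], [0.9, 1.0]]
utterance = [[0.1, 0.25], [0.15, 0.3], [0.45, 0.5], [0.85, 1.0]]
print(dtw_distance(template, utterance))
```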
{
"docid": "e16f013717320ab7dcac54f752f9d79d",
"text": "In order to drive safely and efficiently on public roads, autonomous vehicles will have to understand the intentions of surrounding vehicles, and adapt their own behavior accordingly. If experienced human drivers are generally good at inferring other vehicles' motion up to a few seconds in the future, most current Advanced Driving Assistance Systems (ADAS) are unable to perform such medium-term forecasts, and are usually limited to high-likelihood situations such as emergency braking. In this article, we present a first step towards consistent trajectory prediction by introducing a long short-term memory (LSTM) neural network, which is capable of accurately predicting future longitudinal and lateral trajectories for vehicles on highway. Unlike previous work focusing on a low number of trajectories collected from a few drivers, our network was trained and validated on the NGSIM US-101 dataset, which contains a total of 800 hours of recorded trajectories in various traffic densities, representing more than 6000 individual drivers.",
"title": ""
},
{
"docid": "0dd09b3a5049f7de94605aa5cdc51d6c",
"text": "Current security and privacy solution to fail to meet the IoT requirements due to computational restrictions and portable nature of IoT objects. In this demo, a novel authentication framework is proposed that exploits the device-specific information to authenticate each object in the IoT. The framework is shown to effectively track the physical environment effects on objects. The experiment shows a sample IoT environment consisting of multiple Raspberry PI units operating as IoT objects. The proposed framework monitors the changes of the physical environment surrounding objects by examining the changes in the device-specific information of each IoT object. The scenario of an emulation attacker is presented in the demo as a case study. The attacker is capable of replicating all the transmitted messages and security keys of one of the IoT objects. Our proposed framework is able to effectively detect such an attack and improve the authentication accuracy for all IoT objects. -Demo Abstract.",
"title": ""
},
{
"docid": "ed2503a3fbba365178bf520cecbae19a",
"text": "A prerequisite for training corpus-based machine translation (MT) systems – either Statistical MT (SMT) or Neural MT (NMT) – is the availability of high-quality parallel data. This is arguably more important today than ever before, as NMT has been shown in many studies to outperform SMT, but mostly when large parallel corpora are available; in cases where data is limited, SMT can still outperform NMT. Recently researchers have shown that back-translating monolingual data can be used to create synthetic parallel corpora, which in turn can be used in combination with authentic parallel data to train a highquality NMT system. Given that large collections of new parallel text become available only quite rarely, backtranslation has become the norm when building state-of-the-art NMT systems, especially in resource-poor scenarios. However, we assert that there are many unknown factors regarding the actual effects of back-translated data on the translation capabilities of an NMT model. Accordingly, in this work we investigate how using back-translated data as a training corpus – both as a separate standalone dataset as well as combined with human-generated parallel data – affects the performance of an NMT model. We use incrementally larger amounts of back-translated data to train a range of NMT systems for Germanc © 2018 The authors. This article is licensed under a Creative Commons 3.0 licence, no derivative works, attribution, CCBY-ND. to-English, and analyse the resulting translation performance.",
"title": ""
},
{
"docid": "d1799273e1c3ef81a305f904f340b910",
"text": "Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, the homology detection becomes difficult even at the DNA level. We developed a novel method to infer distant homology relations of two proteins, that accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best scoring alignment under a powerful scoring system designed to reflect the most probable evolutionary process. Our implementation is freely available at http://bioinfo.lifl.fr/path/ . Our approach allows to uncover evolutionary information that is not captured by traditional alignment methods, which is confirmed by biologically significant examples.",
"title": ""
},
{
"docid": "04d5824991ada6194f3028a900d7f31b",
"text": "In this work, we present a solution to real-time monocular dense mapping. A tightly-coupled visual-inertial localization module is designed to provide metric and high-accuracy odometry. A motion stereo algorithm is proposed to take the video input from one camera to produce local depth measurements with semi-global regularization. The local measurements are then integrated into a global map for noise filtering and map refinement. The global map obtained is able to support navigation and obstacle avoidance for aerial robots through our indoor and outdoor experimental verification. Our system runs at 10Hz on an Nvidia Jetson TX1 by properly distributing computation to CPU and GPU. Through onboard experiments, we demonstrate its ability to close the perception-action loop for autonomous aerial robots. We release our implementation as open-source software1.",
"title": ""
},
{
"docid": "740a83306dddd3123a910acbbd01ff80",
"text": "We present a framework to understand GAN training as alternating density ratio estimation, and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. Further, we derive a family of generator objectives that target arbitrary f -divergences without minimizing a lower bound, and use them to train generative image models that target either improved sample quality or greater sample diversity.",
"title": ""
},
{
"docid": "8ba9439094fae89d6ff14d03476878b9",
"text": "In this paper we present a framework for the real-time control of lightweight autonomous vehicles which comprehends a proposed hardand software design. The system can be used for many kinds of vehicles and offers high computing power and flexibility in respect of the control algorithms and additional application dependent tasks. It was originally developed to control a small quad-rotor UAV where stringent restrictions in weight and size of the hardware components exist, but has been transfered to a fixed-wing UAV and a ground vehicle for inand outdoor search and rescue missions. The modular structure and the use of a standard PC architecture at an early stage simplifies reuse of components and fast integration of new features. Figure 1: Quadrotor UAV controlled by the proposed system",
"title": ""
},
{
"docid": "6ad90319d07abce021eda6f3a1d3886e",
"text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.",
"title": ""
},
{
"docid": "8a5e4a6f418975f352a6b9e3d8958d50",
"text": "BACKGROUND\nDysphagia is associated with poor outcome in stroke patients. Studies investigating the association of dysphagia and early dysphagia screening (EDS) with outcomes in patients with acute ischemic stroke (AIS) are rare. The aims of our study are to investigate the association of dysphagia and EDS within 24 h with stroke-related pneumonia and outcomes.\n\n\nMETHODS\nOver a 4.5-year period (starting November 2007), all consecutive AIS patients from 15 hospitals in Schleswig-Holstein, Germany, were prospectively evaluated. The primary outcomes were stroke-related pneumonia during hospitalization, mortality, and disability measured on the modified Rankin Scale ≥2-5, in which 2 indicates an independence/slight disability to 5 severe disability.\n\n\nRESULTS\nOf 12,276 patients (mean age 73 ± 13; 49% women), 9,164 patients (74%) underwent dysphagia screening; of these patients, 55, 39, 4.7, and 1.5% of patients had been screened for dysphagia within 3, 3 to <24, 24 to ≤72, and >72 h following admission. Patients who underwent dysphagia screening were likely to be older, more affected on the National Institutes of Health Stroke Scale score, and to have higher rates of neurological symptoms and risk factors than patients who were not screened. A total of 3,083 patients (25.1%; 95% CI 24.4-25.8) had dysphagia. The frequency of dysphagia was higher in patients who had undergone dysphagia screening than in those who had not (30 vs. 11.1%; p < 0.001). During hospitalization (mean 9 days), 1,271 patients (10.2%; 95% CI 9.7-10.8) suffered from stroke-related pneumonia. Patients with dysphagia had a higher rate of pneumonia than those without dysphagia (29.7 vs. 3.7%; p < 0.001). Logistic regression revealed that dysphagia was associated with increased risk of stroke-related pneumonia (OR 3.4; 95% CI 2.8-4.2; p < 0.001), case fatality during hospitalization (OR 2.8; 95% CI 2.1-3.7; p < 0.001) and disability at discharge (OR 2.0; 95% CI 1.6-2.3; p < 0.001). EDS within 24 h of admission appeared to be associated with decreased risk of stroke-related pneumonia (OR 0.68; 95% CI 0.52-0.89; p = 0.006) and disability at discharge (OR 0.60; 95% CI 0.46-0.77; p < 0.001). Furthermore, dysphagia was independently correlated with an increase in mortality (OR 3.2; 95% CI 2.4-4.2; p < 0.001) and disability (OR 2.3; 95% CI 1.8-3.0; p < 0.001) at 3 months after stroke. The rate of 3-month disability was lower in patients who had received EDS (52 vs. 40.7%; p = 0.003), albeit an association in the logistic regression was not found (OR 0.78; 95% CI 0.51-1.2; p = 0.2).\n\n\nCONCLUSIONS\nDysphagia exposes stroke patients to a higher risk of pneumonia, disability, and death, whereas an EDS seems to be associated with reduced risk of stroke-related pneumonia and disability.",
"title": ""
},
{
"docid": "94a64f143c19f2815f101eb0c4dc304f",
"text": "Information technology can improve the quality, efficiency, and cost of healthcare. In this survey, we examine the privacy requirements of mobile computing technologies that have the potential to transform healthcare. Such mHealth technology enables physicians to remotely monitor patients' health and enables individuals to manage their own health more easily. Despite these advantages, privacy is essential for any personal monitoring technology. Through an extensive survey of the literature, we develop a conceptual privacy framework for mHealth, itemize the privacy properties needed in mHealth systems, and discuss the technologies that could support privacy-sensitive mHealth systems. We end with a list of open research questions.",
"title": ""
},
{
"docid": "a9000262e389ba8ab09f8d6cd2b2b60a",
"text": "CONTEXT\nCaffeine, often in the form of coffee, is frequently used as a supplement by athletes in an attempt to facilitate improved performance during exercise.\n\n\nPURPOSE\nTo investigate the effectiveness of coffee ingestion as an ergogenic aid prior to a 1-mile (1609 m) race.\n\n\nMETHODS\nIn a double-blind, randomized, cross-over, and placebo-controlled design, 13 trained male runners completed a 1-mile race 60 minutes following the ingestion of 0.09 g·kg-1 coffee (COF), 0.09 g·kg-1 decaffeinated coffee (DEC), or a placebo (PLA). All trials were dissolved in 300 mL of hot water.\n\n\nRESULTS\nThe race completion time was 1.3% faster following the ingestion of COF (04:35.37 [00:10.51] min:s.ms) compared with DEC (04:39.14 [00:11.21] min:s.ms; P = .018; 95% confidence interval [CI], -0.11 to -0.01; d = 0.32) and 1.9% faster compared with PLA (04:41.00 [00:09.57] min:s.ms; P = .006; 95% CI, -0.15 to -0.03; d = 0.51). A large trial and time interaction for salivary caffeine concentration was observed (P < .001; [Formula: see text]), with a very large increase (6.40 [1.57] μg·mL-1; 95% CI, 5.5-7.3; d = 3.86) following the ingestion of COF. However, only a trivial difference between DEC and PLA was observed (P = .602; 95% CI, -0.09 to 0.03; d = 0.17). Furthermore, only trivial differences were observed for blood glucose (P = .839; [Formula: see text]) and lactate (P = .096; [Formula: see text]) and maximal heart rate (P = .286; [Formula: see text]) between trials.\n\n\nCONCLUSIONS\nThe results of this study show that 60 minutes after ingesting 0.09 g·kg-1 of caffeinated coffee, 1-mile race performance was enhanced by 1.9% and 1.3% compared with placebo and decaffeinated coffee, respectively, in trained male runners.",
"title": ""
},
{
"docid": "e090bb879e35dbabc5b3c77c98cd6832",
"text": "Immunity of analog circuit blocks is becoming a major design risk. This paper presents an automated methodology to simulate the susceptibility of a circuit during the design phase. More specifically, we propose a CAD tool which determines the fail/pass criteria of a signal under direct power injection (DPI). This contribution describes the function of the tool which is validated by a LDO regulator.",
"title": ""
},
{
"docid": "7c3457a5ca761b501054e76965b41327",
"text": "Background learning is a pre-processing of motion detection which is a basis step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic background are still much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.",
"title": ""
},
{
"docid": "28ba2fbb243e739b225bd37f07714b3a",
"text": "Implementation of Neuromorphic Systems using post Complementary Metal-Oxide-Semiconductor (CMOS) technology based Memristive Crossbar Array (MCA) has emerged as a promising solution to enable low-power acceleration of neural networks. However, the recent trend to design Deep Neural Networks (DNNs) for achieving human-like cognitive abilities poses significant challenges towards the scalable design of neuromorphic systems (due to the increase in computation/storage demands). Network pruning [7] is a powerful technique to remove redundant connections for designing optimally connected (maximally sparse) DNNs. However, such pruning techniques induce irregular connections that are incoherent to the crossbar structure. Eventually they produce DNNs with highly inefficient hardware realizations (in terms of area and energy). In this work, we propose TraNNsformer — an integrated training framework that transforms DNNs to enable their efficient realization on MCA-based systems. TraNNsformer first prunes the connectivity matrix while forming clusters with the remaining connections. Subsequently, it retrains the network to fine tune the connections and reinforce the clusters. This is done iteratively to transform the original connectivity into an optimally pruned and maximally clustered mapping. We evaluated the proposed framework by transforming different Multi-Layer Perceptron (MLP) based Spiking Neural Networks (SNNs) on a wide range of datasets (MNIST, SVHN and CIFAR10) and executing them on MCA-based systems to analyze the area and energy benefits. Without accuracy loss, TraNNsformer reduces the area (energy) consumption by 28%–55% (49%–67%) with respect to the original network. Compared to network pruning, TraNNsformer achieves 28%–49% (15%–29%) area (energy) savings. Furthermore, TraNNsformer is a technology-aware framework that allows mapping a given DNN to any MCA size permissible by the memristive technology for reliable operations.",
"title": ""
},
{
"docid": "39fe1618fad28ec6ad72d326a1d00f24",
"text": "Popular real-time public events often cause upsurge of traffic in Twitter while the event is taking place. These posts range from real-time update of the event's occurrences highlights of important moments thus far, personal comments and so on. A large user group has evolved who seeks these live updates to get a brief summary of the important moments of the event so far. However, major social search engines including Twitter still present the tweets satisfying the Boolean query in reverse chronological order, resulting in thousands of low quality matches agglomerated in a prosaic manner. To get an overview of the happenings of the event, a user is forced to read scores of uninformative tweets causing frustration. In this paper, we propose a method for multi-tweet summarization of an event. It allows the search users to quickly get an overview about the important moments of the event. We have proposed a graph-based retrieval algorithm that identifies tweets with popular discussion points among the set of tweets returned by Twitter search engine in response to a query comprising the event related keywords. To ensure maximum coverage of topical diversity, we perform topical clustering of the tweets before applying the retrieval algorithm. Evaluation performed by summarizing the important moments of a real-world event revealed that the proposed method could summarize the proceeding of different segments of the event with up to 81.6% precision and up to 80% recall.",
"title": ""
},
{
"docid": "bbaf5d599e944707535219191290b747",
"text": "Nonlinear electromagnetic (EM) inverse scattering is a quantitative and super-resolution imaging technique, in which more realistic interactions between the internal structure of scene and EM wavefield are taken into account in the imaging procedure, in contrast to conventional tomography. However, it poses important challenges arising from its intrinsic strong nonlinearity, ill-posedness, and expensive computational costs. To tackle these difficulties, we, for the first time to our best knowledge, exploit a connection between the deep neural network (DNN) architecture and the iterative method of nonlinear EM inverse scattering. This enables the development of a novel DNN-based methodology for nonlinear EM inverse problems (termed here DeepNIS). The proposed DeepNIS consists of a cascade of multilayer complex-valued residual convolutional neural network modules. We numerically and experimentally demonstrate that the DeepNIS outperforms remarkably conventional nonlinear inverse scattering methods in terms of both the image quality and computational time. We show that DeepNIS can learn a general model approximating the underlying EM inverse scattering system. It is expected that the DeepNIS will serve as powerful tool in treating highly nonlinear EM inverse scattering problems over different frequency bands, which are extremely hard and impractical to solve using conventional inverse scattering methods.",
"title": ""
},
{
"docid": "ae5142ef32fde6096ea4e4a41ba60cb6",
"text": "Social media is playing a growing role in elections world-wide. Thus, automatically analyzing electoral tweets has applications in understanding how public sentiment is shaped, tracking public sentiment and polarization with respect to candidates and issues, understanding the impact of tweets from various entities, etc. Here, for the first time, we automatically annotate a set of 2012 US presidential election tweets for a number of attributes pertaining to sentiment, emotion, purpose, and style by crowdsourcing. Overall, more than 100,000 crowdsourced responses were obtained for 13 questions on emotions, style, and purpose. Additionally, we show through an analysis of these annotations that purpose, even though correlated with emotions, is significantly different. Finally, we describe how we developed automatic classifiers, using features from state-of-the-art sentiment analysis systems, to predict emotion and purpose labels, respectively, in new unseen tweets. These experiments establish baseline results for automatic systems on this new data.",
"title": ""
},
{
"docid": "fceecabcbcbd4786adc755370d8eb635",
"text": "A Halbach array permanent magnet spherical motor (HPMSM) can provide 3 degrees-of-freedom motion in a single joint, simplify the mechanical structure greatly, improve the positioning precision and response speed. However, a HPMSM is a multivariable, nonlinear and strong coupling system with serious inter-axis nonlinear coupling. The dynamic model of a HPMSM is described in this paper, and a control algorithm based on computed torque method is proposed to realize the dynamic decoupling control of the HPMSM. Simulations results indicate that this algorithm can make the system track continues trajectory ideally and eliminate the influences of inter-axis nonlinear coupling effectively to achieve a good control performance.",
"title": ""
},
{
"docid": "2b952c455c9f8daa7f6c0c024620aef8",
"text": "Broadband use is booming around the globe as the infrastructure is built to provide high speed Internet and Internet Protocol television (IPTV) services. Driven by fierce competition and the search for increasing average revenue per user (ARPU), operators are evolving so they can deliver services within the home that involve a wide range of technologies, terminals, and appliances, as well as software that is increasingly rich and complex. “It should all work” is the key theme on the end user's mind, yet call centers are confronted with a multitude of consumer problems. The demarcation point between provider network and home network is blurring, in fact, if not yet in the consumer's mind. In this context, operators need to significantly rethink service lifecycle management. This paper explains how home and access support systems cover the most critical part of the network in service delivery. They build upon the inherent operation support features of access multiplexers, network termination devices, and home devices to allow the planning, fulfillment, operation, and assurance of new services.",
"title": ""
}
] |
scidocsrr
|
c993bd768e37914d6f4dc811e8c49c94
|
Design of a Rule-based Stemmer for Natural Language Text in Bengali
|
[
{
"docid": "20e8be9e9dbd62a56be0b64e7c2ae070",
"text": "Stemmers attempt to reduce a word to its stem or root form and are used widely in information retrieval tasks to increase the recall rate. Most popular stemmers encode a large number of language-specific rules built over a length of time. Such stemmers with comprehensive rules are available only for a few languages. In the absence of extensive linguistic resources for certain languages, statistical language processing tools have been successfully used to improve the performance of IR systems. In this article, we describe a clustering-based approach to discover equivalence classes of root words and their morphological variants. A set of string distance measures are defined, and the lexicon for a given text collection is clustered using the distance measures to identify these equivalence classes. The proposed approach is compared with Porter's and Lovin's stemmers on the AP and WSJ subcollections of the Tipster dataset using 200 queries. Its performance is comparable to that of Porter's and Lovin's stemmers, both in terms of average precision and the total number of relevant documents retrieved. The proposed stemming algorithm also provides consistent improvements in retrieval performance for French and Bengali, which are currently resource-poor.",
"title": ""
}
] |
[
{
"docid": "03bd5c0e41aa5948a5545fa3fca75bc2",
"text": "In the application of lead-acid series batteries, the voltage imbalance of each battery should be considered. Therefore, additional balancer circuits must be integrated into the battery. An active battery balancing circuit with an auxiliary storage can employ a sequential battery imbalance detection algorithm by comparing the voltage of a battery and auxiliary storage. The system is being in balance if the battery voltage imbalance is less than 10mV/cell. In this paper, a new algorithm is proposed so that the battery voltage balancing time can be improved. The battery balancing system is based on the LTC3305 working principle. The simulation verifies that the proposed algorithm can achieve permitted battery voltage imbalance faster than that of the previous algorithm.",
"title": ""
},
{
"docid": "3c103640a41779e8069219b9c4849ba7",
"text": "Electronic banking is becoming more popular every day. Financial institutions have accepted the transformation to provide electronic banking facilities to their customers in order to remain relevant and thrive in an environment that is competitive. A contributing factor to the customer retention rate is the frequent use of multiple online functionality however despite all the benefits of electronic banking, some are still hesitant to use it because of security concerns. The perception is that gender, age, education level, salary, culture and profession all have an impact on electronic banking usage. This study reports on how the Knowledge Discovery and Data Mining (KDDM) process was used to determine characteristics and electronic banking behavior of high net worth individuals at a South African bank. Findings JIBC December 2017, Vol. 22, No.3 2 indicate that product range and age had the biggest impact on electronic banking behavior. The value of user segmentation is that the financial institution can provide a more accurate service to their users based on their preferences and online banking behavior.",
"title": ""
},
{
"docid": "06672f6316878c80258ad53988a7e953",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/astata.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "ae38bb46fd3ceed3f4800b6421b45d74",
"text": "Medicinal data mining methods are used to analyze the medical data information resources. Medical data mining content mining and structure methods are used to analyze the medical data contents. The effort to develop knowledge and experience of frequent specialists and clinical selection data of patients collected in databases to facilitate the diagnosis process is considered a valuable option. Diagnosis of heart disease is a significant and tedious task in medicine. The term Heart disease encompasses the various diseases that affect the heart. The exposure of heart disease from various factors or symptom is an issue which is not complimentary from false presumptions often accompanied by unpredictable effects. Association rule mining procedures are used to extract item set relations. Item set regularities are used in the rule mining process. The data classification is based on MAFIA algorithms which result in accuracy, the data is evaluated using entropy based cross validations and partition techniques and the results are compared. Here using the C4.5 algorithm as the training algorithm to show rank of heart attack with the decision tree. Finally, the heart disease database is clustered using the K-means clustering algorithm, which will remove the data applicable to heart attack from the database. The results showed that the medicinal prescription and designed prediction system is capable of prophesying the heart attack successfully.",
"title": ""
},
{
"docid": "23aa04378f4eed573d1290c6bb9d3670",
"text": "The ability to compare systems from the same domain is of central importance for their introduction into complex applications. In the domains of named entity recognition and entity linking, the large number of systems and their orthogonal evaluation w.r.t. measures and datasets has led to an unclear landscape regarding the abilities and weaknesses of the different approaches. We present GERBIL—an improved platform for repeatable, storable and citable semantic annotation experiments— and its extension since being release. GERBIL has narrowed this evaluation gap by generating concise, archivable, humanand machine-readable experiments, analytics and diagnostics. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools on multiple datasets. By these means, we aim to ensure that both tool developers and end users can derive meaningful insights into the extension, integration and use of annotation applications. In particular, GERBIL provides comparable results to tool developers, simplifying the discovery of strengths and weaknesses of their implementations with respect to the state-of-the-art. With the permanent experiment URIs provided by our framework, we ensure the reproducibility and archiving of evaluation results. Moreover, the framework generates data in a machine-processable format, allowing for the efficient querying and postprocessing of evaluation results. Additionally, the tool diagnostics provided by GERBIL provide insights into the areas where tools need further refinement, thus allowing developers to create an informed agenda for extensions and end users to detect the right tools for their purposes. Finally, we implemented additional types of experiments including entity typing. GERBIL aims to become a focal point for the state-of-the-art, driving the research agenda of the community by presenting comparable objective evaluation results. Furthermore, we tackle the central problem of the evaluation of entity linking, i.e., we answer the question of how an evaluation algorithm can compare two URIs to each other without being bound to a specific knowledge base. Our approach to this problem opens a way to address the deprecation of URIs of existing gold standards for named entity recognition and entity linking, a feature which is currently not supported by the state-of-the-art. We derived the importance of this feature from usage and dataset requirements collected from the GERBIL user community, which has already carried out more than 24.000 single evaluations using our framework. Through the resulting updates, GERBIL now supports 8 tasks, 46 datasets and 20 systems.",
"title": ""
},
{
"docid": "242a2f64fc103af641320c1efe338412",
"text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.",
"title": ""
},
{
"docid": "c4f8528e1c623785d3057e18560a144f",
"text": "In a global economy, manufacturers mainly compete with cost efficiency of production, as the price of raw materials are similar worldwide. Heavy industry has two big issues to deal with. On the one hand there is lots of data which needs to be analyzed in an effective manner, and on the other hand making big improvements via investments in cooperate structure or new machinery is neither economically nor physically viable. Machine learning offers a promising way for manufacturers to address both these problems as they are in an excellent position to employ learning techniques with their massive resource of historical production data. However, choosing modelling a strategy in this setting is far from trivial and this is the objective of this article. The article investigates characteristics of the most popular classifiers used in industry today. Support Vector Machines, Multilayer Perceptron, Decision Trees, Random Forests, and the meta-algorithms Bagging and Boosting are mainly investigated in this work. Lessons from real-world implementations of these learners are also provided together with future directions when different learners are expected to perform well. The importance of feature selection and relevant selection methods in an industrial setting are further investigated. Performance metrics have also been discussed for the sake of completion.",
"title": ""
},
{
"docid": "a55b953caaedc2414fb100c3130d4773",
"text": "BACKGROUND\nVariation in the anatomical position of the inframammary fold (IMF) in women remains poorly studied.\n\n\nOBJECTIVES\nThe purpose of this study was to evaluate the incidence of asymmetry between IMF locations on the chest wall of women undergoing breast augmentation and to determine breast measurements associated with IMF asymmetry.\n\n\nMETHODS\nThree-dimensional imaging analysis of the breasts was performed in 111 women with micromastia, using the Vectra Imaging System(TM). The following measurements were recorded: vertical distance between right and left IMF (inter-fold distance), vertical distance between nipples (inter-nipple distance), and difference between projection of right and left breasts in anterior-posterior direction.\n\n\nRESULTS\nAsymmetry between the right and left IMF positions was found in the majority of patients (95.4%), with symmetry only found in 5 patients (4.6%). In the majority of patients (60.3%), the right IMF was located inferior to the left IMF with median inter-fold distance 0.4 cm (range, 0.1, 2.1 cm). In 39 patients (35.1%), the left IMF was located inferior to the right with median inter-fold distance 0.4 cm (range, 0.1, 1.7 cm). There was strong correlation between the degree of asymmetry of IMF and asymmetry of nipple areola complex (NAC) positions (r = 0.687, P < .01).\n\n\nCONCLUSIONS\nThe majority of women with micromastia demonstrate asymmetry of the IMF, which correlates with asymmetry of NAC location. The authors propose a classification system based on most commonly observed IMF locations as types I (right IMF inferior to left), type II (left IMF inferior to right) and type III (both IMF located on the same level). LEVEL OF EVIDENCE 4: Diagnostic.",
"title": ""
},
{
"docid": "9cd18dd8709ae798c787ec44128bf8cd",
"text": "This paper presents a cascaded coil flux control based on a Current Source Parallel Resonant Push-Pull Inverter (CSPRPI) for Induction Heating (IH) applications. The most important problems associated with current source parallel resonant inverters are start-up problems and the variable response of IH systems under load variations. This paper proposes a simple cascaded control method to increase an IH system’s robustness to load variations. The proposed IH has been analyzed in both the steady state and the transient state. Based on this method, the resonant frequency is tracked using Phase Locked Loop (PLL) circuits using a Multiplier Phase Detector (MPD) to achieve ZVS under the transient condition. A laboratory prototype was built with an operating frequency of 57-59 kHz and a rated power of 300 W. Simulation and experimental results verify the validity of the proposed power control method and the PLL dynamics.",
"title": ""
},
{
"docid": "84c87c50659d18b130f4aaf8c1b3c7f1",
"text": "We describe initial work on an extension of the Kaldi toolkit that supports weighted finite-state transducer (WFST) decoding on Graphics Processing Units (GPUs). We implement token recombination as an atomic GPU operation in order to fully parallelize the Viterbi beam search, and propose a dynamic load balancing strategy for more efficient token passing scheduling among GPU threads. We also redesign the exact lattice generation and lattice pruning algorithms for better utilization of the GPUs. Experiments on the Switchboard corpus show that the proposed method achieves identical 1-best results and lattice quality in recognition and confidence measure tasks, while running 3 to 15 times faster than the single process Kaldi decoder. The above results are reported on different GPU architectures. Additionally we obtain a 46-fold speedup with sequence parallelism and multi-process service (MPS) in GPU.",
"title": ""
},
{
"docid": "965a74c2291d585550cc8d3895c28a4f",
"text": "This paper proposes a new meta-heuristic method for optimal sizing of distributed generator (DG). The objective of this paper is to reduce power losses and to improve voltage profile of the radial distribution network by placing multiple distributed generators at optimal locations. Analytical expressions are used in solving this paper for setting up of DG. In this study, Crow Search Algorithm (CSA) is used for optimal sizing of distributed generators (DG). CSA is a meta-heuristic algorithm inspired by the intelligent behavior of the crows. Crows stores their excess food in different locations and memorizes those locations to retrieve it when it is needed. They follow each other to do thievery to obtain better food source. This analysis is tested on IEEE 33 bus and IEEE 69 bus under MATLAB environment and the results are compared with the results of Improved analytical (IA) method and identified that percentage loss reduction in crow search algorithm is more than the percentage loss reduction in improved analytical method.",
"title": ""
},
{
"docid": "7946e414908e2863ad0e2ba21dbee0be",
"text": "This paper presents a symbolic-execution-based approach and its implementation by POM/JLEC for checking the logical equivalence between two programs in the system replacement context. The primary contributions lie in the development of POM/JLEC, a fully automatic equivalence checker for Java enterprise systems. POM/JLEC consists of three main components: Domain Specific Pre-Processor for extracting the target code from the original system and adjusting it to a suitable scope for verification, Symbolic Execution for generating symbolic summaries, and solver-based EQuality comparison for comparing the symbolic summaries together and returning counter examples in the case of non-equivalence. We have evaluated POM/JLEC with a large-scale benchmark created from the function layer code of an industrial enterprise system. The evaluation result with 54% test cases passed shows the feasibility for deploying its mature version into software development industry.",
"title": ""
},
{
"docid": "913478fa2a53363c4d8dc6212c960cbf",
"text": "The rapidly growing world energy use has already raised concerns over supply difficulties, exhaustion of energy resources and heavy environmental impacts (ozone layer depletion, global warming, climate change, etc.). The global contribution from buildings towards energy consumption, both residential and commercial, has steadily increased reaching figures between 20% and 40% in developed countries, and has exceeded the other major sectors: industrial and transportation. Growth in population, increasing demand for building services and comfort levels, together with the rise in time spent inside buildings, assure the upward trend in energy demand will continue in the future. For this reason, energy efficiency in buildings is today a prime objective for energy policy at regional, national and international levels. Among building services, the growth in HVAC systems energy use is particularly significant (50% of building consumption and 20% of total consumption in the USA). This paper analyses available information concerning energy consumption in buildings, and particularly related to HVAC systems. Many questions arise: Is the necessary information available? Which are the main building types? What end uses should be considered in the breakdown? Comparisons between different countries are presented specially for commercial buildings. The case of offices is analysed in deeper detail. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ca1005dddee029e92bc50717513a53d0",
"text": "Citation recommendation is an interesting but challenging research problem. Most existing studies assume that all papers adopt the same criterion and follow the same behavioral pattern in deciding relevance and authority of a paper. However, in reality, papers have distinct citation behavioral patterns when looking for different references, depending on paper content, authors and target venues. In this study, we investigate the problem in the context of heterogeneous bibliographic networks and propose a novel cluster-based citation recommendation framework, called ClusCite, which explores the principle that citations tend to be softly clustered into interest groups based on multiple types of relationships in the network. Therefore, we predict each query's citations based on related interest groups, each having its own model for paper authority and relevance. Specifically, we learn group memberships for objects and the significance of relevance features for each interest group, while also propagating relative authority between objects, by solving a joint optimization problem. Experiments on both DBLP and PubMed datasets demonstrate the power of the proposed approach, with 17.68% improvement in Recall@50 and 9.57% growth in MRR over the best performing baseline.",
"title": ""
},
{
"docid": "f8639b0d3a5792bda63dd2f22bfc496a",
"text": "The animal metaphor in poststructuralists thinkers like Roland Barthes and Jacques Derrida, offers an understanding into the human self through the relational modes of being and co-being. The present study focuses on the concept of “semiotic animal” proposed by John Deely with reference to Roland Barthes. Human beings are often considered as “rational animal” (Descartes) capable of reason and thinking. By analyzing the “semiotic animal” in Roland Barthes, the intention is to study him as a “mind-dependent” being who discovers the contrast between ens reale and ens rationis through his writing. For Barthes “it is the intimate which seeks utterance” in one and makes “it cry, heard, confronting generality, confronting science.” Roland Barthes attempts to read “his body” from the “tissues of signs” that is driven by the unconscious desires. The study is an attempt to explore the semiological underpinnings in Barthes which are found in the form of rhetorical tropes of cats and dogs and the way he relates it with the ‘self’.",
"title": ""
},
{
"docid": "b58cd3109a0acadfcf56aa47c7243e27",
"text": "This paper introduces a multifingered robotic hand eNAIST-Handf and a grip force control by slip margin feedback. The developed prototype finger of the NAIST-hand has a new mechanism by which all 3 motors can be placed inside the palm without using wire-driven mechanisms. A method of grip force control is proposed using an incipient slip estimation. A new tactile sensor is designed to active the proposed control method by the NAIST-Hand. This sensor consists of a transparent semispherical gel, an embedded small camera, and a force sensor in order to implement the direct slip margin estimation. The structure and the principle of sensing are described.",
"title": ""
},
{
"docid": "b546b8425e00906576f8684e65dcf3f8",
"text": "Neural networks have become increasingly popular for the task of language modeling. Whereas feed-forward networks only exploit a fixed context length to predict the next word of a sequence, conceptually, standard recurrent neural networks can take into account all of the predecessor words. On the other hand, it is well known that recurrent networks are difficult to train and therefore are unlikely to show the full potential of recurrent models. These problems are addressed by a the Long Short-Term Memory neural network architecture. In this work, we analyze this type of network on an English and a large French language modeling task. Experiments show improvements of about 8 % relative in perplexity over standard recurrent neural network LMs. In addition, we gain considerable improvements in WER on top of a state-of-the-art speech recognition system.",
"title": ""
},
{
"docid": "6a96678b14ec12cb4bb3db4e1c4c6d4e",
"text": "Emoticons are widely used to express positive or negative sentiment on Twitter. We report on a study with live users to determine whether emoticons are used to merely emphasize the sentiment of tweets, or whether they are the main elements carrying the sentiment. We found that the sentiment of an emoticon is in substantial agreement with the sentiment of the entire tweet. Thus, emoticons are useful as predictors of tweet sentiment and should not be ignored in sentiment classification. However, the sentiment expressed by an emoticon agrees with the sentiment of the accompanying text only slightly better than random. Thus, using the text accompanying emoticons to train sentiment models is not likely to produce the best results, a fact that we show by comparing lexicons generated using emoticons with others generated using simple textual features.",
"title": ""
},
{
"docid": "b64664ba7102e070d276cce2e06dc8ab",
"text": "BACKGROUND\nNew technologies have recently been used for monitoring signs and symptoms of mental health illnesses and particularly have been tested to improve the outcomes in bipolar disorders. Web-based psychoeducational programs for bipolar disorders have also been implemented, yet to our knowledge, none of them have integrated both approaches in one single intervention. The aim of this project is to develop and validate a smartphone application to monitor symptoms and signs and empower the self-management of bipolar disorder, offering customized embedded psychoeducation contents, in order to identify early symptoms and prevent relapses and hospitalizations.\n\n\nMETHODS/DESIGN\nThe project will be carried out in three complementary phases, which will include a feasibility study (first phase), a qualitative study (second phase) and a randomized controlled trial (third phase) comparing the smartphone application (SIMPLe) on top of treatment as usual with treatment as usual alone. During the first phase, feasibility and satisfaction will be assessed with the application usage log data and with an electronic survey. Focus groups will be conducted and technical improvements will be incorporated at the second phase. Finally, at the third phase, survival analysis with multivariate data analysis will be performed and relationships between socio-demographic, clinical variables and assessments scores with relapses in each group will be explored.\n\n\nDISCUSSION\nThis project could result in a highly available, user-friendly and not costly monitoring and psychoeducational intervention that could improve the outcome of people suffering from bipolar disorders in a practical and secure way.\n\n\nTRIAL REGISTRATION\nClinical Trials.gov: NCT02258711 (October 2014).",
"title": ""
},
{
"docid": "d2d39b17b4047dd43e19ac4272b31c7e",
"text": "Lignocellulose is a term for plant materials that are composed of matrices of cellulose, hemicellulose, and lignin. Lignocellulose is a renewable feedstock for many industries. Lignocellulosic materials are used for the production of paper, fuels, and chemicals. Typically, industry focuses on transforming the polysaccharides present in lignocellulose into products resulting in the incomplete use of this resource. The materials that are not completely used make up the underutilized streams of materials that contain cellulose, hemicellulose, and lignin. These underutilized streams have potential for conversion into valuable products. Treatment of these lignocellulosic streams with bacteria, which specifically degrade lignocellulose through the action of enzymes, offers a low-energy and low-cost method for biodegradation and bioconversion. This review describes lignocellulosic streams and summarizes different aspects of biological treatments including the bacteria isolated from lignocellulose-containing environments and enzymes which may be used for bioconversion. The chemicals produced during bioconversion can be used for a variety of products including adhesives, plastics, resins, food additives, and petrochemical replacements.",
"title": ""
}
] |
scidocsrr
|
b46ba4b45ae74071826c757335bd516d
|
Blockchain Design for Trusted Decentralized IoT Networks
|
[
{
"docid": "8574612823cccbb5f8bcc80532dae74e",
"text": "The decentralized cryptocurrency Bitcoin has experienced great success but also encountered many challenges. One of the challenges has been the long confirmation time and low transaction throughput. Another challenge is the lack of incentives at certain steps of the protocol, raising concerns for transaction withholding, selfish mining, etc. To address these challenges, we propose Solidus, a decentralized cryptocurrency based on permissionless Byzantine consensus. A core technique in Solidus is to use proof of work for leader election to adapt the Practical Byzantine Fault Tolerance (PBFT) protocol to a permissionless setting. We also design Solidus to be incentive compatible and to mitigate selfish mining. Solidus improves on Bitcoin in confirmation time, and provides safety and liveness assuming Byzantine players and the largest coalition of rational players collectively control less than one-third of the computation power.",
"title": ""
}
] |
[
{
"docid": "773bd34632ce1afe27f994edf906fea3",
"text": "Crossed-guide X-band waveguide couplers with bandwidths of up to 40% and coupling factors of better than 5 dB are presented. The tight coupling and wide bandwidth are achieved by using reduced height waveguide. Design graphs and measured data are presented.",
"title": ""
},
{
"docid": "9b72d423e13bdd125b3a8c30b40e6b49",
"text": "With the increasing popularity of the web, some new web technologies emerged and introduced dynamics to web applications, in comparison to HTML, as a static programming language. JavaScript is the language that provided a dynamic web site which actively communicates with users. JavaScript is used in today's web applications as a client script language and on the server side. The JavaScript language supports the Model View Controller (MVC) architecture that maintains a readable code and clearly separates parts of the program code. The topic of this research is to compare the popular JavaScript frameworks: AngularJS, Ember, Knockout, Backbone. All four frameworks are based on MVC or similar architecture. In this paper, the advantages and disadvantages of each framework, the impact on application speed, the ways of testing such JS applications and ways to improve code security are presented.",
"title": ""
},
{
"docid": "bd4b0951dc32d973cd8a0f1baba3b8d0",
"text": "Imitation learning is the study of learning how to act given a set of demonstrations provided by a human expert. It is intuitively apparent that learning to take optimal actions is a simpler undertaking in situations that are similar to the ones shown by the teacher. However, imitation learning approaches do not tend to use this insight directly. In this paper, we introduce State Aware Imitation Learning (SAIL), an imitation learning algorithm that allows an agent to learn how to remain in states where it can confidently take the correct action and how to recover if it is lead astray. Key to this algorithm is a gradient learned using a temporal difference update rule which leads the agent to prefer states similar to the demonstrated states. We show that estimating a linear approximation of this gradient yields similar theoretical guarantees to online temporal difference learning approaches and empirically show that SAIL can effectively be used for imitation learning in continuous domains with non-linear function approximators used for both the policy representation and the gradient estimate.",
"title": ""
},
{
"docid": "13584c61e4caecf3828f2a11037f492e",
"text": "Privacy in social networks is a large and growing concern in recent times. It refers to various issues in a social network which include privacy of users, links, and their attributes. Each privacy component of a social network is vast and consists of various sub-problems. For example, user privacy includes multiple sub-problems like user location privacy, and user personal information privacy. This survey on privacy in social networks is intended to serve as an initial introduction and starting step to all further researchers. We present various privacy preserving models and methods include naive anonymization, perturbation, or building a complete alternative network. We show the work done by multiple researchers in the past, where social networks are stated as network graphs with users represented as nodes and friendship between users represented as links between the nodes. We study ways and mechanisms developed to protect these nodes and links in the network. We also review other systems proposed, along with all the available databases for future researchers in this area.",
"title": ""
},
{
"docid": "b0709248d08564b7d1a1f23243aa0946",
"text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.",
"title": ""
},
{
"docid": "43f3908d103ab31ab3a958c0ead9eaf8",
"text": "Decision making and risk assessment are becoming a challenging task in oil and gas due to the risk related to the uncertainty and imprecision. This paper proposed a model for the risk assessment based on multi-criteria decision making (MCDM) method by integrating Fuzzy-set theory. In this model, decision makers (experts) provide their preference of risk assessment information in four categories; people, environment, asset, and reputation. A fuzzy set theory is used to evaluate likelihood, consequence and total risk level associated with each category. A case study is presented to demonstrate the proposed model. The results indicate that the proposed Fuzzy MCDM method has the potential to be used by decision makers in evaluating the risk based on multiple inputs and criteria.",
"title": ""
},
{
"docid": "040d94d33e04889e06ddcc2241f6a4b6",
"text": "Existing chatbot knowledge bases are mostly hand-constructed, which is time consuming and difficult to adapt to new domains. Automatic chatbot knowledge acquisition method from online forums is presented in this paper. It includes a classification model based on rough set, and the theory of ensemble learning is combined to make a decision. Given a forum, multiple rough set classifiers are constructed and trained first. Then all replies are classified with these classifiers. The final recognition results are drawn by voting to the output of these classifiers. Finally, the related replies are selected as chatbot knowledge. Relevant experiments on a child-care forum prove that the method based on rough set has high recognition efficiency to related replies and the combination of ensemble learning improves the results.",
"title": ""
},
{
"docid": "fa60689f35fd1a468356e10366f52b79",
"text": "A comfortable health monitoring system named WEALTHY is presented. The system is based on a wearable interface implemented by integrating fabric sensors, advanced signal processing techniques and modern telecommunication systems, on a textile platform. Conducting and piezoresistive materials in form of fibre and yarn are integrated in a garment and used as sensors, connectors and electrode elements. Simultaneous recording of vital signs allows extrapolation of more complex parameters and inter-signal elaboration that contribute to produce alert messages and patient table. The purpose of this publication is to evaluate the performance of the textile platform and the possibility of the simultaneous acquisition of several biomedical signals. Keywords— fabric sensors, fabric electrodes, physiological",
"title": ""
},
{
"docid": "dbf3650aadb4c18500ec3676d23dba99",
"text": "Current search engines do not, in general, perform well with longer, more verbose queries. One of the main issues in processing these queries is identifying the key concepts that will have the most impact on effectiveness. In this paper, we develop and evaluate a technique that uses query-dependent, corpus-dependent, and corpus-independent features for automatic extraction of key concepts from verbose queries. We show that our method achieves higher accuracy in the identification of key concepts than standard weighting methods such as inverse document frequency. Finally, we propose a probabilistic model for integrating the weighted key concepts identified by our method into a query, and demonstrate that this integration significantly improves retrieval effectiveness for a large set of natural language description queries derived from TREC topics on several newswire and web collections.",
"title": ""
},
{
"docid": "ecdeb5b8665661c55d91b782dd8fb3a7",
"text": "We present a classifier-based parser that produces constituent trees in linear time. The parser uses a basic bottom-up shiftreduce algorithm, but employs a classifier to determine parser actions instead of a grammar. This can be seen as an extension of the deterministic dependency parser of Nivre and Scholz (2004) to full constituent parsing. We show that, with an appropriate feature set used in classification, a very simple one-path greedy parser can perform at the same level of accuracy as more complex parsers. We evaluate our parser on section 23 of the WSJ section of the Penn Treebank, and obtain precision and recall of 87.54% and 87.61%, respectively.",
"title": ""
},
{
"docid": "49517920ddecf10a384dc3e98e39459b",
"text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.",
"title": ""
},
{
"docid": "16a384727d6a323437a0b6ed3cdcc230",
"text": "The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception. While methods have succeeded with large amounts of training data, research has been underway in how to accomplish similar performance with fewer examples, known as one-shot or more generally few-shot learning. This technique has been shown to have promising performance, but in practice requires fixed-size inputs making it impractical for production systems where class sizes can vary. This impedes training and the final utility of few-shot learning systems. This paper describes an approach to constructing and training a network that can handle arbitrary example sizes dynamically as the system is used.",
"title": ""
},
{
"docid": "1563aecd97daeef5fbb1b904091f52fa",
"text": "We release the Simple Paraphrase Database, a subset of of the Paraphrase Database (PPDB) adapted for the task of text simplification. We train a supervised model to associate simplification scores with each phrase pair, producing rankings competitive with state-of-theart lexical simplification models. Our new simplification database contains 4.5 million paraphrase rules, making it the largest available resource for lexical simplification.",
"title": ""
},
{
"docid": "086a70e10e5c00ff771698728b0d01a4",
"text": "We report an autopsy case of a 42-year-old woman who, when discovered, had been dead in her apartment for approximately 1 week under circumstances involving treachery, assault and possible drug overdose. This case is unique as it involved two autopsies of the deceased by two different medical examiners who reached opposing conclusions. The first autopsy was performed about 10 days after death. The second autopsy was performed after an exhumation approximately 2 years after burial. Evidence collected at the crime scene included blood samples from which DNA was extracted and analysed, fingerprints and clothing containing dried body fluids. The conclusion of the first autopsy was accidental death due to cocaine toxicity; the conclusion of the second autopsy was death due to homicide given the totality of evidence. Suspects 1 and 2 were linked to the death of the victim by physical evidence and suspect 3 was linked by testimony. Suspect 1 received life in prison, and suspects 2 and 3 received 45 and 20 years in prison, respectively. This case indicates that cocaine toxicity is difficult to determine in putrefied tissue and that exhumations can be important in collecting forensic information. It further reveals that the combined findings of medical examiners, even though contradictory, are useful in determining the circumstances leading to death in criminal justice. Thus, this report demonstrates that such criminal circumstances require comparative forensic review and, in such cases, scientific conclusions can be difficult.",
"title": ""
},
{
"docid": "d34b81ac6c521cbf466b4b898486a201",
"text": "We introduce the novel task of identifying important citations in scholarly literature, i.e., citations that indicate that the cited work is used or extended in the new effort. We believe this task is a crucial component in algorithms that detect and follow research topics and in methods that measure the quality of publications. We model this task as a supervised classification problem at two levels of detail: a coarse one with classes (important vs. non-important), and a more detailed one with four importance classes. We annotate a dataset of approximately 450 citations with this information, and release it publicly. We propose a supervised classification approach that addresses this task with a battery of features that range from citation counts to where the citation appears in the body of the paper, and show that, our approach achieves a precision of 65% for a recall of 90%.",
"title": ""
},
{
"docid": "fbf30d2032b0695b5ab2d65db2fe8cbc",
"text": "Artificial Intelligence for computer games is an interesting topic which attracts intensive attention recently. In this context, Mario AI Competition modifies a Super Mario Bros game to be a benchmark software for people who program AI controller to direct Mario and make him overcome the different levels. This competition was handled in the IEEE Games Innovation Conference and the IEEE Symposium on Computational Intelligence and Games since 2009. In this paper, we study the application of Reinforcement Learning to construct a Mario AI controller that learns from the complex game environment. We train the controller to grow stronger for dealing with several difficulties and types of levels. In controller developing phase, we design the states and actions cautiously to reduce the search space, and make Reinforcement Learning suitable for the requirement of online learning.",
"title": ""
},
{
"docid": "8f621c393298a81ef46c104a92297231",
"text": "A new method of free-form deformation, t-FFD, is proposed. An original shape of large-scale polygonal mesh or point-cloud is deformed by using a control mesh, which is constituted of a set of triangles with arbitrary topology and geometry, including the cases of disconnection or self-intersection. For modeling purposes, a designer can handle the shape directly or indirectly, and also locally or globally. This method works on a simple mapping mechanism. First, each point of the original shape is parametrized by the local coordinate system on each triangle of the control mesh. After modifying the control mesh, the point is mapped according to each modified triangle. Finally, the mapped locations are blended as a new position of the original point, then a smoothly deformed shape is achieved. Details of the t-FFD are discussed and examples are shown.",
"title": ""
},
{
"docid": "be1b9731df45408571e75d1add5dfe9c",
"text": "We investigate a new commonsense inference task: given an event described in a short free-form text (“X drinks coffee in the morning”), a system reasons about the likely intents (“X wants to stay awake”) and reactions (“X feels alert”) of the event’s participants. To support this study, we construct a new crowdsourced corpus of 25,000 event phrases covering a diverse range of everyday events and situations. We report baseline performance on this task, demonstrating that neural encoder-decoder models can successfully compose embedding representations of previously unseen events and reason about the likely intents and reactions of the event participants. In addition, we demonstrate how commonsense inference on people’s intents and reactions can help unveil the implicit gender inequality prevalent in modern movie scripts.",
"title": ""
},
{
"docid": "81667ba5e59bd04d979b2206b54b5b32",
"text": "Parallelism is an important rhetorical device. We propose a machine learning approach for automated sentence parallelism identification in student essays. We b uild an essay dataset with sentence level parallelism annotated. We derive features by combining gen eralized word alignment strategies and the alignment measures between word sequences. The experiment al r sults show that sentence parallelism can be effectively identified with a F1 score of 82% at pair-wise level and 72% at parallelism chunk l evel. Based on this approach, we automatically identify sentence parallelism in more than 2000 student essays and study the correlation between the use of sentence parall elism and the types and quality of essays.",
"title": ""
},
{
"docid": "144d1ad172d5dd2ca7b3fc93a83b5942",
"text": "This paper extends the recently introduced approach to the modeling and control design in the framework of model predictive control of the dc-dc boost converter to the dc-dc parallel interleaved boost converter. Based on the converter's model a constrained optimal control problem is formulated and solved. This allows the controller to achieve (a) the regulation of the output voltage to a predefined reference value, despite changes in the input voltage and the load, and (b) the load current balancing to the converter's individual legs, by regulating the currents of the circuit's inductors to proper references, set by an outer loop based on an observer. Simulation results are provided to illustrate the merits of the proposed control scheme.",
"title": ""
}
] |
scidocsrr
|
6ac806b55355be2de1f6f21f6adaf05c
|
A Circuit for Energy Harvesting Using On-Chip Solar Cells
|
[
{
"docid": "8dfdd829881074dc002247c9cd38eba8",
"text": "The limited battery lifetime of modern embedded systems and mobile devices necessitates frequent battery recharging or replacement. Solar energy and small-size photovoltaic (PV) systems are attractive solutions to increase the autonomy of embedded and personal devices attempting to achieve perpetual operation. We present a battery less solar-harvesting circuit that is tailored to the needs of low-power applications. The harvester performs maximum-power-point tracking of solar energy collection under nonstationary light conditions, with high efficiency and low energy cost exploiting miniaturized PV modules. We characterize the performance of the circuit by means of simulation and extensive testing under various charging and discharging conditions. Much attention has been given to identify the power losses of the different circuit components. Results show that our system can achieve low power consumption with increased efficiency and cheap implementation. We discuss how the scavenger improves upon state-of-the-art technology with a measured power consumption of less than 1 mW. We obtain increments of global efficiency up to 80%, diverging from ideality by less than 10%. Moreover, we analyze the behavior of super capacitors. We find that the voltage across the supercapacitor may be an unreliable indicator for the stored energy under some circumstances, and this should be taken into account when energy management policies are used.",
"title": ""
},
{
"docid": "83530198697ed04a3870a1e9d403728b",
"text": "Conventional charge pump circuits use a fixed switching frequency that leads to power efficiency degradation for loading less than the rated loading. This paper proposes a level shifter design that also functions as a frequency converter to automatically vary the switching frequency of a dual charge pump circuit according to the loading. The switching frequency is designed to be 25 kHz with 12 mA loading on both inverting and noninverting outputs. The switching frequency is automatically reduced when loading is lighter to improve the power efficiency. The frequency tuning range of this circuit is designed to be from 100 Hz to 25 kHz. A start-up circuit is included to ensure proper pumping action and avoid latch-up during power-up. A slow turn-on, fast turn-off driving scheme is used in the clock buffer to reduce power dissipation. The new dual charge pump circuit was fabricated in a 3m p-well double-poly single-metal CMOS technology with breakdown voltage of 18 V, the die size is 4.7 4.5 mm2. For comparison, a charge pump circuit with conventional level shifter and clock buffer was also fabricated. The measured results show that the new charge pump has two advantages: 1) the power dissipation of the charge pump is improved by a factor of 32 at no load and by 2% at rated loading of 500 and 2) the breakdown voltage requirement is reduced from 19.2 to 17 V.",
"title": ""
}
] |
[
{
"docid": "7431f48f8792d74e43f7df13795d6338",
"text": "Automatic generation of paraphrases from a given sentence is an important yet challenging task in natural language processing (NLP). In this paper, we present a deep reinforcement learning approach to paraphrase generation. Specifically, we propose a new framework for the task, which consists of a generator and an evaluator, both of which are learned from data. The generator, built as a sequenceto-sequence learning model, can produce paraphrases given a sentence. The evaluator, constructed as a deep matching model, can judge whether two sentences are paraphrases of each other. The generator is first trained by deep learning and then further fine-tuned by reinforcement learning in which the reward is given by the evaluator. For the learning of the evaluator, we propose two methods based on supervised learning and inverse reinforcement learning respectively, depending on the type of available training data. Experimental results on two datasets demonstrate the proposed models (the generators) can produce more accurate paraphrases and outperform the stateof-the-art methods in paraphrase generation in both automatic evaluation and human evaluation.",
"title": ""
},
{
"docid": "c020a3ba9a2615cb5ed9a7e9d5aa3ce0",
"text": "Neural network approaches to Named-Entity Recognition reduce the need for carefully handcrafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a lowdimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",
"title": ""
},
{
"docid": "2c5ab4dddbb6aeae4542b42f57e54d72",
"text": "Online action detection is a challenging problem: a system needs to decide what action is happening at the current frame, based on previous frames only. Fortunately in real-life, human actions are not independent from one another: there are strong (long-term) dependencies between them. An online action detection method should be able to capture these dependencies, to enable a more accurate early detection. At first sight, an LSTM seems very suitable for this problem. It is able to model both short-term and long-term patterns. It takes its input one frame at the time, updates its internal state and gives as output the current class probabilities. In practice, however, the detection results obtained with LSTMs are still quite low. In this work, we start from the hypothesis that it may be too difficult for an LSTM to learn both the interpretation of the input and the temporal patterns at the same time. We propose a two-stream feedback network, where one stream processes the input and the other models the temporal relations. We show improved detection accuracy on an artificial toy dataset and on the Breakfast Dataset [21] and the TVSeries Dataset [7], reallife datasets with inherent temporal dependencies between the actions.",
"title": ""
},
{
"docid": "8836fddeb496972fa38005fd2f8a4ed4",
"text": "Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.",
"title": ""
},
{
"docid": "e7a9584974596768d888d1d065135554",
"text": "Footwear is an integral part of daily life. Embedding sensors and electronics in footwear for various different applications started more than two decades ago. This review article summarizes the developments in the field of footwear-based wearable sensors and systems. The electronics, sensing technologies, data transmission, and data processing methodologies of such wearable systems are all principally dependent on the target application. Hence, the article describes key application scenarios utilizing footwear-based systems with critical discussion on their merits. The reviewed application scenarios include gait monitoring, plantar pressure measurement, posture and activity classification, body weight and energy expenditure estimation, biofeedback, navigation, and fall risk applications. In addition, energy harvesting from the footwear is also considered for review. The article also attempts to shed light on some of the most recent developments in the field along with the future work required to advance the field.",
"title": ""
},
{
"docid": "bb85695b909f2c1e2274fc423ce1defc",
"text": "Understanding the intent behind a user's query can help search engine to automatically route the query to some corresponding vertical search engines to obtain particularly relevant contents, thus, greatly improving user satisfaction. There are three major challenges to the query intent classification problem: (1) Intent representation; (2) Domain coverage and (3) Semantic interpretation. Current approaches to predict the user's intent mainly utilize machine learning techniques. However, it is difficult and often requires many human efforts to meet all these challenges by the statistical machine learning approaches. In this paper, we propose a general methodology to the problem of query intent classification. With very little human effort, our method can discover large quantities of intent concepts by leveraging Wikipedia, one of the best human knowledge base. The Wikipedia concepts are used as the intent representation space, thus, each intent domain is represented as a set of Wikipedia articles and categories. The intent of any input query is identified through mapping the query into the Wikipedia representation space. Compared with previous approaches, our proposed method can achieve much better coverage to classify queries in an intent domain even through the number of seed intent examples is very small. Moreover, the method is very general and can be easily applied to various intent domains. We demonstrate the effectiveness of this method in three different applications, i.e., travel, job, and person name. In each of the three cases, only a couple of seed intent queries are provided. We perform the quantitative evaluations in comparison with two baseline methods, and the experimental results shows that our method significantly outperforms other methods in each intent domain.",
"title": ""
},
{
"docid": "c969b4ad07cefc81c3b39ac8e71e520e",
"text": "In this tutorial paper we give a general introduction to verification and validation of simulation models, define the various validation techniques, and present a recommended model validation procedure.",
"title": ""
},
{
"docid": "d36e473370acb6217a747a001217c257",
"text": "class Loan { ... protected Loan(...) { ... } } public class TermLoan extends Loan { public TermLoan(...) { super(...); } } public class Revolver extends Loan { public Revolver(...) { super(...); } } public class RCTL extends Loan { public RCTL(...) super(...); } } Refactoring To Patterns, Copyright © 2001, Joshua Kerievsky, Industrial Logic, Inc. All Rights Reserved. Page 17 of 87 The abstract Loan superclass constructor is protected, and the constructors for the three subclasses are public. We’ll focus on the TermLoan class. The first step is to protect its constructor: public class TermLoan extends Loan { protected TermLoan(...){",
"title": ""
},
{
"docid": "597522575f1bc27394da2f1040e9eaa5",
"text": "Many natural language processing systems rely on machine learning models that are trained on large amounts of manually annotated text data. The lack of sufficient amounts of annotated data is, however, a common obstacle for such systems, since manual annotation of text is often expensive and time-consuming. The aim of “PAL, a tool for Pre-annotation and Active Learning” is to provide a ready-made package that can be used to simplify annotation and to reduce the amount of annotated data required to train a machine learning classifier. The package provides support for two techniques that have been shown to be successful in previous studies, namely active learning and pre-annotation. The output of the pre-annotation is provided in the annotation format of the annotation tool BRAT, but PAL is a stand-alone package that can be adapted to other formats.",
"title": ""
},
{
"docid": "f9c37f460fc0a4e7af577ab2cbe7045b",
"text": "Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, were assessed before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults.",
"title": ""
},
{
"docid": "ca2cc9e21fd1aacc345238c1d609bedf",
"text": "The aim of the present study was to evaluate the long-term effect of implants installed in different dental areas in adolescents. The sample consisted of 18 subjects with missing teeth (congenital absence or trauma). The patients were of different chronological ages (between 13 and 17 years) and of different skeletal maturation. In all subjects, the existing permanent teeth were fully erupted. In 15 patients, 29 single implants (using the Brånemark technique) were installed to replace premolars, canines, and upper incisors. In three patients with extensive aplasia, 18 implants were placed in various regions. The patients were followed during a 10-year period, the first four years annually and then every second year. Photographs, study casts, peri-apical radiographs, lateral cephalograms, and body height measurements were recorded at each control. The results show that dental implants are a good treatment option for replacing missing teeth in adolescents, provided that the subject's dental and skeletal development is complete. However, different problems are related to the premolar and the incisor regions, which have to be considered in the total treatment planning. Disadvantages may be related to the upper incisor region, especially for lateral incisors, due to slight continuous eruption of adjacent teeth and craniofacial changes post-adolescence. Periodontal problems may arise, with marginal bone loss around the adjacent teeth and bone loss buccally to the implants. The shorter the distance between the implant and the adjacent teeth, the larger the reduction of marginal bone level. Before placement of the implant sufficient space must be gained in the implant area, and the adjacent teeth uprighted and paralleled, even in the apical area, using non-intrusive movements. In the premolar area, excess space is needed, not only in the mesio-distal, but above all in the bucco-lingual direction. Thus, an infraoccluded lower deciduous molar should be extracted shortly before placement of the implant to avoid reduction of the bucco-lingual bone volume. Oral rehabilitation with implant-supported prosthetic constructions seems to be a good alternative in adolescents with extensive aplasia, provided that craniofacial growth has ceased or is almost complete.",
"title": ""
},
{
"docid": "34964b0f46c09c5eeb962f26465c3ee1",
"text": "Attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and undertranslation problems. In response to this problem, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust the future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both alignment and translation quality over NMT without coverage.",
"title": ""
},
{
"docid": "9086d8f1d9a0978df0bd93cff4bce20a",
"text": "Australian government enterprises have shown a significant interest in the cloud technology-enabled enterprise transformation. Australian government suggests the whole-of-a-government strategy to cloud adoption. The challenge is how best to realise this cloud adoption strategy for the cloud technology-enabled enterprise transformation? The cloud adoption strategy realisation requires concrete guidelines and a comprehensive practical framework. This paper proposes the use of an agile enterprise architecture framework to developing and implementing the adaptive cloud technology-enabled enterprise architecture in the Australian government context. The results of this paper indicate that a holistic strategic agile enterprise architecture approach seems appropriate to support the strategic whole-of-a-government approach to cloud technology-enabled government enterprise transformation.",
"title": ""
},
{
"docid": "3480f6559b0db816c535c38e5e17cffd",
"text": "BACKGROUND\nReliable and timely information on the leading causes of death in populations, and how these are changing, is a crucial input into health policy debates. In the Global Burden of Diseases, Injuries, and Risk Factors Study 2010 (GBD 2010), we aimed to estimate annual deaths for the world and 21 regions between 1980 and 2010 for 235 causes, with uncertainty intervals (UIs), separately by age and sex.\n\n\nMETHODS\nWe attempted to identify all available data on causes of death for 187 countries from 1980 to 2010 from vital registration, verbal autopsy, mortality surveillance, censuses, surveys, hospitals, police records, and mortuaries. We assessed data quality for completeness, diagnostic accuracy, missing data, stochastic variations, and probable causes of death. We applied six different modelling strategies to estimate cause-specific mortality trends depending on the strength of the data. For 133 causes and three special aggregates we used the Cause of Death Ensemble model (CODEm) approach, which uses four families of statistical models testing a large set of different models using different permutations of covariates. Model ensembles were developed from these component models. We assessed model performance with rigorous out-of-sample testing of prediction error and the validity of 95% UIs. For 13 causes with low observed numbers of deaths, we developed negative binomial models with plausible covariates. For 27 causes for which death is rare, we modelled the higher level cause in the cause hierarchy of the GBD 2010 and then allocated deaths across component causes proportionately, estimated from all available data in the database. For selected causes (African trypanosomiasis, congenital syphilis, whooping cough, measles, typhoid and parathyroid, leishmaniasis, acute hepatitis E, and HIV/AIDS), we used natural history models based on information on incidence, prevalence, and case-fatality. We separately estimated cause fractions by aetiology for diarrhoea, lower respiratory infections, and meningitis, as well as disaggregations by subcause for chronic kidney disease, maternal disorders, cirrhosis, and liver cancer. For deaths due to collective violence and natural disasters, we used mortality shock regressions. For every cause, we estimated 95% UIs that captured both parameter estimation uncertainty and uncertainty due to model specification where CODEm was used. We constrained cause-specific fractions within every age-sex group to sum to total mortality based on draws from the uncertainty distributions.\n\n\nFINDINGS\nIn 2010, there were 52·8 million deaths globally. At the most aggregate level, communicable, maternal, neonatal, and nutritional causes were 24·9% of deaths worldwide in 2010, down from 15·9 million (34·1%) of 46·5 million in 1990. This decrease was largely due to decreases in mortality from diarrhoeal disease (from 2·5 to 1·4 million), lower respiratory infections (from 3·4 to 2·8 million), neonatal disorders (from 3·1 to 2·2 million), measles (from 0·63 to 0·13 million), and tetanus (from 0·27 to 0·06 million). Deaths from HIV/AIDS increased from 0·30 million in 1990 to 1·5 million in 2010, reaching a peak of 1·7 million in 2006. Malaria mortality also rose by an estimated 19·9% since 1990 to 1·17 million deaths in 2010. Tuberculosis killed 1·2 million people in 2010. Deaths from non-communicable diseases rose by just under 8 million between 1990 and 2010, accounting for two of every three deaths (34·5 million) worldwide by 2010. 
8 million people died from cancer in 2010, 38% more than two decades ago; of these, 1·5 million (19%) were from trachea, bronchus, and lung cancer. Ischaemic heart disease and stroke collectively killed 12·9 million people in 2010, or one in four deaths worldwide, compared with one in five in 1990; 1·3 million deaths were due to diabetes, twice as many as in 1990. The fraction of global deaths due to injuries (5·1 million deaths) was marginally higher in 2010 (9·6%) compared with two decades earlier (8·8%). This was driven by a 46% rise in deaths worldwide due to road traffic accidents (1·3 million in 2010) and a rise in deaths from falls. Ischaemic heart disease, stroke, chronic obstructive pulmonary disease (COPD), lower respiratory infections, lung cancer, and HIV/AIDS were the leading causes of death in 2010. Ischaemic heart disease, lower respiratory infections, stroke, diarrhoeal disease, malaria, and HIV/AIDS were the leading causes of years of life lost due to premature mortality (YLLs) in 2010, similar to what was estimated for 1990, except for HIV/AIDS and preterm birth complications. YLLs from lower respiratory infections and diarrhoea decreased by 45-54% since 1990; ischaemic heart disease and stroke YLLs increased by 17-28%. Regional variations in leading causes of death were substantial. Communicable, maternal, neonatal, and nutritional causes still accounted for 76% of premature mortality in sub-Saharan Africa in 2010. Age standardised death rates from some key disorders rose (HIV/AIDS, Alzheimer's disease, diabetes mellitus, and chronic kidney disease in particular), but for most diseases, death rates fell in the past two decades; including major vascular diseases, COPD, most forms of cancer, liver cirrhosis, and maternal disorders. For other conditions, notably malaria, prostate cancer, and injuries, little change was noted.\n\n\nINTERPRETATION\nPopulation growth, increased average age of the world's population, and largely decreasing age-specific, sex-specific, and cause-specific death rates combine to drive a broad shift from communicable, maternal, neonatal, and nutritional causes towards non-communicable diseases. Nevertheless, communicable, maternal, neonatal, and nutritional causes remain the dominant causes of YLLs in sub-Saharan Africa. Overlaid on this general pattern of the epidemiological transition, marked regional variation exists in many causes, such as interpersonal violence, suicide, liver cancer, diabetes, cirrhosis, Chagas disease, African trypanosomiasis, melanoma, and others. Regional heterogeneity highlights the importance of sound epidemiological assessments of the causes of death on a regular basis.\n\n\nFUNDING\nBill & Melinda Gates Foundation.",
"title": ""
},
{
"docid": "ff6a2e6b0fbb4e195b095981ab97aae0",
"text": "As broadband speeds increase, latency is becoming a bottleneck for many applications—especially for Web downloads. Latency affects many aspects of Web page load time, from DNS lookups to the time to complete a three-way TCP handshake; it also contributes to the time it takes to transfer the Web objects for a page. Previous work has shown that much of this latency can occur in the last mile [2]. Although some performance bottlenecks can be mitigated by increasing downstream throughput (e.g., by purchasing a higher service plan), in many cases, latency introduces performance bottlenecks, particularly for connections with higher throughput. To mitigate latency bottlenecks in the last mile, we have implemented a system that performs DNS prefetching and TCP connection caching to the Web sites that devices inside a home visit most frequently, a technique we call popularity-based prefetching. Many devices and applications already perform DNS prefetching and maintain persistent TCP connections, but most prefetching is predictive based on the content of the page, rather than on past site popularity. We evaluate the optimizations using a simulator that we drive from traffic traces that we collected from five homes in the BISmark testbed [1]. We find that performing DNS prefetching and TCP connection caching for the twenty most popular sites inside the home can double DNS and connection cache hit rates.",
"title": ""
},
{
"docid": "c66e38f3be7760c8ca0b6ef2dfc5bec2",
"text": "Gesture recognition remains a very challenging task in the field of computer vision and human computer interaction (HCI). A decade ago the task seemed to be almost unsolvable with the data provided by a single RGB camera. Due to recent advances in sensing technologies, such as time-of-flight and structured light cameras, there are new data sources available, which make hand gesture recognition more feasible. In this work, we propose a highly precise method to recognize static gestures from a depth data, provided from one of the above mentioned devices. The depth images are used to derive rotation-, translation- and scale-invariant features. A multi-layered random forest (MLRF) is then trained to classify the feature vectors, which yields to the recognition of the hand signs. The training time and memory required by MLRF are much smaller, compared to a simple random forest with equivalent precision. This allows to repeat the training procedure of MLRF without significant effort. To show the advantages of our technique, we evaluate our algorithm on synthetic data, on publicly available dataset, containing 24 signs from American Sign Language(ASL) and on a new dataset, collected using recently appeared Intel Creative Gesture Camera.",
"title": ""
},
{
"docid": "6fc9000394cc05b2f70909dd2d0c76fb",
"text": "Thesupport-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.",
"title": ""
},
{
"docid": "1fe00a08e1eb2124d2608e1244228524",
"text": "A 6.4MS/s 13b ADC with a low-power background calibration for DAC mismatch and comparator offset errors is presented. Redundancy deals with DAC settling and facilitates calibration. A two-mode comparator and 0.3fF capacitors reduce power and area. The background calibration can directly detect the sign of the dynamic comparator offset error and the DAC mismatch errors and correct both of them simultaneously in a stepwise feedback loop. The calibration achieves 20dB spur reduction with little area and power overhead. The chip is implemented in 40nm CMOS and consumes 46μW from a 1V supply, and achieves 64.1dB SNDR and a FoM of 5.5 fJ/conversion-step at Nyquist.",
"title": ""
},
{
"docid": "18ab984fd9f6fa68e004c76689d566af",
"text": "The purpose of this study was to apply a magnetic resonance (MR) imaging-compatible positron emission tomographic (PET) detector technology for simultaneous MR/PET imaging of the human brain and skull base. The PET detector ring consists of lutetium oxyorthosilicate (LSO) scintillation crystals in combination with avalanche photodiodes (APDs) mounted in a clinical 3-T MR imager with use of the birdcage transmit/receive head coil. Following phantom studies, two patients were simultaneously examined by using fluorine 18 fluorodeoxyglucose (FDG) PET and MR imaging and spectroscopy. MR/PET data enabled accurate coregistration of morphologic and multifunctional information. Simultaneous MR/PET imaging is feasible in humans, opening up new possibilities for the emerging field of molecular imaging.",
"title": ""
}
] |
scidocsrr
|
67a2e5f965d404218f793993c625c994
|
Knock-Knock: The unbearable lightness of Android Notifications
|
[
{
"docid": "a3642ac7aff09f038df823bc2bab3b95",
"text": "We assess the risk of phishing on mobile platforms. Mobile operating systems and browsers lack secure application identity indicators, so the user cannot always identify whether a link has taken her to the expected application. We conduct a systematic analysis of ways in which mobile applications and web sites link to each other. To evaluate the risk, we study 85 web sites and 100 mobile applications and discover that web sites and applications regularly ask users to type their passwords into contexts that are vulnerable to spoofing. Our implementation of sample phishing attacks on the Android and iOS platforms demonstrates that attackers can spoof legitimate applications with high accuracy, suggesting that the risk of phishing attacks on mobile platforms is greater than has previously been appreciated.",
"title": ""
}
] |
[
{
"docid": "a5627b1135ebd45e7d5278761875b3f2",
"text": "The purpose of this study is to incorporate the core brand image, brand attitude and brand attachment with environmental consequences to testify the impact on the consumer purchase intentions. Does environmental consequences has some role while formatting purchase intention of the customer or people do not think about it. Either customers want to attach themselves with brand only or they also keep into account the corporate social responsibility index as well. Results show that core brand image and brand attitude has positive impact whereas environmental consequences have negative effect on the purchasing intention of customers (smokers).",
"title": ""
},
{
"docid": "11e2ec2aab62ba8380e82a18d3fcb3d8",
"text": "In this paper we describe our effort to create a dataset for the evaluation of cross-language textual similarity detection. We present preexisting corpora and their limits and we explain the various gathered resources to overcome these limits and build our enriched dataset. The proposed dataset is multilingual, includes cross-language alignment for different granularities (from chunk to document), is based on both parallel and comparable corpora and contains human and machine translated texts. Moreover, it includes texts written by multiple types of authors (from average to professionals). With the obtained dataset, we conduct a systematic and rigorous evaluation of several state-of-the-art cross-language textual similarity detection methods. The evaluation results are reviewed and discussed. Finally, dataset and scripts are made publicly available on GitHub: http://github.com/FerreroJeremy/Cross-Language-Dataset.",
"title": ""
},
{
"docid": "aba1bbd9163e5f9d16ef2d98d16ce1c2",
"text": "The basic reproduction number (0) is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of (0) where finitely many different categories of individuals are recognized. We clear up confusion that has been around in the literature concerning the construction of this matrix, specifically for the most frequently used so-called compartmental models. We present a detailed easy recipe for the construction of the NGM from basic ingredients derived directly from the specifications of the model. We show that two related matrices exist which we define to be the NGM with large domain and the NGM with small domain. The three matrices together reflect the range of possibilities encountered in the literature for the characterization of (0). We show how they are connected and how their construction follows from the basic model ingredients, and establish that they have the same non-zero eigenvalues, the largest of which is the basic reproduction number (0). Although we present formal recipes based on linear algebra, we encourage the construction of the NGM by way of direct epidemiological reasoning, using the clear interpretation of the elements of the NGM and of the model ingredients. We present a selection of examples as a practical guide to our methods. In the appendix we present an elementary but complete proof that (0) defined as the dominant eigenvalue of the NGM for compartmental systems and the Malthusian parameter r, the real-time exponential growth rate in the early phase of an outbreak, are connected by the properties that (0) > 1 if and only if r > 0, and (0) = 1 if and only if r = 0.",
"title": ""
},
{
"docid": "ff826e50f789d4e47f30ec22396c365d",
"text": "In present Scenario of the world, Internet has almost reached to every aspect of our lives. Due to this, most of the information sharing and communication is carried out using web. With such rapid development of Internet technology, a big issue arises of unauthorized access to confidential data, which leads to utmost need of information security while transmission. Cryptography and Steganography are two of the popular techniques used for secure transmission. Steganography is more reliable over cryptography as it embeds secret data within some cover material. Unlike cryptography, Steganography is not for keeping message hidden from intruders but it does not allow anyone to know that hidden information even exist in communicated material, as the transmitted material looks like any normal message which seem to be of no use for intruders. Although, Steganography covers many types of covers to hide data like text, image, audio, video and protocols but recent developments focuses on Image Steganography due to its large data hiding capacity and difficult identification, also due to their greater scope and bulk sharing within social networks. A large number of techniques are available to hide secret data within digital images such as LSB, ISB, and MLSB etc. In this paper, a detailed review will be presented on Image Steganography and also different data hiding and security techniques using digital images with their scope and features.",
"title": ""
},
{
"docid": "025e76755193277b2ea55d06d4f22d03",
"text": "Bioprinting technology shows potential in tissue engineering for the fabrication of scaffolds, cells, tissues and organs reproducibly and with high accuracy. Bioprinting technologies are mainly divided into three categories, inkjet-based bioprinting, pressure-assisted bioprinting and laser-assisted bioprinting, based on their underlying printing principles. These various printing technologies have their advantages and limitations. Bioprinting utilizes biomaterials, cells or cell factors as a “bioink” to fabricate prospective tissue structures. Biomaterial parameters such as biocompatibility, cell viability and the cellular microenvironment strongly influence the printed product. Various printing technologies have been investigated, and great progress has been made in printing various types of tissue, including vasculature, heart, bone, cartilage, skin and liver. This review introduces basic principles and key aspects of some frequently used printing technologies. We focus on recent advances in three-dimensional printing applications, current challenges and future directions.",
"title": ""
},
{
"docid": "58fda5b08ffe26440b173f363ca36292",
"text": "The dependence on information technology became critical and IT infrastructure, critical data, intangible intellectual property are vulnerable to threats and attacks. Organizations install Intrusion Detection Systems (IDS) to alert suspicious traffic or activity. IDS generate a large number of alerts and most of them are false positive as the behavior construe for partial attack pattern or lack of environment knowledge. Monitoring and identifying risky alerts is a major concern to security administrator. The present work is to design an operational model for minimization of false positive alarms, including recurring alarms by security administrator. The architecture, design and performance of model in minimization of false positives in IDS are explored and the experimental results are presented with reference to lab environment.",
"title": ""
},
{
"docid": "53a05c0438a0a26c8e3e74e1fa7b192b",
"text": "This paper presents a simple method based on sinusoidal-amplitude detector for realizing the resolver-signal demodulator. The proposed demodulator consists of two full-wave rectifiers, two ±unity-gain amplifiers, and two sinusoidal-amplitude detectors with control switches. Two output voltages are proportional to sine and cosine envelopes of resolver-shaft angle without low-pass filter. Experimental results demonstrating characteristic of the proposed circuit are included.",
"title": ""
},
{
"docid": "91dcedc72a6f5a1e6df2b66203e9f194",
"text": "Collecting 3D object data sets involves a large amount of manual work and is time consuming. Getting complete models of objects either requires a 3D scanner that covers all the surfaces of an object or one needs to rotate it to completely observe it. We present a system that incrementally builds a database of objects as a mobile agent traverses a scene. Our approach requires no prior knowledge of the shapes present in the scene. Object-like segments are extracted from a global segmentation map, which is built online using the input of segmented RGB-D images. These segments are stored in a database, matched among each other, and merged with other previously observed instances. This allows us to create and improve object models on the fly and to use these merged models to reconstruct also unobserved parts of the scene. The database contains each (potentially merged) object model only once, together with a set of poses where it was observed. We evaluate our pipeline with one public dataset, and on a newly created Google Tango dataset containing four indoor scenes with some of the objects appearing multiple times, both within and across scenes.",
"title": ""
},
{
"docid": "db434a6815fe963beedbec2078979543",
"text": "Effective regulation of affect: An action control perspective on emotion regulation Thomas L. Webb a , Inge Schweiger Gallo b , Eleanor Miles a , Peter M. Gollwitzer c d & Paschal Sheeran a a Department of Psychology, University of Sheffield, Sheffield, UK b Departamento de Psicología Social, Universidad Complutense de Madrid, Madrid, Spain c Department of Psychology, New York University, New York, USA d Department of Psychology, University of Konstanz, Konstanz, Germany",
"title": ""
},
{
"docid": "9ec718dd1b7eb98fb4fe895d76474c85",
"text": "The multibillion-dollar online advertising industry continues to debate whether to use the CPC (cost per click) or CPA (cost per action) pricing model as an industry standard. This article applies the economic framework of incentive contracts to study how these pricing models can lead to risk sharing between the publisher and the advertiser and incentivize them to make e orts that improve the performance of online ads. We nd that, compared to the CPC model, the CPA model can better incentivize the publisher to make e orts that can improve the purchase rate. However, the CPA model can cause an adverse selection problem: the winning advertiser tends to have a lower pro t margin under the CPA model than under the CPC model. We identify the conditions under which the CPA model leads to higher publisher (or advertiser) payo s than the CPC model. Whether publishers (or advertisers) prefer the CPA model over the CPC model depends on the advertisers' risk aversion, uncertainty in the product market, and the presence of advertisers with low immediate sales ratios. Our ndings indicate a con ict of interest between publishers and advertisers in their preferences for these two pricing models. We further consider which pricing model o ers greater social welfare.",
"title": ""
},
{
"docid": "5d5c036d03bd15688fec89c5af9dfbd8",
"text": "OBJECTIVE\nThis study evaluated the efficacy and tolerability of desvenlafaxine succinate (desvenlafaxine) in the treatment of major depressive disorder (MDD).\n\n\nMETHOD\nIn this 8-week, multicenter, randomized, double-blind, placebo-controlled trial, adult outpatients (aged 18-75 years) with a primary diagnosis of MDD (DSM-IV criteria) were randomly assigned to treatment with desvenlafaxine (100-200 mg/day) or placebo. The primary outcome measure was the 17-item Hamilton Rating Scale for Depression (HAM-D(17)) score at final on-therapy evaluation. The Clinical Global Impressions-Improvement scale (CGI-I) was the key secondary measure. Other secondary measures included the Montgomery-Asberg Depression Rating Scale (MADRS), Clinical Global Impressions-Severity of Illness scale, Visual Analog Scale-Pain Intensity (VAS-PI) overall and subcomponent scores, and HAM-D(17) response and remission rates. The study was conducted from June 2003 to May 2004.\n\n\nRESULTS\nOf the 247 patients randomly assigned to treatment, 234 comprised the intent-to-treat population. Following titration, mean daily desvenlafaxine doses ranged from 179 to 195 mg/day. At endpoint, there were no significant differences in scores between the desvenlafaxine (N = 120) and placebo (N = 114) groups on the HAM-D(17) or CGI-I. However, the desvenlafaxine group had significantly greater improvement in MADRS scores (p = .047) and in VAS-PI overall pain (p = .008), back pain (p = .006), and arm, leg, or joint pain (p < .001) scores than the placebo group. The most common treatment-emergent adverse events (at least 10% and twice the rate of placebo) were nausea, dry mouth, constipation, anorexia, somnolence, and nervousness.\n\n\nCONCLUSION\nDesvenlafaxine was generally safe and well tolerated. In this study, it did not show significantly greater efficacy than placebo on the primary or key secondary efficacy endpoints, but it did demonstrate efficacy on an alternate depression scale and pain measure associated with MDD.\n\n\nCLINICAL TRIALS REGISTRATION\nClinicalTrials.gov identifier NCT00063206.",
"title": ""
},
{
"docid": "f7ed4fb9015dad13d47dec677c469c4b",
"text": "In this paper, a low-cost, power efficient and fast Differential Cascode Voltage-Switch-Logic (DCVSL) based delay cell (named DCVSL-R) is proposed. We use the DCVSL-R cell to implement high frequency and power-critical delay cells and flip-flops of ring oscillators and frequency dividers. When compared to TSPC, DCVSL circuits offer small input and clock capacitance and a symmetric differential loading for previous RF stages. When compared to CML, they offer low transistor count, no headroom limitation, rail-to-rail swing and no static current consumption. However, DCVSL circuits suffer from a large low-to-high propagation delay, which limits their speed and results in asymmetrical output waveforms. The proposed DCVSL-R circuit embodies the benefits of DCVSL while reducing the total propagation delay, achieving faster operation. DCVSL-R also generates symmetrical output waveforms which are critical for differential circuits. Another contribution of this work is a closed-form delay model that predicts the speed of DCVSL circuits with 8% worst case accuracy. We implement two ring-oscillator-based VCOs in 0.13 μm technology with DCVSL and DCVSL-R delay cells. Measurements show that the proposed DCVSL-R based VCO consumes 30% less power than the DCVSL VCO for the same oscillation frequency (2.4 GHz) and same phase noise (-113 dBc/Hz at 10 MHz). DCVSL-R circuits are also used to implement the high frequency dual modulus prescaler (DMP) of a 2.4 GHz frequency synthesizer in 0.18 μm technology. The DMP consumes only 0.8 mW at 2.48 GHz, a 40% reduction in power when compared to other reported DMPs with similar division ratios and operating frequencies. The RF buffer that drives the DMP consumes only 0.27 mW, demonstrating the lowest combined DMP and buffer power consumption among similar synthesizers in literature.",
"title": ""
},
{
"docid": "3d2666ab3b786fd02bb15e81b0eaeb37",
"text": "BACKGROUND\n The analysis of nursing errors in clinical management highlighted that clinical handover plays a pivotal role in patient safety. Changes to handover including conducting handover at the bedside and the use of written handover summary sheets were subsequently implemented.\n\n\nAIM\n The aim of the study was to explore nurses' perspectives on the introduction of bedside handover and the use of written handover sheets.\n\n\nMETHOD\n Using a qualitative approach, data were obtained from six focus groups containing 30 registered and enrolled (licensed practical) nurses. Thematic analysis revealed several major themes.\n\n\nFINDINGS\n Themes identified included: bedside handover and the strengths and weaknesses; patient involvement in handover, and good communication is about good communicators. Finally, three sources of patient information and other issues were also identified as key aspects.\n\n\nCONCLUSIONS\n How bedside handover is delivered should be considered in relation to specific patient caseloads (patients with cognitive impairments), the shift (day, evening or night shift) and the model of service delivery (team versus patient allocation).\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\n Flexible handover methods are implicit within clinical setting issues especially in consideration to nursing teamwork. Good communication processes continue to be fundamental for successful handover processes.",
"title": ""
},
{
"docid": "6d52a9877ddf18eb7e43c83000ed4da1",
"text": "Cyberbullying has recently emerged as a new form of bullying and harassment. 360 adolescents (12-20 years), were surveyed to examine the nature and extent of cyberbullying in Swedish schools. Four categories of cyberbullying (by text message, email, phone call and picture/video clip) were examined in relation to age and gender, perceived impact, telling others, and perception of adults becoming aware of such bullying. There was a significant incidence of cyberbullying in lower secondary schools, less in sixth-form colleges. Gender differences were few. The impact of cyberbullying was perceived as highly negative for picture/video clip bullying. Cybervictims most often chose to either tell their friends or no one at all about the cyberbullying, so adults may not be aware of cyberbullying, and (apart from picture/video clip bullying) this is how it was perceived by pupils. Findings are discussed in relation to similarities and differences between cyberbullying and the more traditional forms of bullying.",
"title": ""
},
{
"docid": "014d428b370434d7fdc1678640f88fe0",
"text": "This paper describes the design rules of a compact microstrip patch antenna with polarization reconfigurable features (right-handed circular polarization (CP)/left-handed CP). The basic antenna is a circular coplanar-waveguide (CPW)-fed microstrip antenna excited by a diagonal slot and the CPW open end. This device is developed for short-range communications or contactless identification systems requiring polarization reconfigurability to optimize the link reliability. First, experimental and simulated results are presented for the passive version of the antenna excited by an asymmetric slot. A reconfigurable antenna using beam-lead p-i-n diodes to switch the polarization sense is then simulated with an electrical modeling of the diodes. Finally, the efficiency reduction resulting from the diode losses is discussed",
"title": ""
},
{
"docid": "aac94dec9aacac522f0d3fd05b71a92d",
"text": "Nonparametric data from multi-factor experiments arise often in human-computer interaction (HCI). Examples may include error counts, Likert responses, and preference tallies. But because multiple factors are involved, common nonparametric tests (e.g., Friedman) are inadequate, as they are unable to examine interaction effects. While some statistical techniques exist to handle such data, these techniques are not widely available and are complex. To address these concerns, we present the Aligned Rank Transform (ART) for nonparametric factorial data analysis in HCI. The ART relies on a preprocessing step that \"aligns\" data before applying averaged ranks, after which point common ANOVA procedures can be used, making the ART accessible to anyone familiar with the F-test. Unlike most articles on the ART, which only address two factors, we generalize the ART to N factors. We also provide ARTool and ARTweb, desktop and Web-based programs for aligning and ranking data. Our re-examination of some published HCI results exhibits advantages of the ART.",
"title": ""
},
{
"docid": "3bf35473dbed1029c9ed1e75470b7af1",
"text": "Swarm intelligence (SI)-based metaheuristics are well applied to solve real-time optimization problems of efficient node clustering and energy-aware data routing in wireless sensor networks. This paper presents another superior approach for these optimization problems based on an artificial bee colony metaheuristic. The proposed clustering algorithm presents an efficient cluster formation mechanism with improved cluster head selection criteria based on a multi-objective fitness function, whereas the routing algorithm is devised to consume minimum energy with least hop-count for data transmission. Extensive evaluation and comparison of the proposed approach with existing wellknown SI-based algorithms demonstrate its superiority over others in terms of packet delivery ratio, average energy consumed, average throughput and network life.",
"title": ""
},
{
"docid": "82bccde9ee6370a2e15dfc1363e3ae6a",
"text": "Frontal EEG asymmetry appears to serve as (1) an individual difference variable related to emotional responding and emotional disorders, and (2) a state-dependent concomitant of emotional responding. Such findings, highlighted in this review, suggest that frontal EEG asymmetry may serve as both a moderator and a mediator of emotion- and motivation-related constructs. Unequivocal evidence supporting frontal EEG asymmetry as a moderator and/or mediator of emotion is lacking, as insufficient attention has been given to analyzing the frontal EEG asymmetries in terms of moderators and mediators. The present report reviews the frontal EEG asymmetry literature from the framework of moderators and mediators, and overviews data analytic strategies that would support claims of moderation and mediation.",
"title": ""
},
{
"docid": "8cb6a2a3014bd3a7f945abd4cb2ffe88",
"text": "In order to identify and explore the strength and weaknesses of particular organizational designs, a wide range of maturity models have been developed by both, practitioners and academics over the past years. However, a systematization and generalization of the procedure on how to design maturity models as well as a synthesis of design science research with the rather behavioural field of organization theory is still lacking. Trying to combine the best of both fields, a first design proposition of a situational maturity model is presented in this paper. The proposed maturity model design is illustrated with the help of an instantiation for the healthcare domain.",
"title": ""
},
{
"docid": "30921bc63227f5c67e9d5e36cacfbb8b",
"text": "Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images can not be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.",
"title": ""
}
] |
scidocsrr
|
f245ba648fe41208eaaaea9a1159f690
|
Stereophonic Acoustic Echo Suppression Incorporating Spectro-Temporal Correlations
|
[
{
"docid": "cd98932832d8821a98032ae6bbef2576",
"text": "An open-loop stereophonic acoustic echo suppression (SAES) method without preprocessing is presented for teleconferencing systems, where the Wiener filter in the short-time Fourier transform (STFT) domain is employed. Instead of identifying the echo path impulse responses with adaptive filters, the proposed algorithm estimates the echo spectra from the stereo signals using two weighting functions. The spectral modification technique originally proposed for noise reduction is adopted to remove the echo from the microphone signal. Moreover, a priori signal-to-echo ratio (SER) based Wiener filter is used as the gain function to achieve a trade-off between musical noise reduction and computational load for real-time operations. Computer simulation shows the effectiveness and the robustness of the proposed method in several different scenarios.",
"title": ""
},
{
"docid": "65ffbc6ee36ae242c697bb81ff3be23a",
"text": "Full-duplex hands-free telecommunication systems employ an acoustic echo canceler (AEC) to remove the undesired echoes that result from the coupling between a loudspeaker and a microphone. Traditionally, the removal is achieved by modeling the echo path impulse response with an adaptive finite impulse response (FIR) filter and subtracting an echo estimate from the microphone signal. It is not uncommon that an adaptive filter with a length of 50-300 ms needs to be considered, which makes an AEC highly computationally expensive. In this paper, we propose an echo suppression algorithm to eliminate the echo effect. Instead of identifying the echo path impulse response, the proposed method estimates the spectral envelope of the echo signal. The suppression is done by spectral modification-a technique originally proposed for noise reduction. It is shown that this new approach has several advantages over the traditional AEC. Properties of human auditory perception are considered, by estimating spectral envelopes according to the frequency selectivity of the auditory system, resulting in improved perceptual quality. A conventional AEC is often combined with a post-processor to reduce the residual echoes due to minor echo path changes. It is shown that the proposed algorithm is insensitive to such changes. Therefore, no post-processor is necessary. Furthermore, the new scheme is computationally much more efficient than a conventional AEC.",
"title": ""
}
] |
[
{
"docid": "1792a50d95592e958df6ad0be6e88764",
"text": "This paper constructs and calibrates a parsimonious model of occupational choice that allows for entrepreneurial entry, exit, and investment decisions in presence of borrowing constraints. The model fits very well a number of empirical observations, including the observed wealth distribution for entrepreneurs and workers. At the aggregate level, more restrictive borrowing constraints generate less wealth concentration, and reduce average firm size, aggregate capital, and the fraction of entrepreneurs. Voluntary bequests allow some high-ability workers to establish or enlarge an entrepreneurial activity. With accidental bequests only, there would be fewer very large firms, and less aggregate capital and wealth concentration. J.E.L. Classification: E21, E23, J23,",
"title": ""
},
{
"docid": "3c58e5fa9c216edc12533f0ca13bb44d",
"text": "Nanocelluloses, including nanocrystalline cellulose, nanofibrillated cellulose and bacterial cellulose nanofibers, have become fascinating building blocks for the design of new biomaterials. Derived from the must abundant and renewable biopolymer, they are drawing a tremendous level of attention, which certainly will continue to grow in the future driven by the sustainability trend. This growing interest is related to their unsurpassed quintessential physical and chemical properties. Yet, owing to their hydrophilic nature, their utilization is restricted to applications involving hydrophilic or polar media, which limits their exploitation. With the presence of a large number of chemical functionalities within their structure, these building blocks provide a unique platform for significant surface modification through various chemistries. These chemical modifications are prerequisite, sometimes unavoidable, to adapt the interfacial properties of nanocellulose substrates or adjust their hydrophilic-hydrophobic balance. Therefore, various chemistries have been developed aiming to surface-modify these nano-sized substrates in order to confer to them specific properties, extending therefore their use to highly sophisticated applications. This review collocates current knowledge in the research and development of nanocelluloses and emphasizes more particularly on the chemical modification routes developed so far for their functionalization.",
"title": ""
},
{
"docid": "ce2ff18063f16dca4c5d3aee414def8d",
"text": "Understanding 3D object structure from a single image is an important but challenging task in computer vision, mostly due to the lack of 3D object annotations to real images. Previous research tackled this problem by either searching for a 3D shape that best explains 2D annotations, or training purely on synthetic data with ground truth 3D information. In this work, we propose 3D INterpreter Networks (3D-INN), an end-to-end trainable framework that sequentially estimates 2D keypoint heatmaps and 3D object skeletons and poses. Our system learns from both 2D-annotated real images and synthetic 3D data. This is made possible mainly by two technical innovations. First, heatmaps of 2D keypoints serve as an intermediate representation to connect real and synthetic data. 3D-INN is trained on real images to estimate 2D keypoint heatmaps from an input image; it then predicts 3D object structure from heatmaps using knowledge learned from synthetic 3D shapes. By doing so, 3D-INN benefits from the variation and abundance of synthetic 3D objects, without suffering from the domain difference between real and synthesized images, often due to imperfect rendering. Second, we propose a Projection Layer, mapping estimated 3D structure back to 2D. During training, it ensures 3D-INN to predict 3D structure whose projection is consistent with the 2D annotations to real images. Experiments show that the proposed system performs well on both 2D keypoint estimation and 3D structure recovery. We also demonstrate that the recovered 3D information has wide vision applications, such as image retrieval.",
"title": ""
},
{
"docid": "79101132835328557d91b123d99e3526",
"text": "We present a transmit subaperturing (TS) approach for multiple-input multiple-output (MIMO) radars with co-located antennas. The proposed scheme divides the transmit array elements into multiple groups, each group forms a directional beam and modulates a distinct waveform, and all beams are steerable and point to the same direction. The resulting system is referred to as a TS-MIMO radar. A TS-MIMO radar is a tunable system that offers a continuum of operating modes from the phased-array radar, which achieves the maximum directional gain but the least interference rejection ability, to the omnidirectional transmission based MIMO radar, which can handle the largest number of interference sources but offers no directional gain. Tuning of the TS-MIMO system can be easily made by changing the configuration of the transmit subapertures, which provides a direct tradeoff between the directional gain and interference rejection power of the system. The performance of the TS-MIMO radar is examined in terms of the output signal-to-interference-plus-noise ratio (SINR) of an adaptive beamformer in an interference and training limited environment, where we show analytically how the output SINR is affected by several key design parameters, including the size/number of the subapertures and the number of training signals. Our results are verified by computer simulation and comparisons are made among various operating modes of the proposed TS-MIMO system.",
"title": ""
},
{
"docid": "c398f43a02ba52d1b7ccc62af5cfc847",
"text": "The top-k error is a common measure of performance in machine learning and computer vision. In practice, top-k classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top-k classification can bring significant improvements. Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-k optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a naı̈ve algorithm would require O( ( n k ) ) operations, where n is the number of classes. Thanks to a connection to polynomial algebra and a divideand-conquer approach, we provide an algorithm with a time complexity of O(kn). Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of k = 5. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy.",
"title": ""
},
{
"docid": "3355c37593ee9ef1b2ab29823ca8c1d4",
"text": "The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation track, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing.",
"title": ""
},
{
"docid": "ef0ce55309cf2e353f58f18d20990cb5",
"text": "The quality of a Neural Machine Translation system depends substantially on the availability of sizable parallel corpora. For low-resource language pairs this is not the case, resulting in poor translation quality. Inspired by work in computer vision, we propose a novel data augmentation approach that targets low-frequency words by generating new sentence pairs containing rare words in new, synthetically created contexts. Experimental results on simulated low-resource settings show that our method improves translation quality by up to 2.9 BLEU points over the baseline and up to 3.2 BLEU over back-translation.",
"title": ""
},
{
"docid": "be2137514d2c1431d82c28a4ae2719ad",
"text": "The Exact Set Similarity Join problem aims to find all similar sets between two collections of sets, with respect to a threshold and a similarity function such as overlap, Jaccard, dice or cosine. The näıve approach verifies all pairs of sets and it is often considered impractical due the high number of combinations. So, Exact Set Similarity Join algorithms are usually based on the Filter-Verification Framework, that applies a series of filters to reduce the number of verified pairs. This paper presents a new filtering technique called Bitmap Filter, which is able to accelerate state-of-the-art algorithms for the exact Set Similarity Join problem. The Bitmap Filter uses hash functions to create bitmaps of fixed b bits, representing characteristics of the sets. Then, it applies bitwise operations (such as xor and population count) on the bitmaps in order to infer a similarity upper bound for each pair of sets. If the upper bound is below a given similarity threshold, the pair of sets is pruned. The Bitmap Filter benefits from the fact that bitwise operations are efficiently implemented by many modern general-purpose processors and it was easily applied to four state-of-the-art algorithms implemented in CPU: AllPairs, PPJoin, AdaptJoin and GroupJoin. Furthermore, we propose a Graphic Processor Unit (GPU) algorithm based on the näıve approach but using the Bitmap Filter to speedup the computation. The experiments considered 9 collections containing from 100 thousands up to 10 million sets and the joins were made using Jaccard thresholds from 0.50 to 0.95. The Bitmap Filter was able to improve 90% of the experiments in CPU, with speedups of up to 4.50× and 1.43× on average. Using the GPU algorithm, the experiments were able to speedup the original CPU algorithms by up to 577× using an Nvidia Geforce GTX 980 Ti.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "d76b7b25bce29cdac24015f8fa8ee5bb",
"text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.",
"title": ""
},
{
"docid": "5d673d1b6755e3e1d451ca17644cf3ec",
"text": "The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore in the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm’s key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.",
"title": ""
},
{
"docid": "5e2c4ebf3c2b4f0e9aabc5eacd2d4b80",
"text": "Manually annotating object bounding boxes is central to building computer vision datasets, and it is very time consuming (annotating ILSVRC [53] took 35s for one high-quality box [62]). It involves clicking on imaginary comers of a tight box around the object. This is difficult as these comers are often outside the actual object and several adjustments are required to obtain a tight box. We propose extreme clicking instead: we ask the annotator to click on four physical points on the object: the top, bottom, left- and right-most points. This task is more natural and these points are easy to find. We crowd-source extreme point annotations for PASCAL VOC 2007 and 2012 and show that (1) annotation time is only 7s per box, 5 × faster than the traditional way of drawing boxes [62]: (2) the quality of the boxes is as good as the original ground-truth drawn the traditional way: (3) detectors trained on our annotations are as accurate as those trained on the original ground-truth. Moreover, our extreme clicking strategy not only yields box coordinates, but also four accurate boundary points. We show (4) how to incorporate them into GrabCut to obtain more accurate segmentations than those delivered when initializing it from bounding boxes: (5) semantic segmentations models trained on these segmentations outperform those trained on segmentations derived from bounding boxes.",
"title": ""
},
{
"docid": "412b616f4fcb9399c8220c542ecac83e",
"text": "Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and limits the cropping region with arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image which is very time-consuming. Motivated by these challenges, we firstly formulate the aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human's decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experiment results show that our method achieves the state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.",
"title": ""
},
{
"docid": "b0dd406b590658aa262b103e8cea4296",
"text": "This paper proposes the differential energy watermarking (DEW) algorithm for JPEG/MPEG streams. The DEW algorithm embeds label bits by selectively discarding high frequency discrete cosine transform (DCT) coefficients in certain image regions. The performance of the proposed watermarking algorithm is evaluated by the robustness of the watermark, the size of the watermark, and the visual degradation the watermark introduces. These performance factors are controlled by three parameters, namely the maximal coarseness of the quantizer used in pre-encoding, the number of DCT blocks used to embed a single watermark bit, and the lowest DCT coefficient that we permit to be discarded. We follow a rigorous approach to optimizing the performance and choosing the correct parameter settings by developing a statistical model for the watermarking algorithm. Using this model, we can derive the probability that a label bit cannot be embedded. The resulting model can be used, for instance, for maximizing the robustness against re-encoding and for selecting adequate error correcting codes for the label bit string.",
"title": ""
},
{
"docid": "f33df3cbfc890bda347956c485184ac9",
"text": "Ontologies aim at capturing domain knowledge in a generic way. An ontology, therefore, provides a commonly agreed understanding of a domain, which can be reused and shared across ■ The Workshop on Applications of Ontologies and Problem-Solving Methods (PSMs), held in conjunction with the Thirteenth Biennial European Conference on Artificial Intelligence (ECAI ’98), was held on 24 to 25 August 1998. Twenty-six people participated, and 16 papers were presented. Participants included scientists and practitioners from both the ontology and PSM communities. The first day was devoted to paper presentations and discussions. The second (half) day, a joint session was held with two other workshops: (1) Building, Maintaining, and Using Organizational Memories and (2) Intelligent Information Integration. The reason for the joint session was that in all three workshops, ontologies play a prominent role, and the goal was to bring together researchers working on related issues in different communities. The workshop ended with a discussion about the added value of a combined ontologies-PSM workshop compared to separate workshops.",
"title": ""
},
{
"docid": "3070929256d250c502d4f9f24772191c",
"text": "KNOWLEDGE of the kinematic structure of storms is important for understanding the internal physical processes. Radar has long provided information on the three-dimensional structure of storms from measurements of the radar reflectivity factor alone. Early users of radar gave total storm movement only, whereas later radar data were used to reveal internal motions based on information related to cloud physics such as the three-dimensional morphology of the storm volume. Such approaches have continued by using the increasingly finer scale details provided by more modern radar systems. Both Barge and Bergwall2 and Browning and Foote3 have used fine scale reflectivity structure to determine airflow in hailstorms. Doppler radar added a new dimension to our capabilities through its ability to measure directly the radial component of motion of an ensemble of hydrometeor particles. Two4 or three5 Doppler radars collecting data in conjunction, the equation of mass continuity, and an empirical radar reflectivity–terminal velocity relationship have enabled the estimation of the full three-dimensional airflow fields in parts of storms. Because of the inherent advantage of Doppler radar in motion detection, little effort has been directed toward developing objective schemes of determining internal storm motions with conventional meteorological radars. Pattern recognition schemes using correlation coefficient techniques6, Fourier analysis7, and gaussian curve fitting8 have been used with radar and satellite data, but primarily for detecting overall storm motions, echo merging and echo splitting. Here we describe an objective use of radar reflectivity factor data from a single conventional weather radar to give information related to the three-dimensional motions within a storm.",
"title": ""
},
{
"docid": "2b32e29760ba9745e59ae629c46eff93",
"text": "We present a novel recurrent neural network (RNN)–based model that combines the remembering ability of unitary evolution RNNs with the ability of gated RNNs to effectively forget redundant or irrelevant information in its memory. We achieve this by extending restricted orthogonal evolution RNNs with a gating mechanism similar to gated recurrent unit RNNs with a reset gate and an update gate. Our model is able to outperform long short-term memory, gated recurrent units, and vanilla unitary or orthogonal RNNs on several long-term-dependency benchmark tasks. We empirically show that both orthogonal and unitary RNNs lack the ability to forget. This ability plays an important role in RNNs. We provide competitive results along with an analysis of our model on many natural sequential tasks, including question answering, speech spectrum prediction, character-level language modeling, and synthetic tasks that involve long-term dependencies such as algorithmic, denoising, and copying tasks.",
"title": ""
},
{
"docid": "bbd219f59ab4211a387cb7a721c797c8",
"text": "Wireless network virtualization and information-centric networking (ICN) are two promising techniques in software-defined 5G mobile wireless networks. Traditionally, these two technologies have been addressed separately. In this paper we show that integrating wireless network virtualization with ICN techniques can significantly improve the end-to-end network performance. In particular, we propose an information- centric wireless network virtualization architecture for integrating wireless network virtualization with ICN. We develop the key components of this architecture: radio spectrum resource, wireless network infrastructure, virtual resources (including content-level slicing, network-level slicing, and flow-level slicing), and informationcentric wireless virtualization controller. Then we formulate the virtual resource allocation and in-network caching strategy as an optimization problem, considering the gain of not only virtualization but also in-network caching in our proposed information-centric wireless network virtualization architecture. The obtained simulation results show that our proposed information-centric wireless network virtualization architecture and the related schemes significantly outperform the other existing schemes.",
"title": ""
},
{
"docid": "c6954957e6629a32f9845df15c60be85",
"text": "Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Löf random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.",
"title": ""
},
{
"docid": "144d1ad172d5dd2ca7b3fc93a83b5942",
"text": "This paper extends the recently introduced approach to the modeling and control design in the framework of model predictive control of the dc-dc boost converter to the dc-dc parallel interleaved boost converter. Based on the converter's model a constrained optimal control problem is formulated and solved. This allows the controller to achieve (a) the regulation of the output voltage to a predefined reference value, despite changes in the input voltage and the load, and (b) the load current balancing to the converter's individual legs, by regulating the currents of the circuit's inductors to proper references, set by an outer loop based on an observer. Simulation results are provided to illustrate the merits of the proposed control scheme.",
"title": ""
}
] |
scidocsrr
|
117c1f478e2e5f669142eaefab5524a1
|
Explanations for command recommendations : an experimental study
|
[
{
"docid": "ca7c505806bf19ca835c3d90b2e0f58e",
"text": "Extreme Programming (XP) is a new and controversial software process for small teams. A practical training course at the university of Karlsruhe led to the following observations about the key practices of XP. First, it is unclear how to reap the potential benefits of pair programming, although pair programming produces high quality code. Second, designing in small increments appears problematic but ensures rapid feedback about the code. Third, while automated testing is helpful, writing test cases before coding is a challenge. And last, it is difficult to implement XP without coaching. This paper also provides some guidelines for those starting out with XP.",
"title": ""
},
{
"docid": "97561632e9d87093a5de4f1e4b096df7",
"text": "Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer that wishes to employ a recommendation system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommendation systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describe large scale online experiments, where real user populations interact with the system. In each of these cases we describe types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the property that they evaluate. Guy Shani Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: [email protected] Asela Gunawardana Microsoft Research, One Microsoft Way, Redmond, WA, e-mail: [email protected]",
"title": ""
},
{
"docid": "ef598ba4f9a4df1f42debc0eabd1ead8",
"text": "Software developers interact with the development environments they use by issuing commands that execute various programming tools, from source code formatters to build tools. However, developers often only use a small subset of the commands offered by modern development environments, reducing their overall development fluency. In this paper, we use several existing command recommender algorithms to suggest new commands to developers based on their existing command usage history, and also introduce several new algorithms. By running these algorithms on data submitted by several thousand Eclipse users, we describe two studies that explore the feasibility of automatically recommending commands to software developers. The results suggest that, while recommendation is more difficult in development environments than in other domains, it is still feasible to automatically recommend commands to developers based on their usage history, and that using patterns of past discovery is a useful way to do so.",
"title": ""
}
] |
[
{
"docid": "56d8fe382c30c19b8b700a2509e0edd8",
"text": "In exploratory data analysis, the choice of tools depends on the data to be analyzed and the analysis tasks, i.e. the questions to be answered. The same applies to design of new analysis tools. In this paper, we consider a particular type of data: data that describe transient events having spatial and temporal references, such as earthquakes, traffic incidents, or observations of rare plants or animals. We focus on the task of detecting spatio-temporal patterns in event occurrences. We demonstrate the insufficiency of the existing techniques and approaches to event exploration and substantiate the need in a new exploratory tool. The technique of space-time cube, which has been earlier proposed for the visualization of movement in geographical space, possesses the required properties. However, it must be implemented so as to allow particular interactive manipulations: changing the viewing perspective, temporal focusing, and dynamic linking with a map display through simultaneous highlighting of corresponding symbols. We describe our implementation of the space-time cube technique and demonstrate by an example how it can be used for detecting spatio-temporal clusters of events.",
"title": ""
},
{
"docid": "c4b744e2e308dd2927aec79b19b570bc",
"text": "This paper describes a semantic role labeling system that uses features derived from different syntactic views, and combines them within a phrase-based chunking paradigm. For an input sentence, syntactic constituent structure parses are generated by a Charniak parser and a Collins parser. Semantic role labels are assigned to the constituents of each parse using Support Vector Machine classifiers. The resulting semantic role labels are converted to an IOB representation. These IOB representations are used as additional features, along with flat syntactic chunks, by a chunking SVM classifier that produces the final SRL output. This strategy for combining features from three different syntactic views gives a significant improvement in performance over roles produced by using any one of the syntactic views individually.",
"title": ""
},
{
"docid": "9b2b04acbbf5c847885c37c448fb99c8",
"text": "We address the problem of substring searchable encryption. A single user produces a big stream of data and later on wants to learn the positions in the string that some patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work of substring searchable encryption in order to reduce the storage cost of auxiliary data structures. Our solution entails a suffix array based index design, which allows optimal storage cost $O(n)$ with small hidden factor at the size of the string n. Moreover, we implemented our scheme and the state of the art protocol \\citeChase to demonstrate the performance advantage of our solution with precise benchmark results.",
"title": ""
},
{
"docid": "c95980f3f1921426c20757e6020f62c2",
"text": "Recent successes of deep learning have been largely driven by the ability to train large models on vast amounts of data. We believe that High Performance Computing (HPC) will play an increasingly important role in helping deep learning achieve the next level of innovation fueled by neural network models that are orders of magnitude larger and trained on commensurately more training data. We are targeting the unique capabilities of both current and upcoming HPC systems to train massive neural networks and are developing the Livermore Big Artificial Neural Network (LBANN) toolkit to exploit both model and data parallelism optimized for large scale HPC resources. This paper presents our preliminary results in scaling the size of model that can be trained with the LBANN toolkit.",
"title": ""
},
{
"docid": "10832dce0cf5d242f32d72da35e0b1c1",
"text": "Object detection in high resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military application due to the complex neighboring environments, which can cause the recognition algorithms to mistake irrelevant ground objects for target objects. Deep Convolution Neural Network(DCNN) is the hotspot in object detection for its powerful ability of feature extraction and has achieved state-of-the-art results in Computer Vision. Common pipeline of object detection based on DCNN consists of region proposal, CNN feature extraction, region classification and post processing. YOLO model frames object detection as a regression problem, using a single CNN predicts bounding boxes and class probabilities in an end-to-end way and make the predict faster. In this paper, a YOLO based model is used for object detection in high resolution sensing images. The experiments on NWPU VHR-10 dataset and our airport/airplane dataset gain from GoogleEarth show that, compare with the common pipeline, the proposed model speeds up the detection process and have good accuracy.",
"title": ""
},
{
"docid": "8c4e7e441a45ec0cccf2e1ce12adfc73",
"text": "Purpose – The purpose of this paper is to present a study of knowledge management understanding and usage in small and medium knowledge-intensive enterprises. Design/methodology/approach – The study has taken an interpretitivist approach, using two knowledge-intensive South Yorkshire (England) companies as case studies, both of which are characterised by the need to process and use knowledge on a daily basis in order to remain competitive. The case studies were analysed using qualitative research methodology, composed of interviews and concept mapping, thus deriving a characterisation of understandings, perceptions and requirements of SMEs in relation to knowledge management. Findings – The study provides evidence that, while SMEs, including knowledge intensive ones, acknowledge that adequately capturing, storing, sharing and disseminating knowledge can lead to greater innovation and productivity, their managers are not prepared to invest the relatively high effort on long term knowledge management goals for which they have difficulty in establishing the added value. Thus, knowledge management activities within SMEs tend to happen in an informal way, rarely supported by purposely designed ICT systems. Research limitations/implications – This paper proposes that further studies in this field are required that focus on organisational and practical issues in order to close the gap between theoretical propositions and the reality of practice. Practical implications – The study suggests that in order to implement an appropriate knowledge management strategy in SMEs cultural, behavioural, and organisational issues need to be tackled before even considering technical issues. Originality/value – KM seems to have been successfully applied in large companies, but it is largely disregarded by small and medium sized enterprises (SMEs). This has been attributed primarily to a lack of a formal approach to the sharing, recording, transferring, auditing and exploiting of organisational knowledge, together with a lack of utilisation of available information technologies. This paper debates these concepts from a research findings point of view.",
"title": ""
},
{
"docid": "bbc984f02b81ee66d7dc617ed34a7e98",
"text": "Packet losses are common in data center networks, may be caused by a variety of reasons (e.g., congestion, blackhole), and have significant impacts on application performance and network operations. Thus, it is important to provide fast detection of packet losses independent of their root causes. We also need to capture both the locations and packet header information of the lost packets to help diagnose and mitigate these losses. Unfortunately, existing monitoring tools that are generic in capturing all types of network events often fall short in capturing losses fast with enough details and low overhead. Due to the importance of loss in data centers, we propose a specific monitoring system designed for loss detection. We propose LossRadar, a system that can capture individual lost packets and their detailed information in the entire network on a fine time scale. Our extensive evaluation on prototypes and simulations demonstrates that LossRadar is easy to implement in hardware switches, achieves low memory and bandwidth overhead, while providing detailed information about individual lost packets. We also build a loss analysis tool that demonstrates the usefulness of LossRadar with a few example applications.",
"title": ""
},
{
"docid": "9979f112cd2617a150721e3a2dd70739",
"text": "In the academic literature, the matching accuracy of a biometric system is typically quantified through measures such as the Receiver Operating Characteristic (ROC) curve and Cumulative Match Characteristic (CMC) curve. The ROC curve, measuring verification performance, is based on aggregate statistics of match scores corresponding to all biometric samples, while the CMC curve, measuring identification performance, is based on the relative ordering of match scores corresponding to each biometric sample (in closed-set identification). In this study, we determine whether a set of genuine and impostor match scores generated from biometric data can be reassigned to virtual identities, such that the same ROC curve can be accompanied by multiple CMC curves. The reassignment is accomplished by modeling the intra- and inter-class relationships between identities based on the “Doddington Zoo” or “Biometric Menagerie” phenomenon. The outcome of the study suggests that a single ROC curve can be mapped to multiple CMC curves in closed-set identification, and that presentation of a CMC curve should be accompanied by a ROC curve when reporting biometric system performance, in order to better understand the performance of the matcher.",
"title": ""
},
{
"docid": "3f225efbccb63d0c5170fce44fadb3c6",
"text": "Pelvic pain is a common gynaecological complaint, sometimes without any obvious etiology. We report a case of pelvic congestion syndrome, an often overlooked cause of pelvic pain, diagnosed by helical computed tomography. This seems to be an effective and noninvasive imaging modality. RID=\"\"ID=\"\"<e5>Correspondence to:</e5> J. H. Desimpelaere",
"title": ""
},
{
"docid": "dd1b20766f2b8099b914c780fb8cc03c",
"text": "Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.",
"title": ""
},
{
"docid": "c0e804e69c87c73ee844628d70752506",
"text": "Hydrogels are physically or chemically cross-linked polymer networks that are able to absorb large amounts of water. They can be classified into different categories depending on various parameters including the preparation method, the charge, and the mechanical and structural characteristics. The present review aims to give an overview of hydrogels based on natural polymers and their various applications in the field of tissue engineering. In a first part, relevant parameters describing different hydrogel properties and the strategies applied to finetune these characteristics will be described. In a second part, an important class of biopolymers that possess thermosensitive properties (UCST or LCST behavior) will be discussed. Another part of the review will be devoted to the application of cryogels. Finally, the most relevant biopolymer-based hydrogel systems, the different methods of preparation, as well as an in depth overview of the applications in the field of tissue engineering will be given.",
"title": ""
},
{
"docid": "ab0541d9ec1ea0cf7ad85d685267c142",
"text": "Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.",
"title": ""
},
{
"docid": "b776307764d3946fc4e7f6158b656435",
"text": "Recent development advances have allowed silicon (Si) semiconductor technology to approach the theoretical limits of the Si material; however, power device requirements for many applications are at a point that the present Si-based power devices can not handle. The requirements include higher blocking voltages, switching frequencies, efficiency, and reliability. To overcome these limitations, new semiconductor materials for power device applications are needed. For high power requirements, wide band gap semiconductors like silicon carbide (SiC), gallium nitride (GaN), and diamond with their superior electrical properties are likely candidates to replace Si in the near future. This paper compares all the aforementioned wide bandgap semiconductors with respect to their promise and applicability for power applications and predicts the future of power device semiconductor materials.",
"title": ""
},
{
"docid": "7f3c453d52b100245b67c87e992f4bfa",
"text": "In this work, a frequency-based model is presented to examine limit cycle and spurious behavior in a bang-bang all-digital phase locked loop (BB-ADPLL). The proposed model considers different type of nonlinearities such as quantization effects of the digital controlled oscillator (DCO), quantization effects of the bang-bang phase detector (BB-PD) in noiseless BB-ADPLLs by a proposed novel discrete-time model. In essence, the traditional phase-locked model is transformed into a frequency-locked topology equivalent to a sigma delta modulator (SDM) with a dc-input which represents frequency deviation in phase locked state. The frequency deviation must be introduced and placed correctly within the proposed model to enable the accurate prediction of limit cycles. Thanks to the SDM-like topology, traditional techniques used in the SDM nonlinear analysis such as the discrete describing function (DDF) and number theory can be applied to predict limit cycles in first and second-order BB-ADPLLs. The inherent DCO and reference phase noise can also be easily integrated into the proposed model to accurately predict their effect on the stability of the limit cycle. The results obtained from the proposed model show good agreement with time-domain simulations.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "94a2b34eaa02ffeffdde5aa74e7836d2",
"text": "Drought is a stochastic natural hazard that is instigated by intense and persistent shortage of precipitation. Following an initial meteorological phenomenon, subsequent impacts are realized on agriculture and hydrology. Among the natural hazards, droughts possess certain unique features; in addition to delayed effects, droughts vary by multiple dynamic dimensions including severity and duration, which in addition to causing a pervasive and subjective network of impacts makes them difficult to characterize. In order manage drought, drought characterization is essential enabling both retrospective analyses (e.g., severity versus impacts analysis) and prospective planning (e.g., risk assessment). The adaptation of a simplified method by drought indices has facilitated drought characterization for various users and entities. More than 100 drought indices have so far been proposed, some of which are operationally used to characterize drought using gridded maps at regional and national levels. These indices correspond to different types of drought, including meteorological, agricultural, and hydrological drought. By quantifying severity levels and declaring drought’s start and end, drought indices currently aid in a variety of operations including drought early warning and monitoring and contingency planning. Given their variety and ongoing development, it is crucial to provide a comprehensive overview of available drought indices that highlights their difference and examines the trend in their development. This paper reviews 74 operational and proposed drought indices and describes research directions.",
"title": ""
},
{
"docid": "892bad91cfae82dfe3d06d2f93edfe8b",
"text": "Fine-grained image recognition is a challenging computer vision problem, due to the small inter-class variations caused by highly similar subordinate categories, and the large intra-class variations in poses, scales and rotations. In this paper, we prove that selecting useful deep descriptors contributes well to fine-grained image recognition. Specifically, a novel Mask-CNN model without the fully connected layers is proposed. Based on the part annotations, the proposed model consists of a fully convolutional network to both locate the discriminative parts ( e.g. , head and torso), and more importantly generate weighted object/part masks for selecting useful and meaningful convolutional descriptors. After that, a three-stream Mask-CNN model is built for aggregating the selected objectand part-level descriptors simultaneously. Thanks to discarding the parameter redundant fully connected layers, our Mask-CNN has a small feature dimensionality and efficient inference speed by comparing with other fine-grained approaches. Furthermore, we obtain a new state-of-the-art accuracy on two challenging fine-grained bird species categorization datasets, which validates the effectiveness of both the descriptor selection scheme and the proposed",
"title": ""
},
{
"docid": "897a15431a2194f1fa5770243e1dc707",
"text": "While the popularity of cloud computing is exploding, a new network computing paradigm is just beginning. In this paper, we examine this exciting area of research known as dew computing and propose a new design of clouddew architecture. Instead of hosting only one dew server on a user’s PC — as adopted in the current dewsite application — our design promotes the hosting of multiple dew servers instead, one for each installed domain. Our design intends to improve upon existing cloud-dew architecture by providing significantly increased freedom in dewsite development, while also automating the chore of managing dewsite content based on the user’s interests and browsing habits. Other noteworthy benefits, all at no added cost to dewsite users, are briefly explored as well. TYPE OF PAPER AND",
"title": ""
},
{
"docid": "0408aeb750ca9064a070248f0d32d786",
"text": "Mood, attention and motivation co-vary with activity in the neuromodulatory systems of the brain to influence behaviour. These psychological states, mediated by neuromodulators, have a profound influence on the cognitive processes of attention, perception and, particularly, our ability to retrieve memories from the past and make new ones. Moreover, many psychiatric and neurodegenerative disorders are related to dysfunction of these neuromodulatory systems. Neurons of the brainstem nucleus locus coeruleus are the sole source of noradrenaline, a neuromodulator that has a key role in all of these forebrain activities. Elucidating the factors that control the activity of these neurons and the effect of noradrenaline in target regions is key to understanding how the brain allocates attention and apprehends the environment to select, store and retrieve information for generating adaptive behaviour.",
"title": ""
}
] |
scidocsrr
|
76b13aa888596c0f13e07d58bb0ec7e0
|
Motion Robust Remote-PPG in Infrared
|
[
{
"docid": "2531d8d05d262c544a25dbffb7b43d67",
"text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.",
"title": ""
}
] |
[
{
"docid": "6f34ef57fcf0a2429e7dc2a3e56a99fd",
"text": "Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.",
"title": ""
},
{
"docid": "d6f52736d78a5b860bdb364f64e4523c",
"text": "Deep convolutional neural networks (CNN) have recently been shown to generate promising results for aesthetics assessment. However, the performance of these deep CNN methods is often compromised by the constraint that the neural network only takes the fixed-size input. To accommodate this requirement, input images need to be transformed via cropping, warping, or padding, which often alter image composition, reduce image resolution, or cause image distortion. Thus the aesthetics of the original images is impaired because of potential loss of fine grained details and holistic image layout. However, such fine grained details and holistic image layout is critical for evaluating an images aesthetics. In this paper, we present an Adaptive Layout-Aware Multi-Patch Convolutional Neural Network (A-Lamp CNN) architecture for photo aesthetic assessment. This novel scheme is able to accept arbitrary sized images, and learn from both fined grained details and holistic image layout simultaneously. To enable training on these hybrid inputs, we extend the method by developing a dedicated double-subnet neural network structure, i.e. a Multi-Patch subnet and a Layout-Aware subnet. We further construct an aggregation layer to effectively combine the hybrid features from these two subnets. Extensive experiments on the large-scale aesthetics assessment benchmark (AVA) demonstrate significant performance improvement over the state-of-the-art in photo aesthetic assessment.",
"title": ""
},
{
"docid": "bfd94756f73fc7f9eb81437f5d192ac3",
"text": "Technological advances in upper-limb prosthetic design offer dramatically increased possibilities for powered movement. The DEKA Arm system allows users 10 powered degrees of movement. Learning to control these movements by utilizing a set of motions that, in most instances, differ from those used to obtain the desired action prior to amputation is a challenge for users. In the Department of Veterans Affairs \"Study to Optimize the DEKA Arm,\" we attempted to facilitate motor learning by using a virtual reality environment (VRE) program. This VRE program allows users to practice controlling an avatar using the controls designed to operate the DEKA Arm in the real world. In this article, we provide highlights from our experiences implementing VRE in training amputees to use the full DEKA Arm. This article discusses the use of VRE in amputee rehabilitation, describes the VRE system used with the DEKA Arm, describes VRE training, provides qualitative data from a case study of a subject, and provides recommendations for future research and implementation of VRE in amputee rehabilitation. Our experience has led us to believe that training with VRE is particularly valuable for upper-limb amputees who must master a large number of controls and for those amputees who need a structured learning environment because of cognitive deficits.",
"title": ""
},
{
"docid": "a75933a59c2aa42d0ee3904e283303b0",
"text": "We propose an end-to-end learning framework for foreground object segmentation. Given a single novel image, our approach produces a pixel-level mask for all “object-like” regions—even for object categories never seen during training. We formulate the task as a structured prediction problem of assigning a foreground/background label to each pixel, implemented using a deep fully convolutional network. Key to our idea is training with a mix of image-level object category examples together with relatively few images with boundary-level annotations. Our method substantially improves the state-of-the-art on foreground segmentation for ImageNet and MIT Object Discovery datasets. Furthermore, on over 1 million images, we show that it generalizes well to segment object categories unseen in the foreground maps used for training. Finally, we demonstrate how our approach benefits image retrieval and image retargeting, both of which flourish when given our high-quality foreground maps.",
"title": ""
},
{
"docid": "542d56f52e6ab59c45c95fa673e8a059",
"text": "This paper contributes a new machine learning solution for stock movement prediction, which aims to predict whether the price of a stock will be up or down in the near future. The key novelty is that we propose to employ adversarial training to improve the generalization of a recurrent neural network model. The rationality of adversarial training here is that the input features to stock prediction are typically based on stock price, which is essentially a stochastic variable and continuously changed with time by nature. As such, normal training with stationary price-based features (e.g., the closing price) can easily overfit the data, being insufficient to obtain reliable models. To address this problem, we propose to add perturbations to simulate the stochasticity of continuous price variable, and train the model to work well under small yet intentional perturbations. Extensive experiments on two real-world stock data show that our method outperforms the state-of-the-art solution (Xu and Cohen 2018) with 3.11% relative improvements on average w.r.t. accuracy, verifying the usefulness of adversarial training for stock prediction task. Codes will be made available upon acceptance.",
"title": ""
},
{
"docid": "3afba1b1120923d28ab3d1dd6c79945e",
"text": "Signal processing on antenna arrays has received much recent attention in the mobile and wireless networking research communities, with array signal processing approaches addressing the problems of human movement detection, indoor mobile device localization, and wireless network security. However, there are two important challenges inherent in the design of these systems that must be overcome if they are to be of practical use on commodity hardware. First, phase differences between the radio oscillators behind each antenna can make readings unusable, and so must be corrected in order for most techniques to yield high-fidelity results. Second, while the number of antennas on commodity access points is usually limited, most array processing increases in fidelity with more antennas. These issues work in synergistic opposition to array processing: without phase offset correction, no phase-difference array processing is possible, and with fewer antennas, automatic correction of these phase offsets becomes even more challenging. We present Phaser, a system that solves these intertwined problems to make phased array signal processing truly practical on the many WiFi access points deployed in the real world. Our experimental results on three- and five-antenna 802.11-based hardware show that 802.11 NICs can be calibrated and synchronized to a 20° median phase error, enabling inexpensive deployment of numerous phase-difference based spectral analysis techniques previously only available on costly, special-purpose hardware.",
"title": ""
},
{
"docid": "d06dc916942498014f9d00498c1d1d1f",
"text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional. ✦",
"title": ""
},
{
"docid": "34e73a1b7bb2f2c9549219d8194c924b",
"text": "•At the beginning of training, a high learning rate or small batch size influences SGD to visit flatter loss regions. •The evolution of the largest eigenvalues always follow a similar pattern, with a fast increase in the first epochs and a steady decrease thereafter, where the peak value is determined by the learning rate and batch size. •By altering the learning rate in the sharpest direction, SGD can be steered towards regions which are an order of magnitude sharper with similar generalization.",
"title": ""
},
{
"docid": "9a73e9bc7c0dc343ad9dbe1f3dfe650c",
"text": "The word robust has been used in many contexts in signal processing. Our treatment concerns statistical robustness, which deals with deviations from the distributional assumptions. Many problems encountered in engineering practice rely on the Gaussian distribution of the data, which in many situations is well justified. This enables a simple derivation of optimal estimators. Nominal optimality, however, is useless if the estimator was derived under distributional assumptions on the noise and the signal that do not hold in practice. Even slight deviations from the assumed distribution may cause the estimator's performance to drastically degrade or to completely break down. The signal processing practitioner should, therefore, ask whether the performance of the derived estimator is acceptable in situations where the distributional assumptions do not hold. Isn't it robustness that is of a major concern for engineering practice? Many areas of engineering today show that the distribution of the measurements is far from Gaussian as it contains outliers, which cause the distribution to be heavy tailed. Under such scenarios, we address single and multichannel estimation problems as well as linear univariate regression for independently and identically distributed (i.i.d.) data. A rather extensive treatment of the important and challenging case of dependent data for the signal processing practitioner is also included. For these problems, a comparative analysis of the most important robust methods is carried out by evaluating their performance theoretically, using simulations as well as real-world data.",
"title": ""
},
{
"docid": "c3bf8153bfcb0d430d1189153de6242c",
"text": "Sentiment analysis is one of the key challenges for mining online user generated content. In this work, we focus on customer reviews which are an important form of opinionated content. The goal is to identify each sentence’s semantic orientation (e.g. positive or negative) of a review. Traditional sentiment classification methods often involve substantial human efforts, e.g. lexicon construction, feature engineering. In recent years, deep learning has emerged as an effective means for solving sentiment classification problems. A neural network intrinsically learns a useful representation automatically without human efforts. However, the success of deep learning highly relies on the availability of large-scale training data. In this paper, we propose a novel deep learning framework for review sentiment classification which employs prevalently available ratings as weak supervision signals. The framework consists of two steps: (1) learn a high level representation (embedding space) which captures the general sentiment distribution of sentences through rating information; (2) add a classification layer on top of the embedding layer and use labeled sentences for supervised fine-tuning. Experiments on review data obtained from Amazon show the efficacy of our method and its superiority over baseline methods.",
"title": ""
},
{
"docid": "29981f5a499482dc1e7b060d7c33b3f6",
"text": "The upcoming General Data Protection Regulation (GDPR) requires justification of data activities to acquire, use, share, and store data using consent obtained from the user. Failure to comply may result in significant heavy fines which incentivises creation and maintenance of records for all activities involving consent and data. Compliance documentation therefore requires provenance information outlining consent and data lifecycles to demonstrate correct usage of data in accordance with the related consent provided and updated by the user. In this paper, we present GDPRov, a linked data ontology for expressing provenance of consent and data lifecycles with a view towards documenting compliance. GDPRov is an OWL2 ontology that extends PROV-O and P-Plan to model the provenance, and uses SPARQL to express compliance related",
"title": ""
},
{
"docid": "c19b63a2c109c098c22877bcba8690ae",
"text": "A monolithic current-mode pulse width modulation (PWM) step-down dc-dc converter with 96.7% peak efficiency and advanced control and protection circuits is presented in this paper. The high efficiency is achieved by \"dynamic partial shutdown strategy\" which enhances circuit speed with less power consumption. Automatic PWM and \"pulse frequency modulation\" switching boosts conversion efficiency during light load operation. The modified current sensing circuit and slope compensation circuit simplify the current-mode control circuit and enhance the response speed. A simple high-speed over-current protection circuit is proposed with the modified current sensing circuit. The new on-chip soft-start circuit prevents the power on inrush current without additional off-chip components. The dc-dc converter has been fabricated with a 0.6 mum CMOS process and measured 1.35 mm2 with the controller measured 0.27 mm2. Experimental results show that the novel on-chip soft-start circuit with longer than 1.5 ms soft-start time suppresses the power-on inrush current. This converter can operate at 1.1 MHz with supply voltage from 2.2 to 6.0 V. Measured power efficiency is 88.5-96.7% for 0.9 to 800 mA output current and over 85.5% for 1000 mA output current.",
"title": ""
},
{
"docid": "5546f93f4c10681edb0fdfe3bf52809c",
"text": "The current applications of neural networks to in vivo medical imaging and signal processing are reviewed. As is evident from the literature neural networks have already been used for a wide variety of tasks within medicine. As this trend is expected to continue this review contains a description of recent studies to provide an appreciation of the problems associated with implementing neural networks for medical imaging and signal processing.",
"title": ""
},
{
"docid": "6da3d534c435ec1145ca180e868900a0",
"text": "A challenging and daunting task for financial investors is determining stock market timing - when to buy, sell and the future price of a stock. This challenge is due to the complexity of the stock market. New methods have emerged that increase the accuracy of stock prediction. Examples of these methods are fuzzy logic, neural network and hybridized methods such as hybrid Kohonen self organizing map (SOM), adaptive neuro-fuzzy inference system (ANFIS) etc. This paper presents a number of methods used to predict the stock price of the day. These methods are backpropagation, Kohonen SOM, and a hybrid Kohonen SOM. The results show that the difference in error of the hybrid Kohonen SOM is significantly reduced compared to the other methods used. Hence, the results suggest that the hybrid Kohonen SOM is a better predictor compared to Kohonen SOM and backpropagation",
"title": ""
},
{
"docid": "6b9663085968c5483c9a2871b4807524",
"text": "E-Commerce is one of the crucial trading methods worldwide. Hence, it is important to understand consumers’ online purchase intention. This research aims to examine factors that influence consumers’ online purchase intention among university students in Malaysia. Quantitative research approach has been adapted in this research by distributing online questionnaires to 250 Malaysian university students aged between 20-29 years old, who possess experience in online purchases. Findings of this research have discovered that trust, perceived usefulness and subjective norm are the significant factors in predicting online purchase intention. However, perceived ease of use and perceived enjoyment are not significant in predicting the variance in online purchase intention. The findings also revealed that subjective norm is the most significant predicting factor on online purchase intention among university students in Malaysia. Findings of this research will provide online marketers with a better understanding on online purchase intention which enable them to direct effective online marketing strategies.",
"title": ""
},
{
"docid": "45df1a8b9868124aaec1fe9b9a786a1a",
"text": "A novel Quasi-Yagi antenna was proposed with ultra-wideband and miniaturization characteristics. The butterfly dipoles and parasitic branches are adopting to improve the bandwidth of the Quasi-Yagi antenna. Affections of antenna structure parameters on antenna was analyzed in this paper. After optimization, the operating frequency ranges from 3.3 to 10.2 GHz. Meanwhile, gain is more than 5.1 dB, and the maximum is 9.8 dB.",
"title": ""
},
{
"docid": "26b7d1d79382d61dfcd523864c477e21",
"text": "The vending machine which provides the beverage like snacks, cold drink, it is also used for ticketing. These systems are operated on either coin or note or manually switch operated. This paper presents system which operates not on coin or note, it operates on RFID system. This system gives the access through only RFID which avoid the misuse of machine. A small RFID reader is fitted on the machine. The identity card which contains RFID tag is given to each employee. According to estimation the numbers of cups per day as per client’s requirement are programmed. Then an employee goes to vending machine show his card to the reader then the drink is dispensed. But when employee wants more coffees than fixed number, that person is allow for that but that employee has to pay for extra cups and amount is cut from the salary account. KeywordsRFID, Arduino, Vending machine.",
"title": ""
},
{
"docid": "6ae355534b51632dd8ba153f273f0b0f",
"text": "Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions efficiently, and at the same time emphasize the theoretical and practical aspects of algorithm design to allow the solutions developed to run efficiently under all possible settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era. The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level sharedmemory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly-optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and is also the first graph processing system to support in-memory graph compression. The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmicdepth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.",
"title": ""
}
] |
scidocsrr
|
a29982c2d6c0e7b857e7fa62230de699
|
3D DRAM Design and Application to 3D Multicore Systems
|
[
{
"docid": "6ab478043997a788df8b400d0b5fa22e",
"text": "3D die stacking is an exciting new technology that increases transistor density by vertically integrating two or more die with a dense, high-speed interface. The result of 3D die stacking is a significant reduction of interconnect both within a die and across dies in a system. For instance, blocks within a microprocessor can be placed vertically on multiple die to reduce block to block wire distance, latency, and power. Disparate Si technologies can also be combined in a 3D die stack, such as DRAM stacked on a CPU, resulting in lower power higher BW and lower latency interfaces, without concern for technology integration into a single process flow. 3D has the potential to change processor design constraints by providing substantial power and performance benefits. Despite the promising advantages of 3D, there is significant concern for thermal impact. In this research, we study the performance advantages and thermal challenges of two forms of die stacking: Stacking a large DRAM or SRAM cache on a microprocessor and dividing a traditional microarchitecture between two die in a stack Results: It is shown that a 32MB 3D stacked DRAM cache can reduce the cycles per memory access of a twothreaded RMS benchmark on average by 13% and as much as 55% while increasing the peak temperature by a negligible 0.08ºC. Off-die BW and power are also reduced by 66% on average. It is also shown that a 3D floorplan of a high performance microprocessor can simultaneously reduce power 15% and increase performance 15% with a small 14ºC increase in peak temperature. Voltage scaling can reach neutral thermals with a simultaneous 34% power reduction and 8% performance improvement.",
"title": ""
},
{
"docid": "5d3561bfc4bbc5b3ee0051365853ac63",
"text": "Three-dimensional integration is an emerging fabrication technology that vertically stacks multiple integrated chips. The benefits include an increase in device density; much greater flexibility in routing signals, power, and clock; the ability to integrate disparate technologies; and the potential for new 3D circuit and microarchitecture organizations. This article provides a technical introduction to the technology and its impact on processor design. Although our discussions here primarily focus on high-performance processor design, most of the observations and conclusions apply to other microprocessor market segments.",
"title": ""
}
] |
[
{
"docid": "689e4936c818fd9b40ac8a4990cc693f",
"text": "We address the problem of image-based scene analysis from streaming video, as would be seen from a moving platform, in order to efficiently generate spatially and temporally consistent predictions of semantic categories over time. In contrast to previous techniques which typically address this problem in batch and/or through graphical models, we demonstrate that by learning visual similarities between pixels across frames, a simple filtering algorithfiltering algorithmm is able to achieve high performance predictions in an efficient and online/causal manner. Our technique is a meta-algorithm that can be efficiently wrapped around any scene analysis technique that produces a per-pixel semantic category distribution. We validate our approach over three different scene analysis techniques on three different datasets that contain different semantic object categories. Our experiments demonstrate that our approach is very efficient in practice and substantially improves the consistency of the predictions over time.",
"title": ""
},
{
"docid": "5bd483e895de779f8b91ca8537950a2f",
"text": "To evaluate the efficacy of pregabalin in facilitating taper off chronic benzodiazepines, outpatients (N = 106) with a lifetime diagnosis of generalized anxiety disorder (current diagnosis could be subthreshold) who had been treated with a benzodiazepine for 8-52 weeks were stabilized for 2-4 weeks on alprazolam in the range of 1-4 mg/day. Patients were then randomized to 12 weeks of double-blind treatment with either pregabalin 300-600 mg/day or placebo while undergoing a gradual benzodiazepine taper at a rate of 25% per week, followed by a 6-week benzodiazepine-free phase during which they continued double-blind study treatment. Outcome measures included ability to remain benzodiazepine-free (primary) as well as changes in Hamilton Anxiety Rating Scale (HAM)-A and Physician Withdrawal Checklist (PWC). At endpoint, a non-significant higher proportion of patients remained benzodiazepine-free receiving pregabalin compared with placebo (51.4% vs 37.0%). Treatment with pregabalin was associated with significantly greater endpoint reduction in the HAM-A total score versus placebo (-2.5 vs +1.3; p < 0.001), and lower endpoint mean PWC scores (6.5 vs 10.3; p = 0.012). Thirty patients (53%) in the pregabalin group and 19 patients (37%) in the placebo group completed the study, reducing the power to detect a significant difference on the primary outcome. The results on the anxiety and withdrawal severity measures suggest that switching to pregabalin may be a safe and effective method for discontinuing long-term benzodiazepine therapy.",
"title": ""
},
{
"docid": "75f4945b1631c60608808c4977cede7f",
"text": "The validity of nine neoclassical formulas of facial proportions was tested in a group of 153 young adult North American Caucasians. Age-related qualities were investigated in six of the nine canons in 100 six-year-old, 105 twelve-year-old, and 103 eighteen-year-old healthy subjects divided equally between the sexes. The two canons found to be valid most often in young adults were both horizontal proportions (interorbital width equals nose width in 40 percent and nose width equals 1/4 face width in 37 percent). The poorest correspondences are found in the vertical profile proportions, showing equality of no more than two parts of the head and face. Sex does not influence the findings significantly, but age-related differences were observed. Twenty-four variations derived from three vertical profile, four horizontal facial, and two nasoaural neoclassical canons were identified in the group of young adults. For each of the new proportions, the mean absolute and relative differences were calculated. The absolute differences were greater between the facial profile sections (vertical canons) and smaller between the horizontally oriented facial proportions. This study shows a large variability in size of facial features in a normal face. While some of the neoclassical canons may fit a few cases, they do not represent the average facial proportions and their interpretation as a prescription for ideal facial proportions must be tested.",
"title": ""
},
{
"docid": "7fcfa6b251a20d5bb35516d322ebc6c9",
"text": "Plastic waste disposal is a huge ecotechnological problem and one of the approaches to solving this problem is the development of biodegradable plastics. This review summarizes data on their use, biodegradability, commercial reliability and production from renewable resources. Some commercially successful biodegradable plastics are based on chemical synthesis (i.e. polyglycolic acid, polylactic acid, polycaprolactone, and polyvinyl alcohol). Others are products of microbial fermentations (i.e. polyesters and neutral polysaccharides) or are prepared from chemically modified natural products (e.g., starch, cellulose, chitin or soy protein).",
"title": ""
},
{
"docid": "ad459972401d7451eca68edb1e312dd1",
"text": "Computer-generated images are an essential part of today's life where an increasing demand for richer images requires more and more visual detail. e quality of resulting images is strongly dependent on the representation of the underlying surface geometry. is is particularly important in the movie industry where subdivision surfaces have evolved into an industry standard. While subdivision surfaces provide artists with a sophisticated level of exibility for modeling, the corresponding image generation is computationally expensive. For this reason, movie productions perform rendering offline on large-scale render farms. In this thesis we present techniques that facilitate the use of high-quality movie content in real-time applications that run on commodity desktop computers. We utilize modern graphics hardware and use hardware tessellation to generate surface geometry on-they based on patches. e key advantage of hardware tessellation is the ability to generate geometry on-chip and to rasterize obtained polygons directly, thus minimizing memory I/O and enabling cost-effective animations since only patch control points need to be updated every frame. We rst convert subdivision surfaces into patch primitives that can be processed by the tessellation unit. en patches are directly evaluated rather than by iterative subdivision. In addition, we add highfrequency surface detail on top of a base surface by using an analytic displacement function. Both displaced surface positions and corresponding normals are obtained from this function and the underlying subdivision surface. We further present techniques to speed up rendering by culling hidden patches, thus avoiding unnecessary computations. For interaction amongst objects themselves we also present a method that performs collision detection on hardware-tessellated dynamic objects. In conclusion, we provide a comprehensive solution for using subdivision surfaces in realtime applications. We believe that the next generation of games and authoring tools will bene t from our techniques in order to allow for rendering and animating highly detailed surfaces.",
"title": ""
},
{
"docid": "7130e3271363c48fb4e07e3bb5c69e50",
"text": "Recent work in metric learning has significantly improved the state-of-the-art in k-nearest neighbor classification. Support vector machines (SVM), particularly with RBF kernels, are amongst the most popular classification algorithms that uses distance metrics to compare examples. This paper provides an empirical analysis of the efficacy of three of the most popular Mahalanobis metric learning algorithms as pre-processing for SVM training. We show that none of these algorithms generate metrics that lead to particularly satisfying improvements for SVM-RBF classification. As a remedy we introduce support vector metric learning (SVML), a novel algorithm that seamlessly combines the learning of a Mahalanobis metric with the training of the RBF-SVM parameters. We demonstrate the capabilities of SVML on nine benchmark data sets of varying sizes and difficulties. In our study, SVML outperforms all alternative state-of-the-art metric learning algorithms in terms of accuracy and establishes itself as a serious alternative to the standard Euclidean metric with model selection by cross validation.",
"title": ""
},
{
"docid": "2f7d55a3302c8b4e269af4406a0d17d4",
"text": "This study investigated the effects of human trampling on cover, diversity and species richness in an alpine heath ecosystem in northern Sweden. We tested the hypothesis that proximity to trails decreases plant cover, diversity and species richness of the canopy and the understory. We found a significant decrease in plant cover with proximity to the trail for the understory, but not for the canopy level, and significant decreases in the abundance of deciduous shrubs in the canopy layer and lichens in the understory. Proximity also had a significant negative impact on species richness of lichens. However, there were no significant changes in species richness, diversity or evenness of distribution in the canopy or understory with proximity to the trail. While not significant, liverworts, acrocarpous and pleurocarpous bryophytes tended to have contrasting abundance patterns with differing proximity to the trail, indicating that trampling may cause shifts in dominance hierarchies of different groups of bryophytes. Due to the decrease in understory cover, the abundance of litter, rock and soil increased with proximity to the trail. These results demonstrate that low-frequency human trampling in alpine heaths over long periods can have major negative impacts on lichen abundance and species richness. To our knowledge, this is the first study to demonstrate that trampling can decrease species richness of lichens. It emphasises the importance of including species-level data on non-vascular plants when conducting studies in alpine or tundra ecosystems, since they often make up the majority of species and play a significant role in ecosystem functioning and response in many of these extreme environments.",
"title": ""
},
{
"docid": "1939a5101fbdb8734161ab74333a2d52",
"text": "Two FPGA based implementations of random number generators intended for embedded cryptographic applications are presented. The first is a true random number generator (TRNG) which employs oscillator phase noise, and the second is a bit serial implementation of a Blum Blum Shub (BBS) pseudorandom number generator (PRNG). Both designs are extremely compact and can be implemented on any FPGA or PLD device. They were designed specifically for use as FPGA based cryptographic hardware cores. The TRNG and PRNG were tested using the NIST and Diehard random number test suites.",
"title": ""
},
{
"docid": "ac48dad2fd7798c670618b7917d023f5",
"text": "In classification or prediction tasks, data imbalance problem is frequently observed when most of instances belong to one majority class. Data imbalance problem has received considerable attention in machine learning community because it is one of the main causes that degrade the performance of classifiers or predictors. In this paper, we propose geometric mean based boosting algorithm (GMBoost) to resolve data imbalance problem. GMBoost enables learning with consideration of both majority and minority classes because it uses the geometric mean of both classes in error rate and accuracy calculation. To evaluate the performance of GMBoost, we have applied GMBoost to bankruptcy prediction task. The results and their comparative analysis with AdaBoost and cost-sensitive boosting indicate that GMBoost has the advantages of high prediction power and robust learning capability in imbalanced data as well as balanced data distribution. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "79339679226cc161cb84be73b45d2df5",
"text": "We introduce an algorithm, called KarmaLego, for the discovery of frequent symbolic time interval-related patterns (TIRPs). The mined symbolic time intervals can be part of the input, or can be generated by a temporal-abstraction process from raw time-stamped data. The algorithm includes a data structure for TIRP-candidate generation and a novel method for efficient candidate-TIRP generation, by exploiting the transitivity property of Allen’s temporal relations. Additionally, since the non-ambiguous definition of TIRPs does not specify the duration of the time intervals, we propose to pre-cluster the time intervals based on their duration to decrease the variance of the supporting instances. Our experimental comparison of the KarmaLego algorithm’s runtime performance with several existing state of the art time intervals pattern mining methods demonstrated a significant speed-up, especially with large datasets and low levels of minimal vertical support. Furthermore, pre-clustering by time interval duration led to an increase in the homogeneity of the duration of the discovered TIRP’s supporting instances’ time intervals components, accompanied, however, by a corresponding decrease in the number of discovered TIRPs.",
"title": ""
},
{
"docid": "2ae3a8bf304cfce89e8fcd331d1ec733",
"text": "Linear Discriminant Analysis (LDA) is among the most optimal dimension reduction methods for classification, which provides a high degree of class separability for numerous applications from science and engineering. However, problems arise with this classical method when one or both of the scatter matrices is singular. Singular scatter matrices are not unusual in many applications, especially for highdimensional data. For high-dimensional undersampled and oversampled problems, the classical LDA requires modification in order to solve a wider range of problems. In recent work the generalized singular value decomposition (GSVD) has been shown to mitigate the issue of singular scatter matrices, and a new algorithm, LDA/GSVD, has been shown to be very robust for many applications in machine learning. However, the GSVD inherently has a considerable computational overhead. In this paper, we propose fast algorithms based on the QR decomposition and regularization that solve the LDA/GSVD computational bottleneck. In addition, we present fast algorithms for classical LDA and regularized LDA utilizing the framework based on LDA/GSVD and preprocessing by the Cholesky decomposition. Experimental results are presented that demonstrate substantial speedup in all of classical LDA, regularized LDA, and LDA/GSVD algorithms without any sacrifice in classification performance for a wide range of machine learning applications.",
"title": ""
},
{
"docid": "2c3b85bcef5ac7dd15e7411a1d10da22",
"text": "Revision history 2009-01-09 Corrected grammar in the paragraph which precedes Equation (17). Changed datestamp format in the revision history. 2008-07-05 Corrected caption for Figure (2). Added conditioning on θn for l in convergence discussion in Section (3.2). Changed email contact info to reduce spam. 2006-10-14 Added explanation and disambiguating parentheses in the development leading to Equation (14). Minor corrections. 2006-06-28 Added Figure (1). Corrected typo above Equation (5). Minor corrections. Added hyperlinks. 2005-08-26 Minor corrections. 2004-07-18 Initial revision.",
"title": ""
},
{
"docid": "178a579cc665e9d514b2c63e25c6c084",
"text": "Jalapeño is a virtual machine for Java servers written in the Java language. To be able to address the requirements of servers (performance and scalability in particular), Jalapeño was designed “from scratch” to be as self-sufficient as possible. Jalapeño’s unique object model and memory layout allows a hardware null-pointer check as well as fast access to array elements, fields, and methods. Run-time services conventionally provided in native code are implemented primarily in Java. Java threads are multiplexed by virtual processors (implemented as operating system threads). A family of concurrent object allocators and parallel type-accurate garbage collectors is supported. Jalapeño’s interoperable compilers enable quasi-preemptive thread switching and precise location of object references. Jalapeño’s dynamic optimizing compiler is designed to obtain high quality code for methods that are observed to be frequently executed or computationally intensive.",
"title": ""
},
{
"docid": "0ed0009bce9c3389606920a3cfa4db5f",
"text": "We consider the task of weakly supervised one-shot detection. In this task, we attempt to perform a detection task over a set of unseen classes, when training only using weak binary labels that indicate the existence of a class instance in a given example. The model is conditioned on a single exemplar of an unseen class and a target example that may or may not contain an instance of the same class as the exemplar. A similarity map is computed by using a Siamese neural network to map the exemplar and regions of the target example to a latent representation space and then computing cosine similarity scores between representations. An attention mechanism weights different regions in the target example, and enables learning of the one-shot detection task using the weaker labels alone. The model can be applied to detection tasks from different domains, including computer vision object detection. We evaluate our attention Siamese networks on a oneshot detection task from the audio domain, where it detects audio keywords in spoken utterances. Our model considerably outperforms a baseline approach and yields a 42.6% average precision for detection across 10 unseen classes. Moreover, architectural developments from computer vision object detection models such as a region proposal network can be incorporated into the model architecture, and results show that performance is expected to improve by doing so.",
"title": ""
},
{
"docid": "5b0f64a6618cbabeec9c9437c234c14d",
"text": "The ankle-brachial index is valuable for screening for peripheral artery disease in patients at risk and for diagnosing the disease in patients who present with lower-extremity symptoms that suggest it. The ankle-brachial index also predicts the risk of cardiovascular events, cerebrovascular events, and even death from any cause. Few other tests provide as much diagnostic accuracy and prognostic information at such low cost and risk.",
"title": ""
},
{
"docid": "8db7b12cb22d60a698c2aaae31bfbe6a",
"text": "The present article describes the basic therapeutic techniques used in the cognitive-behavioral therapy (CBT) of generalized anxiety disorders and reviews the methodological characteristics and outcomes of 13 controlled clinical trials. The studies in general display rigorous methodology, and their outcomes are quite consistent. CBT has been shown to yield clinical improvements in both anxiety and depression that are superior to no treatment and nonspecific control conditions (and at times to either cognitive therapy alone or behavioral therapy alone) at both posttherapy and follow-up. CBT is also associated with low dropout rates, maintained long-term improvements, and the largest within-group and between-group effect sizes relative to all other comparison conditions.",
"title": ""
},
{
"docid": "48c78545d402b5eed80e705feb45f8f2",
"text": "With advances in data collection technologies, tensor data is assuming increasing prominence in many applications and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices, however structural information within the tensors will be lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We proposed a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in the vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes.",
"title": ""
},
{
"docid": "9c74cef6fde489ec69ea1d3e0f2f011f",
"text": "The author acknowledges the financial support for this research made available by the Lean Aerospace Initiative at MIT sponsored jointly by the US Air Force and a consortium of aerospace companies. All facts, statements, opinions, and conclusions expressed herein are solely those of the author and do not in any way reflect those of the Lean Aerospace Initiative, the US Air Force, the sponsoring companies and organizations (individually or as a group), or MIT. The latter are absolved from any remaining errors or shortcomings for which the author takes full responsibility. Introduction The essence of lean is very simple, but from a research and implementation point of view overwhelming. Lean is the search for perfection through the elimination of waste and the insertion of practices that contribute to reduction in cost and schedule while improving performance of products. This concept of lean has wide applicability to a large range of processes, people and organizations, from concept design to the factory floor, from the laborer to the upper management, from the customer to the developer. Progress has been made in implementing and raising the awareness of lean practices at the factory floor. However, the level of implementation and education in other areas, like product development, is very low. The Lean Aerospace Initiative (LAI) has been producing research in support of the military and industry since 1993 on the topic of lean and its benefits. Implementation of the research has been shown to have significant impact and interest. LAI is in a very unique situation at MIT to influence and educate world-class engineering students we have exposure to every day. This research will take advantage of this situation and produce a strategic framework for educating engineers on the front-end lean product development findings that have been produced through LAI. These include topics of understanding the customer and the product value, evaluating multidimensional risk, organizational impact on program performance, and many others. The research objectives are to: 1) synthesize the findings uncovered by LAI pertaining to non-manufacturing disciplines into a readily usable manner and 2) formulate a strategic approach for educating engineers on the tools and concepts that facilitate early problem synthesis, mission engineering, and front end product development. Overview There are six modules into which the LAI product development research has been organized. Module I is used to provide a fundamental framework of lean and its application to product development. Module II …",
"title": ""
},
{
"docid": "aff804f90fd1ffba5ee8c06e96ddd11b",
"text": "The area of machine learning has made considerable progress over the past decade, enabled by the widespread availability of large datasets, as well as by improved algorithms and models. Given the large computational demands of machine learning workloads, parallelism, implemented either through single-node concurrency or through multi-node distribution, has been a third key ingredient to advances in machine learning.\n The goal of this tutorial is to provide the audience with an overview of standard distribution techniques in machine learning, with an eye towards the intriguing trade-offs between synchronization and communication costs of distributed machine learning algorithms, on the one hand, and their convergence, on the other.The tutorial will focus on parallelization strategies for the fundamental stochastic gradient descent (SGD) algorithm, which is a key tool when training machine learning models, from classical instances such as linear regression, to state-of-the-art neural network architectures.\n The tutorial will describe the guarantees provided by this algorithm in the sequential case, and then move on to cover both shared-memory and message-passing parallelization strategies, together with the guarantees they provide, and corresponding trade-offs. The presentation will conclude with a broad overview of ongoing research in distributed and concurrent machine learning. The tutorial will assume no prior knowledge beyond familiarity with basic concepts in algebra and analysis.",
"title": ""
},
{
"docid": "fb2ce776c503168e82cc3ffac9c205dd",
"text": "Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data in linearly independent components (IC), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to the existing automated solutions the proposed method has two main advantages: First, it does not depend on direct recording of artifact signals, which then, e.g. have to be subtracted from the contaminated EEG. Second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.",
"title": ""
}
] |
scidocsrr
|
88cd948c16e34624f7f4d783d23247e7
|
Novel Field-Weakening Control Scheme for Permanent-Magnet Synchronous Machines Based on Voltage Angle Control
|
[
{
"docid": "1ddf046f3aeeb5fd8b999293ef67f202",
"text": "This paper proposes a novel flux-weakening control algorithm of an interior permanent-magnet synchronous motor for ldquoquasirdquo six-step operation. The proposed method is composed of feedforward and feedback paths. The feedforward path consists of 1-D lookup table, and the feedback is based on the difference between the reference voltage updated by current regulator and the output voltage limited by the overmodulation. Using this method, the flux-weakening and the antiwindup controls can be achieved simultaneously. In addition, the quasi-six-step operation can be obtained. That is, the available maximum output torque in the flux-weakening region is close to that in the six-step operation while the ability of the current control is maintained. The effectiveness of this method is proved by the experimental results.",
"title": ""
}
] |
[
{
"docid": "d4400c07fe072a841c8f8e910c0e17f0",
"text": "In the field of big data applications, lossless data compression and decompression can play an important role in improving the data center's efficiency in storage and distribution of data. To avoid becoming a performance bottleneck, they must be accelerated to have a capability of high speed data processing. As FPGAs begin to be deployed as compute accelerators in the data centers for its advantages of massive parallel customized processing capability, power efficiency and hardware reconfiguration. It is promising and interesting to use FPGAs for acceleration of data compression and decompression. The conventional development of FPGA accelerators using hardware description language costs much more design efforts than that of CPUs or GPUs. High level synthesis (HLS) can be used to greatly improve the design productivity. In this paper, we present a solution for accelerating lossless data decompression on FPGA by using HLS. With a pipelined data-flow structure, the proposed decompression accelerator can perform static Huffman decoding and LZ77 decompression at a very high throughput rate. According to the experimental results conducted on FPGA with the Calgary Corpus data benchmark, the average data throughput of the proposed decompression core achieves to 4.6 Gbps while running at 200 MHz.",
"title": ""
},
{
"docid": "e8294214d1a97fe552da2161d43e541f",
"text": "Czech (like other Slavic languages) is well known for its complex morphology. Text processing (e.g., automatic translation, syntactic analysis...) usually requires unambiguous selection of grammatical categories (so called morphological tag) for every word in a text. Morphological tagging consists of two parts – assigning all possible tags to every word in a text and selecting the right tag in a given context. Project Morče attempts to solve the second part, usually called disambiguation. Using a statistical method based on the combination of a Hidden Markov Model and the AveragedAveraged Perceptron algorithm, a number of experiments have been made exploring different parameter settings of the algorithm in order to obtain the best success rate possible. Final accuracy of Morče on data from PDT 2.0 was 95.431% (results of March 2006). So far, it is the best result for a standalone tagger.",
"title": ""
},
{
"docid": "9e090c99c86d99272cddcfc03ff807a8",
"text": "Finding structure in multiple streams of data is an important problem. Consider the streams of data flowing from a robot’s sensors, the monitors in an intensive care unit, or periodic measurements of various indicators of the health of the economy. There is clearly utility in determining how current and past values in those streams are related to future values. We formulate the problem of finding structure in multiple streams of categorical data as search over the space of dependenceies, unexpectedly frequent or Internal data in parentheses Word fragments No spaces Figure 2 A PostScript document and the text extracted from it a /show { print } def Findingstructureinmultiplestreamsofdataisanimportant problem.Considerthestreamsofdata§owingfromarobot' ssensors,themonitorsinanintensivecareunit,orperiodi cmeasurementsofvariousindicatorsofthehealthofthee conomy.Thereisclearlyutilityindetermininghowcurrenta ndpastvaluesinthosestreamsarerelatedtofuturevalues b /show { print ( ) print } def Finding structure in m ultiple streams of data is an imp ortan t problem. Consider the streams of data §o wing from a rob ot's sensors, the monitors in an in tensiv e care unit, or p erio dic measuremen ts of v arious indicators of the health of the econom y . There is clearly utilit y in determining ho w curren t and past v alues in those streams are related to future v alues",
"title": ""
},
{
"docid": "b15dcda2b395d02a2df18f6d8bfa3b19",
"text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.",
"title": ""
},
{
"docid": "f597c21404b091c0f4046b7c6429c98c",
"text": "We report on an architecture for the unsupervised discovery of talker-invariant subword embeddings. It is made out of two components: a dynamic-time warping based spoken term discovery (STD) system and a Siamese deep neural network (DNN). The STD system clusters word-sized repeated fragments in the acoustic streams while the DNN is trained to minimize the distance between time aligned frames of tokens of the same cluster, and maximize the distance between tokens of different clusters. We use additional side information regarding the average duration of phonemic units, as well as talker identity tags. For evaluation we use the datasets and metrics of the Zero Resource Speech Challenge. The model shows improvement over the baseline in subword unit modeling.",
"title": ""
},
{
"docid": "fc0327de912ec8ef6ca33467d34bcd9e",
"text": "In this paper, a progressive fingerprint image compression (for storage or transmission) using edge detection scheme is adopted. The image is decomposed into two components. The first component is the primary component, which contains the edges, the other component is the secondary component, which contains the textures and the features. In this paper, a general grasp for the image is reconstructed in the first stage at a bit rate of 0.0223 bpp for Sample (1) and 0.0245 bpp for Sample (2) image. The quality of the reconstructed images is competitive to the 0.75 bpp target bit set by FBI standard. Also, the compression ratio and the image quality of this algorithm is competitive to other existing methods given in the literature [6]-[9]. The compression ratio for our algorithm is about 45:1 (0.180 bpp).",
"title": ""
},
{
"docid": "adc310c02471d8be579b3bfd32c33225",
"text": "In this work, we put forward the notion of Worry-Free Encryption. This allows Alice to encrypt confidential information under Bob's public key and send it to him, without having to worry about whether Bob has the authority to actually access this information. This is done by encrypting the message under a hidden access policy that only allows Bob to decrypt if his credentials satisfy the policy. Our notion can be seen as a functional encryption scheme but in a public-key setting. As such, we are able to insist that even if the credential authority is corrupted, it should not be able to compromise the security of any honest user.\n We put forward the notion of Worry-Free Encryption and show how to achieve it for any polynomial-time computable policy, under only the assumption that IND-CPA public-key encryption schemes exist. Furthermore, we construct CCA-secure Worry-Free Encryption, efficiently in the random oracle model, and generally (but inefficiently) using simulation-sound non-interactive zero-knowledge proofs.",
"title": ""
},
{
"docid": "d6dfa1f279a5df160814e1d378162c02",
"text": "Understanding and forecasting mobile traffic of large scale cellular networks is extremely valuable for service providers to control and manage the explosive mobile data, such as network planning, load balancing, and data pricing mechanisms. This paper targets at extracting and modeling traffic patterns of 9,000 cellular towers deployed in a metropolitan city. To achieve this goal, we design, implement, and evaluate a time series analysis approach that is able to decompose large scale mobile traffic into regularity and randomness components. Then, we use time series prediction to forecast the traffic patterns based on the regularity components. Our study verifies the effectiveness of our utilized time series decomposition method, and shows the geographical distribution of the regularity and randomness component. Moreover, we reveal that high predictability of the regularity component can be achieved, and demonstrate that the prediction of randomness component of mobile traffic data is impossible.",
"title": ""
},
{
"docid": "8abb8d78c9a6ba0bd860053135ae11ba",
"text": "We review recent evidence on price rigidity from themacroeconomics literature and discuss how this evidence is used to inform macroeconomic modeling. Sluggish price adjustment is a leading explanation for the large effects of demand shocks on output and, in particular, the effects of monetary policy on output. A recent influx of data on individual prices has greatly deepenedmacroeconomists’ understanding of individual price dynamics. However, the analysis of these new data raises a host of new empirical issues that have not traditionally been confronted by parsimonious macroeconomic models of price setting. Simple statistics such as the frequency of price change may be misleading guides to the flexibility of the aggregate price level in a setting in which temporary sales, product churning, cross-sectional heterogeneity, and large idiosyncratic price movements play an important role. We discuss empirical evidence on these and other important features of micro price adjustment and ask how they affect the sluggishness of aggregate price adjustment and the economy’s response to demand shocks. 133 A nn u. R ev . E co n. 2 01 3. 5: 13 316 3. D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g by P ro f. J on S te in ss on o n 08 /0 5/ 13 . F or p er so na l u se o nl y.",
"title": ""
},
{
"docid": "5bee78694f3428d3882e27000921f501",
"text": "We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single or a series of homography transforms. The propagated models are subjected to MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation with challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.",
"title": ""
},
{
"docid": "b995fffdb04eae75b85ece3b5dd7724e",
"text": "It is necessary for potential consume to make decision based on online reviews. However, its usefulness brings forth a curse - deceptive opinion spam. The deceptive opinion spam mislead potential customers and organizations reshaping their businesses and prevent opinion-mining techniques from reaching accurate conclusions. Thus, the detection of fake reviews has become more and more fervent. In this work, we attempt to find out how to distinguish between fake reviews and non-fake reviews by using linguistic features in terms of Yelp Filter Dataset. To our surprise, the linguistic features performed well. Further, we proposed a method to extract features based on Latent Dirichlet Allocation. The result of experiment proved that the method is effective.",
"title": ""
},
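The passage above proposes LDA-based features for separating fake from genuine reviews. A minimal sketch of that idea using scikit-learn is given below; the vocabulary size, topic count, and the logistic-regression classifier are assumptions for illustration, not the configuration reported in the paper.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_lda_review_classifier(n_topics=50):
    """Topic-proportion features from LDA feeding a linear classifier."""
    return make_pipeline(
        CountVectorizer(max_features=5000, stop_words="english"),
        LatentDirichletAllocation(n_components=n_topics, random_state=0),
        LogisticRegression(max_iter=1000),
    )

# reviews: list of review strings; labels: 1 = filtered/fake, 0 = genuine.
# clf = build_lda_review_classifier()
# clf.fit(train_reviews, train_labels)
# print(clf.score(test_reviews, test_labels))
```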
{
"docid": "3a46f6ff14e4921fa9bcdfdc9064b754",
"text": "Deep learning on graph structures has shown exciting results in various applications. However, few attentions have been paid to the robustness of such models, in contrast to numerous research work for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool deep learning models by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. We further propose attack methods based on genetic algorithms and gradient descent in the scenario where additional prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers.",
"title": ""
},
{
"docid": "cc1959b1beeb8f5460c39c9d4f55d9e4",
"text": "The DBLP Computer Science Bibliography evolved from an early small experimental Web server to a popular service for the computer science community. Many design decisions and details of the public XML-records behind DBLP never were documented. This paper is a review of the evolution of DBLP. The main perspective is data modeling. In DBLP persons play a central role, our discussion of person names may be applicable to many other data bases. All DBLP data are available for your own experiments. You may either download the complete set, or use a simple XML-based API described in an online appendix.",
"title": ""
},
{
"docid": "bd96b290d83f10db3d70e912aa4bd177",
"text": "In deployment of smart grid it is imperative to adopt advanced and smart technologies in SCADA of distribution system for smart monitoring, automation and control of a power system. The present paper focuses the status of present SCADA of small distribution systems and proposes the use of smart meter for variety of tasks to be performed in smart distribution systems. Sample smart operations for monitoring and control task using latest communication technologies are demonstrated with simulation and hardware results. The proposed scheme can be extended and implemented effectively to gratify variety of errands as mandatory in smart grid.",
"title": ""
},
{
"docid": "8dcbe510328d97df25dab65ba5975bf4",
"text": "The information-based prediction models using machine learning techniques have gained massive popularity during the last few decades. Such models have been applied in a number of domains such as medical diagnosis, crime prediction, movies rating, etc. Similar is the trend in telecom industry where prediction models have been applied to predict the dissatisfied customers who are likely to change the service provider. Due to immense financial cost of customer churn in telecom, the companies from all over the world have analyzed various factors (such as call cost, call quality, customer service response time, etc.) using several learners such as decision trees, support vector machines, neural networks, probabilistic models such as Bayes, etc. This paper presents a detailed survey of models from 2000 to 2015 describing the datasets used in churn prediction, impacting features in those datasets and classifiers that are used to implement prediction model. A total of 48 studies related to churn prediction in telecom industry are discussed using 23 datasets (3 public and 20 private). Our survey aims to highlight the evolution of techniques from simple features/learners to more complex learners and feature engineering or sampling techniques. We also give an overview of the current challenges in churn prediction and suggest solutions to resolve them. This paper will allow researchers such as data analysts in general and telecom operators in particular to choose best suited techniques and features to prepare their churn prediction models.",
"title": ""
},
{
"docid": "f84011e3b4c8b1e80d4e79dee3ccad53",
"text": "What is the future of fashion? Tackling this question from a data-driven vision perspective, we propose to forecast visual style trends before they occur. We introduce the first approach to predict the future popularity of styles discovered from fashion images in an unsupervised manner. Using these styles as a basis, we train a forecasting model to represent their trends over time. The resulting model can hypothesize new mixtures of styles that will become popular in the future, discover style dynamics (trendy vs. classic), and name the key visual attributes that will dominate tomorrow’s fashion. We demonstrate our idea applied to three datasets encapsulating 80,000 fashion products sold across six years on Amazon. Results indicate that fashion forecasting benefits greatly from visual analysis, much more than textual or meta-data cues surrounding products.",
"title": ""
},
{
"docid": "b0ea0b7e3900b440cb4e1d5162c6830b",
"text": "Product Lifecycle Management (PLM) solutions have been serving as the basis for collaborative product definition, manufacturing, and service management in many industries. They capture and provide access to product and process information and preserve integrity of information throughout the lifecycle of a product. Efficient growth in the role of Building Information Modeling (BIM) can benefit vastly from unifying solutions to acquire, manage and make use of information and processes from various project and enterprise level systems, selectively adapting functionality from PLM systems. However, there are important differences between PLM’s target industries and the Architecture, Engineering, and Construction (AEC) industry characteristics that require modification and tailoring of some aspects of current PLM technology. In this study we examine the fundamental PLM functionalities that create synergy with the BIM-enabled AEC industry. We propose a conceptual model for the information flow and integration between BIM and PLM systems. Finally, we explore the differences between the AEC industry and traditional scope of service for PLM solutions.",
"title": ""
},
{
"docid": "ea9bafe86af4418fa51abe27a2c2180b",
"text": "In this work, we propose a novel phenomenological model of the EEG signal based on the dynamics of a coupled Duffing-van der Pol oscillator network. An optimization scheme is adopted to match data generated from the model with clinically obtained EEG data from subjects under resting eyes-open (EO) and eyes-closed (EC) conditions. It is shown that a coupled system of two Duffing-van der Pol oscillators with optimized parameters yields signals with characteristics that match those of the EEG in both the EO and EC cases. The results, which are reinforced using statistical analysis, show that the EEG recordings under EC and EO resting conditions are clearly distinct realizations of the same underlying model occurring due to parameter variations with qualitatively different nonlinear dynamic characteristics. In addition, the interplay between noise and nonlinearity is addressed and it is shown that, for appropriately chosen values of noise intensity in the model, very good agreement exists between the model output and the EEG in terms of the power spectrum as well as Shannon entropy. In summary, the results establish that an appropriately tuned stochastic coupled nonlinear oscillator network such as the Duffing-van der Pol system could provide a useful framework for modeling and analysis of the EEG signal. In turn, design of algorithms based on the framework has the potential to positively impact the development of novel diagnostic strategies for brain injuries and disorders. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
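The abstract above models EEG as the output of a coupled Duffing-van der Pol oscillator network with optimized parameters. The sketch below integrates a deterministic two-oscillator version with SciPy; the parameter values are placeholders rather than the optimized EO/EC fits, and the stochastic (noise) forcing discussed in the paper is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_dvdp(t, y, mu=1.0, alpha=1.0, beta=1.0, k=0.5):
    """Two diffusively coupled Duffing-van der Pol oscillators.
    State y = [x1, v1, x2, v2]; all parameter values are placeholders."""
    x1, v1, x2, v2 = y
    a1 = mu * (1 - x1**2) * v1 - alpha * x1 - beta * x1**3 + k * (x2 - x1)
    a2 = mu * (1 - x2**2) * v2 - alpha * x2 - beta * x2**3 + k * (x1 - x2)
    return [v1, a1, v2, a2]

sol = solve_ivp(coupled_dvdp, (0.0, 100.0), [0.1, 0.0, -0.1, 0.0],
                max_step=0.01)
synthetic_signal = sol.y[0]  # x1(t) stands in for the model "EEG" trace
```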
{
"docid": "986a0b910a4674b3c4bf92a668780dd6",
"text": "One of the most important attributes of the polymerase chain reaction (PCR) is its exquisite sensitivity. However, the high sensitivity of PCR also renders it prone to falsepositive results because of, for example, exogenous contamination. Good laboratory practice and specific anti-contamination strategies are essential to minimize the chance of contamination. Some of these strategies, for example, physical separation of the areas for the handling samples and PCR products, may need to be taken into consideration during the establishment of a laboratory. In this chapter, different strategies for the detection, avoidance, and elimination of PCR contamination will be discussed.",
"title": ""
}
] |
scidocsrr
|
79f3891de5cf9b7cd27787c450750234
|
Robust Subspace Segmentation by Simultaneously Learning Data Representations and Their Affinity Matrix
|
[
{
"docid": "50c3e7855f8a654571a62a094a86c4eb",
"text": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.",
"title": ""
},
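The passage above defines LRR as the lowest-rank representation of the data in terms of itself. For clean data the LRR paper shows this minimizer has a closed form given by the shape interaction matrix built from the right singular vectors of X; the sketch below uses that closed form (rather than the full convex solver needed for corrupted data) and feeds the resulting affinity to spectral clustering. The rank threshold is an assumed tolerance.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def lrr_affinity_clean(X, rank=None):
    """Closed-form LRR for clean data: with skinny SVD X = U S Vr^T
    (columns of X are samples), the minimizer of ||Z||_* s.t. X = XZ
    is the shape interaction matrix Z* = Vr Vr^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = rank if rank is not None else int(np.sum(s > 1e-8))
    Vr = Vt[:r].T
    Z = Vr @ Vr.T
    return np.abs(Z) + np.abs(Z).T  # symmetrized affinity matrix

def subspace_cluster(X, n_clusters):
    """Cluster columns of X into subspaces via the LRR-based affinity."""
    W = lrr_affinity_clean(X)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```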
{
"docid": "c91df82c01cbf7d1f2666c43e96a5787",
"text": "The past few years have witnessed an explosion in the availability of data from multiple sources and modalities. For example, millions of cameras have been installed in buildings, streets, airports and cities around the world. This has generated extraordinary advances on how to acquire, compress, store, transmit and process massive amounts of complex high-dimensional data. Many of these advances have relied on the observation that, even though these data sets are high-dimensional, their intrinsic dimension is often much smaller than the dimension of the ambient space. In computer vision, for example, the number of pixels in an image can be rather large, yet most computer vision models use only a few parameters to describe the appearance, geometry and dynamics of a scene. This has motivated the development of a number of techniques for finding a low-dimensional representation of a high-dimensional data set. Conventional techniques, such as Principal Component Analysis (PCA), assume that the data is drawn from a single low-dimensional subspace of a high-dimensional space. Such approaches have found widespread applications in many fields, e.g., pattern recognition, data compression, image processing, bioinformatics, etc. In practice, however, the data points could be drawn from multiple subspaces and the membership of the data points to the subspaces might be unknown. For instance, a video sequence could contain several moving objects and different subspaces might be needed to describe the motion of different objects in the scene. Therefore, there is a need to simultaneously cluster the data into multiple subspaces and find a low-dimensional subspace fitting each group of points. This problem, known as subspace clustering, has found numerous applications in computer vision (e.g., image segmentation [1], motion segmentation [2] and face clustering [3]), image processing (e.g., image representation and compression [4]) and systems theory (e.g., hybrid system identification [5]). A number of approaches to subspace clustering have been proposed in the past two decades. A review of methods from the data mining community can be found in [6]. This article will present methods from the machine learning and computer vision communities, including algebraic methods [7, 8, 9, 10], iterative methods [11, 12, 13, 14, 15], statistical methods [16, 17, 18, 19, 20], and spectral clustering-based methods [7, 21, 22, 23, 24, 25, 26, 27]. We review these methods, discuss their advantages and disadvantages, and evaluate their performance on the motion segmentation and face clustering problems. P",
"title": ""
},
{
"docid": "7e7b2e8fc47f53d7d7bde48c75b28596",
"text": "We propose in this paper a novel sparse subspace clustering method that regularizes sparse subspace representation by exploiting the structural sharing between tasks and data points via group sparse coding. We derive simple, provably convergent, and computationally efficient algorithms for solving the proposed group formulations. We demonstrate the advantage of the framework on three challenging benchmark datasets ranging from medical record data to image and text clustering and show that they consistently outperforms rival methods.",
"title": ""
}
] |
[
{
"docid": "8cd73397c9a79646ac1b2acac44dd8a7",
"text": "Liquid micro-jet array impingement cooling of a power conversion module with 12 power switching devices (six insulated gate bipolar transistors and six diodes) is investigated. The 1200-V/150-A module converts dc input power to variable frequency, variable voltage three-phase ac output to drive a 50HP three-phase induction motor. The silicon devices are attached to a packaging layer [direct bonded copper (DBC)], which in turn is soldered to a metal base plate. DI water micro-jet array impinges on the base plate of the module targeted at the footprint area of the devices. Although the high heat flux cooling capability of liquid impingement is a well-established finding, the impact of its practical implementation in power systems has never been addressed. This paper presents the first one-to-one comparison of liquid micro-jet array impingement cooling (JAIC) with the traditional methods, such as air-cooling over finned heat sink or liquid flow in multi-pass cold plate. Results show that compared to the conventional cooling methods, JAIC can significantly enhance the module output power. If the output power is maintained constant, the device temperature can be reduced drastically by JAIC. Furthermore, jet impingement provides uniform cooling for multiple devices placed over a large area, thereby reducing non-uniformity of temperature among the devices. The reduction in device temperature, both its absolute value and the non-uniformity, implies multi-fold increase in module reliability. The results thus illustrate the importance of efficient thermal management technique for compact and reliable power conversion application",
"title": ""
},
{
"docid": "53888fb785c159f1b0cabe5357231238",
"text": "In this paper, we propose a smart parking system detecting and finding the parked location of a consumer's vehicle. Using ultrasonic and magnetic sensor, the proposed system detects vehicles in indoor and outdoor parking fields, accurately. Wireless sensor motes support a vehicle location service in parking lots using BLE.",
"title": ""
},
{
"docid": "fcb69bd97835da9f244841d54996f070",
"text": "A conventional transverse slot substrate integrated waveguide (SIW) periodic leaky wave antenna (LWA) provides a fan beam, usually E-plane beam having narrow beam width and H-plane having wider beamwidth. The main beam direction changes with frequency sweep. In the applications requiring a pencil beam, an array of the antenna is generally used to decrease the H-plane beam width which requires long and tiring optimization steps. In this paper, it is shown that the H-plane beamwidth can be easily decreased by using two baffles with a conventional leaky wave antenna. A prototype periodic leaky wave antenna with baffles is designed and fabricated for X-band applications. The E- and H-plane 3 dB beam widths of the antenna at 10.5GHz are, respectively, 6° and 22°. Over the frequency range 8.2–14 GHz, the antenna scans from θ = −60° to θ = 15°, from backward to forward direction. The use of baffles also improves the gain of the antenna including broadside direction by approximately 4 dB.",
"title": ""
},
{
"docid": "4dbead8a5316bc51e357867db4731561",
"text": "Fingerprint systems have received a great deal of research and attracted many researchers’ effort since they provide a powerful tool for access control and security and for practical applications. A literature review of the techniques used to extract the features of fingerprint as well as recognition techniques is given in this paper. Some of the reviewed research articles have used traditional methods such as recognition techniques, whereas the other articles have used neural networks methods. In addition, fingerprint techniques of enhancement are introduced.",
"title": ""
},
{
"docid": "6a85677755a82b147cb0874ae8299458",
"text": "Data mining involves the process of recovering related, significant and credential information from a large collection of aggregated data. A major area of current research in data mining is the field of clinical investigations that involve disease diagnosis, prognosis and drug therapy. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. The machine learning techniques employed to train the proposed system are based on feature relevance analysis and classification algorithms. Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository is utilized by means of data mining techniques to completely train the system on 198 individual cases, each comprising of 33 predictor values. This paper highlights the performance of feature reduction and classification algorithms on the training dataset. We evaluate the number of attributes for split in the Random tree algorithm and the confidence level and minimum size of the leaves in the C4.5 algorithm to produce 100 percent classification accuracy. Our results demonstrate that Random Tree and Quinlan’s C4.5 classification algorithm produce 100 percent accuracy in the training and test phase of classification with proper evaluation of algorithmic parameters.",
"title": ""
},
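The abstract above trains decision trees on the 198-case, 33-feature WPBC data and tunes the leaf size and confidence parameters. A hedged scikit-learn approximation is sketched below; the file name, column positions, and entropy criterion (a stand-in for C4.5, which scikit-learn does not implement exactly) are assumptions, not the authors' setup.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical loading of the WPBC table (ID, outcome, then predictors);
# the path and column positions are assumptions for illustration.
data = pd.read_csv("wpbc.data", header=None, na_values="?").dropna()
X = data.iloc[:, 2:]   # predictor values
y = data.iloc[:, 1]    # outcome label (recurrent / non-recurrent)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Entropy splits approximate C4.5's information-gain criterion;
# min_samples_leaf plays the role of the minimum leaf size parameter.
tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=2,
                              random_state=0)
tree.fit(X_train, y_train)
print("test accuracy:", tree.score(X_test, y_test))
```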
{
"docid": "95ac40af0bc68a69a1f56fdb358c149e",
"text": "This paper presents an approach to the study of cognitive activities in collaborative software development. This approach has been developed by a multidisciplinary team made up of software engineers and cognitive psychologists. The basis of this approach is to improve our understanding of software development by observing professionals at work. The goal is to derive lines of conduct or good practices based on observations and analyses of the processes that are naturally used by software engineers. The strategy involved is derived from a standard approach in cognitive science. It is based on the videotaping of the activities of software engineers, transcription of the videos, coding of the transcription, defining categories from the coded episodes and defining cognitive behaviors or dialogs from the categories. This project presents two original contributions that make this approach generic in software engineering. The first contribution is the introduction of a formal hierarchical coding scheme, which will enable comparison of various types of observations. The second is the merging of psychological and statistical analysis approaches to build a cognitive model. The details of this new approach are illustrated with the initial data obtained from the analysis of technical review meetings.",
"title": ""
},
{
"docid": "3486d3493a0deef5c3c029d909e3cdfc",
"text": "To date, reinforcement learning has mostly been studied solving simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of different frameworks, a dynamic environment was used as a testbed. The environment is moderately complex and nondeterministic. This paper describes these frameworks and algorithms in detail and presents empirical evaluation of the frameworks.",
"title": ""
},
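The passage above compares AHC and Q-learning with extensions such as experience replay. The class below is a tabular illustration of Q-learning plus replay under assumed hyperparameters; the paper itself uses connectionist (neural) function approximation rather than a table.

```python
import random
from collections import defaultdict, deque

class ReplayQLearner:
    """Tabular Q-learning with a simple experience-replay buffer:
    an illustrative sketch of one of the surveyed frameworks."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1,
                 buffer_size=10000, replay_batch=32):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.buffer = deque(maxlen=buffer_size)
        self.replay_batch = replay_batch

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def _update(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * max(
            self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

    def step(self, s, a, r, s_next, done):
        self.buffer.append((s, a, r, s_next, done))
        self._update(s, a, r, s_next, done)
        # Replay a small batch of past transitions.
        for exp in random.sample(self.buffer,
                                 min(self.replay_batch, len(self.buffer))):
            self._update(*exp)
```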
{
"docid": "7678163641a37a02474bd42a48acec16",
"text": "Thiopurine S-methyltransferase (TPMT) is involved in the metabolism of thiopurine drugs. Patients that due to genetic variation lack this enzyme or have lower levels than normal, can be adversely affected if normal doses of thiopurines are prescribed. The evidence for measuring TPMT prior to starting patients on thiopurine drug therapy has been reviewed and the various approaches to establishing a service considered. Until recently clinical guidelines on the use of the TPMT varied by medical specialty. This has now changed, with clear guidance encouraging clinicians to use the TPMT test prior to starting any patient on thiopurine therapy. The TPMT test is the first pharmacogenomic test that has crossed from research to routine use. Several analytical approaches can be taken to assess TPMT status. The use of phenotyping supported with genotyping on selected samples has emerged as the analytical model that has enabled national referral services to be developed to a high level in the UK. The National Health Service now has access to cost-effective and timely TPMT assay services, with two laboratories undertaking the majority of the work at national level and with several local services developing. There appears to be adequate capacity and an appropriate internal market to ensure that TPMT assay services are commensurate with the clinical demand.",
"title": ""
},
{
"docid": "7fed1248efb156c8b2585147e2791ed7",
"text": "In [1], we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend [1] in three ways: 1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. 2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. 3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of [1]. Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel feature, it eliminates the need for the intermediate tracklet representation of [1]. We demonstrate the effectiveness of our overall approach on the MOT16 benchmark [2], achieving state-of-art performance.",
"title": ""
},
{
"docid": "d0d02fc3ca58d6dbeb6e3dc21a9136a8",
"text": "Breast cancer represents the second important cause of cancer deaths in women today and it is the most common type of cancer in women. Disease diagnosis is one of the applications where data mining tools are proving successful results. Data mining with decision trees is popular and effective data mining classification approach. Decision trees have the ability to generate understandable classification rules, which are very efficient tool for transfer knowledge to physicians and medical specialists. In fundamental truth, they provide trails to find rules that could be evaluated for separating the input samples into one of several groups without having to state the functional relationship directly. The objective of this paper is to examine the performance of recent invented decision tree modeling algorithms and compared with one that achieved by radial basis function kernel support vector machine (RBF-SVM) on the diagnosis of breast cancer using cytological proven tumor dataset. Four models have been evaluated in decision tree: Chi-squared Automatic Interaction Detection (CHAID), Classification and Regression tree (C&R), Quick Unbiased Efficient Statistical Tree (QUEST), and Ross Quinlan new decision tree model C5. 0. The objective is to classify a tumor as either benign or malignant based on cell descriptions compound by microscopic examination using decision tree models. The proposed algorithm imputes the missing values with C&R tree. Then, the performances of the five models are measured by three statistical measures; classification",
"title": ""
},
{
"docid": "7b098fec7e8099ffed7ee889d28f6ed6",
"text": "With the emergence of edge computing paradigm, many applications such as image recognition and augmented reality require to perform machine learning (ML) and artificial intelligence (AI) tasks on edge devices. Most AI and ML models are large and computational-heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced for deployment on edge devices, but they may loose their capability and not perform well. Recent works used knowledge transfer techniques to transfer information from a large network (termed teacher) to a small one (termed student) in order to improve the performance of the latter. This approach seems to be promising for learning on edge devices, but a thorough investigation on its effectiveness is lacking. This paper provides an extensive study on the performance (in both accuracy and convergence speed) of knowledge transfer, considering different student architectures and different techniques for transferring knowledge from teacher to student. The results show that the performance of KT does vary by architectures and transfer techniques. A good performance improvement is obtained by transferring knowledge from both the intermediate layers and last layer of the teacher to a shallower student. But other architectures and transfer techniques do not fare so well and some of them even lead to negative performance impact.",
"title": ""
},
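The abstract above studies transferring knowledge from a teacher network to a smaller student. The loss below is a sketch of the standard last-layer (soft-target) transfer technique in PyTorch; the temperature and mixing weight are arbitrary examples, and the intermediate-layer transfer variants the paper evaluates are not shown.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Soft-target knowledge transfer: soften both logit sets with a
    temperature and mix the KL term with cross-entropy on hard labels."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```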
{
"docid": "0c7eff3e7c961defce07b98914431414",
"text": "The navigational system of the mammalian cortex comprises a number of interacting brain regions. Grid cells in the medial entorhinal cortex and place cells in the hippocampus are thought to participate in the formation of a dynamic representation of the animal's current location, and these cells are presumably critical for storing the representation in memory. To traverse the environment, animals must be able to translate coordinate information from spatial maps in the entorhinal cortex and hippocampus into body-centered representations that can be used to direct locomotion. How this is done remains an enigma. We propose that the posterior parietal cortex is critical for this transformation.",
"title": ""
},
{
"docid": "449bd861c7f4bda09e60e66a1a9bbe00",
"text": "A novel molecular approach to the synthesis of polycrystalline Cu-doped ZnO rod-like nanostructures with variable concentrations of introduced copper ions in ZnO host matrix is presented. Spectroscopic (PLS, variable temperature XRD, XPS, ELNES, HERFD) and microscopic (HRTEM) analysis methods reveal the +II oxidation state of the lattice incorporated Cu ions. Photoluminescence spectra show a systematic narrowing (tuning) of the band gap depending on the amount of Cu(II) doping. The advantage of the template assembly of doped ZnO nanorods is that it offers general access to doped oxide structures under moderate thermal conditions. The doping content of the host structure can be individually tuned by the stoichiometric ratio of the molecular precursor complex of the host metal oxide and the molecular precursor complex of the dopant, Di-aquo-bis[2-(methoxyimino)-propanoato]zinc(II) 1 and -copper(II) 2. Moreover, these keto-dioximato complexes are accessible for a number of transition metal and lanthanide elements, thus allowing this synthetic approach to be expanded into a variety of doped 1D metal oxide structures.",
"title": ""
},
{
"docid": "dd86d2530dfa9a44b84d85b9db18e200",
"text": "In order to extract entities of a fine-grained category from semi-structured data in web pages, existing information extraction systems rely on seed examples or redundancy across multiple web pages. In this paper, we consider a new zero-shot learning task of extracting entities specified by a natural language query (in place of seeds) given only a single web page. Our approach defines a log-linear model over latent extraction predicates, which select lists of entities from the web page. The main challenge is to define features on widely varying candidate entity lists. We tackle this by abstracting list elements and using aggregate statistics to define features. Finally, we created a new dataset of diverse queries and web pages, and show that our system achieves significantly better accuracy than a natural baseline.",
"title": ""
},
{
"docid": "07b889a2b1a18bc1f91021f3b889474a",
"text": "In this study, we show a correlation between electrical properties (relative permittivity-εr and conductivity-σ) of blood plasma and plasma glucose concentration. In order to formulate that correlation, we performed electrical property measurements on blood samples collected from 10 adults between the ages of 18 and 40 at University of Alabama Birmingham (UAB) Children's hospital. The measurements are conducted between 500 MHz and 20 GHz band. Using the data obtained from measurements, we developed a single-pole Cole-Cole model for εr and σ as a function of plasma blood glucose concentration. To provide an application, we designed a microstrip patch antenna that can be used to predict the glucose concentration within a given plasma sample. Simulation results regarding antenna design and its performance are also presented.",
"title": ""
},
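The passage above fits a single-pole Cole-Cole model to measured plasma permittivity and conductivity. The function below evaluates that standard model with NumPy; the parameter values in the example sweep are placeholders rather than the glucose-dependent fits reported in the paper.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cole_cole(freq_hz, eps_inf, d_eps, tau, alpha, sigma_s):
    """Single-pole Cole-Cole model of complex relative permittivity.
    Returns (relative permittivity, conductivity in S/m)."""
    omega = 2 * np.pi * np.asarray(freq_hz)
    eps_c = (eps_inf
             + d_eps / (1 + (1j * omega * tau) ** (1 - alpha))
             + sigma_s / (1j * omega * EPS0))
    eps_r = eps_c.real
    sigma = -omega * EPS0 * eps_c.imag
    return eps_r, sigma

# Example: sweep 0.5-20 GHz with placeholder parameters.
f = np.linspace(0.5e9, 20e9, 200)
eps_r, sigma = cole_cole(f, eps_inf=4.0, d_eps=70.0, tau=8e-12,
                         alpha=0.1, sigma_s=1.4)
```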
{
"docid": "30ba7b3cf3ba8a7760703a90261d70eb",
"text": "Starch is a major storage product of many economically important crops such as wheat, rice, maize, tapioca, and potato. A large-scale starch processing industry has emerged in the last century. In the past decades, we have seen a shift from the acid hydrolysis of starch to the use of starch-converting enzymes in the production of maltodextrin, modified starches, or glucose and fructose syrups. Currently, these enzymes comprise about 30% of the world’s enzyme production. Besides the use in starch hydrolysis, starch-converting enzymes are also used in a number of other industrial applications, such as laundry and porcelain detergents or as anti-staling agents in baking. A number of these starch-converting enzymes belong to a single family: the -amylase family or family13 glycosyl hydrolases. This group of enzymes share a number of common characteristics such as a ( / )8 barrel structure, the hydrolysis or formation of glycosidic bonds in the conformation, and a number of conserved amino acid residues in the active site. As many as 21 different reaction and product specificities are found in this family. Currently, 25 three-dimensional (3D) structures of a few members of the -amylase family have been determined using protein crystallization and X-ray crystallography. These data in combination with site-directed mutagenesis studies have helped to better understand the interactions between the substrate or product molecule and the different amino acids found in and around the active site. This review illustrates the reaction and product diversity found within the -amylase family, the mechanistic principles deduced from structure–function relationship structures, and the use of the enzymes of this family in industrial applications. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "a357ce62099cd5b12c09c688c5b9736e",
"text": "Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them.",
"title": ""
},
{
"docid": "c7add5ca57003fc82c0a4c8be7e15373",
"text": "Along with advances in information technology, cybercrime techniques also increased. There are several forms of attacks on data and information, such as hackers, crackers, Trojans, etc. The Symantec Intelligence report edition on August 2012 indicated that the attacker selected the target of attacks. The type of data is valuable and confidential. The Hackers selected the target to attack or steal interest information the first and they did not just taking random from a large amount of data. This indication worried because hackers stealing the data more planned. Therefore, today many systems reinforced with various efforts to maintain data security and overcome these attacks. Necessary methods to secure electronic messages that do not fall on those who are not authorized. One alternative is steganography. Cryptography and Steganography are the two major techniques for secret communication. Cryptography converts information from its original form (plaintext) into unreadable form (cipher text); where as in steganography is the art of hiding messages within other data without changing the data to it attaches, so data before and after the process of hiding almost look like the same. There are many different techniques are available for cryptography and steganography. The cryptography suspicion against disguised message is easily recognizable, because of the message disguised by changing the original message becomes as if illegible. While further reduce suspicion steganography disguised as a message hidden in the file. The research designed the application of steganography using Least Significant Bit (LSB) in which the previous message is encrypted using the Advanced Encryption Standard algorithm (AES) and it can restore the previously hidden data. The messages in this form application and hidden text on media digital image so as not to arouse suspicion. The result of research shown the steganography is expected to hide the secret message, so the message is not easy to know other people who are not eligible.",
"title": ""
},
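The abstract above combines AES encryption with LSB embedding in a cover image. The sketch below shows the general idea, assuming the pycryptodome package for AES-EAX and NumPy for bit manipulation; it is not the authors' application, and the length-prefix framing is an illustrative choice.

```python
import numpy as np
from Crypto.Cipher import AES            # pycryptodome, assumed available
from Crypto.Random import get_random_bytes

def embed_lsb(cover, payload):
    """Write payload bytes (length-prefixed) into the least significant
    bits of a uint8 cover image array. Raises if the cover is too small."""
    data = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("cover image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

key = get_random_bytes(16)                       # AES-128 key
cipher = AES.new(key, AES.MODE_EAX)
ciphertext, tag = cipher.encrypt_and_digest(b"secret message")
payload = cipher.nonce + tag + ciphertext        # all needed for decryption

# cover = np.array(Image.open("cover.png").convert("L"))  # e.g. via Pillow
# stego = embed_lsb(cover, payload)
```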
{
"docid": "472519682e5b086732b31e558ec7934d",
"text": "As networks become ubiquitous in people's lives, users depend on networks a lot for sufficient communication and convenient information access. However, networks suffer from security issues. Network security becomes a challenging topic since numerous new network attacks have appeared increasingly sophisticated and caused vast loss to network resources. Game theoretic approaches have been introduced as a useful tool to handle those tricky network attacks. In this paper, we review the existing game-theory based solutions for network security problems, classifying their application scenarios under two categories, attack-defense analysis and security measurement. Moreover, we present a brief view of the game models in those solutions and summarize them into two categories, cooperative game models and non-cooperative game models with the latter category consisting of subcategories. In addition to the introduction to the state of the art, we discuss the limitations of those game theoretic approaches and propose future research directions.",
"title": ""
},
{
"docid": "39ccd0efd846c2314da557b73a326e85",
"text": "We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. Different verbs have different roles (e.g. attacking has weapon), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. Experiments with different graph connectivities show that our approach that propagates information between roles significantly outperforms existing work, as well as multiple baselines. We obtain roughly 3-5% improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and influence of different roles in the verbs.",
"title": ""
}
] |
scidocsrr
|
8f2a90bd27bbb1b93535540e34ae3d9f
|
Progressively Diffused Networks for Semantic Image Segmentation
|
[
{
"docid": "286e82a307597775bc21861da5c7d615",
"text": "Parsing human regions into semantic parts, e.g., body, head and arms etc., from a random natural image is challenging while fundamental for computer vision and widely applicable in industry. One major difficulty to handle such a problem is the high flexibility of scale and location of a human instance and its corresponding parts, making the parsing task either lack of boundary details or suffer from local confusions. To tackle such problems, in this work, we propose the “Auto-Zoom Net” (AZN) for human part parsing, which is a unified fully convolutional neural network structure that: (1) parses each human instance into detailed parts. (2) predicts the locations and scales of human instances and their corresponding parts. In our unified network, the two tasks are mutually beneficial. The score maps obtained for parsing help estimate the locations and scales for human instances and their parts. With the predicted locations and scales, our model “zooms” the region into a right scale to further refine the parsing. In practice, we perform the two tasks iteratively so that detailed human parts are gradually recovered. We conduct extensive experiments over the challenging PASCAL-Person-Part segmentation, and show our approach significantly outperforms the state-of-art parsing techniques especially for instances and parts at small scale. In addition, we perform experiments for horse and cow segmentation and also obtain results which are considerably better than state-of-the-art methods (by over 5%)., which is contribued by the proposed iterative zooming process.",
"title": ""
}
] |
[
{
"docid": "4becb2f976472e288bcb791f29612475",
"text": "In this paper we integrate at the tactical level two decision problems arising in container terminals: the berth allocation problem, which consists of assigning and scheduling incoming ships to berthing positions, and the quay crane assignment problem, which assigns to incoming ships a certain QC profile (i.e. number of quay cranes per working shift). We present two formulations: a mixed integer quadratic program and a linearization which reduces to a mixed integer linear program. The objective function aims, on the one hand, to maximize the total value of chosen QC profiles and, on the other hand, to minimize the housekeeping costs generated by transshipment flows between ships. To solve the problem we developed a heuristic algorithm which combines tabu search methods and mathematical programming techniques. Computational results on instances based on real data are presented and compared to those obtained through a commercial solver.",
"title": ""
},
{
"docid": "709efaca57b9eef28e9a58eb4c4c5ace",
"text": "BACKGROUND\nThe increasing use of zebrafish model has not been accompanied by the evolution of proper anaesthesia for this species in research. The most used anaesthetic in fishes, MS222, may induce aversion, reduction of heart rate, and consequently high mortality, especially during long exposures. Therefore, we aim to explore new anaesthetic protocols to be used in zebrafish by studying the quality of anaesthesia and recovery induced by different concentrations of propofol alone and in combination with different concentrations of lidocaine.\n\n\nMATERIAL AND METHODS\nIn experiment A, eighty-three AB zebrafish were randomly assigned to 7 different groups: control, 2.5 (2.5P), 5 (5P) or 7.5 μg/ml (7.5P) of propofol; and 2.5 μg/ml of propofol combined with 50, (P/50L), 100 (P/100L) or 150 μg/ml (P/150L) of lidocaine. Zebrafish were placed in an anaesthetic water bath and time to lose the equilibrium, reflex to touch, reflex to a tail pinch, and respiratory rate were measured. Time to gain equilibrium was also assessed in a clean tank. Five and 24 hours after anaesthesia recovery, zebrafish were evaluated concerning activity and reactivity. Afterwards, in a second phase of experiments (experiment B), the best protocol of the experiment A was compared with a new group of 8 fishes treated with 100 mg/L of MS222 (100M).\n\n\nRESULTS\nIn experiment A, only different concentrations of propofol/lidocaine combination induced full anaesthesia in all animals. Thus only these groups were compared with a standard dose of MS222 in experiment B. Propofol/lidocaine induced a quicker loss of equilibrium, and loss of response to light and painful stimuli compared with MS222. However zebrafish treated with MS222 recovered quickly than the ones treated with propofol/lidocaine.\n\n\nCONCLUSION\nIn conclusion, propofol/lidocaine combination and MS222 have advantages in different situations. MS222 is ideal for minor procedures when a quick recovery is important, while propofol/lidocaine is best to induce a quick and complete anaesthesia.",
"title": ""
},
{
"docid": "77dc5e6d8027443e074adfec1691eb92",
"text": "This survey provides a comprehensive overview of the landscape of crowdsourcing research, targeted at the machine learning community. We begin with an overview of the ways in which crowdsourcing can be used to advance machine learning research, focusing on four application areas: 1) data generation, 2) evaluation and debugging of models, 3) hybrid intelligence systems that leverage the complementary strengths of humans and machines to expand the capabilities of AI, and 4) crowdsourced behavioral experiments that improve our understanding of how humans interact with machine learning systems and technology more broadly. We next review the extensive literature on the behavior of crowdworkers themselves. This research, which explores the prevalence of dishonesty among crowdworkers, how workers respond to both monetary incentives and intrinsic forms of motivation, and how crowdworkers interact with each other, has immediate implications that we distill into best practices that researchers should follow when using crowdsourcing in their own research. We conclude with a discussion of additional tips and best practices that are crucial to the success of any project that uses crowdsourcing, but rarely mentioned in the literature.",
"title": ""
},
{
"docid": "0ff1ea411bcdd28b6c8bc773176f8e1c",
"text": "The paper presents a generalization of Haskell's IO monad suitable for synchronous concurrent programming. The new monad integrates the deterministic concurrency paradigm of synchronous programming with the powerful abstraction features of functional languages and with full support for imperative programming. For event-driven applications, it offers an alternative to the use of existing, thread-based concurrency extensions of functional languages. The concepts presented have been applied in practice in a framework for programming interactive graphics.",
"title": ""
},
{
"docid": "a1530b82b61fc6fc8eceb083fc394e9b",
"text": "The performance of any algorithm will largely depend on the setting of its algorithm-dependent parameters. The optimal setting should allow the algorithm to achieve the best performance for solving a range of optimization problems. However, such parameter tuning itself is a tough optimization problem. In this paper, we present a framework for self-tuning algorithms so that an algorithm to be tuned can be used to tune the algorithm itself. Using the firefly algorithm as an example, we show that this framework works well. It is also found that different parameters may have different sensitivities and thus require different degrees of tuning. Parameters with high sensitivities require fine-tuning to achieve optimality.",
"title": ""
},
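The passage above tunes an algorithm's parameters by running the algorithm on itself, using the firefly algorithm as the example. The function below is a bare-bones firefly minimizer under assumed parameter values; the self-tuning outer loop described in the paper is not reproduced here.

```python
import numpy as np

def firefly_minimize(f, dim, n=25, iters=200, beta0=1.0, gamma=1.0,
                     alpha=0.2, bounds=(-5.0, 5.0), seed=0):
    """Bare-bones firefly algorithm for minimizing f on a box;
    lower objective value is treated as higher brightness."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n, dim))
    light = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # move i toward brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    light[i] = f(X[i])
        alpha *= 0.97                            # gradually damp randomness
    best = np.argmin(light)
    return X[best], light[best]

# x_best, f_best = firefly_minimize(lambda x: np.sum(x**2), dim=4)
```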
{
"docid": "e9b8d419cf5863fcb417025d3081453f",
"text": "Recent trends in targeted cyber-attacks has increased the interest of research in the field of cyber securit y. Such attacks have massive disruptive effects on organizati ons, enterprises and governments. Cyber kill chain is a model to describe cyber-attacks so as to develop incident response a nd analysis capabilities. Cyber kill chain in simple terms is an attack chain, the path that an intruder takes to penetrate information systems over time to execute an attack on the target. This paper broadly categories the methodologies, techniques an d tools involved in cyber-attacks. This paper intends to help a cybe r security researcher to realize the options available to an a ttacker at every stage of a cyber-attack. Keywords—Reconnaissance, RAT, Exploit, Cyber Attack, Persistence, Command & Control",
"title": ""
},
{
"docid": "3f7c6490ccb6d95bd22644faef7f452f",
"text": "A blockchain is a distributed, decentralised database of records of digital events (transactions) that took place and were shared among the participating parties. Each transaction in the public ledger is verified by consensus of a majority of the participants in the system. Bitcoin may not be that important in the future, but blockchain technology's role in Financial and Non-financial world can't be undermined. In this paper, we provide a holistic view of how Blockchain technology works, its strength and weaknesses, and its role to change the way the business happens today and tomorrow.",
"title": ""
},
{
"docid": "80e4748abbb22d2bfefa5e5cbd78fb86",
"text": "A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file system clusters data that is sequentially accessed and provides tw o block sizes to allo w fast access to lar ge files while not wasting large amounts of space for small files. File access rates of up to ten times f aster than the traditional UNIX file system are e xperienced. Longneeded enhancements to the programmers’ interface are discussed. These include a mechanism to place advisory locks on files, extensions of the name space across file systems, the ability to use long file names, and provisions for administrati ve control of resource usage. Revised February 18, 1984 CR",
"title": ""
},
{
"docid": "33296736553ceaab2e113b62c05a803c",
"text": "In cases of child abuse, usually, the parents are initial suspects. A common explanation of the parents is that the injuries were caused by a sibling. Child-on-child violence is reported to be very rare in children less than 5 years of age, and thorough investigation by the police, child protective services, and medicolegal examinations are needed to proof or disproof the parents' statement. We report two cases of physical abuse of infants by small children.",
"title": ""
},
{
"docid": "eefd8df7bc302285caf663a083cf2afc",
"text": "This paper addresses the challenges brought about by the fact that the English language in modern Russia is mostly taught by non-native English speakers in the context of classic „teacher-textbook-student‟ paradigm. It also highlights the significance of shifting from this teacher-centered approach to EFL teaching and learning process, and presents the results of a teaching experiment aimed at creating a student-centered environment in a language classroom. We suggested that presenting the results of students‟ research projects in the form of Englishlanguage video films can be an effective means of expanding students‟ learning opportunities and eliminating some weaknesses of the artificial language learning environment. After five-year observations over more than 500 students learning EFL at the Petrozavodsk State University (PetrSU) and 20 non-native English-speaking teachers of the same University, we conducted a teaching experiment, which involved 22 students of the History Department and 28 students of the Law Department and resulted in creating two video films on professionally oriented topics chosen by the students and aligned with their fields of study. Our study revealed that students‟ filmmaking is an effective tool to support not only such traditional methods as computer-based learning, presentation, research, but also such innovative kinds of work as multimedia or animated presentations. Therefore, making videos in a foreign language proves to be a highly productive way of acquiring English language skills through the use of technology.",
"title": ""
},
{
"docid": "f3ed24816a14d2c9d96e4d74f9ca5b52",
"text": "Sentiment analysis (SA) using code-mixed data from social media has several applications in opinion mining ranging from customer satisfaction to social campaign analysis in multilingual societies. Advances in this area are impeded by the lack of a suitable annotated dataset. We introduce a Hindi-English (Hi-En) code-mixed dataset for sentiment analysis and perform empirical analysis comparing the suitability and performance of various state-of-the-art SA methods in social media. In this paper, we introduce learning sub-word level representations in LSTM (Subword-LSTM) architecture instead of character-level or word-level representations. This linguistic prior in our architecture enables us to learn the information about sentiment value of important morphemes. This also seems to work well in highly noisy text containing misspellings as shown in our experiments which is demonstrated in morpheme-level feature maps learned by our model. Also, we hypothesize that encoding this linguistic prior in the Subword-LSTM architecture leads to the superior performance. Our system attains accuracy 4-5% greater than traditional approaches on our dataset, and also outperforms the available system for sentiment analysis in Hi-En code-mixed text by 18%.",
"title": ""
},
{
"docid": "3939958f235df9dbf7733f946bfa5051",
"text": "This paper presents preliminary findings from our empirical study of the cognition employed by performers in improvisational theatre. Our study has been conducted in a laboratory setting with local improvisers. Participants performed predesigned improv \"games\", which were videotaped and shown to each individual participant for a retrospective protocol collection. The participants were then shown the video again as a group to elicit data on group dynamics, misunderstandings, etc. This paper presents our initial findings that we have built based on our initial analysis of the data and highlights details of interest.",
"title": ""
},
{
"docid": "6a3db2d0a4ee3972ff7a2181865ff618",
"text": "In the last 15 years, the FOUP/LPU (front opening unified pod (FOUP) and load-port unit (LPU)) module was adopted by major 300 mm wafer semiconductor fabs and proved to be able to create a very high particle free environment for wafer transfer. However, it is not able to provide a moisture, oxygen or airborne molecular contaminants (AMCs) free environment, as the moisture, oxygen or airborne molecular contaminants exhibit in the FOUP through filter, FOUP material, and/or the last processes (in-process). Currently, the technology roadmap of devices has already moved towards the era of sub-20nm, some even to 10nm, node. For those devices made in such a small scale patterns, they are generally very sensitive to moisture, oxygen and other AMCs in the air. An example is that after the processes of etching, the contaminant, as a form of AMC, may evaporate, deposit, and contaminate wafers in the later processes, such as CMP. The deposited AMC may, again, evaporate and deposit on the wafer of next process. Nitrogen gas purge for stationary door-closed FOUP, which is normally when FOUP is at a purge station or a FOUP stocker, has been adopted to minimize sensitive in-process wafers' exposure to those contaminants in many processes. However, gas purge performed when FOUP door is off i.e. FOUP is in open condition, (thereafter referred as “door off” condition) is still rare. Nevertheless, this approach is very urgent is for sub-20nm process. If oxygen is not of concern, Clean Dry Air (CDA) purge instead of nitrogen is an alternative gas. Note that nitrogen is much more expensive than CDA and with potential safety concern. In-processes, such as etching/Chemical Mechanical Polishing (CMP) require FOUP purge while the FOUP door is open to an Equipment Front End Module (EFEM) load-port. This door off condition comes with exceptional challenges as compared to stationary door-closed conditions. To overcome this critical challenge, a new FOUP/LPU purge system is proposed. The system includes two uniform purge diffusers plus top-down pure gas curtain created by a so called “flow field former” when FOUP is in door-off condition. Note that a conceptual patent about “flow field former” in this proposal has been applied (under reviewing). In implementation of this project, firstly, a prototype FOUP/LPU purge system will be built in an ISO class 1 (0.1. um) cleanroom. Various environment parameters in the FOUP including temperature, relative humidity, air velocity magnitude, and concentration of particle will be monitored. Visualization on flow pattern in the FOUP and in the vicinity of door edge will be carried out by green-light laser visualization system. Optimized size/dimensions and operation parameters of the flow filed former will be determined based on the overall testing results. The performance of the newly proposed system will be eventually verified in a production line of a prestigious semiconductor fab. The ultimate objective of this project is to prevent cross contamination and surface oxidation with a quickly control of moisture, oxygen and AMCs when FOUP is in door-off condition through an efficient and purge gas saving system.",
"title": ""
},
{
"docid": "e36659351fcd339533b73fd3dd77f261",
"text": "Past research provided abundant evidence that exposure to violent video games increases aggressive tendencies and decreases prosocial tendencies. In contrast, research on the effects of exposure to prosocial video games has been relatively sparse. The present research found support for the hypothesis that exposure to prosocial video games is positively related to prosocial affect and negatively related to antisocial affect. More specifically, two studies revealed that playing a prosocial (relative to a neutral) video game increased interpersonal empathy and decreased reported pleasure at another's misfortune (i.e., schadenfreude). These results lend further credence to the predictive validity of the General Learning Model (Buckley & Anderson, 2006) for the effects of media exposure on social tendencies.",
"title": ""
},
{
"docid": "8942b664429435fbd66c765215bec284",
"text": "In this paper, we present a technique for generating animation from a variety of user-defined constraints. We pose constraint-based motion synthesis as a maximum a posterior (MAP) problem and develop an optimization framework that generates natural motion satisfying user constraints. The system automatically learns a statistical dynamic model from motion capture data and then enforces it as a motion prior. This motion prior, together with user-defined constraints, comprises a trajectory optimization problem. Solving this problem in the low-dimensional space yields optimal natural motion that achieves the goals specified by the user. We demonstrate the effectiveness of this approach by generating whole-body and facial motion from a variety of spatial-temporal constraints.",
"title": ""
},
{
"docid": "16c5b9caf08d2c2eccb062fe94ecee8b",
"text": "Introduction: The sensory loss and alteration of the shape of the foot make the foot liable to trauma and pressure, and subsequently cause more callus formation, blisters, and ulcers. Foot ulcers usually are liable to secondary infection as cellulitis or osteomyelitis, and may result in amputations. Foot ulcers are a major problem and a major cause of handicaps in leprosy patients. The current study is to present our clinical experience and evaluate the use of sural flap with posterior tibial neurovascular decompression (PTND) in recurrent foot ulcers in leprosy patients. Patient and methods: A total number of 9 patients were suffering from chronic sequelae of leprosy as recurrent foot ulcers. All the patients were reconstructed with the reverse sural artery fasciocutaneous flap with posterior tibial neurovascular decompression from September 2012 to August 2015. Six patients were male and three were female with a mean age of 39.8 years (range, 30-50 years). All the soft tissue defects were in the weight-bearing area of the inside of the foot. The flap sizes ranged from 15/4 to 18/6 cm. Mean follow-up period was 21.2 months (range, 35-2 months). Results: All the flaps healed uneventfully. There was no major complication as total flap necrosis. Only minor complications occurred which were treated without surgical intervention except in two patients who developed superficial necrosis of the skin paddle. Surgical debridement was done one week later. The flap was completely viable after surgery, and the contour of the foot was restored. We found that an improvement of sensation occurred in those patients in whom the anesthesia started one year ago or less and no sensory recovery in patient in whom the anesthesia had lasted for more than two years. Conclusion: The reverse sural artery flap with posterior tibial neurovascular decompression provides a reliable method for recurrent foot soft tissue reconstruction in leprosy patients with encouraging function and aesthetic outcomes. It is a quick and easy procedure.",
"title": ""
},
{
"docid": "086886072f3ac6908bd47822ce7398d1",
"text": "This paper presents a methodology to accurately record human finger postures during grasping. The main contribution consists of a kinematic model of the human hand reconstructed via magnetic resonance imaging of one subject that (i) is fully parameterized and can be adapted to different subjects, and (ii) is amenable to in-vivo joint angle recordings via optical tracking of markers attached to the skin. The principal novelty here is the introduction of a soft-tissue artifact compensation mechanism that can be optimally calibrated in a systematic way. The high-quality data gathered are employed to study the properties of hand postural synergies in humans, for the sake of ongoing neuroscience investigations. These data are analyzed and some comparisons with similar studies are reported. After a meaningful mapping strategy has been devised, these data could be employed to define robotic hand postures suitable to attain effective grasps, or could be used as prior knowledge in lower-dimensional, real-time avatar hand animation.",
"title": ""
},
{
"docid": "461e5b86c94feaf0453b7b7490e9da81",
"text": "The HfO2/TiO2/HfO2 trilayer-structure resistive random access memory (RRAM) devices have been fabricated on Pt- and TiN-coated Si substrates with Pt top electrodes by atomic layer deposition (ALD). The effect of the bottom electrodes of Pt and TiN on the resistive switching properties of trilayer-structure units has been investigated. Both Pt/HfO2/TiO2/HfO2/Pt and Pt/HfO2/TiO2/HfO2/TiN exhibit typical bipolar resistive switching behavior. The dominant conduction mechanisms in low and high resistance states (LRS and HRS) of both memory cells are Ohmic behavior and space-charge-limited current, respectively. It is found that the bottom electrodes of Pt and TiN have great influence on the electroforming polarity preference, ratio of high and low resistance, and dispersion of the operating voltages of trilayer-structure memory cells. Compared to using symmetric Pt top/bottom electrodes, the RRAM cells using asymmetric Pt top/TiN bottom electrodes show smaller negative forming voltage of -3.7 V, relatively narrow distribution of the set/reset voltages and lower ratio of high and low resistances of 102. The electrode-dependent electroforming polarity can be interpreted by considering electrodes' chemical activity with oxygen, the related reactions at anode, and the nonuniform distribution of oxygen vacancy concentration in trilayer-structure of HfO2/TiO2/HfO2 on Pt- and TiN-coated Si. Moreover, for Pt/HfO2/TiO2/HfO2/TiN devices, the TiN electrode as oxygen reservoir plays an important role in reducing forming voltage and improving uniformity of resistive switching parameters.",
"title": ""
},
{
"docid": "2802db74e062103d45143e8e9ad71890",
"text": "Maritime traffic monitoring is an important aspect of safety and security, particularly in close to port operations. While there is a large amount of data with variable quality, decision makers need reliable information about possible situations or threats. To address this requirement, we propose extraction of normal ship trajectory patterns that builds clusters using, besides ship tracing data, the publicly available International Maritime Organization (IMO) rules. The main result of clustering is a set of generated lanes that can be mapped to those defined in the IMO directives. Since the model also takes non-spatial attributes (speed and direction) into account, the results allow decision makers to detect abnormal patterns - vessels that do not obey the normal lanes or sail with higher or lower speeds.",
"title": ""
},
{
"docid": "f12931426173073fcbde9a2fe101dfcb",
"text": "In this letter, a novel compact electromagnetic band-gap (EBG) structure constructed by etching a complementary split ring resonator (CSRR) on the patch of a conventional mushroom-type EBG (CMT-EBG) is proposed. The first bandgap is defined in all directions in the surface structure. Compared to the CMT-EBG structure, the CSRR-based EBG presents a 28% size reduction in the start frequency of the first bandgap. However, asymmetrical frames of the CSRR-based EBG result in different properties at X and Y-directions. Another two tunable bandgaps in Y-direction are observed. Thus, the proposed EBG can be used for multi-band applications, such as dual/triple-band antennas. The EBGs have been constructed and measured.",
"title": ""
}
] |
scidocsrr
|
cf07b56f9251eb0204994fdd05a91fc1
|
Learning Common and Feature-Specific Patterns: A Novel Multiple-Sparse-Representation-Based Tracker
|
[
{
"docid": "3473e863d335725776281fe2082b756f",
"text": "Visual tracking using multiple features has been proved as a robust approach because features could complement each other. Since different types of variations such as illumination, occlusion, and pose may occur in a video sequence, especially long sequence videos, how to properly select and fuse appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method dynamically removes unreliable features to be fused for tracking by using the advantages of sparse representation. In order to capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework, which is able to perform feature fusion on various kernel spaces. As a result, robust tracking performance is obtained. Both the qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion based-trackers.",
"title": ""
}
] |
[
{
"docid": "61185af23da5d0138eef58ab62cd0e72",
"text": "BACKGROUND\nEarlobe tears and disfigurement often result from prolonged pierced earring use and trauma. They are a common cosmetic complaint for which surgical reconstruction has often been advocated.\n\n\nMATERIALS AND METHODS\nA series of 10 patients with earlobe tears or disfigurement treated using straight-line closure, carbon dioxide (CO2 ) laser ablation, or both are described. A succinct literature review of torn earlobe repair is provided.\n\n\nRESULTS\nSuccessful repair with excellent cosmesis of torn and disfigured earlobes was obtained after straight-line surgical closure, CO2 laser ablation, or both.\n\n\nCONCLUSION\nA minimally invasive earlobe repair technique that involves concomitant surgical closure and CO2 laser skin vaporization produces excellent cosmetic results for torn or disfigured earlobes.",
"title": ""
},
{
"docid": "dfca3dda01cbc79624c65c00384f9a03",
"text": "Research in program comprehension has evolved considerably over the past decades. However, only little is known about how developers practice program comprehension in their daily work. This article reports on qualitative and quantitative research to comprehend the strategies, tools, and knowledge used for program comprehension. We observed 28 professional developers, focusing on their comprehension behavior, strategies followed, and tools used. In an online survey with 1,477 respondents, we analyzed the importance of certain types of knowledge for comprehension and where developers typically access and share this knowledge.\n We found that developers follow pragmatic comprehension strategies depending on context. They try to avoid comprehension whenever possible and often put themselves in the role of users by inspecting graphical interfaces. Participants confirmed that standards, experience, and personal communication facilitate comprehension. The team size, its distribution, and open-source experience influence their knowledge sharing and access behavior. While face-to-face communication is preferred for accessing knowledge, knowledge is frequently shared in informal comments.\n Our results reveal a gap between research and practice, as we did not observe any use of comprehension tools and developers seem to be unaware of them. Overall, our findings call for reconsidering the research agendas towards context-aware tool support.",
"title": ""
},
{
"docid": "76dd20f0464ff42badc5fd4381eed256",
"text": "C therapy (CBT) approaches are rooted in the fundamental principle that an individual’s cognitions play a significant and primary role in the development and maintenance of emotional and behavioral responses to life situations. In CBT models, cognitive processes, in the form of meanings, judgments, appraisals, and assumptions associated with specific life events, are the primary determinants of one’s feelings and actions in response to life events and thus either facilitate or hinder the process of adaptation. CBT includes a range of approaches that have been shown to be efficacious in treating posttraumatic stress disorder (PTSD). In this chapter, we present an overview of leading cognitive-behavioral approaches used in the treatment of PTSD. The treatment approaches discussed here include cognitive therapy/reframing, exposure therapies (prolonged exposure [PE] and virtual reality exposure [VRE]), stress inoculation training (SIT), eye movement desensitization and reprocessing (EMDR), and Briere’s selftrauma model (1992, 1996, 2002). In our discussion of each of these approaches, we include a description of the key assumptions that frame the particular approach and the main strategies associated with the treatment. In the final section of this chapter, we review the growing body of research that has evaluated the effectiveness of cognitive-behavioral treatments for PTSD.",
"title": ""
},
{
"docid": "cd81a67321e796a44f78e80479d35096",
"text": "Nature-inspired intelligent swarm technologies deals with complex problems that might be impossible to solve using traditional technologies and approaches. Swarm intelligence techniques (note the difference from intelligent swarms) are population-based stochastic methods used in combinatorial optimization problems in which the collective behavior of relatively simple individuals arises from their local interactions with their environment to produce functional global patterns. Swarm intelligence represents a meta heuristic approach to solving a variety of problems",
"title": ""
},
{
"docid": "7591f47d69c91c4da90fc04949ec21c7",
"text": "This project uses a non-invasive method for measuring the blood glucose concentration levels. By implementing two infrared light with different wavelength; 940nm and 950nm based on the use of light emitting diodes and measure transmittance through solution of distilled water and d-glucose of concentration from 0mg/dL to 200mg/dL by using a 1000nm photodiode. It is observed that the output voltage from the photodiode increased proportionally to the increased of concentration levels. The relation observed was linear. Nine subjects with the same age but different body weight have been used to observe the glucose level during fasting and non-fasting. During fasting the voltage is about 0.13096V to 0.236V and during non-fasting the voltage range is about 0.12V to 0.256V. This method of measuring blood glucose level may become a preferably choice for diabetics because of the non-invasive and may extend to the general public. For having a large majority people able to monitor their blood glucose levels, it may prevent hypoglycemia, hyperglycemia and perhaps the onset of diabetes.",
"title": ""
},
{
"docid": "79729b8f7532617015cbbdc15a876a5c",
"text": "We introduce recurrent neural networkbased Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of highorder sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.",
"title": ""
},
{
"docid": "e945b0e23ad090cd76b920e073d26116",
"text": "Despite the success of proxy caching in the Web, proxy servers have not been used effectively for caching of Internet multimedia streams such as audio and video. Explosive growth in demand for web-based streaming applications justifies the need for caching popular streams at a proxy server close to the interested clients. Because of the need for congestion control in the Internet, multimedia streams should be quality adaptive. This implies that on a cache-hit, a proxy must replay a variable-quality cached stream whose quality is determined by the bandwidth of the first session. This paper addresses the implications of congestion control and quality adaptation on proxy caching mechanisms. We present a fine-grain replacement algorithm for layered-encoded multimedia streams at Internet proxy servers, and describe a pre-fetching scheme to smooth out the variations in quality of a cached stream during subsequent playbacks. This enables the proxy to perform quality adaptation more effectively and maximizes the delivered quality. We also extend the semantics of popularity and introduce the idea of weighted hit to capture both the level of interest and the usefulness of a layer for a cached stream. Finally, we present our replacement algorithm and show that its interaction with prefetching results in the state of the cache converging to the optimal state such that the quality of a cached stream is proportional to its popularity, and the variations in quality of a cached stream are inversely proportional to its popularity. This implies that after serving several requests for a stream, the proxy can effectively hide low bandwidth paths to the original server from interested clients.",
"title": ""
},
{
"docid": "a25f169d851ff02380d139148f7429f6",
"text": "The refinement of checksums is an essential grand challenge. Given the current status of lossless information, theorists clearly desire the refinement of the locationidentity split, which embodies the essential principles of operating systems. Our focus in this paper is not on whether IPv4 can be made relational, constant-time, and decentralized, but rather on proposing new linear-time symmetries (YEW).",
"title": ""
},
{
"docid": "a2d699f3c600743c732b26071639038a",
"text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.",
"title": ""
},
{
"docid": "9bc681a751d8fe9e2c93204ea06786b8",
"text": "In this paper, a complimentary split ring resonator (CSRR) enhanced wideband log-periodic antenna with coupled microstrip line feeding is presented. Here in this work, coupled line feeding to the patches is proposed to avoid individual microstrip feed matching complexities. Three CSRR elements were etched in the ground plane. Individual patches were designed according to the conventional log-periodic design rules. FR4 dielectric substrate is used to design a five-element log-periodic patch with CSRR printed on the ground plane. The result shows a wide operating band ranging from 4.5 GHz to 9 GHz. Surface current distribution of the antenna shows a strong resonance of CSRR's placed in the ground plane. The design approach of the antenna is reported and performance of the proposed antenna has been evaluated through three dimensional electromagnetic simulation validating performance enhancement of the antenna due to presence of CSRRs. Antennas designed in this work may be used in satellite and indoor wireless communication.",
"title": ""
},
{
"docid": "c57d4b7ea0e5f7126329626408f1da2d",
"text": "Educational Data Mining (EDM) is an interdisciplinary ingenuous research area that handles the development of methods to explore data arising in a scholastic fields. Computational approaches used by EDM is to examine scholastic data in order to study educational questions. As a result, it provides intrinsic knowledge of teaching and learning process for effective education planning. This paper conducts a comprehensive study on the recent and relevant studies put through in this field to date. The study focuses on methods of analysing educational data to develop models for improving academic performances and improving institutional effectiveness. This paper accumulates and relegates literature, identifies consequential work and mediates it to computing educators and professional bodies. We identify research that gives well-fortified advice to amend edifying and invigorate the more impuissant segment students in the institution. The results of these studies give insight into techniques for ameliorating pedagogical process, presaging student performance, compare the precision of data mining algorithms, and demonstrate the maturity of open source implements.",
"title": ""
},
{
"docid": "59ea7aef21c4d0d997c9821f61441084",
"text": "Conventional vaccine strategies have been highly efficacious for several decades in reducing mortality and morbidity due to infectious diseases. The bane of conventional vaccines, such as those that include whole organisms or large proteins, appear to be the inclusion of unnecessary antigenic load that, not only contributes little to the protective immune response, but complicates the situation by inducing allergenic and/or reactogenic responses. Peptide vaccines are an attractive alternative strategy that relies on usage of short peptide fragments to engineer the induction of highly targeted immune responses, consequently avoiding allergenic and/or reactogenic sequences. Conversely, peptide vaccines used in isolation are often weakly immunogenic and require particulate carriers for delivery and adjuvanting. In this article, we discuss the specific advantages and considerations in targeted induction of immune responses by peptide vaccines and progresses in the development of such vaccines against various diseases. Additionally, we also discuss the development of particulate carrier strategies and the inherent challenges with regard to safety when combining such technologies with peptide vaccines.",
"title": ""
},
{
"docid": "652536bf512c975b7cb61e60a3246829",
"text": "OBJECTIVE\nInterventions to prevent type 2 diabetes should be directed toward individuals at increased risk for the disease. To identify such individuals without laboratory tests, we developed the Diabetes Risk Score.\n\n\nRESEARCH DESIGN AND METHODS\nA random population sample of 35- to 64-year-old men and women with no antidiabetic drug treatment at baseline were followed for 10 years. New cases of drug-treated type 2 diabetes were ascertained from the National Drug Registry. Multivariate logistic regression model coefficients were used to assign each variable category a score. The Diabetes Risk Score was composed as the sum of these individual scores. The validity of the score was tested in an independent population survey performed in 1992 with prospective follow-up for 5 years.\n\n\nRESULTS\nAge, BMI, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity, and daily consumption of fruits, berries, or vegetables were selected as categorical variables. Complete baseline risk data were found in 4435 subjects with 182 incident cases of diabetes. The Diabetes Risk Score value varied from 0 to 20. To predict drug-treated diabetes, the score value >or=9 had sensitivity of 0.78 and 0.81, specificity of 0.77 and 0.76, and positive predictive value of 0.13 and 0.05 in the 1987 and 1992 cohorts, respectively.\n\n\nCONCLUSIONS\nThe Diabetes Risk Score is a simple, fast, inexpensive, noninvasive, and reliable tool to identify individuals at high risk for type 2 diabetes.",
"title": ""
},
{
"docid": "dca156a404916f2ab274406ad565e391",
"text": "Liang Zhou, member IEEE and YiFeng Wu, member IEEE Transphorm, Inc. 75 Castilian Dr., Goleta, CA, 93117 USA [email protected] Abstract: This paper presents a true bridgeless totem-pole Power-Factor-Correction (PFC) circuit using GaN HEMT. Enabled by a diode-free GaN power HEMT bridge with low reverse-recovery charge, very-high-efficiency single-phase AC-DC conversion is realized using a totem-pole topology without the limit of forward voltage drop from a fast diode. When implemented with a pair of sync-rec MOSFETs for line rectification, 99% efficiency is achieved at 230V ac input and 400 dc output in continuous-current mode.",
"title": ""
},
{
"docid": "7fafa786fd387007479a737950b03004",
"text": "A longstanding goal of behavior-based robotics is to solve high-level navigation tasks using end to end navigation behaviors that directly map sensors to actions. Navigation behaviors, such as reaching a goal or following a path without collisions, can be learned from exploration and interaction with the environment, but are constrained by the type and quality of a robot’s sensors, dynamics, and actuators. Traditional motion planning handles varied robot geometry and dynamics, but typically assumes high-quality observations. Modern vision-based navigation typically considers imperfect or partial observations, but simplifies the robot action space. With both approaches, the transition from simulation to reality can be difficult. Here, we learn two end to end navigation behaviors that avoid moving obstacles: point to point and path following. These policies receive noisy lidar observations and output robot linear and angular velocities. We train these policies in small, static environments with Shaped-DDPG, an adaptation of the Deep Deterministic Policy Gradient (DDPG) reinforcement learning method which optimizes reward and network architecture. Over 500 meters of on-robot experiments show , these policies generalize to new environments and moving obstacles, are robust to sensor, actuator, and localization noise, and can serve as robust building blocks for larger navigation tasks. The path following and point and point policies are 83% and 56% more successful than the baseline, respectively.",
"title": ""
},
{
"docid": "a87e49bd4a49f35099171b89d278c4d9",
"text": "Due to its versatility, copositive optimization receives increasing interest in the Operational Research community, and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes both in the continuous and discrete world, as well as both deterministic and stochastic models are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.",
"title": ""
},
{
"docid": "cf3923db7a4880b586e869be16739c8f",
"text": "Deep learning algorithms excel at extracting patterns from raw data, and with large datasets, they have been very successful in computer vision and natural language applications. However, in other domains, large datasets on which to learn representations from may not exist. In this work, we develop a novel multimodal CNN-MLP neural network architecture that utilizes both domain-specific feature engineering as well as learned representations from raw data. We illustrate the effectiveness of such network designs in the chemical sciences, for predicting biodegradability. DeepBioD, a multimodal CNN-MLP network is more accurate than either standalone network designs, and achieves an error classification rate of 0.125 that is 27% lower than the current state-of-theart. Thus, our work indicates that combining traditional feature engineering with representation learning can be effective, particularly in situations where labeled data is limited.",
"title": ""
},
{
"docid": "21db70be88df052de82990109941e49a",
"text": "We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments. Evidence for the room types is generated using state-of-the-art deep-learning techniques for scene classification and object detection based on automatically generated virtual RGB views, as well as from a geometric analysis of the map's 3D structure. The evidence is merged in a conditional random field, using statistics mined from different datasets of indoor environments. We evaluate our approach qualitatively and quantitatively and compare it to related methods.",
"title": ""
},
{
"docid": "378f0e528dddcb0319d0015ebc5f8ccb",
"text": "Specific and non specific cholinesterase activities were demonstrated in the ABRM of Mytilus edulis L. and Mytilus galloprovincialis L. by means of different techniques. The results were found identical for both species: neuromuscular junctions “en grappe”-type scarcely distributed within the ABRM, contain AChE. According to the histochemical inhibition tests, (a) the eserine inhibits AChE activity of the ABRM with a level of 5·10−5 M or higher, (b) the ChE non specific activities are inhibited by iso-OMPA level between 5·10−5 to 10−4 M. The histo- and cytochemical observations were completed by showing the existence of neuromuscular junctions containing small clear vesicles: they probably are the morphological support for ACh presence. Moreover, specific and non specific ChE activities were localized in the glio-interstitial cells. AChE precipitates were developped along the ABRM sarcolemma, some muscle mitochondria and in the intercellular spaces remain enigmatic.",
"title": ""
},
{
"docid": "d3f69b8a375e12e916e54727cb9b3b4b",
"text": "OBJECTIVES\nTo determine the concurrent validity of standard clinical outcome measures compared to laboratory outcome measure while performing the weight-bearing lunge test (WBLT).\n\n\nDESIGN\nCross-sectional study.\n\n\nMETHODS\nFifty participants performed the WBLT to determine dorsiflexion ROM using four different measurement techniques: dorsiflexion angle with digital inclinometer at 15cm distal to the tibial tuberosity (°), dorsiflexion angle with inclinometer at tibial tuberosity (°), maximum lunge distance (cm), and dorsiflexion angle using a 2D motion capture system (°). Outcome measures were recorded concurrently during each trial. To establish concurrent validity, Pearson product-moment correlation coefficients (r) were conducted, comparing each dependent variable to the 2D motion capture analysis (identified as the reference standard). A higher correlation indicates strong concurrent validity.\n\n\nRESULTS\nThere was a high correlation between each measurement technique and the reference standard. Specifically the correlation between the inclinometer placement at 15cm below the tibial tuberosity (44.9°±5.5°) and the motion capture angle (27.0°±6.0°) was r=0.76 (p=0.001), between the inclinometer placement at the tibial tuberosity angle (39.0°±4.6°) and the motion capture angle was r=0.71 (p=0.001), and between the distance from the wall clinical measure (10.3±3.0cm) to the motion capture angle was r=0.74 (p=0.001).\n\n\nCONCLUSIONS\nThis study determined that the clinical measures used during the WBLT have a high correlation with the reference standard for assessing dorsiflexion range of motion. Therefore, obtaining maximum lunge distance and inclinometer angles are both valid assessments during the weight-bearing lunge test.",
"title": ""
}
] |
scidocsrr
|
83d9978b4d2832fa04ebebc465b5821d
|
Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems
|
[
{
"docid": "e425bba0f3ab24c226ab8881f3fe0780",
"text": "We present a new method for solving total variation (TV) minimization problems in image restoration. The main idea is to remove some of the singularity caused by the nondifferentiability of the quantity |∇u| in the definition of the TV-norm before we apply a linearization technique such as Newton’s method. This is accomplished by introducing an additional variable for the flux quantity appearing in the gradient of the objective function, which can be interpreted as the normal vector to the level sets of the image u. Our method can be viewed as a primal-dual method as proposed by Conn and Overton [A Primal-Dual Interior Point Method for Minimizing a Sum of Euclidean Norms, preprint, 1994] and Andersen [Ph.D. thesis, Odense University, Denmark, 1995] for the minimization of a sum of Euclidean norms. In addition to possessing local quadratic convergence, experimental results show that the new method seems to be globally convergent.",
"title": ""
},
{
"docid": "776e04fa00628e249900b02f1edf9432",
"text": "We propose an algorithm for minimizing the total variation of an image, and provide a proof of convergence. We show applications to image denoising, zooming, and the computation of the mean curvature motion of interfaces.",
"title": ""
},
{
"docid": "e2a9bb49fd88071631986874ea197bc1",
"text": "We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.",
"title": ""
}
] |
[
{
"docid": "79e5953be9730eae76985c240ad56b87",
"text": "In research of sentiment analysis, polarity terms are important elements to help classify the polarity of text to be positive or negative. This paper presents SenseTag which is a sentiment tagging tool for constructing Thai sentiment lexicon on web-based platform. It consists of three main components: data collection, data annotation and administrative operation. The first component is to collect data from social media such as Twitter and www.pantip.com, then preprocess them. The second component is designed to make it easier for linguist to annotate data. The third component allows administrator to configure and manage tag category, domain corpus and user account. We discuss and report the result of annotation in four topics (e.g., mobile phone, automobile, stock market, and general).",
"title": ""
},
{
"docid": "cf8bfa9d33bea4ba7db1ca0202773f93",
"text": "Primary Cutaneous Peripheral T-Cell Lymphoma NOS (PTL-NOS) is a rare, progressive, fatal dermatologic disease that presents with features similar to many common benign plaque-like skin conditions, making recognition of its distinguishing features critical for early diagnosis and treatment (Bolognia et al., 2008). A 78-year-old woman presented to ambulatory care with a single 5 cm nodule on her shoulder that had developed rapidly over 1-2 weeks. Examination was suspicious for malignancy and a biopsy was performed. Biopsy results demonstrated CD4 positivity, consistent with Mycosis Fungoides with coexpression of CD5, CD47, and CD7. Within three months her cancer had progressed into diffuse lesions spanning her entire body. As rapid progression is usually uncharacteristic of Mycosis Fungoides, her diagnosis was amended to PTL-NOS. Cutaneous T-Cell Lymphoma (CTCL) should be suspected in patients with patches, plaques, erythroderma, or papules that persist or multiply despite conservative treatment. Singular biopsies are often nondiagnostic, requiring a high degree of suspicion if there is deviation from the anticipated clinical course. Multiple biopsies are often necessary to make the diagnosis. Physicians caring for patients with rapidly progressive, nonspecific dermatoses with features described above should keep more uncommon forms of CTCL in mind and refer for early biopsy.",
"title": ""
},
{
"docid": "77d11e0b66f3543fadf91d0de4c928c9",
"text": "In the United States, the number of people over 65 will double between ow and 2030 to 69.4 million. Providing care for this increasing population becomes increasingly difficult as the cognitive and physical health of elders deteriorates. This survey article describes ome of the factors that contribute to the institutionalization of elders, and then presents some of the work done towards providing technological support for this vulnerable community.",
"title": ""
},
{
"docid": "bf33724b6be926dd7c46c929b635d31d",
"text": "Biogenic amines are compounds commonly present in living organisms in which they are responsible for many essential functions. They can be naturally present in many foods such as fruits and vegetables, meat, fish, chocolate and milk, but they can also be produced in high amounts by microorganisms through the activity of amino acid decarboxylases. Excessive consumption of these amines can be of health concern because their not equilibrate assumption in human organism can generate different degrees of diseases determined by their action on nervous, gastric and intestinal systems and blood pressure. High microbial counts, which characterise fermented foods, often unavoidably lead to considerable accumulation of biogenic amines, especially tyramine, 2-phenylethylamine, tryptamine, cadaverine, putrescine and histamine. However, great fluctuations of amine content are reported in the same type of product. These differences depend on many variables: the quali-quantitative composition of microbial microflora, the chemico-physical variables, the hygienic procedure adopted during production, and the availability of precursors. Dry fermented sausages are worldwide diffused fermented meat products that can be a source of biogenic amines. Even in the absence of specific rules and regulations regarding the presence of these compounds in sausages and other fermented products, an increasing attention is given to biogenic amines, especially in relation to the higher number of consumers with enhanced sensitivity to biogenic amines determined by the inhibition of the action of amino oxidases, the enzymes involved in the detoxification of these substances. The aim of this paper is to give an overview on the presence of these compounds in dry fermented sausages and to discuss the most important factors influencing their accumulation. These include process and implicit factors as well as the role of starter and nonstarter microflora growing in the different steps of sausage production. Moreover, the role of microorganisms with amino oxidase activity as starter cultures to control or reduce the accumulation of biogenic amines during ripening and storage of sausages is discussed.",
"title": ""
},
{
"docid": "27e314cff197a974e58ea3e68c0b0f8f",
"text": "In this paper, we present a gated convolutional neural network and a temporal attention-based localization method for audio classification, which won the 1st place in the large-scale weakly supervised sound event detection task of Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 challenge. The audio clips in this task, which are extracted from YouTube videos, are manually labelled with one or more audio tags, but without time stamps of the audio events, hence referred to as weakly labelled data. Two subtasks are defined in this challenge including audio tagging and sound event detection using this weakly labelled data. We propose a convolutional recurrent neural network (CRNN) with learnable gated linear units (GLUs) non-linearity applied on the log Mel spectrogram. In addition, we propose a temporal attention method along the frames to predict the locations of each audio event in a chunk from the weakly labelled data. The performances of our systems were ranked the 1st and the 2nd as a team in these two sub-tasks of DCASE 2017 challenge with F value 55.6% and Equal error 0.73, respectively.",
"title": ""
},
{
"docid": "cc88ad638cf94569b9ce609000c37c27",
"text": "We herein propose a new type of efficient neutral photoacid generator. A photoinduced 6π-electrocyclization reaction of photochromic triangle terarylenes triggers subsequent release of a Brønsted acid, which took place from the photocyclized form. A H-atom and its conjugate base were introduced at both sides of a 6π-system to form the self-contained photoacid generator. UV irradiation to the 6π-system produces a cyclohexa-1,3-diene part with a H-atom and a conjugate base on the sp(3) C-atoms at 5- and 6-positions, respectively, which spontaneously release an acid molecule quantitatively forming a polyaromatic compound. A net quantum yield of photoacid generation as high as 0.52 under ambient conditions and a photoinitiated cationic polymerization of an epoxy monomer are demonstrated.",
"title": ""
},
{
"docid": "e1f531740891d47387a2fc2ef4f71c46",
"text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.",
"title": ""
},
{
"docid": "4b40fcd6df5403738cabb5f243588d31",
"text": "We purpose a hybrid approach for classification of brain tissues in magnetic resonance images (MRI) based on genetic algorithm (GA) and support vector machine (SVM). A wavelet based texture feature set is derived. The optimal texture features are extracted from normal and tumor regions by using spatial gray level dependence method (SGLDM). These features are given as input to the SVM classifier. The choice of features, which constitute a big problem in classification techniques, is solved by using GA. These optimal features are used to classify the brain tissues into normal, benign or malignant tumor. The performance of the algorithm is evaluated on a series of brain tumor images.",
"title": ""
},
{
"docid": "b3fce50260d7f77e8ca294db9c6666f6",
"text": "Nanotechnology is enabling the development of devices in a scale ranging from one to a few hundred nanometers. Coordination and information sharing among these nano-devices will lead towards the development of future nanonetworks, boosting the range of applications of nanotechnology in the biomédical, environmental and military fields. Despite the major progress in nano-device design and fabrication, it is still not clear how these atomically precise machines will communicate. Recently, the advancements in graphene-based electronics have opened the door to electromagnetic communications in the nano-scale. In this paper, a new quantum mechanical framework is used to analyze the properties of Carbon Nanotubes (CNTs) as nano-dipole antennas. For this, first the transmission line properties of CNTs are obtained using the tight-binding model as functions of the CNT length, diameter, and edge geometry. Then, relevant antenna parameters such as the fundamental resonant frequency and the input impedance are calculated and compared to those of a nano-patch antenna based on a Graphene Nanoribbon (GNR) with similar dimensions. The results show that for a maximum antenna size in the order of several hundred nanometers (the expected maximum size for a nano-device), both a nano-dipole and a nano-patch antenna will be able to radiate electromagnetic waves in the terahertz band (0.1–10.0 THz).",
"title": ""
},
{
"docid": "d0778852e57dddf8a454dd609908ff87",
"text": "Abstract: Trivariate barycentric coordinates can be used both to express a point inside a tetrahedron as a convex combination of the four vertices and to linearly interpolate data given at the vertices. In this paper we generalize these coordinates to convex polyhedra and the kernels of star-shaped polyhedra. These coordinates generalize in a natural way a recently constructed set of coordinates for planar polygons, called mean value coordinates.",
"title": ""
},
{
"docid": "474986186c068f8872f763288b0cabd7",
"text": "Mobile ad hoc network researchers face the challenge of achieving full functionality with good performance while linking the new technology to the rest of the Internet. A strict layered design is not flexible enough to cope with the dynamics of manet environments, however, and will prevent performance optimizations. The MobileMan cross-layer architecture offers an alternative to the pure layered approach that promotes stricter local interaction among protocols in a manet node.",
"title": ""
},
{
"docid": "cc93fe4b851e3d7f3dcdcd2a54af6660",
"text": "Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. A common tactic in such cases is to position visually, and we present a visual odometry implementation with the unusual reliance on optical mouse sensors to report vehicle velocity. Using multiple kilometers of data from a lunar rover prototype, we demonstrate that, in conjunction with a moderate-grade inertial measurement unit, such a sensor can provide an integrated pose stream that is at times more accurate than that achievable by wheel odometry and visibly more desirable for perception purposes than that provided by a high-end GPS-INS system. A discussion of the sensor’s limitations and several drift mitigating strategies attempted are presented.",
"title": ""
},
{
"docid": "c8c0c4cdd7d9d218ce7a57275d00abda",
"text": "Thanatophoric dysplasia (TD) is a lethal form of skeletal dysplasia with short-limb dwarfism. Two types distinguished with their radiological characteristics have been defined clinically. The femur is curved in type 1, while it is straight in type 2. TD is known to be due to a mutation in the fibroblast growth factor receptor 3 (FGFR3) gene. We report a male patient who showed clinical findings congruent with TD type 2 and a new mutation in the FGFR3 gene, a finding which has not been reported previously.",
"title": ""
},
{
"docid": "f9e273248ed6e73766f1fc5ba1ecdfda",
"text": "Rapid, vertically climbing cockroaches produced climbing dynamics similar to geckos, despite differences in attachment mechanism, ;foot or toe' morphology and leg number. Given the common pattern in such diverse species, we propose the first template for the dynamics of rapid, legged climbing analogous to the spring-loaded, inverted pendulum used to characterize level running in a diversity of pedestrians. We measured single leg wall reaction forces and center of mass dynamics in death-head cockroaches Blaberus discoidalis, as they ascended a three-axis force plate oriented vertically and coated with glass beads to aid attachment. Cockroaches used an alternating tripod gait during climbs at 19.5+/-4.2 cm s(-1), approximately 5 body lengths s(-1). Single-leg force patterns differed significantly from level running. During vertical climbing, all legs generated forces to pull the animal up the plate. Front and middle legs pulled laterally toward the midline. Front legs pulled the head toward the wall, while hind legs pushed the abdomen away. These single-leg force patterns summed to generate dynamics of the whole animal in the frontal plane such that the center of mass cyclically accelerated up the wall in synchrony with cyclical side-to-side motion that resulted from alternating net lateral pulling forces. The general force patterns used by cockroaches and geckos have provided biological inspiration for the design of a climbing robot named RiSE (Robots in Scansorial Environments).",
"title": ""
},
{
"docid": "dd0319de90cd0e58a9298a62c2178b25",
"text": "The extraction of blood vessels from retinal images is an important and challenging task in medical analysis and diagnosis. This paper presents a novel hybrid automatic approach for the extraction of retinal image vessels. The method consists in the application of mathematical morphology and a fuzzy clustering algorithm followed by a purification procedure. In mathematical morphology, the retinal image is smoothed and strengthened so that the blood vessels are enhanced and the background information is suppressed. The fuzzy clustering algorithm is then employed to the previous enhanced image for segmentation. After the fuzzy segmentation, a purification procedure is used to reduce the weak edges and noise, and the final results of the blood vessels are consequently achieved. The performance of the proposed method is compared with some existing segmentation methods and hand-labeled segmentations. The approach has been tested on a series of retinal images, and experimental results show that our technique is promising and effective.",
"title": ""
},
{
"docid": "e1426862bc5b1feb4960cbf8a617f95a",
"text": "Quality data recorded in varied realistic environments is vital for effective human face related research. Currently available datasets for human facial expression analysis have been generated in highly controlled lab environments. We present a new static facial expression database Static Facial Expressions in the Wild (SFEW) extracted from a temporal facial expressions database Acted Facial Expressions in the Wild (AFEW) [9], which we have extracted from movies. In the past, many robust methods have been reported in the literature. However, these methods have been experimented on different databases or using different protocols within the same databases. The lack of a standard protocol makes it difficult to compare systems and acts as a hindrance in the progress of the field. Therefore, we propose a person independent training and testing protocol for expression recognition as part of the BEFIT workshop. Further, we compare our dataset with the JAFFE and Multi-PIE datasets and provide baseline results.",
"title": ""
},
{
"docid": "d719fb1fe0faf76c14d24f7587c5345f",
"text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model— selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47 (appeared inIEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 20, No. 11, pp. 1186-1205, November 1998) Shape Evolution with Structural and Topological Changes using Blending Douglas DeCarlo and Dimitris Metaxas †",
"title": ""
},
{
"docid": "37e31e8e6173b77624494a235744c4a5",
"text": "The extraction of multi-attribute objects from the deep web is the bridge between the unstructured web and structured data. Existing approaches either induce wrappers from a set of human-annotated pages or leverage repeated structures on the page without supervision. What the former lack in automation, the latter lack in accuracy. Thus accurate, automatic multi-attribute object extraction has remained an open challenge. AMBER overcomes both limitations through mutual supervision between the repeated structure and automatically produced annotations. Previous approaches based on automatic annotations have suffered from low quality due to the inherent noise in the annotations and have attempted to compensate by exploring multiple candidate wrappers. In contrast, AMBER compensates for this noise by integrating repeated structure analysis with annotation-based induction: The repeated structure limits the search space for wrapper induction, and conversely, annotations allow the repeated structure analysis to distinguish noise from relevant data. Both, low recall and low precision in the annotations are mitigated to achieve almost human quality (> 98%) multiattribute object extraction. To achieve this accuracy, AMBER needs to be trained once for an entire domain. AMBER bootstraps its training from a small, possibly noisy set of attribute instances and a few unannotated sites of the domain.",
"title": ""
},
{
"docid": "042be132d3e4e99fa5fb3c2efda99d93",
"text": "In this research relevant areas that are important in Information System Security have been reviewed based on the health care industry of Malaysia. Some concepts such as definition of Information System Security, System Security Goals, System Security Threats and human error have been studied. The Human factors that are effective on Information System Security have been highlighted and also some relevant models have been introduced. Reviewing the pervious factors helped to find out the Health Information System factors. Finally, the effective human factors on Health Information System have been identified and the structure of Healthcare industry has been studied. Moreover, these factors are categorized in three new groups: Organizational Factors, Motivational Factors and Learning. This information will help to design a framework in Health Information System. 1.Introduction With less concern for people and organizational issues, a major part of information systems security strategies are technical in nature. As a consequence, since most information systems security strategies are of importance as they concentrate on technical oriented solutions, for instance checklists, risk analysis and assessment techniques, there is a necessity to investigate other ways of managing information systems security as they tend to disregard the social factors of risks and the informal structures of organizations. This investigation concentrates chiefly on human and organizational factors within the computer and information security system. The impact on security can be drastic if human and organizational factors influence their employment and use, irrespective of the power of technical controls (Bishop, 2002). In this aspect, the juncture for computer and information security vulnerabilities may be set by vulnerable computer and information security protection (e.g., weak passwords or poor usability) and malicious intentions may appear. The results of blemished organizational policies and individual practices whose origins are deeply rooted within early design presumptions or managerial choices causes susceptibilities (Besnard and Arief, 2004). Health Information System (HIS) has been implemented in Malaysia since late 1990s. HIS is an integration of several hospitals' information system to manage administration works, patients and clinical records. Because it is easy to access HIS data through the internet so its vulnerability to misuses, data lost and attacks will increase. Health data is very sensitive, therefore they require high protection and information security must be carefully watched as it plays an important role to protect the data from being stolen or harmed. Despite the vast research in information security, the human factor has been neglected …",
"title": ""
},
{
"docid": "02bd18358ac5cb5539a99d4c2babd2ea",
"text": "This tutorial provides an overview of the key research results in the area of entity resolution that are relevant to addressing the new challenges in entity resolution posed by the Web of data, in which real world entities are described by interlinked data rather than documents. Since such descriptions are usually partial, overlapping and sometimes evolving, entity resolution emerges as a central problem both to increase dataset linking but also to search the Web of data for entities and their relations.",
"title": ""
}
] |
scidocsrr
|
de2da68766bcd45e7b462a2ecdbf07d9
|
The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance
|
[
{
"docid": "6c2afcf5d7db0f5d6baa9d435c203f8a",
"text": "An attempt to extend current thinking on postpurchase response to include attribute satisfaction and dissatisfaction as separate determinants not fully reflected in either cognitive (i.e.. expectancy disconfirmation) or affective paradigms is presented. In separate studies of automobile satisfaction and satisfaction with course instruction, respondents provided the nature of emotional experience, disconfirmation perceptions, and separate attribute satisfaction and dissatisfaction judgments. Analysis confirmed the disconfirmation effect and tbe effects of separate dimensions of positive and negative affect and also suggested a multidimensional structure to the affect dimensions. Additionally, attribute satisfaction and dissatisfaction were significantly related to positive and negative affect, respectively, and to overall satisfaction. It is suggested that all dimensions tested are needed for a full accounting of postpurchase responses in usage.",
"title": ""
}
] |
[
{
"docid": "c3b707fff5a77427ea50ca5354a1ebe3",
"text": "Research conducted primarily during the 1970s and 1980s supported the assertion that carefully constructed text illustrations generally enhance learners’ performance on a variety of text-dependent cognitive outcomes. Research conducted throughout the 1990s still strongly supports that assertion. The more recent research has extended pictures-in-text conclusions to alternative media and technological formats and has begun to explore more systematically the “whys,” “whens,” and “for whoms” of picture facilitation, in addition to the “whethers” and “how muchs.” Consideration is given here to both more and less conventional types of textbook illustration, with several “tenets for teachers” provided in relation to each type.",
"title": ""
},
{
"docid": "66ab42e668afaf95c39b378518e60198",
"text": "OBJECTIVE\nTo present a guideline-derived mnemonic that provides a systematic monitoring process to increase pharmacists' confidence in total parenteral nutrition (TPN) monitoring and improve safety and efficacy of TPN use.\n\n\nDATA SOURCES\nThe American Society for Parenteral and Enteral Nutrition (ASPEN) guidelines were reviewed. Additional resources included a literature search of PubMed (1980 to May 2016) using the search terms: total parenteral nutrition, mnemonic, indications, allergy, macronutrients, micronutrients, fluid, comorbidities, labs, peripheral line, and central line. Articles (English-language only) were evaluated for content, and additional references were identified from a review of literature citations.\n\n\nSTUDY SELECTION AND DATA EXTRACTION\nAll English-language observational studies, review articles, meta-analyses, guidelines, and randomized trials assessing monitoring parameters of TPN were evaluated.\n\n\nDATA SYNTHESIS\nThe ASPEN guidelines were referenced to develop key components of the mnemonic. Review articles, observational trials, meta-analyses, and randomized trials were reviewed in cases where guidelines did not adequately address these components.\n\n\nCONCLUSIONS\nA guideline-derived mnemonic was developed to systematically and safely manage TPN therapy. The mnemonic combines 7 essential components of TPN use and monitoring: Indications, Allergies, Macro/Micro nutrients, Fluid, Underlying comorbidities, Labs, and Line type.",
"title": ""
},
{
"docid": "1c0b590a687f628cb52d34a37a337576",
"text": "Hexagonal torus networks are special family of Eisenstein-Jacobi (EJ) networks which have gained popularity as good candidates network On-Chip (NoC) for interconnecting Multiprocessor System-on-Chips (MPSoCs). They showed better topological properties compared to the 2D torus networks with the same number of nodes. All-to-all broadcast is a collective communication algorithm used frequently in some parallel applications. Recently, an off-chip all-to-all broadcast algorithm has been proposed for hexagonal torus networks assuming half-duplex links and all-ports communication. The proposed all-to-all broadcast algorithm does not achieve the minimum transmission time and requires 24 kextra buffers, where kis the network diameter. We first extend this work by proposing an efficient all-to-all broadcast on hexagonal torus networks under full-duplex links and all-ports communications assumptions which achieves the minimum transmission delay but requires 36 k extra buffers per router. In a second stage, we develop a new all-to-all broadcast more suitable for hexagonal torus network on-chip that achieves optimal transmission delay time without requiring any extra buffers per router. By reducing the amount of buffer space, the new all-to-all broadcast reduces the routers cost which is an important issue in NoCs architectures.",
"title": ""
},
{
"docid": "f5817d371dd3e8bd93d99a41210aed48",
"text": "Early works on human action recognition focused on tracking and classifying articulated body motions. Such methods required accurate localisation of body parts, which is a difficult task, particularly under realistic imaging conditions. As such, recent trends have shifted towards the use of more abstract, low-level appearance features such as spatio-temporal interest points. Motivated by the recent progress in pose estimation, we feel that pose-based action recognition systems warrant a second look. In this paper, we address the question of whether pose estimation is useful for action recognition or if it is better to train a classifier only on low-level appearance features drawn from video data. We compare pose-based, appearance-based and combined pose and appearance features for action recognition in a home-monitoring scenario. Our experiments show that posebased features outperform low-level appearance features, even when heavily corrupted by noise, suggesting that pose estimation is beneficial for the action recognition task.",
"title": ""
},
{
"docid": "36da2b6102762c80b3ae8068d764e220",
"text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional British Journal of Educational Technology (2009) doi:10.1111/j.1467-8535.2009.01007.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. 
We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach. Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors.
Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move",
"title": ""
},
{
"docid": "af6b0d1f5f3938c0912dccbe43a4a88b",
"text": "The mean body size of limnetic cladocerans decreases from cold temperate to tropical regions, in both the northern and the southern hemisphere. This size shift has been attributed to both direct (e.g. physiological) or indirect (especially increased predation) impacts. To provide further information on the role of predation, we compiled results from several studies of subtropical Uruguayan lakes using three different approaches: (i) field observations from two lakes with contrasting fish abundance, Lakes Rivera and Rodó, (ii) fish exclusion experiments conducted in in-lake mesocosms in three lakes, and (iii) analyses of the Daphnia egg bank in the surface sediment of eighteen lakes. When fish predation pressure was low due to fish kills in Lake Rivera, large-bodied Daphnia appeared. In contrast, small-sized cladocerans were abundant in Lake Rodó, which exhibited a typical high abundance of fish. Likewise, relatively large cladocerans (e.g. Daphnia and Simocephalus) appeared in fishless mesocosms after only 2 weeks, most likely hatched from resting egg banks stored in the surface sediment, but their abundance declined again after fish stocking. Moreover, field studies showed that 9 out of 18 Uruguayan shallow lakes had resting eggs of Daphnia in their surface sediment despite that this genus was only recorded in three of the lakes in summer water samples, indicating that Daphnia might be able to build up populations at low risk of predation. Our results show that medium and large-sized zooplankton can occur in subtropical lakes when fish predation is removed. The evidence provided here collectively confirms the hypothesis that predation, rather than high-temperature induced physiological constraints, is the key factor determining the dominance of small-sized zooplankton in warm lakes.",
"title": ""
},
{
"docid": "a83b417c2be604427eacf33b1db91468",
"text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.",
"title": ""
},
{
"docid": "f76088febc06463f01e98561d89d06cd",
"text": "We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene’s artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display, and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.",
"title": ""
},
{
"docid": "52f95d1c0e198c64455269fd09108703",
"text": "Dynamic control theory has long been used in solving optimal asset allocation problems, and a number of trading decision systems based on reinforcement learning methods have been applied in asset allocation and portfolio rebalancing. In this paper, we extend the existing work in recurrent reinforcement learning (RRL) and build an optimal variable weight portfolio allocation under a coherent downside risk measure, the expected maximum drawdown, E(MDD). In particular, we propose a recurrent reinforcement learning method, with a coherent risk adjusted performance objective function, the Calmar ratio, to obtain both buy and sell signals and asset allocation weights. Using a portfolio consisting of the most frequently traded exchange-traded funds, we show that the expected maximum drawdown risk based objective function yields superior return performance compared to previously proposed RRL objective functions (i.e. the Sharpe ratio and the Sterling ratio), and that variable weight RRL long/short portfolios outperform equal weight RRL long/short portfolios under different transaction cost scenarios. We further propose an adaptive E(MDD) risk based RRL portfolio rebalancing decision system with a transaction cost and market condition stop-loss retraining mechanism, and we show that the ∗Corresponding author: Steve Y. Yang, Postal address: School of Business, Stevens Institute of Technology, 1 Castle Point on Hudson, Hoboken, NJ 07030 USA. Tel.: +1 201 216 3394 Fax: +1 201 216 5385 Email addresses: [email protected] (Saud Almahdi), [email protected] (Steve Y. Yang) Preprint submitted to Expert Systems with Applications June 15, 2017",
"title": ""
},
{
"docid": "04f4d5acba9e7dc932b1730d274037f9",
"text": "Neuroaesthetics is gaining momentum. At this early juncture, it is worth taking stock of where the field is and what lies ahead. Here, I review writings that fall under the rubric of neuroaesthetics. These writings include discussions of the parallel organizational principles of the brain and the intent and practices of artists, the description of informative anecdotes, and the emergence of experimental neuroaesthetics. I then suggest a few areas within neuroaesthetics that might be pursued profitably. Finally, I raise some challenges for the field. These challenges are not unique to neuroaesthetics. As neuroaesthetics comes of age, it might take advantage of the lessons learned from more mature domains of inquiry within cognitive neuroscience.",
"title": ""
},
{
"docid": "3ec63f1c1f74c5d11eaa9d360ceaac55",
"text": "High-level shape understanding and technique evaluation on large repositories of 3D shapes often benefit from additional information known about the shapes. One example of such information is the semantic segmentation of a shape into functional or meaningful parts. Generating accurate segmentations with meaningful segment boundaries is, however, a costly process, typically requiring large amounts of user time to achieve high quality results. In this paper we present an active learning framework for large dataset segmentation, which iteratively provides the user with new predictions by training new models based on already segmented shapes. Our proposed pipeline consists of three novel components. First, we a propose a fast and relatively accurate feature-based deep learning model to provide datasetwide segmentation predictions. Second, we propose an information theory measure to estimate the prediction quality and for ordering subsequent fast and meaningful shape selection. Our experiments show that such suggestive ordering helps reduce users time and effort, produce high quality predictions, and construct a model that generalizes well. Finally, we provide effective segmentation refinement features to help the user quickly correct any incorrect predictions. We show that our framework is more accurate and in general more efficient than state-of-the-art, for massive dataset segmentation with while also providing consistent segment boundaries.",
"title": ""
},
{
"docid": "bda90d8f3b9cf98f714c1a4bfb7a9f61",
"text": "Learning image similarity metrics in an end-to-end fashion with deep networks has demonstrated excellent results on tasks such as clustering and retrieval. However, current methods, all focus on a very local view of the data. In this paper, we propose a new metric learning scheme, based on structured prediction, that is aware of the global structure of the embedding space, and which is designed to optimize a clustering quality metric (NMI). We show state of the art performance on standard datasets, such as CUB200-2011 [37], Cars196 [18], and Stanford online products [30] on NMI and R@K evaluation metrics.",
"title": ""
},
{
"docid": "e46b79180d2e7f1afdd0f144fef3f976",
"text": "The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of named entities of diseases is rather tougher than those of chemical names. Although there are some remarkable chemical named entity recognition systems available online such as ChemSpot and tmChem, the publicly available recognition systems of disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on conditional random fields model with a rule-based post-processing module. The other one is based on the bidirectional recurrent neural networks. Then the named entities recognized by each of the DNER model are fed into a support vector machine classifier for combining results. Finally, each recognized disease named entity is normalized to a medical subject heading disease name by using a vector space model based method. Experimental results show that using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level, respectively, on the testing data of the chemical-disease relation task in BioCreative V.Database URL: http://219.223.252.210:8080/SS/cdr.html.",
"title": ""
},
{
"docid": "e9af5e2bfc36dd709ae6feefc4c38976",
"text": "Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy, and optimization function. In this paper, we provide a review of deep learning-based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely, the convolutional neural network. Then, we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network-based learning systems.",
"title": ""
},
{
"docid": "d3a79da70eed0ec0352cb924c8ce0744",
"text": "2. School of Electronics Engineering and Computer science. Peking University, Beijing 100871,China Abstract—Speech emotion recognition (SER) is to study the formation and change of speaker’s emotional state from the speech signal perspective, so as to make the interaction between human and computer more intelligent. SER is a challenging task that has encountered the problem of less training data and low prediction accuracy. Here we propose a data augmentation algorithm based on the imaging principle of the retina and convex lens, to acquire the different sizes of spectrogram and increase the amount of training data by changing the distance between the spectrogram and the convex lens. Meanwhile, with the help of deep learning to get the high-level features, we propose the Deep Retinal Convolution Neural Networks (DRCNNs) for SER and achieve the average accuracy over 99%. The experimental results indicate that DRCNNs outperforms the previous studies in terms of both the number of emotions and the accuracy of recognition. Predictably, our results will dramatically improve human-computer interaction.",
"title": ""
},
{
"docid": "b7aaa53b8018dab4202f1cc4d5de542a",
"text": "Deep learning is very effective at jointly learning feature representations and classification models, especially when dealing with high dimensional input patterns. Probabilistic logic reasoning, on the other hand, is capable to take consistent and robust decisions in complex environments. The integration of deep learning and logic reasoning is still an open-research problem and it is considered to be the key for the development of real intelligent agents. This paper presents Deep Logic Models, which are deep graphical models integrating deep learning and logic reasoning both for learning and inference. Deep Logic Models create an endto-end differentiable architecture, where deep learners are embedded into a network implementing a continuous relaxation of the logic knowledge. The learning process allows to jointly learn the weights of the deep learners and the meta-parameters controlling the high-level reasoning. The experimental results show that the proposed methodology overtakes the limitations of the other approaches that have been proposed to bridge deep learning and reasoning.",
"title": ""
},
{
"docid": "e5a18d6df921ab96da8e106cdb4eeac7",
"text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.",
"title": ""
},
{
"docid": "fd208ec9a2d74306495ac8c6d454bfd6",
"text": "This qualitative study investigates the perceptions of suburban middle school students’ on academic motivation and student engagement. Ten students, grades 6-8, were randomly selected by the researcher from school counselors’ caseloads and the primary data collection techniques included two types of interviews; individual interviews and focus group interviews. Findings indicate students’ motivation and engagement in middle school is strongly influenced by the social relationships in their lives. The interpersonal factors identified by students were peer influence, teacher support and teacher characteristics, and parental behaviors. Each of these factors consisted of academic and social-emotional support which hindered and/or encouraged motivation and engagement. Students identified socializing with their friends as a means to want to be in school and to engage in learning. Also, students are more engaged and motivated if they believe their teachers care about their academic success and value their job. Lastly, parental involvement in academics appeared to be more crucial for younger students than older students in order to encourage motivation and engagement in school. MIDDLE SCHOOL STUDENTS’ PERCEPTIONS 5 Middle School Students’ Perceptions on Student Engagement and Academic Motivation Middle School Students’ Perceptions on Student Engagement and Academic Motivation Early adolescence marks a time for change for students academically and socially. Students are challenged academically in the sense that there is greater emphasis on developing specific intellectual and cognitive capabilities in school, while at the same time they are attempting to develop social skills and meaningful relationships. It is often easy to overlook the social and interpersonal challenges faced by students in the classroom when there is a large focus on grades in education, especially since teachers’ competencies are often assessed on their students’ academic performance. When schools do not consider psychosocial needs of students, there is a decrease in academic motivation and interest, lower levels of student engagement and poorer academic performance (i.e. grades) for middle school students (Wang & Eccles, 2013). In fact, students who report high levels of engagement in school are 75% more likely to have higher grades and higher attendance rates. Disengaged students tend to have lower grades and are more likely to drop out of school (Klem & Connell, 2004). Therefore, this research has focused on understanding the connections between certain interpersonal influences and academic motivation and engagement.",
"title": ""
},
{
"docid": "eeb1c6e76e3957e5444dcc3865595642",
"text": "The advances of Radio-Frequency Identification (RFID) technology have significantly enhanced the capability of capturing data from pervasive space. It becomes a great challenge in the information era to effectively understand human behavior, mobility and activity through the perceived RFID data. Focusing on RFID data management, this article provides an overview of current challenges, emerging opportunities and recent progresses in RFID. In particular, this article has described and analyzed the research work on three aspects: algorithm, protocol and performance evaluation. We investigate the research progress in RFID with anti-collision algorithms, authentication and privacy protection protocols, localization and activity sensing, as well as performance tuning in realistic settings. We emphasize the basic principles of RFID data management to understand the state-of-the-art and to address directions of future research in RFID.",
"title": ""
},
{
"docid": "d5ef0d39859698fadafcf1cc2b077836",
"text": "In this work, we conduct a detailed evaluation of various allneural, end-to-end trained, sequence-to-sequence models applied to the task of speech recognition. Notably, each of these systems directly predicts graphemes in the written domain, without using an external pronunciation lexicon, or a separate language model. We examine several sequence-to-sequence models including connectionist temporal classification (CTC), the recurrent neural network (RNN) transducer, an attentionbased model, and a model which augments the RNN transducer with an attention mechanism. We find that the sequence-to-sequence models are competitive with traditional state-of-the-art approaches on dictation test sets, although the baseline, which uses a separate pronunciation and language model, outperforms these models on voice-search test sets.",
"title": ""
}
] |
scidocsrr
|
cfef84345002c67420504bd0562c9739
|
Ovicidal effects of a neem seed extract preparation on eggs of body and head lice
|
[
{
"docid": "b77af68695ad7b5f0f2e4519013aae04",
"text": "Because topical compounds based on insecticidal chemicals are the mainstay of head lice treatment, but resistance is increasing, alternatives, such as herbs and oils are being sold to treat head lice. To test a commercial shampoo based on seed extract of Azadirachta indica (neem tree) for its in vitro effect, head lice (n=17) were collected from school children in Australia and immersed in Wash-Away Louse™ shampoo (Alpha-Biocare GmbH, Germany). Vitality was evaluated for more than 3 h by examination under a dissecting microscope. Positive and negative controls were a commercially available head lice treatment containing permethrin 1% (n=19) and no treatment (n=14). All lice treated with the neem shampoo did not show any vital signs from the initial examination after immersion at 5–30 min; after 3 h, only a single louse showed minor signs of life, indicated by gut movements, a mortality of 94%. In the permethrin group, mortality was 20% at 5 min, 50% at 15 min, and 74% after 3 h. All 14 head lice of the negative control group survived during the observation period. Our data show that Wash-Away Louse™ is highly effective in vitro against head lice. The neem shampoo was more effective than the permethrin-based product. We speculate that complex plant-based compounds will replace the well-defined chemical pediculicides if resistance to the commonly used products further increases.",
"title": ""
}
] |
[
{
"docid": "aa0f1910a52018d224dbe65b2be07a4f",
"text": "We describe a system that uses automated planning to synthesize correct and efficient parallel graph programs from high-level algorithmic specifications. Automated planning allows us to use constraints to declaratively encode program transformations such as scheduling, implementation selection, and insertion of synchronization. Each plan emitted by the planner satisfies all constraints simultaneously, and corresponds to a composition of these transformations. In this way, we obtain an integrated compilation approach for a very challenging problem domain. We have used this system to synthesize parallel programs for four graph problems: triangle counting, maximal independent set computation, preflow-push maxflow, and connected components. Experiments on a variety of inputs show that the synthesized implementations perform competitively with hand-written, highly-tuned code.",
"title": ""
},
{
"docid": "2a8c5de43ce73c360a5418709a504fa8",
"text": "The INTERSPEECH 2018 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the Atypical Affect Sub-Challenge, four basic emotions annotated in the speech of handicapped subjects have to be classified; in the Self-Assessed Affect Sub-Challenge, valence scores given by the speakers themselves are used for a three-class classification problem; in the Crying Sub-Challenge, three types of infant vocalisations have to be told apart; and in the Heart Beats Sub-Challenge, three different types of heart beats have to be determined. We describe the Sub-Challenges, their conditions, and baseline feature extraction and classifiers, which include data-learnt (supervised) feature representations by end-to-end learning, the ‘usual’ ComParE and BoAW features, and deep unsupervised representation learning using the AUDEEP toolkit for the first time in the challenge series.",
"title": ""
},
{
"docid": "71c81eb75f55ad6efaf8977b93e6dbef",
"text": "Autonomous vehicle navigation is challenging since various types of road scenarios in real urban environments have to be considered, particularly when only perception sensors are used, without position information. This paper presents a novel real-time optimal-drivable-region and lane detection system for autonomous driving based on the fusion of light detection and ranging (LIDAR) and vision data. Our system uses a multisensory scheme to cover the most drivable areas in front of a vehicle. We propose a feature-level fusion method for the LIDAR and vision data and an optimal selection strategy for detecting the best drivable region. Then, a conditional lane detection algorithm is selectively executed depending on the automatic classification of the optimal drivable region. Our system successfully handles both structured and unstructured roads. The results of several experiments are provided to demonstrate the reliability, effectiveness, and robustness of the system.",
"title": ""
},
{
"docid": "2d997b25227266eddba3da5f728d078b",
"text": "Image morphing has received much attention in recent years. It has proven to be a powerful tool for visual effects in film and television, enabling the fluid transformation of one digital image into another. This paper surveys the growth of this field and describes recent advances in image morphing in terms of feature specification, warp generation methods, and transition control. These areas relate to the ease of use and quality of results. We describe the role of radial basis functions, thin plate splines, energy minimization, and multilevel free-form deformations in advancing the state-of-the-art in image morphing. Recent work on a generalized framework for morphing among multiple images is described.",
"title": ""
},
{
"docid": "c87dfe9acd741262a71223b42fe53bab",
"text": "the most common congenitally missing teeth in the mouth.1,2 The replacement of these teeth raises several important treatment planning concerns. Therefore, it is beneficial to use an interdisciplinary treatment approach in order to get the most predictable outcome. As was previously discussed in Part 1, canine substitution can be an esthetic treatment alternative for the replacement of missing lateral incisors. However, there are many individuals who do not meet the qualifications necessary to be considered for canine substitution. In these patients, some form of restoration must be considered. The restorative treatment alternatives can be divided into two categories: a single-tooth implant or a tooth-supported restoration. The three primary types of tooth-supported restorations available today are a resin-bonded fixed partial denture, cantilevered fixed partial denture, or a conventional full-coverage fixed partial denture. The primary consideration among all these treatment options is conservation of tooth structure. Ideally, the treatment of choice should be the least invasive option that satisfies the expected esthetic and functional objectives. Many adolescent and adult patients lack sufficient space for a lateral incisor restoration. This is often due to ectopic eruption of the canine into the lateral incisor position. The orthodontist must move the canine distally into its appropriate position. This will ultimately aid in achieving alveolar ridge development and optimal final esthetics for the final restoration. Over the past several years, the single-tooth implant has become a popular method of replacing missing teeth.3,4 With the hard and soft tissue grafting procedures that are available, implant success rates as well as the final esthetic outcome have become increasingly predictable.5,6 However, there are still certain instances in which implants cannot be used, such as in the patient who is unwilling to undergo the necessary treatment to facilitate proper implant placement. In these situations some form of tooth-supported restoration must be used. Managing Congenitally Missing Lateral Incisors Part 2: Tooth-Supported Restorations",
"title": ""
},
{
"docid": "6b0b505c9ec2686c775b9af353d3287b",
"text": "OBJECTIVE\nTo determine the prevalence of additional injuries or bleeding disorders in a large population of young infants evaluated for abuse because of apparently isolated bruising.\n\n\nSTUDY DESIGN\nThis was a prospectively planned secondary analysis of an observational study of children<10 years (120 months) of age evaluated for possible physical abuse by 20 US child abuse teams. This analysis included infants<6 months of age with apparently isolated bruising who underwent diagnostic testing for additional injuries or bleeding disorders.\n\n\nRESULTS\nAmong 2890 children, 33.9% (980/2890) were <6 months old, and 25.9% (254/980) of these had bruises identified. Within this group, 57.5% (146/254) had apparently isolated bruises at presentation. Skeletal surveys identified new injury in 23.3% (34/146), neuroimaging identified new injury in 27.4% (40/146), and abdominal injury was identified in 2.7% (4/146). Overall, 50% (73/146) had at least one additional serious injury. Although testing for bleeding disorders was performed in 70.5% (103/146), no bleeding disorders were identified. Ultimately, 50% (73/146) had a high perceived likelihood of abuse.\n\n\nCONCLUSIONS\nInfants younger than 6 months of age with bruising prompting subspecialty consultation for abuse have a high risk of additional serious injuries. Routine medical evaluation for young infants with bruises and concern for physical abuse should include physical examination, skeletal survey, neuroimaging, and abdominal injury screening.",
"title": ""
},
{
"docid": "36dde22c25339790e7c011ca5e8677e4",
"text": "Land surface temperature and emissivity (LST&E) products are generated by the Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on the National Aeronautics and Space Administration's Terra satellite. These products are generated at different spatial, spectral, and temporal resolutions, resulting in discrepancies between them that are difficult to quantify, compounded by the fact that different retrieval algorithms are used to produce them. The highest spatial resolution MODIS emissivity product currently produced is from the day/night algorithm, which has a spatial resolution of 5 km. The lack of a high-spatial-resolution emissivity product from MODIS limits the usefulness of the data for a variety of applications and limits utilization with higher resolution products such as those from ASTER. This paper aims to address this problem by using the ASTER Temperature Emissivity Separation (TES) algorithm, combined with an improved atmospheric correction method, to generate the LST&E products for MODIS at 1-km spatial resolution and for ASTER in a consistent manner. The rms differences between the ASTER and MODIS emissivities generated from TES over the southwestern U.S. were 0.013 at 8.6 μm and 0.0096 at 11 μm, with good correlations of up to 0.83. The validation with laboratory-measured sand samples from the Algodones and Kelso Dunes in CA showed a good agreement in spectral shape and magnitude, with mean emissivity differences in all bands of 0.009 and 0.010 for MODIS and ASTER, respectively. These differences are equivalent to approximately 0.6 K in the LST for a material at 300 K and at 11 μm.",
"title": ""
},
{
"docid": "736e70e5488d1e7e1fb05aaac08fdc5d",
"text": "Almost all current multi-view methods are slow, and thus suited to offline reconstruction. This paper presents a set o f heuristic space-carving algorithms with a focus on speed over detail. The algorithms discretize space via the 3D Delaunay triangulation, and they carve away the volumes that violate free-space or visibility constraints. Whereas sim ilar methods exist, our algorithms are fast and fully incrementa l. They encompass a dynamic event-driven approach to reconstruction that is suitable for integration with online SLAM or Structure-from-Motion. We integrate our algorithms with PTAM [ 12], and we realize a complete system that reconstructs 3D geometry from video in real-time. Experiments on typical real-world inpu ts demonstrate online performance with modest hardware. We provide run-time complexity analysis and show that the perevent processing time is independent of the number of images previously processed: a requirement for real-time operation on lengthy image sequences.",
"title": ""
},
{
"docid": "7f5bc643261247c0f977130405c6440d",
"text": "In medical image analysis applications, the availability of the large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output into real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose to the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while being also anatomically consistent and displaying a reasonable visual quality.",
"title": ""
},
{
"docid": "31cec57cc62759852a6500d9b0102333",
"text": "Migraine is a common disease throughout the world. Not only does it affect the life of people tremendously, but it also leads to high costs, e.g. due to inability to work or various required drug-taking cycles for finding the best drug for a patient. Solving the latter aspect could help to improve the life of patients and decrease the impact of the other consequences. Therefore, in this paper, we present an approach for a drug recommendation system based on the highly scalable native graph database Neo4J. The presented system uses simulated patient data to help physicians gain more transparency about which drug fits a migraine patient best considering her individual features. Our evaluation shows that the proposed system works as intended. This means that only drugs with highest relevance scores and no interactions with the patient's diseases, drugs or pregnancy are recommended.",
"title": ""
},
{
"docid": "8f9c8119c55e2ac905528e21388b71ab",
"text": "Over the past 20 years Web browsers have changed considerably from being a simple text display to now supporting complex multimedia applications. The client can now enjoy chatting, playing games and Internet banking. All these applications have something in common, they can be run on multiple platforms and in some cases they will run offline. With the introduction of HTML5 this evolution will continue, with browsers offering greater levels of functionality. This paper outlines the background study and the importance of new technologies, such as HTML5's new browser based storage called IndexedDB. We will show how the technology of storing data on the client side has changed over the time and how the technologies for storing data on the client will be used in future when considering known security issues. Further, we propose a solution to IndexedDB's known security issues in form of a security model, which will extend the current model.",
"title": ""
},
{
"docid": "cc5815edf96596a1540fa1fca53da0d3",
"text": "INTRODUCTION\nSevere motion sickness is easily identifiable with sufferers showing obvious behavioral signs, including emesis (vomiting). Mild motion sickness and sopite syndrome lack such clear and objective behavioral markers. We postulate that yawning may have the potential to be used in operational settings as such a marker. This study assesses the utility of yawning as a behavioral marker for the identification of soporific effects by investigating the association between yawning and mild motion sickness/sopite syndrome in a controlled environment.\n\n\nMETHODS\nUsing a randomized motion-counterbalanced design, we collected yawning and motion sickness data from 39 healthy individuals (34 men and 5 women, ages 27-59 yr) in static and motion conditions. Each individual participated in two 1-h sessions. Each session consisted of six 10-min blocks. Subjects performed a multitasking battery on a head mounted display while seated on the moving platform. The occurrence and severity of symptoms were assessed with the Motion Sickness Assessment Questionnaire (MSAQ).\n\n\nRESULTS\nYawning occurred predominantly in the motion condition. All yawners in motion (N = 5) were symptomatic. Compared to nonyawners (MSAQ indices: Total = 14.0, Sopite = 15.0), subjects who yawned in motion demonstrated increased severity of motion sickness and soporific symptoms (MSAQ indices: Total = 17.2, Sopite = 22.4), and reduced multitasking cognitive performance (Composite score: nonyawners = 1348; yawners = 1145).\n\n\nDISCUSSION\nThese results provide evidence that yawning may be a viable behavioral marker to recognize the onset of soporific effects and their concomitant reduction in cognitive performance.",
"title": ""
},
{
"docid": "2da43b2d5430737ba04c73b4b7338481",
"text": "BACKGROUND\nGrounded theory methodology is a suitable qualitative research approach for clinical inquiry into nursing practice, leading to theory development in nursing. Given the variations in, and subjectivity attached to, the manner in which qualitative research is carried out, it is important for researchers to explain the process of how a theory about a nursing phenomenon was generated. Similarly, when grounded theory research reports are reviewed for clinical use, nurses need to look for researchers' explanations of their inquiry process.\n\n\nAIM\nThe focus of this article is to discuss the practical application of grounded theory procedures as they relate to rigour.\n\n\nMETHOD\nReflecting on examples from a grounded theory research study, we suggest eight methods of research practice to delineate further Beck's schema for ensuring, credibility, auditability and fittingness, which are all components of rigour.\n\n\nFINDINGS\nThe eight methods of research practice used to enhance rigour in the course of conducting a grounded theory research study were: (1) let participants guide the inquiry process; (2) check the theoretical construction generated against participants' meanings of the phenomenon; (3) use participants' actual words in the theory; (4) articulate the researcher's personal views and insights about the phenomenon explored; (5) specify the criteria built into the researcher's thinking; (6) specify how and why participants in the study were selected; (7) delineate the scope of the research; and (8) describe how the literature relates to each category which emerged in the theory.\n\n\nCONCLUSIONS\nThe eight methods of research practice should be of use to those in nursing research, management, practice and education in enhancing rigour during the research process and for critiquing published grounded theory research reports.",
"title": ""
},
{
"docid": "939cd6055f850b8fdb6ba869d375cf25",
"text": "...although PPP lessons are often supplemented with skills lessons, most students taught mainly through conventional approaches such as PPP leave school unable to communicate effectively in English (Stern, 1983). This situation has prompted many ELT professionals to take note of... second language acquisition (SLA) studies... and turn towards holistic approaches where meaning is central and where opportunities for language use abound. Task-based learning is one such approach...",
"title": ""
},
{
"docid": "ac8dfb227545260e468957185acd6faa",
"text": "Writing mostly is a solitary activity. Right now, I sit in front of a computer screen. On my desk are piles of paper; notes for what I want to say; unfinished projects waiting to be attended to; books on shelves nearby to be consulted. I need to be alone when I write. Whether writing on a computer, on a typewriter or by hand, most writers I know prefer a secluded place without distractions from telephones and other people who",
"title": ""
},
{
"docid": "ee5fbcc34536f675cadb8e20eb6eb520",
"text": "This work addresses employing direct and indirect discretization methods to obtain a rational discrete approximation of continuous time parallel fractional PID controllers. The different approaches are illustrated by implementing them on an example.",
"title": ""
},
{
"docid": "a8920f6ba4500587cf2a160b8d91331a",
"text": "In this paper, we present an approach that can handle Z-numbers in the context of multi-criteria decision-making problems. The concept of Z-number as an ordered pair Z=(A, B) of fuzzy numbers A and B is used, where A is a linguistic value of a variable of interest and B is a linguistic value of the probability measure of A. As human beings, we communicate with each other by means of natural language using sentences like “the journey from home to university most likely takes about half an hour.” The Z-numbers are converted to fuzzy numbers. Then the Z-TODIM and Z-TOPSIS are presented as a direct extension of the fuzzy TODIM and fuzzy TOPSIS, respectively. The proposed methods are applied to two case studies and compared with the standard approach using crisp values. The results obtained show the feasibility of the approach.",
"title": ""
},
{
"docid": "1d446f3c18bc70bea85a5181e8e29253",
"text": "Nowadays, shopping has played a key role in our economic activity. It deserves investigation how to provide smart shopping by promptly interacting with customers in supermarkets. This paper proposes a sensor-based smart shopping cart (3S-cart) system by using the context-aware ability of sensors to detect the behavior of customers, and respond to them in real time. A prototype of 3S-cart is implemented by encapsulating modularized sensors in a box to be put on shopping carts. Thus, 3S-cart is lightweight and easy to deploy. We also demonstrate two supermarket applications by 3S-cart. In the sales-promotion application, each cart checks if its customer has interest in some products and shows sales information at once to increase the purchasing desire. In the product-navigation application, a customer asks the system to find an unhindered, shortest path to comfortably obtain the desired product. This paper contributes in exploiting the sensor technology to provide interactive shopping in supermarkets, and addressing the prototyping experience and potential applications of the proposed 3S-cart system.",
"title": ""
},
{
"docid": "42db85c2e0e243c5e31895cfc1f03af6",
"text": "This survey presents recent progress on Affective Computing (AC) using mobile devices. AC has been one of the most active research topics for decades. The primary limitation of traditional AC research refers to as impermeable emotions. This criticism is prominent when emotions are investigated outside social contexts. It is problematic because some emotions are directed at other people and arise from interactions with them. The development of smart mobile wearable devices (e.g., Apple Watch, Google Glass, iPhone, Fitbit) enables the wild and natural study for AC in the aspect of computer science. This survey emphasizes the AC study and system using smart wearable devices. Various models, methodologies and systems are discussed in order to examine the state of the art. Finally, we discuss remaining challenges and future works.",
"title": ""
},
{
"docid": "55ca1e978369711765ed4d333313d61a",
"text": "Females frequently score higher on standard tests of empathy, social sensitivity, and emotion recognition than do males. It remains to be clarified, however, whether these gender differences are associated with gender specific neural mechanisms of emotional social cognition. We investigated gender differences in an emotion attribution task using functional magnetic resonance imaging. Subjects either focused on their own emotional response to emotion expressing faces (SELF-task) or evaluated the emotional state expressed by the faces (OTHER-task). Behaviorally, females rated SELF-related emotions significantly stronger than males. Across the sexes, SELF- and OTHER-related processing of facial expressions activated a network of medial and lateral prefrontal, temporal, and parietal brain regions involved in emotional perspective taking. During SELF-related processing, females recruited the right inferior frontal cortex and superior temporal sulcus stronger than males. In contrast, there was increased neural activity in the left temporoparietal junction in males (relative to females). When performing the OTHER-task, females showed increased activation of the right inferior frontal cortex while there were no differential activations in males. The data suggest that females recruit areas containing mirror neurons to a higher degree than males during both SELF- and OTHER-related processing in empathic face-to-face interactions. This may underlie facilitated emotional \"contagion\" in females. Together with the observation that males differentially rely on the left temporoparietal junction (an area mediating the distinction between the SELF and OTHERS) the data suggest that females and males rely on different strategies when assessing their own emotions in response to other people.",
"title": ""
}
] |
scidocsrr
|
d1f9ff799306af6daa3e972a69ab07ba
|
Doubly-Efficient zkSNARKs Without Trusted Setup
|
[
{
"docid": "a0b40209ee7655fcb08b080467d48915",
"text": "This note describes a simplification of the GKR interactive proof for circuit evaluation (Goldwasser, Kalai, and Rothblum, J. ACM 2015), as efficiently instantiated by Cormode, Mitzenmacher, and Thaler (ITCS 2012). The simplification reduces the prover runtime, round complexity, and total communication cost of the protocol by roughly 33%.",
"title": ""
}
] |
[
{
"docid": "7b6cf139cae3e9dae8a2886ddabcfef0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "30053afee6cae747d41a815832908c22",
"text": "Correspondence to: Y H Chan Tel: (65) 6317 2121 Fax: (65) 6317 2122 Email: chanyh@ cteru.gov.sg INTRODUCTION Now we are at the last stage of the research process: Statistical Analysis & Reporting. In this article, we will discuss how to present the collected data and the forthcoming write-ups will highlight on the appropriate statistical tests to be applied. The terms Sample & Population; Parameter & Statistic; Descriptive & Inferential Statistics; Random variables; Sampling Distribution of the Mean; Central Limit Theorem could be read-up from the references indicated. To be able to correctly present descriptive (and inferential) statistics, we have to understand the two data types (see Fig. 1) that are usually encountered in any research study.",
"title": ""
},
{
"docid": "651e1c0385dd55e04bb2fe90f0e6dd24",
"text": "Pollution has been recognized as the major threat to sustainability of river in Malaysia. Some of the limitations of existing methods for river monitoring are cost of deployment, non-real-time monitoring, and low resolution both in time and space. To overcome these limitations, a smart river monitoring solution is proposed for river water quality in Malaysia. The proposed method incorporates unmanned aerial vehicle (UAV), internet of things (IoT), low power wide area (LPWA) and data analytic (DA). A setup of the proposed method and preliminary results are presented. The proposed method is expected to deliver an efficient and real-time solution for river monitoring in Malaysia.",
"title": ""
},
{
"docid": "8a564e77710c118e4de86be643b061a6",
"text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.",
"title": ""
},
{
"docid": "fa691b72e61685d0fa89bf7a821373da",
"text": "BACKGROUND\nStabilization of a pelvic discontinuity with a posterior column plate with or without an associated acetabular cage sometimes results in persistent micromotion across the discontinuity with late fatigue failure and component loosening. Acetabular distraction offers an alternative technique for reconstruction in cases of severe bone loss with an associated pelvic discontinuity.\n\n\nQUESTIONS/PURPOSES\nWe describe the acetabular distraction technique with porous tantalum components and evaluate its survival, function, and complication rate in patients undergoing revision for chronic pelvic discontinuity.\n\n\nMETHODS\nBetween 2002 and 2006, we treated 28 patients with a chronic pelvic discontinuity with acetabular reconstruction using acetabular distraction. A porous tantalum elliptical acetabular component was used alone or with an associated modular porous tantalum augment in all patients. Three patients died and five were lost to followup before 2 years. The remaining 20 patients were followed semiannually for a minimum of 2 years (average, 4.5 years; range, 2-7 years) with clinical (Merle d'Aubigné-Postel score) and radiographic (loosening, migration, failure) evaluation.\n\n\nRESULTS\nOne of the 20 patients required rerevision for aseptic loosening. Fifteen patients remained radiographically stable at last followup. Four patients had early migration of their acetabular component but thereafter remained clinically asymptomatic and radiographically stable. At latest followup, the average improvement in the patients not requiring rerevision using the modified Merle d'Aubigné-Postel score was 6.6 (range, 3.3-9.6). There were no postoperative dislocations; however, one patient had an infection, one a vascular injury, and one a bowel injury.\n\n\nCONCLUSIONS\nAcetabular distraction with porous tantalum components provides predictable pain relief and durability at 2- to 7-year followup when reconstructing severe acetabular defects with an associated pelvic discontinuity.\n\n\nLEVEL OF EVIDENCE\nLevel IV, therapeutic study. See Instructions for Authors for a complete description of levels of evidence.",
"title": ""
},
{
"docid": "51e2397ecb9ab973543fcadd5cd28d0e",
"text": "A genetic algorithm (GA) is a search and optimization method developed by mimicking the evolutionary principles and chromosomal processing in natural genetics. A GA begins its search with a random set of solutions usually coded in binary string structures. Every solution is assigned a tness which is directly related to the objective function of the search and optimization problem. Thereafter, the population of solutions is modiied to a new population by applying three operators similar to natural genetic operators|reproduction, crossover, and mutation. A GA works iteratively by successively applying these three operators in each generation till a termination criterion is satissed. Over the past one decade, GAs have been successfully applied to a wide variety of problems, because of their simplicity, global perspective, and inherent parallel processing. In this paper, we outline the working principle of a GA by describing these three operators and by outlining an intuitive sketch of why the GA is a useful search algorithm. Thereafter, we apply a GA to solve a complex engineering design problem. Finally, we discuss how GAs can enhance the performance of other soft computing techniques|fuzzy logic and neural network techniques.",
"title": ""
},
{
"docid": "296a4f095e8ca3218a5800d522eabe2d",
"text": "The noradrenergic system modulates performance on tasks dependent on semantic and associative network flexibility (NF) in individuals without neurodevelopmental diagnoses in experiments using a beta-adrenergic antagonist, propranolol. Some studies suggest drugs decreasing noradrenergic activity are beneficial in ASD. In individuals without neurodevelopmental diagnoses, propranolol is beneficial only for difficult NF-dependent problems. However, in populations with altered noradrenergic regulation, propranolol also benefits performance for simple problems. Due to decreased flexibility of access to networks in ASD, we wished to examine the effect of propranolol on NF in ASD. ASD subjects benefited from propranolol on simple anagrams, whereas control subjects were impaired by propranolol. Further study will be necessary to confirm this finding in a larger sample and to compare clinical response with cognitive response to propranolol.",
"title": ""
},
{
"docid": "899887c4020c73c153813e7060f9d144",
"text": "Synthetic biology aims to develop engineering-driven approaches to the programming of cellular functions that could yield transformative technologies. Synthetic gene circuits that combine DNA, protein, and RNA components have demonstrated a range of functions such as bistability, oscillation, feedback, and logic capabilities. However, it remains challenging to scale up these circuits owing to the limited number of designable, orthogonal, high-performance parts, the empirical and often tedious composition rules, and the requirements for substantial resources for encoding and operation. Here, we report a strategy for constructing RNA-only nanodevices to evaluate complex logic in living cells. Our ‘ribocomputing’ systems are composed of de-novo-designed parts and operate through predictable and designable base-pairing rules, allowing the effective in silico design of computing devices with prescribed configurations and functions in complex cellular environments. These devices operate at the post-transcriptional level and use an extended RNA transcript to co-localize all circuit sensing, computation, signal transduction, and output elements in the same self-assembled molecular complex, which reduces diffusion-mediated signal losses, lowers metabolic cost, and improves circuit reliability. We demonstrate that ribocomputing devices in Escherichia coli can evaluate two-input logic with a dynamic range up to 900-fold and scale them to four-input AND, six-input OR, and a complex 12-input expression (A1 AND A2 AND NOT A1*) OR (B1 AND B2 AND NOT B2*) OR (C1 AND C2) OR (D1 AND D2) OR (E1 AND E2). Successful operation of ribocomputing devices based on programmable RNA interactions suggests that systems employing the same design principles could be implemented in other host organisms or in extracellular settings.",
"title": ""
},
{
"docid": "bffc44d02edaa8a699c698185e143d22",
"text": "Photoplethysmography (PPG) technology has been used to develop small, wearable, pulse rate sensors. These devices, consisting of infrared light-emitting diodes (LEDs) and photodetectors, offer a simple, reliable, low-cost means of monitoring the pulse rate noninvasively. Recent advances in optical technology have facilitated the use of high-intensity green LEDs for PPG, increasing the adoption of this measurement technique. In this review, we briefly present the history of PPG and recent developments in wearable pulse rate sensors with green LEDs. The application of wearable pulse rate monitors is discussed.",
"title": ""
},
{
"docid": "cb2edc1728a31b3c37ebf636be81f01f",
"text": "Optimization problems in the power industry have attracted researchers from engineering, operations research and mathematics for many years. The complex nature of generation, transmission, and distribution of electric power implies ample opportunity of improvement towards the optimal. Mathematical models have proven indispensable in deepening the understanding of these optimization problems. The progress in algorithms and implementations has an essential share in widening the abilities to solve these optimization problems on hardware that is permanently improving. In the present paper we address unit commitment in power operation planning. This problem concerns the scheduling of start-up/shut-down decisions and operation levels for power generation units such that the fuel costs over some time horizon are minimal. The diversity of power systems regarding technological design and economic environment leads to a variety of issues potentially occurring in mathematical models of unit commitment. The ongoing liberalization of electricity markets will add to this by shifting the objective in power planning from fuel cost minimization to revenue maximization. For an introduction into basic aspects of unit commitment the reader is referred to the book by Wood and Wollenberg [35]. A literature synopsis on various traditional methodological approaches has been compiled by Sheble and Fahd [29]. In our paper, we present some of the more recent issues in modeling and algorithms for unit commitment. The present paper grew out of a collaboration with the German utility VEAG Vereinigte Energiewerke AG Berlin whose generation system comprises conventional coal and gas fired thermal units as well as pumped-storage plants. An important",
"title": ""
},
{
"docid": "c716b38ed5f8172cedc7310ff1a9eb1a",
"text": "Spam is considered an invasion of privacy. Its changeable structures and variability raise the need for new spam classification techniques. The present study proposes using Bayesian additive regression trees (BART) for spam classification and evaluates its performance against other classification methods, including logistic regression, support vector machines, classification and regression trees, neural networks, random forests, and naive Bayes. BART in its original form is not designed for such problems, hence we modify BART and make it applicable to classification problems. We evaluate the classifiers using three spam datasets; Ling-Spam, PU1, and Spambase to determine the predictive accuracy and the false positive rate.",
"title": ""
},
{
"docid": "224defa4906e121e42218f17c6efa4f2",
"text": "This paper presents a particular model of heuristic search as a path-finding problem in a directed graph. A class of graph-searching procedures is described which uses a heuristic function to guide search. Heuristic functions are estimates of the number o f edges that remain to be traversed in reaching a goal node. A number of theoretical results for this model, and the intuition for these results, are presented. They relate the e])~ciency o f search to the accuracy o f the heuristic function. The results also explore efficiency as a consequence of the reliance or weight placed on the heuristics used.",
"title": ""
},
{
"docid": "5ebddc1a0ce88499702deab9d57ccb62",
"text": "Research into statistical parsing for English has enjoyed over a decade of successful results. However, adapting these models to other languages has met with difficulties. Previous comparative work has shown that Modern Arabic is one of the most difficult languages to parse due to rich morphology and free word order. Classical Arabic is the ancient form of Arabic, and is understudied in computational linguistics, relative to its worldwide reach as the language of the Quran. The thesis is based on seven publications that make significant contributions to knowledge relating to annotating and parsing Classical Arabic. Classical Arabic has been studied in depth by grammarians for over a thousand years using a traditional grammar known as i’rāb (ةاغعإ). Using this grammar to develop a representation for parsing is challenging, as it describes syntax using a hybrid of phrase-structure and dependency relations. This work aims to advance the state-of-the-art for hybrid parsing by introducing a formal representation for annotation and a resource for machine learning. The main contributions are the first treebank for Classical Arabic and the first statistical dependency-based parser in any language for ellipsis, dropped pronouns and hybrid representations. A central argument of this thesis is that using a hybrid representation closely aligned to traditional grammar leads to improved parsing for Arabic. To test this hypothesis, two approaches are compared. As a reference, a pure dependency parser is adapted using graph transformations, resulting in an 87.47% F1-score. This is compared to an integrated parsing model with an F1-score of 89.03%, demonstrating that joint dependency-constituency parsing is better suited to Classical Arabic. The Quran was chosen for annotation as a large body of work exists providing detailed syntactic analysis. Volunteer crowdsourcing is used for annotation in combination with expert supervision. A practical result of the annotation effort is the corpus website: http://corpus.quran.com, an educational resource with over two million users per year. ِيحِ ه رم ٱ نِػٰ َ حْْ ه رم ٱ ِ ه للَّ ٱ مِسْبِ ُيكِحَْمإ يُلِعَْمإ تَهٱَ مَه ه ِ إ اَنتَمْه لَع امَ ه لَ ِ إ اَنَم َ لْْعِ لََ مََهاحَبْ ُ س „Glory be to thee! We have no knowledge except what you have taught us. Indeed it is you who is the all-knowing, the all-wise.‟ A prayer of the angels –The Quran, verse (2:32)",
"title": ""
},
{
"docid": "1763a00460d9f35720d730f1f8ed7a60",
"text": "Bayesian posterior inference is prevalent in various machine learning problems. Variational inference provides one way to approximate the posterior distribution, however its expressive power is limited and so is the accuracy of resulting approximation. Recently, there has a trend of using neural networks to approximate the variational posterior distribution due to the flexibility of neural network architecture. One way to construct flexible variational distribution is to warp a simple density into a complex by normalizing flows, where the resulting density can be analytically evaluated. However, there is a trade-off between the flexibility of normalizing flow and computation cost for efficient transformation. In this paper, we propose a simple yet effective architecture of normalizing flows, ConvFlow, based on convolution over the dimensions of random input vector. Experiments on synthetic and real world posterior inference problems demonstrate the effectiveness and efficiency of the proposed method.",
"title": ""
},
{
"docid": "9e44a60a9284a8e1bacfb24b450564c2",
"text": "Predicting the price correlation of two assets for future time periods is important in portfolio optimization. We apply LSTM recurrent neural networks (RNN) in predicting the stock price correlation coefficient of two individual stocks. RNN’s are competent in understanding temporal dependencies. The use of LSTM cells further enhances its long term predictive properties. To encompass both linearity and nonlinearity in the model, we adopt the ARIMA model as well. The ARIMA model filters linear tendencies in the data and passes on the residual value to the LSTM model. The ARIMA-LSTM hybrid model is tested against other traditional predictive financial models such as the full historical model, constant correlation model, single-index model and the multi-group model. In our empirical study, the predictive ability of the ARIMA-LSTM model turned out superior to all other financial models by a significant scale. Our work implies that it is worth considering the ARIMA-LSTM model to forecast correlation coefficient for portfolio optimization.",
"title": ""
},
{
"docid": "e64d177c2898aee78fbe0f06ef61c373",
"text": "For both humans and robots, tactile sensing is important for interaction with the environment: it is the core sensing used for exploration and manipulation of objects. In this paper, we present a novel tactile-array sensor based on flexible piezoresistive rubber. We describe the design of the sensor and data acquisition system.We evaluate the sensitivity and robustness of the sensor, and show that it is consistent over time with little relaxation. Furthermore, the sensor has the benefit of being flexible, having a high resolution, it is easy to mount, and simple to manufacture. We demonstrate the use of the sensor in an active object-classification system. A robotic gripper with two sensors mounted on its fingers performs a palpation procedure on a set of objects. By squeezing an object, the robot actively explores the material properties, and the system acquires tactile information corresponding to the resulting pressure. Based on a k nearest neighbor classifier and using dynamic time warping to calculate the distance between different time series, the system is able to successfully classify objects. Our sensor demonstrates similar classification performance to the Weiss Robotics tactile sensor, while having additional benefits. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f1ae440f8c29d5f9406aa55ca31e2bb4",
"text": "ConceptMapper is an open source tool we created for classifying mentions in an unstructured text document based on concept terminologies and yielding named entities as output. It is implemented as a UIMA1 (Unstructured Information Management Architecture (IBM, 2004)) annotator, and concepts come from standardised or proprietary terminologies. ConceptMapper can be easily configured, for instance, to use different search strategies or syntactic concepts. In this paper we will describe ConceptMapper, its configuration parameters and their trade-offs, in terms of precision and recall in identifying concepts in a collection of clinical reports written in English. ConceptMapper is available from the Apache UIMA Sandbox, using the Apache Open Source license.",
"title": ""
},
{
"docid": "96d32ec1ed1cc011b50bdac8decb32db",
"text": "In order to maximize online reading performance and comprehension, how should a designer choose typographical variables such as font size and font type? This paper presents an eye tracking study of how font size and font type affect online reading. In a between-subjects design, we collected data from 82 subjects reading stories formatted in a variety of point sizes, san serif, and serif fonts. Reading statistics such as reading speed were computed, and post-tests of comprehension were recorded. For smaller font sizes, fixation durations are significantly longer, resulting in slower reading – but not significantly slower. While there were no significant differences in serif vs. san serif fonts, serif reading was slightly faster. Significant eye tracking differences were found for demographic variables such as age group and whether English is the subject’s first language.",
"title": ""
},
{
"docid": "c7a13f85fdeb234c09237581b7a83238",
"text": "Acoustic structures of sound in Gunnison's prairie dog alarm calls are described, showing how these acoustic structures may encode information about three different predator species (red-tailed hawk-Buteo jamaicensis; domestic dog-Canis familaris; and coyote-Canis latrans). By dividing each alarm call into 25 equal-sized partitions and using resonant frequencies within each partition, commonly occurring acoustic structures were identified as components of alarm calls for the three predators. Although most of the acoustic structures appeared in alarm calls elicited by all three predator species, the frequency of occurrence of these acoustic structures varied among the alarm calls for the different predators, suggesting that these structures encode identifying information for each of the predators. A classification analysis of alarm calls elicited by each of the three predators showed that acoustic structures could correctly classify 67% of the calls elicited by domestic dogs, 73% of the calls elicited by coyotes, and 99% of the calls elicited by red-tailed hawks. The different distributions of acoustic structures associated with alarm calls for the three predator species suggest a duality of function, one of the design elements of language listed by Hockett [in Animal Sounds and Communication, edited by W. E. Lanyon and W. N. Tavolga (American Institute of Biological Sciences, Washington, DC, 1960), pp. 392-430].",
"title": ""
},
{
"docid": "daf1be97c0e1f6d133b58ca899fbd5af",
"text": "Predicting traffic conditions has been recently explored as a way to relieve traffic congestion. Several pioneering approaches have been proposed based on traffic observations of the target location as well as its adjacent regions, but they obtain somewhat limited accuracy due to lack of mining road topology. To address the effect attenuation problem, we propose to take account of the traffic of surrounding locations. We propose an end-to-end framework called DeepTransport, in which Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are utilized to obtain spatial-temporal traffic information within a transport network topology. In addition, attention mechanism is introduced to align spatial and temporal information. Moreover, we constructed and released a real-world large traffic condition dataset with 5-minute resolution. Our experiments on this dataset demonstrate our method captures the complex relationship in both temporal and spatial domain. It significantly outperforms traditional statistical methods and a state-of-the-art deep learning method.",
"title": ""
}
] |
scidocsrr
|
345fc21411b0e9224637700415427356
|
A Temperature and Process Compensated Ultralow-Voltage Rectifier in Standard Threshold CMOS for Energy-Harvesting Applications
|
[
{
"docid": "3c7154162996f3fecbedd2aa79555ca4",
"text": "This paper describes the design and implementation of fully integrated rectifiers in BiCMOS and standard CMOS technologies for rectifying an externally generated RF carrier signal in inductively powered wireless devices, such as biomedical implants, radio-frequency identification (RFID) tags, and smartcards to generate an on-chip dc supply. Various full-wave rectifier topologies and low-power circuit design techniques are employed to decrease substrate leakage current and parasitic components, reduce the possibility of latch-up, and improve power transmission efficiency and high-frequency performance of the rectifier block. These circuits are used in wireless neural stimulating microsystems, fabricated in two processes: the University of Michigan's 3-/spl mu/m 1M/2P N-epi BiCMOS, and the AMI 1.5-/spl mu/m 2M/2P N-well standard CMOS. The rectifier areas are 0.12-0.48 mm/sup 2/ in the above processes and they are capable of delivering >25mW from a receiver coil to the implant circuitry. The performance of these integrated rectifiers has been tested and compared, using carrier signals in 0.1-10-MHz range.",
"title": ""
},
{
"docid": "4e8dbd3470028541cb53f70cefd54abd",
"text": "Design strategy and efficiency optimization of ultrahigh-frequency (UHF) micro-power rectifiers using diode-connected MOS transistors with very low threshold voltage is presented. The analysis takes into account the conduction angle, leakage current, and body effect in deriving the output voltage. Appropriate approximations allow analytical expressions for the output voltage, power consumption, and efficiency to be derived. A design procedure to maximize efficiency is presented. A superposition method is proposed to optimize the performance of multiple-output rectifiers. Constant-power scaling and area-efficient design are discussed. Using a 0.18-mum CMOS process with zero-threshold transistors, 900-MHz rectifiers with different conversion ratios were designed, and extensive HSPICE simulations show good agreement with the analysis. A 24-stage triple-output rectifier was designed and fabricated, and measurement results verified the validity of the analysis",
"title": ""
}
] |
[
{
"docid": "798cd7ebdd234cb62b32d963fdb51af0",
"text": "The use of frontal sinus radiographs in positive identification has become an increasingly applied and accepted technique among forensic anthropologists, radiologists, and pathologists. From an evidentiary standpoint, however, it is important to know whether frontal sinus radiographs are a reliable method for confirming or rejecting an identification, and standardized methods should be applied when making comparisons. The purpose of the following study is to develop an objective, standardized comparison method, and investigate the reliability of that method. Elliptic Fourier analysis (EFA) was used to assess the variation in 808 outlines of frontal sinuses by calculating likelihood ratios and posterior probabilities from EFA coefficients. Results show that using EFA coefficient comparison to estimate the probability of a correct identification is a reliable technique, and EFA comparison of frontal sinus outlines is recommended when it may be necessary to provide quantitative substantiation for a forensic identification based on these structures.",
"title": ""
},
{
"docid": "d763cefd5d584405e1a6c8e32c371c0c",
"text": "Abstract: Whole world and administrators of Educational institutions’ in our country are concerned about regularity of student attendance. Student’s overall academic performance is affected by the student’s present in his institute. Mainly there are two conventional methods for attendance taking and they are by calling student nams or by taking student sign on paper. They both were more time consuming and inefficient. Hence, there is a requirement of computer-based student attendance management system which will assist the faculty for maintaining attendance of presence. The paper reviews various computerized attendance management system. In this paper basic problem of student attendance management is defined which is traditionally taken manually by faculty. One alternative to make student attendance system automatic is provided by Computer Vision. In this paper we review the various computerized system which is being developed by using different techniques. Based on this review a new approach for student attendance recording and management is proposed to be used for various colleges or academic institutes.",
"title": ""
},
{
"docid": "32fdbda76d2e3a1843e5b45210fbc118",
"text": "Fault-localization techniques that utilize information about all test cases in a test suite have been presented. These techniques use various approaches to identify the likely faulty part(s) of a program, based on information about the execution of the program with the test suite. Researchers have begun to investigate the impact that the composition of the test suite has on the effectiveness of these fault-localization techniques. In this paper, we present the first experiment on one aspect of test-suite composition--test-suite reduction. Our experiment studies the impact of the test-suite reduction on the effectiveness of fault-localization techniques. In our experiment, we apply 10 test-suite reduction strategies to test suites for eight subject programs. We then measure the differences between the effectiveness of four existing fault-localization techniques on the unreduced and reduced test suites. We also measure the reduction in test-suite size of the 10 test-suite reduction strategies. Our experiment shows that fault-localization effectiveness varies depending on the test-suite reduction strategy used, and it demonstrates the trade-offs between test-suite reduction and fault-localization effectiveness.",
"title": ""
},
{
"docid": "5822594b8e4f8d61f65f3136f66fadad",
"text": "Document Analysis and Recognition (DAR) aims at the automatic extraction of information presented on paper and initially addressed to human comprehension. The desired output of DAR systems is usually in a suitable symbolic representation that can subsequently be processed by computers. Over the centuries, paper documents have been the principal instrument to make permanent the progress of the humankind. Nowadays, most information is still recorded, stored, and distributed in paper format. The widespread use of computers for document editing, with the introduction of PCs and wordprocessors in the late 1980’s, had the effect of increasing, instead of reducing, the amount of information held on paper. Even if current technological trends seem to move towards a paperless world, some studies demonstrated that the use of paper as a media for information exchange is still increasing [1]. Moreover, there are still application domains where the paper persists to be the preferred media [2]. The most widely known applications of DAR are related to the processing of office documents (such as invoices, bank documents, business letters, and checks) and to the automatic mail sorting. With the current availability of inexpensive high-resolution scanning devices, combined with powerful computers, state-of-the-art OCR packages can solve simple recognition tasks for most users. Recent research directions are widening the use of the DAR techniques, significant examples are the processing of ancient/historical documents in digital libraries, the information extraction from “digital born” documents, such as PDF and HTML, and the analysis of natural images (acquired with mobile phones and digital cameras) containing textual information. The development of a DAR system requires the integration of several competences in computer science, among the others: image processing, pattern recognition, natural language processing, artificial intelligence, and database systems. DAR applications are particularly suitable for the incorporation of",
"title": ""
},
{
"docid": "f439c6d3d8433f8c8c5243a68695c8ce",
"text": "Energy harvesting capabilities enable totally untethered operation of mobile and ubiquitous systems for extended periods of time without requiring battery replacement. This paper examines technical issues with solar energy harvesting. First, maximum power point tracking (MPPT) techniques are compared in terms of solar cell model, tracking source, and controller style. For energy harvesting in conjunction with energy storage, this paper compares batteries and supercapacitors, and discusses trade-offs between complexity of charging circuitry and efficiency. Recent techniques for handling cold booting are also examined in terms of both hardware and software solutions. This paper assumes mainly small-scale photovoltaic sources, although many techniques apply to other sources as well. Together, the increase efficiency is expected to enable more compact, lower cost energy harvesters to bring longer, more stable operation to the systems.",
"title": ""
},
{
"docid": "5bf9ebaecbcd4b713a52d3572e622cbd",
"text": "Essay scoring is a complicated processing requiring analyzing, summarizing and judging expertise. Traditional work on essay scoring focused on automatic handcrafted features, which are expensive yet sparse. Neural models offer a way to learn syntactic and semantic features automatically, which can potentially improve upon discrete features. In this paper, we employ convolutional neural network (CNN) for the effect of automatically learning features, and compare the result with the state-of-art discrete baselines. For in-domain and domain-adaptation essay scoring tasks, our neural model empirically outperforms discrete models.",
"title": ""
},
{
"docid": "335d50541ebfcf8b4d1e0953ea021334",
"text": "Recent progress in biomechatronics technology brings a lot of benefit to increase the mobility of above elbow (AE) amputees in their daily life activities. A prosthetic arm is used to compensate for the lost functions of the AE amputees absent arm. A number of commercial prosthetic arm have been developed since last few decades. However, many amputees have not used them due to the discrepancy between their expectations and the reality. One of the main factors that cause the loss of interest in current commercial prosthetic arm is the lack of desired DOF (degree of freedom) that results unnatural arm motion. In this paper, a 5 DOF externally powered prosthetic arm for AE amputees is proposed to increase their mobility in daily life activities. The proposed prosthesis is designed to generate natural human like arm motion while performing a daily life task. This paper summarizes the design and controllability of the proposed prosthesis.",
"title": ""
},
{
"docid": "ba755cab267998a3ea813c0f46c8c99c",
"text": "In this paper, we developed a deep neural network (DNN) that learns to solve simultaneously the three tasks of the cQA challenge proposed by the SemEval-2016 Task 3, i.e., question-comment similarity, question-question similarity and new question-comment similarity. The latter is the main task, which can exploit the previous two for achieving better results. Our DNN is trained jointly on all the three cQA tasks and learns to encode questions and comments into a single vector representation shared across the multiple tasks. The results on the official challenge test set show that our approach produces higher accuracy and faster convergence rates than the individual neural networks. Additionally, our method, which does not use any manual feature engineering, approaches the state of the art established with methods that make heavy use of it.",
"title": ""
},
{
"docid": "7c609e5b5205df9e15a8889a621270da",
"text": "This paper presents a singing robot system realized by collaboration of the singing synthesis technology “VOCALOID” (developed by YAMAHA) and the novel biped humanoid robot HRP-4C named “Miim” (developed by AIST). One of the advantages of the cybernetic human HRP-4C is is found on its capacity to perform a variety of body motions and realistic facial expressions. To achieve a realistic robot-singing performance, facial motions such as lip-sync, eyes blinking and facial gestures are required. We developed a demonstration system for VOCALOID and HRP-4C, mainly consisting of singing data and the corresponding facial motions. We report in this work the technical overview of the system and the results of an exhibition presented at CEATEC JAPAN 2009.",
"title": ""
},
{
"docid": "4e8d7e1fdb48da4198e21ae1ef2cd406",
"text": "This paper describes a procedure for the creation of large-scale video datasets for action classification and localization from unconstrained, realistic web data. The scalability of the proposed procedure is demonstrated by building a novel video benchmark, named SLAC (Sparsely Labeled ACtions), consisting of over 520K untrimmed videos and 1.75M clip annotations spanning 200 action categories. Using our proposed framework, annotating a clip takes merely 8.8 seconds on average. This represents a saving in labeling time of over 95% compared to the traditional procedure of manual trimming and localization of actions. Our approach dramatically reduces the amount of human labeling by automatically identifying hard clips, i.e., clips that contain coherent actions but lead to prediction disagreement between action classifiers. A human annotator can disambiguate whether such a clip truly contains the hypothesized action in a handful of seconds, thus generating labels for highly informative samples at little cost. We show that our large-scale dataset can be used to effectively pretrain action recognition models, significantly improving final metrics on smaller-scale benchmarks after fine-tuning. On Kinetics [14], UCF-101 [30] and HMDB-51 [15], models pre-trained on SLAC outperform baselines trained from scratch, by 2.0%, 20.1% and 35.4% in top-1 accuracy, respectively when RGB input is used. Furthermore, we introduce a simple procedure that leverages the sparse labels in SLAC to pre-train action localization models. On THUMOS14 [12] and ActivityNet-v1.3[2], our localization model improves the mAP of baseline model by 8.6% and 2.5%, respectively.",
"title": ""
},
{
"docid": "3b45e971fd172b01045d8e5241514b37",
"text": "Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple imperative programming language. Based on techniques from knowledge-based neural networks, we insert these programs directly into the agent‘s utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that investigates several aspects of our approach and shows that, given good advice, a learner can achieve statistically significant gains in expected reward. A second experiment shows that advice improves the expected reward regardless of the stage of training at which it is given, while another study demonstrates that subsequent advice can result in further gains in reward. Finally, we present experimental results that indicate our method is more powerful than a naive technique for making use of advice.",
"title": ""
},
{
"docid": "0122057f9fd813efd9f9e0db308fe8d9",
"text": "Noun phrases in queries are identified and classified into four types: proper names, dictionary phrases, simple phrases and complex phrases. A document has a phrase if all content words in the phrase are within a window of a certain size. The window sizes for different types of phrases are different and are determined using a decision tree. Phrases are more important than individual terms. Consequently, documents in response to a query are ranked with matching phrases given a higher priority. We utilize WordNet to disambiguate word senses of query terms. Whenever the sense of a query term is determined, its synonyms, hyponyms, words from its definition and its compound words are considered for possible additions to the query. Experimental results show that our approach yields between 23% and 31% improvements over the best-known results on the TREC 9, 10 and 12 collections for short (title only) queries, without using Web data.",
"title": ""
},
{
"docid": "e47e03eac12c9d2c7a943457ee03f592",
"text": "Representing and comparing graphs is a central problem in many fields. We present an approach to learn representations of graphs using recurrent neural network autoencoders. Recurrent neural networks require sequential data, so we begin with several methods to generate sequences from graphs, including random walks, breadth-first search, and shortest paths. We train long short-term memory (LSTM) autoencoders to embed these graph sequences into a continuous vector space. We then represent a graph by averaging its graph sequence representations. The graph representations are then used for graph classification and comparison tasks. We demonstrate the effectiveness of our approach by showing improvements over the existing state-of-the-art on several graph classification tasks, including both labeled and unlabeled graphs.",
"title": ""
},
{
"docid": "f0166259741ab845e6261282687b2334",
"text": "Kernel-based machine learning algorithms are based on mapping data from the original input feature space to a kernel feature space of higher dimensionality to solve a linear problem in that space. Over the last decade, kernel based classification and regression approaches such as support vector machines have widely been used in remote sensing as well as in various civil engineering applications. In spite of their better performance with different datasets, support vector machines still suffer from shortcomings such as visualization/interpretation of model, choice of kernel and kernel specific parameter as well as the regularization parameter. Relevance vector machines are another kernel based approach being explored for classification and regression with in last few years. The advantages of the relevance vector machines over the support vector machines is the availability of probabilistic predictions, using arbitrary kernel functions and not requiring setting of the regularization parameter. This paper presents a state-ofthe-art review of SVM and RVM in remote sensing and provides some details of their use in other civil engineering application also.",
"title": ""
},
{
"docid": "355d626f3cafae8364eadee56c2582ce",
"text": "Micro-controllers such as Arduino are widely used by all kinds of makers worldwide. Popularity has been driven by Arduino’s simplicity of use and the large number of sensors and libraries available to extend the basic capabilities of these controllers. The last decade has witnessed a surge of software engineering solutions for “the Internet of Things”, but in several cases these solutions require computational resources that are more advanced than simple, resource-limited micro-controllers. Surprisingly, in spite of being the basic ingredients of complex hardware-software systems, there does not seem to be a simple and flexible way to (1) extend the basic capabilities of micro-controllers, and (2) to coordinate inter-connected micro-controllers in “the Internet of Things”. Indeed, new capabilities are added on a per-application basis and interactions are mainly limited to bespoke, point-to-point protocols that target the hardware I/O rather than the services provided by this hardware. In this paper we present the Arduino Service Interface Programming (ASIP) model, a new model that addresses the issues above by (1) providing a “Service” abstraction to easily add new capabilities to micro-controllers, and (2) providing support for networked boards using a range of strategies, including socket connections, bridging devices, MQTT-based publish-subscribe messaging, discovery services, etc. We provide an open-source implementation of the code running on Arduino boards and client libraries in Java, Python, Racket and Erlang. We show how ASIP enables the rapid development of non-trivial applications (coordination of input/output on distributed boards and implementation of a line-following algorithm for a remote robot) and we assess the performance of ASIP in several ways, both quantitative and qualitative.",
"title": ""
},
{
"docid": "9d2583618e9e00333d044ac53da65ceb",
"text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.",
"title": ""
},
{
"docid": "e992ffd4ebbf9d096de092caf476e37d",
"text": "If self-regulation conforms to an energy or strength model, then self-control should be impaired by prior exertion. In Study 1, trying to regulate one's emotional response to an upsetting movie was followed by a decrease in physical stamina. In Study 2, suppressing forbidden thoughts led to a subsequent tendency to give up quickly on unsolvable anagrams. In Study 3, suppressing thoughts impaired subsequent efforts to control the expression of amusement and enjoyment. In Study 4, autobiographical accounts of successful versus failed emotional control linked prior regulatory demands and fatigue to self-regulatory failure. A strength model of self-regulation fits the data better than activation, priming, skill, or constant capacity models of self-regulation.",
"title": ""
},
{
"docid": "82e6da590f8f836c9a06c26ef4440005",
"text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with highdimensional state-action spaces. The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on highdimensional RL benchmarks.",
"title": ""
},
{
"docid": "1e80f38e3ccc1047f7ee7c2b458c0beb",
"text": "This thesis presents an approach to robot arm control exploiting natural dynamics. The approach consists of using a compliant arm whose joints are controlled with simple non-linear oscillators. The arm has special actuators which makes it robust to collisions and gives it a smooth compliant, motion. The oscillators produce rhythmic commands of the joints of the arm, and feedback of the joint motions is used to modify the oscillator behavior. The oscillators enable the resonant properties of the arm to be exploited to perform a variety of rhythmic and discrete tasks. These tasks include tuning into the resonant frequencies of the arm itself, juggling, turning cranks, playing with a Slinky toy, sawing wood, throwing balls, hammering nails and drumming. For most of these tasks, the controllers at each joint are completely independent, being coupled by mechanical coupling through the physical arm of the robot. The thesis shows that this mechanical coupling allows the oscillators to automatically adjust their commands to be appropriate for the arm dynamics and the task. This coordination is robust to large changes in the oscillator parameters, and large changes in the dynamic properties of the arm. As well as providing a wealth of experimental data to support this approach, the thesis also provides a range of analysis tools, both approximate and exact. These can be used to understand and predict the behavior of current implementations, and design new ones. These analysis techniques improve the value of oscillator solutions. The results in the thesis suggest that the general approach of exploiting natural dynamics is a powerful method for obtaining coordinated dynamic behavior of robot arms. Thesis Supervisor: Rodney A. Brooks Title: Professor of Electrical Engineering and Computer Science, MIT 5.4. CASE (C): MODIFYING THE NATURAL DYNAMICS 95",
"title": ""
},
{
"docid": "ba9d274247f3f3da9274be52fa8a7096",
"text": "Dysregulated growth hormone (GH) hypersecretion is usually caused by a GH-secreting pituitary adenoma and leads to acromegaly - a disorder of disproportionate skeletal, tissue, and organ growth. High GH and IGF1 levels lead to comorbidities including arthritis, facial changes, prognathism, and glucose intolerance. If the condition is untreated, enhanced mortality due to cardiovascular, cerebrovascular, and pulmonary dysfunction is associated with a 30% decrease in life span. This Review discusses acromegaly pathogenesis and management options. The latter include surgery, radiation, and use of novel medications. Somatostatin receptor (SSTR) ligands inhibit GH release, control tumor growth, and attenuate peripheral GH action, while GH receptor antagonists block GH action and effectively lower IGF1 levels. Novel peptides, including SSTR ligands, exhibiting polyreceptor subtype affinities and chimeric dopaminergic-somatostatinergic properties are currently in clinical trials. Effective control of GH and IGF1 hypersecretion and ablation or stabilization of the pituitary tumor mass lead to improved comorbidities and lowering of mortality rates for this hormonal disorder.",
"title": ""
}
] |
scidocsrr
|
234dd6636d9123b964f2dad497cee6ae
|
Let Your Photos Talk: Generating Narrative Paragraph for Photo Stream via Bidirectional Attention Recurrent Neural Networks
|
[
{
"docid": "6a7bfed246b83517655cb79a951b1f48",
"text": "Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval.",
"title": ""
},
{
"docid": "9eaab923986bf74bdd073f6766ca45b2",
"text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"title": ""
}
] |
[
{
"docid": "97a9e9e85691a1fc461209dd1c636497",
"text": "Diversity and plasticity are hallmarks of cells of the monocyte-macrophage lineage. In response to IFNs, Toll-like receptor engagement, or IL-4/IL-13 signaling, macrophages undergo M1 (classical) or M2 (alternative) activation, which represent extremes of a continuum in a universe of activation states. Progress has now been made in defining the signaling pathways, transcriptional networks, and epigenetic mechanisms underlying M1-M2 or M2-like polarized activation. Functional skewing of mononuclear phagocytes occurs in vivo under physiological conditions (e.g., ontogenesis and pregnancy) and in pathology (allergic and chronic inflammation, tissue repair, infection, and cancer). However, in selected preclinical and clinical conditions, coexistence of cells in different activation states and unique or mixed phenotypes have been observed, a reflection of dynamic changes and complex tissue-derived signals. The identification of mechanisms and molecules associated with macrophage plasticity and polarized activation provides a basis for macrophage-centered diagnostic and therapeutic strategies.",
"title": ""
},
{
"docid": "b4874b03c639ee105f76266d37540a54",
"text": "We tested the validity and reliability of the BioSpace InBody 320, Omron and Bod-eComm body composition devices in men and women (n 254; 21-80 years) and boys and girls (n 117; 10-17 years). We analysed percentage body fat (%BF) and compared the results with dual-energy X-ray absorptiometry (DEXA) in adults and compared the results of the InBody with underwater weighing (UW) in children. All body composition devices were correlated (r 0.54-0.97; P< or =0.010) to DEXA except the Bod-eComm in women aged 71-80 years (r 0.54; P=0.106). In girls, the InBody %BF was correlated with UW (r 0.79; P< or =0.010); however, a more moderate correlation (r 0.69; P< or =0.010) existed in boys. Bland-Altman plots indicated that all body composition devices underestimated %BF in adults (1.0-4.8 %) and overestimated %BF in children (0.3-2.3 %). Lastly, independent t tests revealed that the mean %BF assessed by the Bod-eComm in women (aged 51-60 and 71-80 years) and in the Omron (age 18-35 years) were significantly different compared with DEXA (P< or =0.010). In men, the Omron (aged 18-35 years), and the InBody (aged 36-50 years) were significantly different compared with DEXA (P=0.025; P=0.040 respectively). In addition, independent t tests indicated that the InBody mean %BF in girls aged 10-17 years was significantly different from UW (P=0.001). Pearson's correlation analyses demonstrated that the Bod-eComm (men and women) and Omron (women) had significant mean differences compared with the reference criterion; therefore, the %BF output from these two devices should be interpreted with caution. The repeatability of each body composition device was supported by small CV (<3.0 %).",
"title": ""
},
{
"docid": "083f43f1cc8fe2ad186567f243ee04de",
"text": "We consider the task of recognition of Australian vehicle number plates (also called license plates or registration plates in other countries). A system for Australian number plate recognition must cope with wide variations in the appearance of the plates. Each state uses its own range of designs with font variations between the designs. There are special designs issued for significant events such as the Sydney 2000 Olympic Games. Also, vehicle owners may place the plates inside glass covered frames or use plates made of non-standard materials. These issues compound the complexity of automatic number plate recognition, making existing approaches inadequate. We have developed a system that incorporates a novel combination of image processing and artificial neural network technologies to successfully locate and read Australian vehicle number plates in digital images. Commercial application of the system is envisaged.",
"title": ""
},
{
"docid": "eced9f448727b7461e253f48d9cf8505",
"text": "Near-range videos contain objects that are close to the camera. These videos often contain discontinuous depth variation (DDV), which is the main challenge to the existing video stabilization methods. Traditionally, 2D methods are robust to various camera motions (e.g., quick rotation and zooming) under scenes with continuous depth variation (CDV). However, in the presence of DDV, they often generate wobbled results due to the limited ability of their 2D motion models. Alternatively, 3D methods are more robust in handling near-range videos. We show that, by compensating rotational motions and ignoring translational motions, near-range videos can be successfully stabilized by 3D methods without sacrificing the stability too much. However, it is time-consuming to reconstruct the 3D structures for the entire video and sometimes even impossible due to rapid camera motions. In this paper, we combine the advantages of 2D and 3D methods, yielding a hybrid approach that is robust to various camera motions and can handle the near-range scenarios well. To this end, we automatically partition the input video into CDV and DDV segments. Then, the 2D and 3D approaches are adopted for CDV and DDV clips, respectively. Finally, these segments are stitched seamlessly via a constrained optimization. We validate our method on a large variety of consumer videos.",
"title": ""
},
{
"docid": "3bdd6168db10b8b195ce88ae9c4a75f9",
"text": "Nowadays Intrusion Detection System (IDS) which is increasingly a key element of system security is used to identify the malicious activities in a computer system or network. There are different approaches being employed in intrusion detection systems, but unluckily each of the technique so far is not entirely ideal. The prediction process may produce false alarms in many anomaly based intrusion detection systems. With the concept of fuzzy logic, the false alarm rate in establishing intrusive activities can be reduced. A set of efficient fuzzy rules can be used to define the normal and abnormal behaviors in a computer network. Therefore some strategy is needed for best promising security to monitor the anomalous behavior in computer network. In this paper I present a few research papers regarding the foundations of intrusion detection systems, the methodologies and good fuzzy classifiers using genetic algorithm which are the focus of current development efforts and the solution of the problem of Intrusion Detection System to offer a realworld view of intrusion detection. Ultimately, a discussion of the upcoming technologies and various methodologies which promise to improve the capability of computer systems to detect intrusions is offered.",
"title": ""
},
{
"docid": "deccc7ba3b930a9c56a377053699a46b",
"text": "Preview: Some traditional measurements of forecast accuracy are unsuitable for intermittent-demand data because they can give infinite or undefined values. Rob Hyndman summarizes these forecast accuracy metrics and explains their potential failings. He also introduces a new metric—the mean absolute scaled error (MASE)—which is more appropriate for intermittent-demand data. More generally, he believes that the MASE should become the standard metric for comparing forecast accuracy across multiple time series.",
"title": ""
},
{
"docid": "c0d794e7275e7410998115303bf0cf79",
"text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.",
"title": ""
},
{
"docid": "2bc0102fdc3a66ca5262bdaa90a94187",
"text": "Visual localization enables autonomous vehicles to navigate in their surroundings and Augmented Reality applications to link virtual to real worlds. In order to be practically relevant, visual localization approaches need to be robust to a wide variety of viewing condition, including day-night changes, as well as weather and seasonal variations. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on the quality of 6 degree-of-freedom (6DOF) camera pose estimation through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions and propose promising avenues for future work. We will eventually make our two novel benchmarks publicly available.",
"title": ""
},
{
"docid": "c86fbf52aecb41ce4f3d806f62965c50",
"text": "Multi-core end-systems use Receive Side Scaling (RSS) to parallelize protocol processing. RSS uses a hash function on the standard flow descriptors and an indirection table to assign incoming packets to receive queues which are pinned to specific cores. This ensures flow affinity in that the interrupt processing of all packets belonging to a specific flow is processed by the same core. A key limitation of standard RSS is that it does not consider the application process that consumes the incoming data in determining the flow affinity. In this paper, we carry out a detailed experimental analysis of the performance impact of the application affinity in a 40 Gbps testbed network with a dual hexa-core end-system. We show, contrary to conventional wisdom, that when the application process and the flow are affinitized to the same core, the performance (measured in terms of end-to-end TCP throughput) is significantly lower than the line rate. Near line rate performance is observed when the flow and the application process are affinitized to different cores belonging to the same socket. Furthermore, affinitizing the application and the flow to cores on different sockets results in significantly lower throughput than the line rate. These results arise due to the memory bottleneck, which is demonstrated using preliminary correlational data on the cache hit rate in the core that services the application process.",
"title": ""
},
{
"docid": "ea1f836ba53e49663d5b7f480a2f8772",
"text": "Strengths and weaknesses of modern widebandwidth bipolar transistor operational amplifiers are investigated and compared with respect to bandwidth, slew rate, noise, distortion, and power. This paper traces the evolution of operational amplifier designs since vacuum tube days to give a perspective of the large number of circuit variations used over time. Of particular value is the ability to use many of these circuit design options as the basis of new amplifiers. In addition, an array of operational amplifier components fabricated on the AT&T CBIC V2 [1] process is described. This design incorporates many of the architectural techniques that Vin have evolved over the years to produce four separate operational amplifier on a single base wafer. The process design methodology requires identifying the common elements in each architecture and the minimum number of additional components required to implement four unique architectures on the array. +V",
"title": ""
},
{
"docid": "aecef2d4d6716046265c559dbfb351b6",
"text": "This handbook is about writing software requirements specifications and legal contracts, two kinds of documents with similar needs for completeness, consistency, and precision. Particularly when these are written, as they usually are, in natural language, ambiguity—by any definition—is a major cause of their not specifying what they should. Simple misuse of the language in which the document is written is one source of these ambiguities.",
"title": ""
},
{
"docid": "0f6806c44bf6fa7e6a2c3fb02ef8781b",
"text": "Air quality has been negatively affected by industrial activities, which have caused imbalances in nature. The issue of air pollution has become a big concern for many people, especially those living in industrial areas. Air pollution levels can be measured using smart sensors. Additionally, Internet of Things (IoT) technology can be integrated to remotely detect pollution without any human interaction. The data gathered by such a system can be transmitted instantly to a web-based application to facilitate monitoring real time data and allow immediate risk management. In this paper, we describe an entire Internet of Things (IoT) system that monitors air pollution by collecting real-time data in specific locations. This data is analyzed and measured against a predetermined threshold. The collected data is sent to the concerned official organization to notify them in case of any violation so that they can take the necessary measures. Furthermore, if the value of the measured pollutants exceeds the threshold, an alarm system is triggered taking several actions to warn the surrounding people.",
"title": ""
},
{
"docid": "63cfa266a73cfbec205ebb189614a8f9",
"text": "Big data analytics (BDA) has emerged as an important area of study for both academics and practitioners. Despite of rising potential value of BDA, a few studies have been conducted to investigate the effect of BDA on firm performance. In this research in progress, according to the challenges of BDA dimensions (volume, variety, velocity, veracity and value) we propose the BDA capability dimensions in line with IT capability concept. BDA infrastructure capability, BDA management capability, BDA personnel capability and relational BDA capability provide the overall BDA Capability concept. The study, by employing dynamic capability, proposes that BDA capability impacts on firm financial and market performance by mediated effect of operational performance. The finding of this research by providing essential BDA capability and its effect on firm performance can apply as a roadmap and fill the gap between managers’ expectation of BDA and what is emerged of BDA implementation.",
"title": ""
},
{
"docid": "cf5ace08a6e33b5da51fac3fa9e0360f",
"text": "Since 2008, intelligence units of six states of the western part of Switzerland have been sharing a common database for the analysis of high volume crimes. On a daily basis, events reported to the police are analysed, filtered and classified to detect crime repetitions and interpret the crime environment. Several forensic outcomes are integrated in the system such as matches of traces with persons, and links between scenes detected by the comparison of forensic case data. Systematic procedures have been settled to integrate links assumed mainly through DNA profiles, shoemarks patterns and images. A statistical outlook on a retrospective dataset of series from 2009 to 2011 of the database informs for instance on the number of repetition detected or confirmed and increased by forensic case data. Time needed to obtain forensic intelligence in regard with the type of marks treated, is seen as a critical issue. Furthermore, the underlying integration process of forensic intelligence into the crime intelligence database raised several difficulties in regards of the acquisition of data and the models used in the forensic databases. Solutions found and adopted operational procedures are described and discussed. This process form the basis to many other researches aimed at developing forensic intelligence models.",
"title": ""
},
{
"docid": "3dbf5b2b03f90667d5602f3121c526fb",
"text": "In order to identify the parasitic diseases, this paper propose the automatic identification of Human Parasite Eggs to eight different species : Ascaris, Uncinarias, Trichuris, Dyphillobothrium-Pacificum, Taenia-Solium, Fasciola Hepetica and Enterobius-Vermicularis from their microscopic images based on Multitexton Histogram - MTH using new structures of textons. This proposed system includes two stages. In first stage, a feature extraction mechanism that is based on MTH descriptor retrieving the relationships between textons. In second stage, an CBIR system has been implemented in orden to detect their correct species of helminths. Finally, simulation results show overall success rates of 94,78% in the detection.",
"title": ""
},
{
"docid": "853b5ab3ed6a9a07c8d11ad32d0e58ad",
"text": "We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series modelshidden Markov models and linear dynamical systemsand is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.",
"title": ""
},
{
"docid": "5b0f64a6618cbabeec9c9437c234c14d",
"text": "The ankle-brachial index is valuable for screening for peripheral artery disease in patients at risk and for diagnosing the disease in patients who present with lower-extremity symptoms that suggest it. The ankle-brachial index also predicts the risk of cardiovascular events, cerebrovascular events, and even death from any cause. Few other tests provide as much diagnostic accuracy and prognostic information at such low cost and risk.",
"title": ""
},
{
"docid": "c1492f5eb2fafc52da81902a9d19d480",
"text": "A compact dual-band multiple-input-multiple-output (MIMO)/diversity antenna is proposed. This antenna is designed for 2.4/5.2/5.8GHz WLAN and 2.5/3.5/5.5 GHz WiMAX applications in portable mobile devices. It consists of two back-to-back monopole antennas connected with a T-shaped stub, where two rectangular slots are cut from the ground, which significantly reduces the mutual coupling between the two ports at the lower frequency band. The volume of this antenna is 40mm ∗ 30mm ∗ 1mm including the ground plane. Measured results show the isolation is better than −20 dB at the lower frequency band from 2.39 to 3.75GHz and −25 dB at the higher frequency band from 5.03 to 7 GHz, respectively. Moreover, acceptable radiation patterns, antenna gain, and envelope correlation coefficient are obtained. These characteristics indicate that the proposed antenna is suitable for some portable MIMO/diversity equipments.",
"title": ""
},
{
"docid": "7074d77d242b4d1ecbebc038c04698b8",
"text": "We discuss our tools and techniques to monitor and inject packets in Bluetooth Low Energy. Also known as BTLE or Bluetooth Smart, it is found in recent high-end smartphones, sports devices, sensors, and will soon appear in many medical devices. We show that we can effectively render useless the encryption of any Bluetooth Low Energy link.",
"title": ""
},
{
"docid": "66ca4bacfbae3ff32b105565dace5194",
"text": "In this paper, we analyze and systematize the state-ofthe-art graph data privacy and utility techniques. Specifically, we propose and develop SecGraph (available at [1]), a uniform and open-source Secure Graph data sharing/publishing system. In SecGraph, we systematically study, implement, and evaluate 11 graph data anonymization algorithms, 19 data utility metrics, and 15 modern Structure-based De-Anonymization (SDA) attacks. To the best of our knowledge, SecGraph is the first such system that enables data owners to anonymize data by state-of-the-art anonymization techniques, measure the data’s utility, and evaluate the data’s vulnerability against modern De-Anonymization (DA) attacks. In addition, SecGraph enables researchers to conduct fair analysis and evaluation of existing and newly developed anonymization/DA techniques. Leveraging SecGraph, we conduct extensive experiments to systematically evaluate the existing graph data anonymization and DA techniques. The results demonstrate that (i) most anonymization schemes can partially or conditionally preserve most graph utilities while losing some application utility; (ii) no DA attack is optimum in all scenarios. The DA performance depends on several factors, e.g., similarity between anonymized and auxiliary data, graph density, and DA heuristics; and (iii) all the state-of-the-art anonymization schemes are vulnerable to several or all of the modern SDA attacks. The degree of vulnerability of each anonymization scheme depends on how much and which data utility it preserves.",
"title": ""
}
] |
scidocsrr
|
d8747aa5b5de3e64c07583b6cc7943e6
|
Smart to Smarter: Smart Home Systems History, Future and Challenges
|
[
{
"docid": "51f66b4ff06999f6ce7df45a1db1d8f7",
"text": "Smart homes with advanced building technologies can react to sensor triggers in a variety of preconfigured ways. These rules are usually only visible within designated configuration interfaces. For this reason inhabitants who are not actively involved in the configuration process can be taken by surprise by the effects of such rules, such as for example the unexpected automated actions of lights or shades. To provide these inhabitants with better means to understand their home, as well as to increase their motivation to actively engage with its configuration, we propose Casalendar, a visualization that integrates the status of smart home technologies into the familiar interface of a calendar. We present our design and initial findings about the application of a temporal metaphor in smart home interfaces.",
"title": ""
}
] |
[
{
"docid": "559637a4f8f5b99bb3210c5c7d03d2e0",
"text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.",
"title": ""
},
{
"docid": "1580e188796e4e7b6c5930e346629849",
"text": "This paper describes the development process of FarsNet; a lexical ontology for the Persian language. FarsNet is designed to contain a Persian WordNet with about 10000 synsets in its first phase and grow to cover verbs' argument structures and their selectional restrictions in its second phase. In this paper we discuss the semi-automatic approach to create the first phase: the Persian WordNet.",
"title": ""
},
{
"docid": "e18b565bddfc86c0ab3ef5ad190bdf06",
"text": "Human activities observed from visual sensors often give rise to a sequence of smoothly varying features. In many cases, the space of features can be formally defined as a manifold, where the action becomes a trajectory on the manifold. Such trajectories are high dimensional in addition to being non-linear, which can severely limit computations on them. We also argue that by their nature, human actions themselves lie on a much lower dimensional manifold compared to the high dimensional feature space. Learning an accurate low dimensional embedding for actions could have a huge impact in the areas of efficient search and retrieval, visualization, learning, and recognition. Traditional manifold learning addresses this problem for static points in ℝn, but its extension to trajectories on Riemannian manifolds is non-trivial and has remained unexplored. The challenge arises due to the inherent non-linearity, and temporal variability that can significantly distort the distance metric between trajectories. To address these issues we use the transport square-root velocity function (TSRVF) space, a recently proposed representation that provides a metric which has favorable theoretical properties such as invariance to group action. We propose to learn the low dimensional embedding with a manifold functional variant of principal component analysis (mfPCA). We show that mf-PCA effectively models the manifold trajectories in several applications such as action recognition, clustering and diverse sequence sampling while reducing the dimensionality by a factor of ~ 250×. The mfPCA features can also be reconstructed back to the original manifold to allow for easy visualization of the latent variable space.",
"title": ""
},
{
"docid": "48b045dd9a8961a5205bbc12129ede51",
"text": "Textual-visual matching aims at measuring similarities between sentence descriptions and images. Most existing methods tackle this problem without effectively utilizing identity-level annotations. In this paper, we propose an identity-aware two-stage framework for the textual-visual matching problem. Our stage-1 CNN-LSTM network learns to embed cross-modal features with a novel Cross-Modal Cross-Entropy (CMCE) loss. The stage-1 network is able to efficiently screen easy incorrect matchings and also provide initial training point for the stage-2 training. The stage-2 CNN-LSTM network refines the matching results with a latent co-attention mechanism. The spatial attention relates each word with corresponding image regions while the latent semantic attention aligns different sentence structures to make the matching results more robust to sentence structure variations. Extensive experiments on three datasets with identity-level annotations show that our framework outperforms state-of-the-art approaches by large margins.",
"title": ""
},
{
"docid": "13d5011f3d6c1997e3c44b3f03cf2017",
"text": "Reinforcement learning with appropriately designed reward signal could be used to solve many sequential learning problems. However, in practice, the reinforcement learning algorithms could be broken in unexpected, counterintuitive ways. One of the failure modes is reward hacking which usually happens when a reward function makes the agent obtain high return in an unexpected way. This unexpected way may subvert the designer’s intentions and lead to accidents during training. In this paper, a new multi-step state-action value algorithm is proposed to solve the problem of reward hacking. Unlike traditional algorithms, the proposed method uses a new return function, which alters the discount of future rewards and no longer stresses the immediate reward as the main influence when selecting the current state action. The performance of the proposed method is evaluated on two games, Mappy and Mountain Car. The empirical results demonstrate that the proposed method can alleviate the negative impact of reward hacking and greatly improve the performance of reinforcement learning algorithm. Moreover, the results illustrate that the proposed method could also be applied to the continuous state space problem successfully.",
"title": ""
},
{
"docid": "ebd72a597dba9a41dba5f3f0b4d1e6b9",
"text": "One may consider that drug-drug interactions (DDIs) associated with antacids is an obsolete topic because they are prescribed less frequently by medical professionals due to the advent of drugs that more effectively suppress gastric acidity (i.e. histamine H2-receptor antagonists [H2RAs] and proton pump inhibitors [PPIs]). Nevertheless, the use of antacids by ambulant patients may be ever increasing, because they are freely available as over-the-counter (OTC) drugs. Antacids consisting of weak basic substances coupled with polyvalent cations may alter the rate and/or the extent of absorption of concomitantly administered drugs via different mechanisms. Polyvalent cations in antacid formulations may form insoluble chelate complexes with drugs and substantially reduce their bioavailability. Clinical studies demonstrated that two classes of antibacterial s (tetracyclines and fluoroquinolones) are susceptible to clinically relevant DDIs with antacids through this mechanism. Countermeasures against this type of DDI include spacing out the dosing interval —taking antacid either 4 hours before or 2 hours after administration of these antibacterials. Bisphosphonates may be susceptible to DDIs with antacids by the same mechanism, as described in the prescription information of most bisphosphonates, but no quantitative data about the DDIs are available. For drugs with solubility critically dependent on pH, neutralization of gastric fluid by antacids may alter the dissolution of these drugs and the rate and/or extent of their absorption. However, the magnitude of DDIs elicited by antacids through this mechanism is less than that produced by H2RAs or PPIs; therefore, the clinical relevance of such DDIs is often obscure. Magnesium ions contained in some antacid formulas may increase gastric emptying, thereby accelerating the rate of absorption of some drugs. However, the clinical relevance of this is unclear in most cases because the difference in plasma drug concentration observed after dosing shortly disappears. Recent reports have indicated that some of the molecular-targeting agents such as the tyrosine kinase inhibitors dasatinib and imatinib, and the thrombopoietin receptor agonist eltrombopag may be susceptible to DDIs with antacids. Finally, the recent trend of developing OTC drugs as combination formulations of an antacid and an H2RA is a concern because these drugs will increase the risk of DDIs by dual mechanisms, i.e. a gastric pH-dependent mechanism by H2RAs and a cation-mediated chelation mechanism by antacids.",
"title": ""
},
{
"docid": "aa2df951eba502ec71eda401755d25a7",
"text": "A uni-planar backward directional couplers is analyzed and designed. Microstrip parallel coupled lines and asymmetrical delay lines make multi-sectioned coupler which enhances the directivity. In this paper, 20 dB multi-sectioned couplers with single and double delay lines are designed and measured to show the validation of analysis. The coupler with two delay lines has the directivity over 30 dB and the fractional bandwidth of 30% at the center frequency of 1.8 GHz.",
"title": ""
},
{
"docid": "576aa36956f37b491382b0bdd91f4bea",
"text": "The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",
"title": ""
},
{
"docid": "682f68ccb2a00b9c1ccc93caf587cb2d",
"text": "To evaluate the feasibility of coating formulated recombinant human erythropoietin alfa (EPO) on a titanium microneedle transdermal delivery system, ZP-EPO, and assess preclinical patch delivery performance. Formulation rheology and surface activity were assessed by viscometry and contact angle measurement. EPO liquid formulation was coated onto titanium microneedles by dip-coating and drying. Stability of coated EPO was assessed by SEC-HPLC, CZE and potency assay. Preclinical in vivo delivery and pharmacokinetic studies were conducted in rats with EPO-coated microneedle patches and compared to subcutaneous EPO injection. Studies demonstrated successful EPO formulation development and coating on microneedle arrays. ZP-EPO patch was stable at 25°C for at least 3 months with no significant change in % aggregates, isoforms, or potency. Preclinical studies in rats showed the ZP-EPO microneedle patches, coated with 750 IU to 22,000 IU, delivered with high efficiency (75–90%) with a linear dose response. PK profile was similar to subcutaneous injection of commercial EPO. Results suggest transdermal microneedle patch delivery of EPO is feasible and may offer an efficient, dose-adjustable, patient-friendly alternative to current intravenous or subcutaneous routes of administration.",
"title": ""
},
{
"docid": "f274322ad7eed4829945bc3d483ceecb",
"text": "In this paper, an observer problem from a computer vision application is studied. Rigid body pose estimation using inertial sensors and a monocular camera is considered and it is shown how rotation estimation can be decoupled from position estimation. Orientation estimation is formulated as an observer problem with implicit output where the states evolve on (3). A careful observability study reveals interesting group theoretic structures tied to the underlying system structure. A locally convergent observer where the states evolve on (3) is proposed and numerical estimates of the domain of attraction is given. Further, it is shown that, given convergent orientation estimates, position estimation can be formulated as a linear implicit output problem. From an applications perspective, it is outlined how delayed low bandwidth visual observations and high bandwidth rate gyro measurements can provide high bandwidth estimates. This is consistent with real-time constraints due to the complementary characteristics of the sensors which are fused in a multirate way.",
"title": ""
},
{
"docid": "5a4d8576222e8b704baaa1b67815ca01",
"text": "In evolutionary robotics, populations of robots are typically trained in simulation before one or more of them are instantiated as physical robots. However, in order to evolve robust behavior, each robot must be evaluated in multiple environments. If an environment is characterized by f free parameters, each of which can take one of np features, each robot must be evaluated in all np environments to ensure robustness. Here, we show that if the robots are constrained to have modular morphologies and controllers, they only need to be evaluated in np environments to reach the same level of robustness. This becomes possible because the robots evolve such that each module of the morphology allows the controller to independently recognize a familiar percept in the environment, and each percept corresponds to one of the environmental free parameters. When exposed to a new environment, the robot perceives it as a novel combination of familiar percepts which it can solve without requiring further training. A non-modular morphology and controller however perceives the same environment as a completely novel environment, requiring further training. This acceleration in evolvability – the rate of the evolution of adaptive and robust behavior – suggests that evolutionary robotics may become a scalable approach for automatically creating complex autonomous machines, if the evolution of neural and morphological modularity is taken into account.",
"title": ""
},
{
"docid": "be0806fc3f2f77642f72bbfdc8248f52",
"text": "Transparent electrodes with a dielectric-metal-dielectric (DMD) structure can be implemented in a simple manufacturing process and have good optical and electrical properties. In this study, nickel oxide (NiO) is introduced into the DMD structure as a more appropriate dielectric material that has a high conduction band for electron blocking and a low valence band for efficient hole transport. The indium-free NiO/Ag/NiO (NAN) transparent electrode exhibits an adjustable high transmittance of ∼82% combined with a low sheet resistance of ∼7.6 Ω·s·q(-1) and a work function of 5.3 eV after UVO treatment. The NAN electrode shows excellent surface morphology and good thermal, humidity, and environmental stabilities. Only a small change in sheet resistance can be found after NAN electrode is preserved in air for 1 year. The power conversion efficiencies of organic photovoltaic cells with NAN electrodes deposited on glass and polyethylene terephthalate (PET) substrates are 6.07 and 5.55%, respectively, which are competitive with those of indium tin oxide (ITO)-based devices. Good photoelectric properties, the low-cost material, and the room-temperature deposition process imply that NAN electrode is a striking candidate for low-cost and flexible transparent electrode for efficient flexible optoelectronic devices.",
"title": ""
},
{
"docid": "2ea86c4c0ed1b55166a5ee592f24aa95",
"text": "As a prospective candidate material for surface coating and repair applications, nickel-based superalloy Inconel 718 (IN718) was deposited on American Iron and Steel Institute (AISI) 4140 alloy steel substrate by laser engineered net shaping (LENS) to investigate the compatibility between two dissimilar materials with a focus on interface bonding and fracture behavior of the hybrid specimens. The results show that the interface between the two dissimilar materials exhibits good metallurgical bonding. Through the tensile test, all the fractures occurred in the as-deposited IN718 section rather than the interface or the substrate, implying that the as-deposited interlayer bond strength is weaker than the interfacial bond strength. From the fractography using scanning electron microscopy (SEM) and energy disperse X-ray spectrometry (EDS), three major factors affecting the tensile fracture failure of the as-deposited part are (i) metallurgical defects such as incompletely melted powder particles, lack-of-fusion porosity, and micropores; (ii) elemental segregation and Laves phase, and (iii) oxide formation. The fracture failure mechanism is a combination of all these factors which are detrimental to the mechanical properties and structural integrity by causing premature fracture failure of the as-deposited IN718.",
"title": ""
},
{
"docid": "a651ae33adce719033dad26b641e6086",
"text": "Knowledge base(KB) plays an important role in artificial intelligence. Much effort has been taken to both manually and automatically construct web-scale knowledge bases. Comparing with manually constructed KBs, automatically constructed KB is broader but with more noises. In this paper, we study the problem of improving the quality for automatically constructed web-scale knowledge bases, in particular, lexical taxonomies of isA relationships. We find that these taxonomies usually contain cycles, which are often introduced by incorrect isA relations. Inspired by this observation, we introduce two kinds of models to detect incorrect isA relations from cycles. The first one eliminates cycles by extracting directed acyclic graphs, and the other one eliminates cycles by grouping nodes into different levels. We implement our models on Probase, a state-of-the-art, automatically constructed, web-scale taxonomy. After processing tens of millions of relations, our models eliminate 74 thousand wrong relations with 91% accuracy.",
"title": ""
},
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "1a9d595aaff44165fd486b97025ca36d",
"text": "1389-1286/$ see front matter 2008 Elsevier B.V doi:10.1016/j.comnet.2008.09.022 * Corresponding author. Tel.: +1 413 545 4465. E-mail address: [email protected] (M. Zink). 1 http://www.usatoday.com/tech/news/2006-07 x.htm http://en.wikipedia.org/wiki/YouTube. User-Generated Content has become very popular since new web services such as YouTube allow for the distribution of user-produced media content. YouTube-like services are different from existing traditional VoD services in that the service provider has only limited control over the creation of new content. We analyze how content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. Based on these measurements, we analyzed the duration and the data rate of streaming sessions, the popularity of videos, and access patterns for video clips from the clients in the campus network. The analysis of the traffic shows that trace statistics are relatively stable over short-term periods while long-term trends can be observed. We demonstrate how synthetic traces can be generated from the measured traces and show how these synthetic traces can be used as inputs to trace-driven simulations. We also analyze the benefits of alternative distribution infrastructures to improve the performance of a YouTube-like VoD service. The results of these simulations show that P2P-based distribution and proxy caching can reduce network traffic significantly and allow for faster access to video clips. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cff459bd217bdbecefeceb70e3be5065",
"text": "In this article we present FLUX-CiM, a novel method for extracting components (e.g., author names, article titles, venues, page numbers) from bibliographic citations. Our method does not rely on patterns encoding specific delimiters used in a particular citation style.This feature yields a high degree of automation and flexibility, and allows FLUX-CiM to extract from citations in any given format. Differently from previous methods that are based on models learned from user-driven training, our method relies on a knowledge base automatically constructed from an existing set of sample metadata records from a given field (e.g., computer science, health sciences, social sciences, etc.). These records are usually available on the Web or other public data repositories. To demonstrate the effectiveness and applicability of our proposed method, we present a series of experiments in which we apply it to extract bibliographic data from citations in articles of different fields. Results of these experiments exhibit precision and recall levels above 94% for all fields, and perfect extraction for the large majority of citations tested. In addition, in a comparison against a stateof-the-art information-extraction method, ours produced superior results without the training phase required by that method. Finally, we present a strategy for using bibliographic data resulting from the extraction process with FLUX-CiM to automatically update and expand the knowledge base of a given domain. We show that this strategy can be used to achieve good extraction results even if only a very small initial sample of bibliographic records is available for building the knowledge base.",
"title": ""
},
{
"docid": "b41b14ed0091a06072629be78bec090b",
"text": "The 2-D orthogonal wavelet transform decomposes images into both spatial and spectrally local coefficients. The transformed coefficients were coded hierarchically and individually quantized in accordance with the local estimated noise sensitivity of the human visual system (HVS). The algorithm can be mapped easily onto VLSI. For the Miss America and Lena monochrome images, the technique gave high to acceptable quality reconstruction at compression ratios of 0.3-0.2 and 0.64-0.43 bits per pixel (bpp), respectively.",
"title": ""
},
{
"docid": "93afa2c0b51a9d38e79e033762335df9",
"text": "With explosive growth of data volume and ever-increasing diversity of data modalities, cross-modal similarity search, which conducts nearest neighbor search across different modalities, has been attracting increasing interest. This paper presents a deep compact code learning solution for efficient cross-modal similarity search. Many recent studies have proven that quantization-based approaches perform generally better than hashing-based approaches on single-modal similarity search. In this paper, we propose a deep quantization approach, which is among the early attempts of leveraging deep neural networks into quantization-based cross-modal similarity search. Our approach, dubbed shared predictive deep quantization (SPDQ), explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities, and representations in the shared subspace and the private subspaces are learned simultaneously by embedding them to a reproducing kernel Hilbert space, where the mean embedding of different modality distributions can be explicitly compared. In addition, in the shared subspace, a quantizer is learned to produce the semantics preserving compact codes with the help of label alignment. Thanks to this novel network architecture in cooperation with supervised quantization training, SPDQ can preserve intramodal and intermodal similarities as much as possible and greatly reduce quantization error. Experiments on two popular benchmarks corroborate that our approach outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "adc587c3400cdf927c433e9d0f929894",
"text": "With continuous increase in urban population, the need to plan and implement smart cities based solutions for better urban governance is becoming more evident. These solutions are driven, on the one hand, by innovations in ICT and, on the other hand, to increase the capability and capacity of cities to mitigate environmental, social inclusion, economic growth and sustainable development challenges. In this respect, citizens' science or public participation provides a key input for informed and intelligent planning decision and policy making. However, the challenge here is to facilitate public in acquiring the right contextual information in order to be more productive, innovative and be able to make appropriate decisions which impact on their well being, in particular, and economic and environmental sustainability in general. Such a challenge requires contemporary ICT solutions, such as using Cloud computing, capable of storing and processing significant amount of data and produce intelligent contextual information. However, processing and visualising contextual information in a Cloud environment is not straightforward due to user profiling and contextual segregation of data that could be used in different applications of a smart city. In this regard, we present a Cloud-based architecture for context-aware citizen services for smart cities and walkthrough it using a hypothetical case study.",
"title": ""
}
] |
scidocsrr
|
d5b6f820e0bffde531cef33696f89fc6
|
Machine-guided Solution to Mathematical Word Problems
|
[
{
"docid": "c698f7d6b487cc7c87d7ff215d7f12b2",
"text": "This paper reports a controlled study with statistical signi cance tests on ve text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classi er, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a Naive Bayes (NB) classier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as function of the training-set category frequency. Our results show that SVM, kNN and LLSF signi cantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are su ciently common (over 300 instances).",
"title": ""
}
] |
[
{
"docid": "fb70de7ed3e42c37b130686bfa3aee47",
"text": "Data from vehicles instrumented with GPS or other localization technologies are increasingly becoming widely available due to the investments in Connected and Automated Vehicles (CAVs) and the prevalence of personal mobile devices such as smartphones. Tracking or trajectory data from these probe vehicles are already being used in practice for travel time or speed estimation and for monitoring network conditions. However, there has been limited work on extracting other critical traffic flow variables, in particular density and flow, from probe data. This paper presents a microscopic approach (akin to car-following) for inferring the number of unobserved vehicles in between a set of probe vehicles in the traffic stream. In particular, we develop algorithms to extract and exploit the somewhat regular patterns in the trajectories when the probe vehicles travel through stop-and-go waves in congested traffic. Using certain critical points of trajectories as the input, the number of unobserved vehicles between consecutive probes are then estimated through a Naïve Bayes model. The parameters needed for the Naïve Bayes include means and standard deviations for the probability density functions (pdfs) for the distance headways between vehicles. These parameters are estimated through supervised as well as unsupervised learning methods. The proposed ideas are tested based on the trajectory data collected from US 101 and I-80 in California for the FHWA's NGSIM (next generation simulation) project. Under the dense traffic conditions analyzed, the results show that the number of unobserved vehicles between two probes can be predicted with an accuracy of ±1 vehicle almost always.",
"title": ""
},
{
"docid": "8a224bf0376321caa30a95318ec9ecf9",
"text": "With the rapid development of very large scale integration (VLSI) and continuous scaling in the metal oxide semiconductor field effect transistor (MOSFET), pad corrosion in the aluminum (Al) pad surface has become practical concern in the semiconductor industry. This paper presents a new method to improve the pad corrosion on Al pad surface by using new Al/Ti/TiN film stack. The effects of different Al film stacks on the Al pad corrosion have been investigated. The experiment results show that the Al/Ti/TiN film stack could improve bond pad corrosion effectively comparing to Al/SiON film stack. Wafers processed with new Al film stack were stored up to 28 days and display no pad crystal (PDCY) defects on bond pad surfaces.",
"title": ""
},
{
"docid": "8f6806ba2f75e3671efa2aa390d79b40",
"text": "Applying amendments to multi-element contaminated soils can have contradictory effects on the mobility, bioavailability and toxicity of specific elements, depending on the amendment. Trace elements and PAHs were monitored in a contaminated soil amended with biochar and greenwaste compost over 60 days field exposure, after which phytotoxicity was assessed by a simple bio-indicator test. Copper and As concentrations in soil pore water increased more than 30 fold after adding both amendments, associated with significant increases in dissolved organic carbon and pH, whereas Zn and Cd significantly decreased. Biochar was most effective, resulting in a 10 fold decrease of Cd in pore water and a resultant reduction in phytotoxicity. Concentrations of PAHs were also reduced by biochar, with greater than 50% decreases of the heavier, more toxicologically relevant PAHs. The results highlight the potential of biochar for contaminated land remediation.",
"title": ""
},
{
"docid": "1f24bb842dacf71c9cde6ab66abd1de8",
"text": "An appropriate aging description from face image is the prime influential factor in human age recognition, but still there is an absence of a specially engineered aging descriptor, which can characterize discernible facial aging cues (e.g., craniofacial growth, skin aging) from a detailed and more finer point of view. To address this issue, we propose a local face descriptor, directional age-primitive pattern (DAPP), which inherits discernible aging cue information and is functionally more robust and discriminative than existing local descriptors. We introduce three attributes for coding the DAPP description. First, we introduce Age-Primitives encoding aging related to the most crucial texture primitives, yielding a reasonable and clear aging definition. Second, we introduce an encoding concept dubbed as Latent Secondary Direction, which preserves compact structural information in the code avoiding uncertain codes. Third, a globally adaptive thresholding mechanism is initiated to facilitate more discrimination in a flat and textured region. We apply DAPP on separate age group recognition and age estimation tasks. Applying the same approach to both of these tasks is seldom explored in the literature. Carefully conducted experiments show that the proposed DAPP description outperforms the existing approaches by an acceptable margin.",
"title": ""
},
{
"docid": "07657456a2328be11dfaf706b5728ddc",
"text": "Knowledge of wheelchair kinematics during a match is prerequisite for performance improvement in wheelchair basketball. Unfortunately, no measurement system providing key kinematic outcomes proved to be reliable in competition. In this study, the reliability of estimated wheelchair kinematics based on a three inertial measurement unit (IMU) configuration was assessed in wheelchair basketball match-like conditions. Twenty participants performed a series of tests reflecting different motion aspects of wheelchair basketball. During the tests wheelchair kinematics were simultaneously measured using IMUs on wheels and frame, and a 24-camera optical motion analysis system serving as gold standard. Results showed only small deviations of the IMU method compared to the gold standard, once a newly developed skid correction algorithm was applied. Calculated Root Mean Square Errors (RMSE) showed good estimates for frame displacement (RMSE≤0.05 m) and speed (RMSE≤0.1m/s), except for three truly vigorous tests. Estimates of frame rotation in the horizontal plane (RMSE<3°) and rotational speed (RMSE<7°/s) were very accurate. Differences in calculated Instantaneous Rotation Centres (IRC) were small, but somewhat larger in tests performed at high speed (RMSE up to 0.19 m). Average test outcomes for linear speed (ICCs>0.90), rotational speed (ICC>0.99) and IRC (ICC> 0.90) showed high correlations between IMU data and gold standard. IMU based estimation of wheelchair kinematics provided reliable results, except for brief moments of wheel skidding in truly vigorous tests. The IMU method is believed to enable prospective research in wheelchair basketball match conditions and contribute to individual support of athletes in everyday sports practice.",
"title": ""
},
{
"docid": "2390d3d6c51c4a6857c517eb2c2cb3c0",
"text": "It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.",
"title": ""
},
{
"docid": "c75388c19397bf1e743970cb32649b17",
"text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.",
"title": ""
},
{
"docid": "01472364545392cad69b9c7e1f65f4bb",
"text": "The designing of power transmission network is a difficult task due to the complexity of power system. Due to complexity in the power system there is always a loss of the stability due to the fault. Whenever a fault is intercepted in system, the whole system goes to severe transients. These transients cause oscillation in phase angle which leads poor power quality. The nature of oscillation is increasing instead being sustained, which leads system failure in form of generator damage. To reduce and eliminate the unstable oscillations one needs to use a stabilizer which can generate a perfect compensatory signal in order to minimize the harmonics generated due to instability. This paper presents a Power System stabilizer to reduce oscillations due to small signal disturbance. Additionally, a hybrid approach is proposed using FOPID stabilizer with the PSS connected SMIB. Genetic algorithm (GA), Particle swarm optimization (PSO) and Grey Wolf Optimization (GWO) are used for the parameter tuning of the stabilizer. Reason behind the use of GA, PSO and GWO instead of conventional methods is that it search the parameter heuristically, which leads better results. The efficiency of proposed approach is observed by rotor angle and power angle deviations in the SMIB system.",
"title": ""
},
{
"docid": "838c7e972b906580162d72929e18e87a",
"text": "There is considerable controversy about the role of child sexual abuse in the etiology of anxiety disorders. Although a growing number of research studies have been published, these have produced inconsistent results and conclusions regarding the nature of the associations between child sexual abuse and the various forms of anxiety problems as well as the potential effects of third variables, such as moderators, mediators, or confounders. This article provides a systematic review of the several reviews that have investigated the literature on the role of child sexual abuse in the etiology of anxiety disorders. Seven databases were searched, supplemented with hand search of reference lists from retrieved papers. Four meta-analyses, including 3,214,482 subjects from 171 studies, were analyzed. There is evidence that child sexual abuse is a significant, although general and nonspecific, risk factor for anxiety disorders, especially posttraumatic stress disorder, regardless of gender of the victim and severity of abuse. Additional biological or psychosocial risk factors (such as alterations in brain structure or function, information processing biases, parental anxiety disorders, family dysfunction, and other forms of child abuse) may interact with child sexual abuse or act independently to cause anxiety disorders in victims in abuse survivors. However, child sexual abuse may sometimes confer additional risk of developing anxiety disorders either as a distal and indirect cause or as a proximal and direct cause. Child sexual abuse should be considered one of the several risk factors for anxiety disorders and included in multifactorial etiological models for anxiety disorders.",
"title": ""
},
{
"docid": "f114e788557e8d734bd2a04a5b789208",
"text": "Adaptive content delivery is the state of the art in real-time multimedia streaming. Leading streaming approaches, e.g., MPEG-DASH and Apple HTTP Live Streaming (HLS), have been developed for classical IP-based networks, providing effective streaming by means of pure client-based control and adaptation. However, the research activities of the Future Internet community adopt a new course that is different from today's host-based communication model. So-called information-centric networks are of considerable interest and are advertised as enablers for intelligent networks, where effective content delivery is to be provided as an inherent network feature. This paper investigates the performance gap between pure client-driven adaptation and the theoretical optimum in the promising Future Internet architecture named data networking (NDN). The theoretical optimum is derived by modeling multimedia streaming in NDN as a fractional multi-commodity flow problem and by extending it taking caching into account. We investigate the multimedia streaming performance under different forwarding strategies, exposing the interplay of forwarding strategies and adaptation mechanisms. Furthermore, we examine the influence of network inherent caching on the streaming performance by varying the caching polices and the cache sizes.",
"title": ""
},
{
"docid": "631e3583e00e5163b8e9bcd134aca098",
"text": "Orchid species have a largest families among the botanical plant. Basically, a species of orchid are visually recognizing from its color, root, petal shape or even the size. However, there are several orchid species that really look alike and the type could be falsely classified. The aim of this paper is to classify two species of orchids which are physically look identical, i.e. Dendrobium Madame Pampadour and Dendrobium Cqompactum using image processing techniques. Using Neural Network, the classification rate is 85.7%.",
"title": ""
},
{
"docid": "0c805b994e89c878a62f2e1066b0a8e7",
"text": "3D spatial data modeling is one of the key research problems in 3D GIS. More and more applications depend on these 3D spatial data. Mostly, these data are stored in Geo-DBMSs. However, recent Geo-DBMSs do not support 3D primitives modeling, it only able to describe a single-attribute of the third-dimension, i.e. modeling 2.5D datasets that used 2D primitives (plus a single z-coordinate) such as polygons in 3D space. This research focuses on 3D topological model based on space partition for 3D GIS, for instance, 3D polygons or tetrahedron form a solid3D object. Firstly, this report discusses formal definitions of 3D spatial objects, and then all the properties of each object primitives will be elaborated in detailed. The author also discusses methods for constructing the topological properties to support object semantics is introduced. The formal framework to describe the spatial model, database using Oracle Spatial is also given in this report. All related topological structures that forms the object features are discussed in detail. All related features are tested using real 3D spatial dataset of 3D building. Finally, the report concludes the experiment via visualization of using AutoDesk Map 3D.",
"title": ""
},
{
"docid": "39fc05dfc0faeb47728b31b6053c040a",
"text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.",
"title": ""
},
{
"docid": "4a9debbbe5b21adcdb50bfdc0c81873c",
"text": "Stealth Dicing (SD) technology has high potential to replace the conventional blade sawing and laser grooving. The dicing method has been widely researched since 2005 [1-3] especially for thin wafer (⇐ 12 mils). SD cutting has good quality because it has dry process during laser cutting, extremely narrow scribe line and multi-die sawing capability. However, along with complicated package technology, the chip quality demands fine and accurate pitch which conventional blade saw is impossible to achieve. This paper is intended as an investigation in high performance SD sawing, including multi-pattern wafer and DAF dicing tape capability. With the improvement of low-K substrate technology and min chip scale size, SD cutting is more important than other methods used before. Such sawing quality also occurs in wafer level chip scale package. With low-K substrate and small package, the SD cutting method can cut the narrow scribe line easily (15 um), which can lead the WLCSP to achieve more complicated packing method successfully.",
"title": ""
},
{
"docid": "926734e0a379f678740d07c1042a5339",
"text": "The increasing pervasiveness of digital technologies, also refered to as \"Internet of Things\" (IoT), offers a wealth of business model opportunities, which often involve an ecosystem of partners. In this context, companies are required to look at business models beyond a firm-centric lens and respond to changed dynamics. However, extant literature has not yet provided actionable approaches for business models for IoT-driven environments. Our research therefore addresses the need for a business model framework that captures the specifics of IoT-driven ecosystems. Applying an iterative design science research approach, the present paper describes (a) the methodology, (b) the requirements, (c) the design and (d) the evaluation of a business model framework that enables researchers and practitioners to visualize, analyze and design business models in the IoT context in a structured and actionable way. The identified dimensions in the framework include the value network of collaborating partners (who); sources of value creation (where); benefits from collaboration (why). Evidence from action research and multiple case studies indicates that the framework is able to depict business models in IoT.",
"title": ""
},
{
"docid": "adf7bde558a5e29829cc034ac93184bb",
"text": "CMOS technology scaling has opened a pathway to high-performance analog-to-digital conversion in the nanometer regime, where switching is preferred over amplifying. Successive-approximation-register (SAR) is one of the conversion architectures that rely on the high switching speed of process technology, and is thus distinctively known for its superior energy efficiency, small chip area, and good digital compatibility. When properly implemented, a SAR ADC also benefits from a potential rail-to-rail input swing, 100% capacitance utilization during input sampling (thus low kT/C noise), and insensitivity to comparator offsets during the conversion process. The linearity-limiting factors for SAR ADC are capacitor mismatch, sampling switch non-idealities, as well as the reference voltage settling issue due to the high internal switching speed of the DAC. In this work, a sub-radix-2 SAR ADC is presented, which employs a perturbation-based digital background calibration scheme and a dynamic-threshold-comparison (DTC) technique to overcome some of these performance-limiting factors.",
"title": ""
},
{
"docid": "5016c58426d6c9c632c8c5abf8a4c4e4",
"text": "We propose a deep neural network for the segmentation of the prostate in MRI images. The segmentation is performed using a residual fully convolutional neural network. Automatic shape learning is allowed using a Compositional Pattern-Producing Network. Moreover, a multi-pass architecture is designed to foster self-consistent segmentation. The model is trained and tested on the dataset of the challenge PROMISE12.",
"title": ""
},
{
"docid": "9489643bf8bfa3659b9b09f5716a7d3b",
"text": "We show that the gradient descent algorithm provides an implicit regularization effect in the learning of over-parameterized matrix factorization models and one-hidden-layer neural networks with quadratic activations. Concretely, we show that given Õ(dr) random linear measurements of a rank r positive semidefinite matrix X, we can recover X by parameterizing it by UU> with U ∈ Rd×d and minimizing the squared loss, even if r d. We prove that starting from a small initialization, gradient descent recovers X in Õ( √ r) iterations approximately. The results solve the conjecture of Gunasekar et al. Gunasekar et al. (2017) under the restricted isometry property. The technique can be applied to analyzing neural networks with one-hidden-layer quadratic activations with some technical modifications.",
"title": ""
},
{
"docid": "57c66291a54ae565e087ffe2ee0d6b7b",
"text": "We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. We focus on document networks, where the attributes of each document are its words, i.e., discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. We derive efficient inference and estimation algorithms based on variational methods that take advantage of sparsity and scale with the number of links. We evaluate the predictive performance of the RTM for large networks of scientific abstracts, web documents, and geographically tagged news.",
"title": ""
},
{
"docid": "e2b95200b6da4d2ff8c69b55f023638e",
"text": "Phishing is the third cyber-security threat globally and the first cyber-security threat in China. There were 61.69 million phishing victims in China alone from June 2011 to June 2012, with the total annual monetary loss more than 4.64 billion US dollars. These phishing attacks were highly concentrated in targeting at a few major Websites. Many phishing Webpages had a very short life span. In this paper, we assume the Websites to protect against phishing attacks are known, and study the effectiveness of machine learning based phishing detection using only lexical and domain features, which are available even when the phishing Webpages are inaccessible. We propose several novel highly effective features, and use the real phishing attack data against Taobao and Tencent, two main phishing targets in China, in studying the effectiveness of each feature, and each group of features. We then select an optimal set of features in our phishing detector, which has achieved a detection rate better than 98%, with a false positive rate of 0.64% or less. The detector is still effective when the distribution of phishing URLs changes.",
"title": ""
}
] |
scidocsrr
|
cfc1b464f06543d91ef1fddb82070235
|
A comparative study of curvature scale space and Fourier descriptors for shape-based image retrieval
|
[
{
"docid": "b02d9621ee919bccde66418e0681d1e6",
"text": "A great deal of work has been done on the evaluation of information retrieval systems for alphanumeric data. The same thing can not be said about the newly emerging multimedia and image database systems. One of the central concerns in these systems is the automatic characterization of image content and retrieval of images based on similarity of image content. In this paper, we discuss effectiveness of several shape measures for content based similarity retrieval of images. The different shape measures we have implemented include outline based features (chain code based string features, Fourier descriptors, UNL Fourier features), region based features (invariant moments, Zemike moments, pseudoZemike moments), and combined features (invariant moments & Fourier descriptors, invariant moments & UNL Fourier features). Given an image, all these shape feature measures (vectors) are computed automatically, and the feature vector can either be used for the retrieval purpose or can be stored in the database for future queries. We have tested all of the above shape features for image retrieval on a database of 500 trademark images. The average retrieval efficiency values computed over a set of fifteen representative queries for all the methods is presented. The output of a sample shape similarity query using all the features is also shown.",
"title": ""
}
] |
[
{
"docid": "9ee2081e014e2cde151e03a554e09c8e",
"text": "The emerging network slicing paradigm for 5G provides new business opportunities by enabling multi-tenancy support. At the same time, new technical challenges are introduced, as novel resource allocation algorithms are required to accommodate different business models. In particular, infrastructure providers need to implement radically new admission control policies to decide on network slices requests depending on their Service Level Agreements (SLA). When implementing such admission control policies, infrastructure providers may apply forecasting techniques in order to adjust the allocated slice resources so as to optimize the network utilization while meeting network slices' SLAs. This paper focuses on the design of three key network slicing building blocks responsible for (i) traffic analysis and prediction per network slice, (ii) admission control decisions for network slice requests, and (iii) adaptive correction of the forecasted load based on measured deviations. Our results show very substantial potential gains in terms of system utilization as well as a trade-off between conservative forecasting configurations versus more aggressive ones (higher gains, SLA risk).",
"title": ""
},
{
"docid": "ae0474dc41871a28cc3b62dfd672ad0a",
"text": "Recent success in deep learning has generated immense interest among practitioners and students, inspiring many to learn about this new technology. While visual and interactive approaches have been successfully developed to help people more easily learn deep learning, most existing tools focus on simpler models. In this work, we present GAN Lab, the first interactive visualization tool designed for non-experts to learn and experiment with Generative Adversarial Networks (GANs), a popular class of complex deep learning models. With GAN Lab, users can interactively train generative models and visualize the dynamic training process's intermediate results. GAN Lab tightly integrates an model overview graph that summarizes GAN's structure, and a layered distributions view that helps users interpret the interplay between submodels. GAN Lab introduces new interactive experimentation features for learning complex deep learning models, such as step-by-step training at multiple levels of abstraction for understanding intricate training dynamics. Implemented using TensorFlow.js, GAN Lab is accessible to anyone via modern web browsers, without the need for installation or specialized hardware, overcoming a major practical challenge in deploying interactive tools for deep learning.",
"title": ""
},
{
"docid": "eeed4c3f13f50f269bcfd51d2157f5a6",
"text": "DRAM energy is an important component to optimize in modern computing systems. One outstanding source of DRAM energy is the energy to fetch data stored on cells to the row buffer, which occurs during two DRAM operations, row activate and refresh. This work exploits previously proposed half page row access, modifying the wordline connections within a bank to halve the number of cells fetched to the row buffer, to save energy in both cases. To accomplish this, we first change the data wire connections in the sub-array to reduce the cost of row buffer overfetch in multi-core systems which yields a 12% energy savings on average and a slight performance improvement in quad-core systems. We also propose charge recycling refresh, which reuses charge left over from a prior half page refresh to refresh another half page. Our charge recycling scheme is capable of reducing both auto- and self-refresh energy, saving more than 15% of refresh energy at 85°C, and provides even shorter refresh cycle time. Finally, we propose a refresh scheduling scheme that can dynamically adjust the number of charge recycled half pages, which can save up to 30% of refresh energy at 85°C.",
"title": ""
},
{
"docid": "2beabe7d2756fea530172943b9e374e7",
"text": "Migraine treatment has evolved from the realms of the supernatural into the scientific arena, but it seems still controversial whether migraine is primarily a vascular or a neurological dysfunction. Irrespective of this controversy, the levels of serotonin (5-hydroxytryptamine; 5-HT), a vasoconstrictor and a central neurotransmitter, seem to decrease during migraine (with associated carotid vasodilatation) whereas an i.v. infusion of 5-HT can abort migraine. In fact, 5-HT as well as ergotamine, dihydroergotamine and other antimigraine agents invariably produce vasoconstriction in the external carotid circulation. The last decade has witnessed the advent of sumatriptan and second generation triptans (e.g. zolmitriptan, rizatriptan, naratriptan), which belong to a new class of drugs, now known as 5-HT1B/1D/1F receptor agonists. Compared to sumatriptan, the second-generation triptans have a higher oral bioavailability and longer plasma half-life. In line with the vascular and neurogenic theories of migraine, all triptans produce selective carotid vasoconstriction (via 5-HT1B receptors) and presynaptic inhibition of the trigeminovascular inflammatory responses implicated in migraine (via 5-HT1D/5-ht1F receptors). Moreover, selective agonists at 5-HT1D (PNU-142633) and 5-ht1F (LY344864) receptors inhibit the trigeminovascular system without producing vasoconstriction. Nevertheless, PNU-142633 proved to be ineffective in the acute treatment of migraine, whilst LY344864 did show some efficacy when used in doses which interact with 5-HT1B receptors. Finally, although the triptans are effective antimigraine agents producing selective cranial vasoconstriction, efforts are being made to develop other effective antimigraine alternatives acting via the direct blockade of vasodilator mechanisms (e.g. antagonists at CGRP receptors, antagonists at 5-HT7 receptors, inhibitors of nitric oxide biosynthesis, etc). These alternatives will hopefully lead to fewer side-effects.",
"title": ""
},
{
"docid": "4a3328f1220a9e0a732e959337253600",
"text": "In order to manage the traffic at the intersection, the traffic lights are often used. These lights are turned on and off at the predetermined time. Intelligent traffic control systems are designed to dynamically treat the problem of traffic and reduce traffic, pollution and transit time of vehicles at the intersection. In this paper we present a design of intelligent traffic control based on fuzzy logic. The input parameters for intelligent controller are selected with the various modes of intersections to be a true simulation of the intersection environment. In order to facilitate the hardware implementation and increasing the computational speed decision algorithm state machine in this system is written in Verilog language and has capability to implement in the Field Programmable Gate Array (FPGA). The simulation results show that the proposed traffic signal controller system (that has been) designed with fuzzy logic has better performance than other designed systems.",
"title": ""
},
{
"docid": "67269d2f4cc4b4ac07c855e3dfaca4ca",
"text": "Electronic textiles, or e-textiles, are an increasingly important part of wearable computing, helping to make pervasive devices truly wearable. These soft, fabric-based computers can function as lovely embodiments of Mark Weiser's vision of ubiquitous computing: providing useful functionality while disappearing discreetly into the fabric of our clothing. E-textiles also give new, expressive materials to fashion designers, textile designers, and artists, and garments stemming from these disciplines usually employ technology in visible and dramatic style. Integrating computer science, electrical engineering, textile design, and fashion design, e-textiles cross unusual boundaries, appeal to a broad spectrum of people, and provide novel opportunities for creative experimentation both in engineering and design. Moreover, e-textiles are cutting- edge technologies that capture people's imagination in unusual ways. (What other emerging pervasive technology has Vogue magazine featured?) Our work aims to capitalize on these unique features by providing a toolkit that empowers novices to design, engineer, and build their own e-textiles.",
"title": ""
},
{
"docid": "6005ebbe5848655fda5127f555f70764",
"text": "The ability to record and replay program execution helps significantly in debugging non-deterministic MPI applications by reproducing message-receive orders. However, the large amount of data that traditional record-and-reply techniques record precludes its practical applicability to massively parallel applications. In this paper, we propose a new compression algorithm, Clock Delta Compression (CDC), for scalable record and replay of non-deterministic MPI applications. CDC defines a reference order of message receives based on a totally ordered relation using Lamport clocks, and only records the differences between this reference logical-clock order and an observed order. Our evaluation shows that CDC significantly reduces the record data size. For example, when we apply CDC to Monte Carlo particle transport Benchmark (MCB), which represents common non-deterministic communication patterns, CDC reduces the record size by approximately two orders of magnitude compared to traditional techniques and incurs between 13.1% and 25.5% of runtime overhead.",
"title": ""
},
{
"docid": "41f0cea988e24716be77d84ea7bd5c45",
"text": "Over the last 25 years a lot of work has been undertaken on constructing continuum models for segregation of particles of different sizes. We focus on one model that is designed to predict segregation and remixing of two differently sized particle species. This model contains two dimensionless parameters, which in general depend on both the flow and particle properties. One of the weaknesses of the model is that these dependencies are not predicted; these have to be determined by either experiments or simulations. We present steady-state simulations using the discrete particle method (DPM) for bi-disperse systems with different size ratios. The aim is to determine one parameter in the continuum model, i.e., the segregation Péclet number (ratio of the segregation velocity to diffusion) as a function of the particle size ratio. Reasonable agreement is found; but, also measurable discrepancies are reported;",
"title": ""
},
{
"docid": "e4ed62511669cb333b1ab97d095fda46",
"text": "This paper reports a four-element array tag antenna close to a human body for UHF Radio frequency identification (RFID) applications. The four-element array is based on PIFA grounded by vias, which can enhance the directive gain. The array antenna is fed by a four-port microstrip-line power divider. The input impedance of the power divider is designed to match with that of a Monza® 4 microchip. The parametric analysis of conjugate matching was performed and prototypes were fabricated to verify the simulated results. Experimental tests show that the maximum reading range achieved by an RFID tag equipped with the array antenna achieves about 3.9 m when the tag was mounted on a human body.",
"title": ""
},
{
"docid": "641a98a0f0b1ac4d382379271dedfbef",
"text": "The image captured in water is hazy due to the several effects of the underwater medium. These effects are governed by the suspended particles that lead to absorption and scattering of light during image formation process. The underwater medium is not friendly for imaging data and brings low contrast and fade color issues. Therefore, during any image based exploration and inspection activity, it is essential to enhance the imaging data before going for further processing. This paper presents a wavelet-based fusion method to enhance the hazy underwater images by addressing the low contrast and color alteration issues. The publicly available hazy underwater images are enhanced and analyzed qualitatively with some state of the art methods. The quantitative study of image quality depicts promising results.",
"title": ""
},
{
"docid": "7c47eaa26fb5d661c056cff84b485e99",
"text": "The comparison of methods experiment is important part in process of analytical methods and instruments validation. Passing and Bablok regression analysis is a statistical procedure that allows valuable estimation of analytical methods agreement and possible systematic bias between them. It is robust, non-parametric, non sensitive to distribution of errors and data outliers. Assumptions for proper application of Passing and Bablok regression are continuously distributed data and linear relationship between data measured by two analytical methods. Results are presented with scatter diagram and regression line, and regression equation where intercept represents constant and slope proportional measurement error. Confidence intervals of 95% of intercept and slope explain if their value differ from value zero (intercept) and value one (slope) only by chance, allowing conclusion of method agreement and correction action if necessary. Residual plot revealed outliers and identify possible non-linearity. Furthermore, cumulative sum linearity test is performed to investigate possible significant deviation from linearity between two sets of data. Non linear samples are not suitable for concluding on method agreement.",
"title": ""
},
{
"docid": "196868f85571b16815127d2bd87b98ff",
"text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "f810dbe1e656fe984b4b6498c1c27bcb",
"text": "Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it involves only continuous optimization of model parameters, which is substantially simpler than discrete optimization of cluster assignments. However, existing methods still involve nonconvex optimization problems, and therefore finding a good local optimal solution is not straightforward in practice. In this letter, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.",
"title": ""
},
{
"docid": "0512987d091d29681eb8ba38a1079cff",
"text": "Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a-posterior error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.",
"title": ""
},
{
"docid": "3cc5648cab5d732d3d30bd95d9d06c00",
"text": "We are concerned with the utility of social laws in a computational environment laws which guarantee the successful coexistence of multi ple programs and programmers In this paper we are interested in the o line design of social laws where we as designers must decide ahead of time on useful social laws In the rst part of this paper we sug gest the use of social laws in the domain of mobile robots and prove analytic results about the usefulness of this approach in that setting In the second part of this paper we present a general model of social law in a computational system and investigate some of its proper ties This includes a de nition of the basic computational problem involved with the design of multi agent systems and an investigation of the automatic synthesis of useful social laws in the framework of a model which refers explicitly to social laws This work was supported in part by a grant from the US Israel Binational Science Foundation",
"title": ""
},
{
"docid": "07c7a0b55e2f44c8688eb1c5f752bf63",
"text": "Usually occurring on the mid-face, especially on the nose, trichostasis spinulosa occurs more commonly in young, adult black women. The lesions of trichostasis spinulosa resemble open comedones (blackheads). It may be treated with tweezing, dipilatory wax, and topical retinoic acid.",
"title": ""
},
{
"docid": "1c5591bec1b8bfab63309aa2eb488e83",
"text": "When performing visualization and classification, people often confront the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques. However, when Isomap is applied to real-world data, it shows some limitations, such as being sensitive to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction. Such a kind of procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. The dissimilarity has several good properties which help to discover the true neighborhood of the data and, thus, makes S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso. The results show that S-Isomap performs the best. In the classification experiments, S-Isomap is used as a preprocess of classification and compared with Isomap, WeightedIso, as well as some other well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap excels compared to Isomap and WeightedIso in classification, and it is highly competitive with those well-known classification methods.",
"title": ""
},
{
"docid": "2aca7cd7a01cec2cc0824a65e68d4937",
"text": "This paper addresses the problem of finite-time H∞ filtering for one family of singular stochastic systems with parametric uncertainties and time-varying norm-bounded disturbance. Initially, the definitions of singular stochastic finite-time boundedness and singular stochastic H∞ finitetime boundedness are presented. Then, the H∞ filtering is designed for the class of singular stochastic systems with or without uncertain parameters to ensure singular stochastic finitetime boundedness of the filtering error system and satisfy a prescribed H∞ performance level in some given finite-time interval. Furthermore, sufficient criteria are presented for the solvability of the filtering problems by employing the linear matrix inequality technique. Finally, numerical examples are given to illustrate the validity of the proposed methodology.",
"title": ""
},
{
"docid": "76a2bc6a8649ffe9111bfaa911572c9d",
"text": "URL shortening services have become extremely popular. However, it is still unclear whether they are an effective and reliable tool that can be leveraged to hide malicious URLs, and to what extent these abuses can impact the end users. With these questions in mind, we first analyzed existing countermeasures adopted by popular shortening services. Surprisingly, we found such countermeasures to be ineffective and trivial to bypass. This first measurement motivated us to proceed further with a large-scale collection of the HTTP interactions that originate when web users access live pages that contain short URLs. To this end, we monitored 622 distinct URL shortening services between March 2010 and April 2012, and collected 24,953,881 distinct short URLs. With this large dataset, we studied the abuse of short URLs. Despite short URLs are a significant, new security risk, in accordance with the reports resulting from the observation of the overall phishing and spamming activity, we found that only a relatively small fraction of users ever encountered malicious short URLs. Interestingly, during the second year of measurement, we noticed an increased percentage of short URLs being abused for drive-by download campaigns and a decreased percentage of short URLs being abused for spam campaigns. In addition to these security-related findings, our unique monitoring infrastructure and large dataset allowed us to complement previous research on short URLs and analyze these web services from the user's perspective.",
"title": ""
},
{
"docid": "e9ed26434ac4e17548a08a40ace99a0c",
"text": "An analytical study on air flow effects and resulting dynamics on the PACE Formula 1 race car is presented. The study incorporates Computational Fluid Dynamic analysis and simulation to maximize down force and minimize drag during high speed maneuvers of the race car. Using Star CCM+ software and mentoring provided by CD – Adapco, the simulation employs efficient meshing techniques and realistic loading conditions to understand down force on front and rear wing portions of the car as well as drag created by all exterior surfaces. Wing and external surface loading under high velocity runs of the car are illustrated. Optimization of wing orientations (direct angle of attack) and geometry modifications on outer surfaces of the car are performed to enhance down force and lessen drag for maximum stability and control during operation. The use of Surface Wrapper saved months of time in preparing the CAD model. The Transform tool and Contact Prevention tool in Star CCM+ proved to be an efficient means of correcting and modifying geometry instead of going back to the CAD model. The CFD simulations point out that the current front and rear wings do not generate the desired downforce and that the rear wing should be redesigned.",
"title": ""
}
] |
scidocsrr
|
f176f26f225072d1e3b98a81c9620ff4
|
A new approach for supervised power disaggregation by using a deep recurrent LSTM network
|
[
{
"docid": "e9497a16e9d12ea837c7a0ec44d71860",
"text": "This article surveys existing and emerging disaggregation techniques for energy-consumption data and highlights signal features that might be used to sense disaggregated data in an easily installed and cost-effective manner.",
"title": ""
}
] |
[
{
"docid": "0755cc0ad7bb1758ffc39217694deb2e",
"text": "The low power losses of silicon carbide (SiC) devices provide new opportunities to implement an ultra high-efficiency front-end rectifier for data center power supplies based on a 400-Vdc power distribution architecture, which requires high conversion efficiency in each power conversion stage. This paper presents a 7.5-kW high-efficiency three-phase buck rectifier with 480-Vac,rms input line-to-line voltage and 400-Vdc output voltage using SiC MOSFETs and Schottky diodes. To estimate power devices' losses, which are the dominant portion of total loss, the method of device evaluation and loss calculation is proposed based on a current source topology. This method simulates the current commutation process and estimates devices' losses during switching transients considering devices with and without switching actions in buck rectifier operation. Moreover, the power losses of buck rectifiers based on different combinations of 1200-V power devices are compared. The investigation and comparison demonstrate the benefits of each combination, and the lowest total loss in the all-SiC rectifier is clearly shown. A 7.5-kW prototype of the all-SiC three-phase buck rectifier using liquid cooling is fabricated and tested, with filter design and switching frequency chosen based on loss minimization. A full-load efficiency value greater than 98.5% is achieved.",
"title": ""
},
{
"docid": "4d6082ab565b98ea6aa88a68ba781fca",
"text": "Over the past decade, deep learning has achieved remarkable success in various artificial intelligence research areas. Evolved from the previous research on artificial neural networks, this technology has shown superior performance to other machine learning algorithms in areas such as image and voice recognition, natural language processing, among others. The first wave of applications of deep learning in pharmaceutical research has emerged in recent years, and its utility has gone beyond bioactivity predictions and has shown promise in addressing diverse problems in drug discovery. Examples will be discussed covering bioactivity prediction, de novo molecular design, synthesis prediction and biological image analysis.",
"title": ""
},
{
"docid": "2575dbf042cf926da3aa2cb27d1d5a24",
"text": "Because of the spread of the Internet, social platforms become big data pools. From there we can learn about the trends, culture and hot topics. This project focuses on analyzing the data from Instagram. It shows the relationship of Instagram filter data with location and number of likes to give users filter suggestion on achieving more likes based on their location. It also analyzes the popular hashtags in different locations to show visual culture differences between different cities. ACM Classification",
"title": ""
},
{
"docid": "f6459d16d5d2bb490c985222421e8546",
"text": "Software traceability is a sought-after, yet often elusive quality in large software-intensive systems primarily because the cost and effort of tracing can be overwhelming. State-of-the art solutions address this problem through utilizing trace retrieval techniques to automate the process of creating and maintaining trace links. However, there is no simple one- size-fits all solution to trace retrieval. As this paper will show, finding the right combination of tracing techniques can lead to significant improvements in the quality of generated links. We present a novel approach to trace retrieval in which the underlying infrastructure is configured at runtime to optimize trace quality. We utilize a machine-learning approach to search for the best configuration given an initial training set of validated trace links, a set of available tracing techniques specified in a feature model, and an architecture capable of instantiating all valid configurations of features. We evaluate our approach through a series of experiments using project data from the transportation, healthcare, and space exploration domains, and discuss its implementation in an industrial environment. Finally, we show how our approach can create a robust baseline against which new tracing techniques can be evaluated.",
"title": ""
},
{
"docid": "e0ca1c29ef4cdc73debabcc4409bd8eb",
"text": "The Internet of Things (IoT) will enable objects to become active participants of everyday activities. Introducing objects into the control processes of complex systems makes IoT security very difficult to address. Indeed, the Internet of Things is a complex paradigm in which people interact with the technological ecosystem based on smart objects through complex processes. The interactions of these four IoT components, person, intelligent object, technological ecosystem, and process, highlight a systemic and cognitive dimension within security of the IoT. The interaction of people with the technological ecosystem requires the protection of their privacy. Similarly, their interaction with control processes requires the guarantee of their safety. Processes must ensure their reliability and realize the objectives for which they are designed. We believe that the move towards a greater autonomy for objects will bring the security of technologies and processes and the privacy of individuals into sharper focus. Furthermore, in parallel with the increasing autonomy of objects to perceive and act on the environment, IoT security should move towards a greater autonomy in perceiving threats and reacting to attacks, based on a cognitive and systemic approach. In this work, we will analyze the role of each of the mentioned actors in IoT security and their relationships, in order to highlight the research challenges and present our approach to these issues based on a holistic vision of IoT security.",
"title": ""
},
{
"docid": "72138b8acfb7c9e11cfd92c0b78a737c",
"text": "We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "3947250417c1c5715128533125633d9f",
"text": "Face recognition has received substantial attention from researches in biometrics, pattern recognition field and computer vision communities. Face recognition can be applied in Security measure at Air ports, Passport verification, Criminals list verification in police department, Visa processing , Verification of Electoral identification and Card Security measure at ATM’s. In this paper, a face recognition system for personal identification and verification using Principal Component Analysis (PCA) with Back Propagation Neural Networks (BPNN) is proposed. This system consists on three basic steps which are automatically detect human face image using BPNN, the various facial features extraction, and face recognition are performed based on Principal Component Analysis (PCA) with BPNN. The dimensionality of face image is reduced by the PCA and the recognition is done by the BPNN for efficient and robust face recognition. In this paper also focuses on the face database with different sources of variations, especially Pose, Expression, Accessories, Lighting and backgrounds would be used to advance the state-of-the-art face recognition technologies aiming at practical applications",
"title": ""
},
{
"docid": "bf499e8252cac48cdd406699c8413e16",
"text": "Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a method which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentions of entities are nodes of this graph where edges encode relations between different mentions (e.g., withinand cross-document co-references). Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning. Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on the WIKIHOP dataset (Welbl et al., 2017).",
"title": ""
},
{
"docid": "46004ee1f126c8a5b76166c5dc081bc8",
"text": "In this study, an energy harvesting chip was developed to scavenge energy from artificial light to charge a wireless sensor node. The chip core is a miniature transformer with a nano-ferrofluid magnetic core. The chip embedded transformer can convert harvested energy from its solar cell to variable voltage output for driving multiple loads. This chip system yields a simple, small, and more importantly, a battery-less power supply solution. The sensor node is equipped with multiple sensors that can be enabled by the energy harvesting power supply to collect information about the human body comfort degree. Compared with lab instruments, the nodes with temperature, humidity and photosensors driven by harvested energy had variation coefficient measurement precision of less than 6% deviation under low environmental light of 240 lux. The thermal comfort was affected by the air speed. A flow sensor equipped on the sensor node was used to detect airflow speed. Due to its high power consumption, this sensor node provided 15% less accuracy than the instruments, but it still can meet the requirement of analysis for predicted mean votes (PMV) measurement. The energy harvesting wireless sensor network (WSN) was deployed in a 24-hour convenience store to detect thermal comfort degree from the air conditioning control. During one year operation, the sensor network powered by the energy harvesting chip retained normal functions to collect the PMV index of the store. According to the one month statistics of communication status, the packet loss rate (PLR) is 2.3%, which is as good as the presented results of those WSNs powered by battery. Referring to the electric power records, almost 54% energy can be saved by the feedback control of an energy harvesting sensor network. These results illustrate that, scavenging energy not only creates a reliable power source for electronic devices, such as wireless sensor nodes, but can also be an energy source by building an energy efficient program.",
"title": ""
},
{
"docid": "c1305b1ccc199126a52c6a2b038e24d1",
"text": "This study has devoted much effort to developing an integrated model designed to predict and explain an individual’s continued use of online services based on the concepts of the expectation disconfirmation model and the theory of planned behavior. Empirical data was collected from a field survey of Cyber University System (CUS) users to verify the fit of the hypothetical model. The measurement model indicates the theoretical constructs have adequate reliability and validity while the structured equation model is illustrated as having a high model fit for empirical data. Study’s findings show that a customer’s behavioral intention towards e-service continuance is mainly determined by customer satisfaction and additionally affected by perceived usefulness and subjective norm. Generally speaking, the integrated model can fully reflect the spirit of the expectation disconfirmation model and take advantage of planned behavior theory. After consideration of the impact of systemic features, personal characteristics, and social influence on customer behavior, the integrated model had a better explanatory advantage than other EDM-based models proposed in prior research. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d3b01d3ce120ac7ceda6b61a04210f39",
"text": "The present experiment investigated two facets of object permanence in young infants: the ability to represent the existence and the location of a hidden stationary object, and the ability to ‘represent the existence and the trajectory of a hidden moving object. Sixand B-month-old infants sat in front of a screen; to the left of the screen was an inclined ramp. The infants watched the following event: the screen was raised and lowered, and a toy car rolled down the ramp, passed behind the screen, and exited the apparatus to the right. After the infants habituated to this event, they saw two test events. These were identical to the habituation event, except that a box was placed behind the screen. In one event (possible event), the box stood in back of the car’s tracks; in the other (impossible event), it stood on top of the tracks, blocking the car’s path. Infants looked longer at the impossible than at the possible event, indicating that they were surprised to see the car reappear from behind the screen when the box stood in its path. A control experiment in which the box was placed in front (possible event) or on top (impossible event) of the car’s tracks yielded similar results. Together, the results of these experiments suggest that infants understood that (I) the box continued to exist, in its same location, after it was occluded by the screen; (2) the car continued to exist, and pursued its trajectory, after it disappeared behind the screen; and (3) the car could not roll through the space occupied by the box. These results have implications for theory and research on the development of infants’ knowledge about objects and infants’ reasoning abilities. *The research reported in this manuscript was supported by a grant from the University Research Institute of the University of Texas at Austin. I thank Judy Deloache and Gwen Gustafson, for their careful reading of the paper; Marty Banks, Kathy Cain, Carol Dweck, Marcia Graber, and Liz Spelke, for helpful comments on different versions of the paper; and Stanley Wasserman and Dawn Iaccobucci for their help with the statistical analyses. I also thank the undergraduates who served as observers and experimenters, and the parents who kindly allowed their infants to participate in the studies. Reprint requests should be sent to Renee Baillargeon, Psychology Department, University of Illinois at Urbana/Champaign 603 E Daniel Street, Champaign, IL. 61820, U.S.A. OOlO-0277/86/$6.80",
"title": ""
},
{
"docid": "0254d49cb759e163a032b6557f969bd3",
"text": "The smart electricity grid enables a two-way flow of power and data between suppliers and consumers in order to facilitate the power flow optimization in terms of economic efficiency, reliability and sustainability. This infrastructure permits the consumers and the micro-energy producers to take a more active role in the electricity market and the dynamic energy management (DEM). The most important challenge in a smart grid (SG) is how to take advantage of the users’ participation in order to reduce the cost of power. However, effective DEM depends critically on load and renewable production forecasting. This calls for intelligent methods and solutions for the real-time exploitation of the large volumes of data generated by a vast amount of smart meters. Hence, robust data analytics, high performance computing, efficient data network management, and cloud computing techniques are critical towards the optimized operation of SGs. This research aims to highlight the big data issues and challenges faced by the DEM employed in SG networks. It also provides a brief description of the most commonly used data processing methods in the literature, and proposes a promising direction for future research in the field.",
"title": ""
},
{
"docid": "35c8c5f950123154f4445b6c6b2399c2",
"text": "Online social media have democratized the broadcasting of information, encouraging users to view the world through the lens of social networks. The exploitation of this lens, termed social sensing, presents challenges for researchers at the intersection of computer science and the social sciences.",
"title": ""
},
{
"docid": "a64bf1840a6f7d82d5ca4dc10bf87453",
"text": "Cloud-based wireless networking system applies centralized resource pooling to improve operation efficiency. Fog-based wireless networking system reduces latency by placing processing units in the network edge. Confluence of fog and cloud design paradigms in 5G radio access network will better support diverse applications. In this article, we describe the recent advances in fog radio access network (F-RAN) research, hybrid fog-cloud architecture, and system design issues. Furthermore, the GPP platform facilitates the confluence of computational and communications processing. Through observations from GPP platform testbed experiments and simulations, we discuss the opportunities of integrating the GPP platform with F-RAN architecture.",
"title": ""
},
{
"docid": "fef66948f4f647f88cc3921366f45e49",
"text": "Acoustic correlates of stress [duration, fundamental frequency (Fo), and intensity] were investigated in a language (Thai) in which both duration and Fo are employed to signal lexical contrasts. Stimuli consisted of 25 pairs of segmentally/tonally identical, syntactically ambiguous sentences. The first member of each sentence pair contained a two-syllable noun-verb sequence exhibiting a strong-strong (--) stress pattern, the second member a two-syllable noun compound exhibiting a weak-strong (--) stress pattern. Measures were taken of five prosodic dimensions of the rhyme portion of the target syllable: duration, average Fo, Fo standard deviation, average intensity, and intensity standard deviation. Results of linear regression indicated that duration is the predominant cue in signaling the distinction between stressed and unstressed syllables in Thai. Discriminant analysis showed a stress classification accuracy rate of over 99%. Findings are discussed in relation to the varying roles that Fo, intensity, and duration have in different languages given their phonological structure.",
"title": ""
},
{
"docid": "0ac9ad839f21bd03342dd786b09155fe",
"text": "Graphs are fundamental data structures which concisely capture the relational structure in many important real-world domains, such as knowledge graphs, physical and social interactions, language, and chemistry. Here we introduce a powerful new approach for learning generative models over graphs, which can capture both their structure and attributes. Our approach uses graph neural networks to express probabilistic dependencies among a graph’s nodes and edges, and can, in principle, learn distributions over any arbitrary graph. In a series of experiments our results show that once trained, our models can generate good quality samples of both synthetic graphs as well as real molecular graphs, both unconditionally and conditioned on data. Compared to baselines that do not use graph-structured representations, our models often perform far better. We also explore key challenges of learning generative models of graphs, such as how to handle symmetries and ordering of elements during the graph generation process, and offer possible solutions. Our work is the first and most general approach for learning generative models over arbitrary graphs, and opens new directions for moving away from restrictions of vectorand sequence-like knowledge representations, toward more expressive and flexible relational data structures.",
"title": ""
},
{
"docid": "14520419a4b0e27df94edc4cf23cde65",
"text": "In this paper we propose and examine non–parametric statistical tests to define similarity and homogeneity measure s for textures. The statistical tests are applied to the coeffi cients of images filtered by a multi–scale Gabor filter bank. We will demonstrate that these similarity measures are useful for both, texture based image retrieval and for unsupervised texture segmentation, and hence offer an unified approach to these closely related tasks. We present results on Brodatz–like micro–textures and a collection of real–word images.",
"title": ""
},
{
"docid": "20fd36e287a631c82aa8527e6a36931f",
"text": "Creating a mesh is the first step in a wide range of applications, including scientific computing and computer graphics. An unstructured simplex mesh requires a choice of meshpoints (vertex nodes) and a triangulation. We want to offer a short and simple MATLAB code, described in more detail than usual, so the reader can experiment (and add to the code) knowing the underlying principles. We find the node locations by solving for equilibrium in a truss structure (using piecewise linear force-displacement relations) and we reset the topology by the Delaunay algorithm. The geometry is described implicitly by its distance function. In addition to being much shorter and simpler than other meshing techniques, our algorithm typically produces meshes of very high quality. We discuss ways to improve the robustness and the performance, but our aim here is simplicity. Readers can download (and edit) the codes from http://math.mit.edu/~persson/mesh.",
"title": ""
},
{
"docid": "670b35833f96a62bce9e2ddd58081fc4",
"text": "Although video summarization has achieved great success in recent years, few approaches have realized the influence of video structure on the summarization results. As we know, the video data follow a hierarchical structure, i.e., a video is composed of shots, and a shot is composed of several frames. Generally, shots provide the activity-level information for people to understand the video content. While few existing summarization approaches pay attention to the shot segmentation procedure. They generate shots by some trivial strategies, such as fixed length segmentation, which may destroy the underlying hierarchical structure of video data and further reduce the quality of generated summaries. To address this problem, we propose a structure-adaptive video summarization approach that integrates shot segmentation and video summarization into a Hierarchical Structure-Adaptive RNN, denoted as HSA-RNN. We evaluate the proposed approach on four popular datasets, i.e., SumMe, TVsum, CoSum and VTW. The experimental results have demonstrated the effectiveness of HSA-RNN in the video summarization task.",
"title": ""
},
{
"docid": "8d5dca364cbe5e3825e2f267d1c41d50",
"text": "This paper describes an algorithm based on constrained variance maximization for the restoration of a blurred image. Blurring is a smoothing process by definition. Accordingly, the deblurring filter shall be able to perform as a high pass filter, which increases the variance. Therefore, we formulate a variance maximization object function for the deconvolution filter. Using principal component analysis (PCA), we find the filter maximizing the object function. PCA is more than just a high pass filter; by maximizing the variances, it is able to perform the decorrelation, by which the original image is extracted from the mixture (the blurred image). Our approach was experimentally compared with the adaptive Lucy-Richardson maximum likelihood (ML) algorithm. The comparative results on both synthesized and real blurred images are included.",
"title": ""
}
] |
scidocsrr
|
0d96e7122dc33a8009a25d8e1147f0a5
|
DEVELOPMENT OF A CURRENT CONTROL ULTRACAPACITOR CHARGER BASED ON DIGITAL SIGNAL PROCESSING
|
[
{
"docid": "413112cc78df9fac45a254c74049f724",
"text": "We are developing compact, high-power chargers for rapid charging of energy storage capacitors. The main application is presently rapid charging of the capacitors inside of compact Marx generators for reprated operation. Compact Marx generators produce output pulses with amplitudes above 300 kV with ns or subns rise-times. A typical application is the generation of high power microwaves. Initially all energy storage capacitors in a Marx generator are charged in parallel. During the so-called erection cycle, the capacitors are connected in series. The charging voltage in the parallel configuration is around 40-50 kV. The input voltage of our charger is in the range of several hundred volts. Rapid charging of the capacitors in the parallel configuration will enable a high pulse repetition-rate of the compact Marx generator. The high power charger uses state-of-the-art IGBTs (isolated gate bipolar transistors) in an H-bridge topology and a compact, high frequency transformer. The IGBTs and the associated controls are packaged for minimum weight and maximum power density. The packaging and device selection makes use of burst mode operation (thermal inertia) of the charger. The present charger is considerably smaller than the one presented in Giesselmann, M et al., (2001).",
"title": ""
},
{
"docid": "8ab53b0100ce36ace61660c9c8e208b4",
"text": "A novel current-pumped battery charger (CPBC) is proposed in this paper to increase the Li-ion battery charging performance. A complete charging process, consisting of three subprocesses, namely: 1) the bulk current charging process; 2) the pulsed current charging process; and 3) the pulsed float charging process, can be automatically implemented by using the inherent characteristics of current-pumped phase-locked loop (CPLL). A design example for a 700-mA ldr h Li-ion battery is built to assess the CPBC's performance. In comparison with the conventional phase-locked battery charger, the battery available capacity and charging efficiency of the proposed CPBC are improved by about 6.9% and 1.5%, respectively. The results of the experiment show that a CPLL is really suitable for carrying out a Li-ion battery pulse charger.",
"title": ""
}
] |
[
{
"docid": "31122809968518d915a6d20bab717dcb",
"text": "Wireless indoor positioning has been extensively studied for the past 2 decades and continuously attracted growing research efforts in mobile computing context. As the integration of multiple inertial sensors (e.g., accelerometer, gyroscope, and magnetometer) to nowadays smartphones in recent years, human-centric mobility sensing is emerging and coming into vogue. Mobility information, as a new dimension in addition to wireless signals, can benefit localization in a number of ways, since location and mobility are by nature related in the physical world. In this article, we survey this new trend of mobility enhancing smartphone-based indoor localization. Specifically, we first study how to measure human mobility: what types of sensors we can use and what types of mobility information we can acquire. Next, we discuss how mobility assists localization with respect to enhancing location accuracy, decreasing deployment cost, and enriching location context. Moreover, considering the quality and cost of smartphone built-in sensors, handling measurement errors is essential and accordingly investigated. Combining existing work and our own working experiences, we emphasize the principles and conduct comparative study of the mainstream technologies. Finally, we conclude this survey by addressing future research directions and opportunities in this new and largely open area.",
"title": ""
},
{
"docid": "91e130d562a6a317d5f2885fb161354d",
"text": "In silico modeling is a crucial milestone in modern drug design and development. Although computer-aided approaches in this field are well-studied, the application of deep learning methods in this research area is at the beginning. In this work, we present an original deep neural network (DNN) architecture named RANC (Reinforced Adversarial Neural Computer) for the de novo design of novel small-molecule organic structures based on the generative adversarial network (GAN) paradigm and reinforcement learning (RL). As a generator RANC uses a differentiable neural computer (DNC), a category of neural networks, with increased generation capabilities due to the addition of an explicit memory bank, which can mitigate common problems found in adversarial settings. The comparative results have shown that RANC trained on the SMILES string representation of the molecules outperforms its first DNN-based counterpart ORGANIC by several metrics relevant to drug discovery: the number of unique structures, passing medicinal chemistry filters (MCFs), Muegge criteria, and high QED scores. RANC is able to generate structures that match the distributions of the key chemical features/descriptors (e.g., MW, logP, TPSA) and lengths of the SMILES strings in the training data set. Therefore, RANC can be reasonably regarded as a promising starting point to develop novel molecules with activity against different biological targets or pathways. In addition, this approach allows scientists to save time and covers a broad chemical space populated with novel and diverse compounds.",
"title": ""
},
{
"docid": "8b0850f168b0dc0493589eeb4be05eb5",
"text": "Feature models describe the common and variable characteristics of a product line. Their advantages are well recognized in product line methods. Unfortunately, creating a feature model for an existing project is time-consuming and requires substantial effort from a modeler.\n We present procedures for reverse engineering feature models based on a crucial heuristic for identifying parents - the major challenge of this task. We also automatically recover constructs such as feature groups, mandatory features, and implies/excludes edges. We evaluate the technique on two large-scale software product lines with existing reference feature models--the Linux and eCos kernels--and FreeBSD, a project without a feature model. Our heuristic is effective across all three projects by ranking the correct parent among the top results for a vast majority of features. The procedures effectively reduce the information a modeler has to consider from thousands of choices to typically five or less.",
"title": ""
},
{
"docid": "300084201fb45869d87a42fe98e30c47",
"text": "Automating the lunch box packaging is a challenging task due to the high deformability and large individual differences in shape and physical property of food materials. Soft robotic grippers showed potentials to perform such tasks. In this paper, we presented four pneumatic soft actuators made of different materials and different fabrication methods and compared their performances through a series of tests. We found that the actuators fabricated by 3D printing showed better linearity and less individual differences, but showed low durability compared to actuators fabricated by traditional casting process. Robotic grippers were assembled using the soft actuators, and grasping tests were performed on soft paper containers filled with food materials. Results suggested that grippers with softer actuators required lower air pressure to lift up the same weight and generated less deformation on the soft container. The actuator made of casting process with Dragon Skin 10 material lifted the most weight among different actuators.",
"title": ""
},
{
"docid": "3ddaf67bd4636f3e4dd7cdd4a461f640",
"text": "Training neural networks for semantic segmentation is data hungry. Meanwhile annotating a large number of pixel-level segmentation masks needs enormous human effort. In this paper, we propose a framework with only image-level supervision. It unifies semantic segmentation and object localization with important proposal aggregation and selection modules. They greatly reduce the notorious error accumulation problem that commonly arises in weakly supervised learning. Our proposed training algorithm progressively improves segmentation performance with augmented feedback in iterations. Our method achieves decent results on the PASCAL VOC 2012 segmentation data, outperforming previous image-level supervised methods by a large margin.",
"title": ""
},
{
"docid": "1f39815e008e895632403bbe9456acad",
"text": "Information on site-specific spectrum characteristics is essential to evaluate and improve the performance of wireless networks. However, it is usually very costly to obtain accurate spectrum-condition information in heterogeneous wireless environments. This paper presents a novel spectrum-survey system, called Sybot (Spectrum survey robot), that guides network engineers to efficiently monitor the spectrum condition (e.g., RSS) of WiFi networks. Sybot effectively controls mobility and employs three disparate monitoring techniques - complete, selective, and diagnostic - that help produce and maintain an accurate spectrum-condition map for challenging indoor WiFi networks. By adaptively triggering the most suitable of the three techniques, Sybot captures spatio-temporal changes in spectrum condition. Moreover, based on the monitoring results, Sybot automatically determines several key survey parameters, such as site-specific measurement time and space granularities. Sybot has been prototyped with a commodity IEEE 802.11 router and Linux OS, and experimentally evaluated, demonstrating its ability to generate accurate spectrum-condition maps while reducing the measurement effort (space, time) by more than 56%.",
"title": ""
},
{
"docid": "55969912d37a5550953b954ba4efd7d3",
"text": "Apart from some general issues related to the Gender Identity Disorder (GID) diagnosis, such as whether it should stay in the DSM-V or not, a number of problems specifically relate to the current criteria of the GID diagnosis for adolescents and adults. These problems concern the confusion caused by similarities and differences of the terms transsexualism and GID, the inability of the current criteria to capture the whole spectrum of gender variance phenomena, the potential risk of unnecessary physically invasive examinations to rule out intersex conditions (disorders of sex development), the necessity of the D criterion (distress and impairment), and the fact that the diagnosis still applies to those who already had hormonal and surgical treatment. If the diagnosis should not be deleted from the DSM, most of the criticism could be addressed in the DSM-V if the diagnosis would be renamed, the criteria would be adjusted in wording, and made more stringent. However, this would imply that the diagnosis would still be dichotomous and similar to earlier DSM versions. Another option is to follow a more dimensional approach, allowing for different degrees of gender dysphoria depending on the number of indicators. Considering the strong resistance against sexuality related specifiers, and the relative difficulty assessing sexual orientation in individuals pursuing hormonal and surgical interventions to change physical sex characteristics, it should be investigated whether other potentially relevant specifiers (e.g., onset age) are more appropriate.",
"title": ""
},
{
"docid": "9dd5efc350db054c9efcb7bc849be0e2",
"text": "The prediction of traffic incident duration is an important foundation of advanced incident management system and driver information system. In this paper, actual traffic incident data was used to study the prediction problem of traffic incident duration by the method of neural network. 660 sets of actual traffic incident data from a freeway management center were used to train a neural network model, and 170 sets of incident data in the same data collection, which are different from training data, were used to test the prediction effect of the model. The test result shows that the correlation of the prediction values and the actual values is 0.8535, which indicates that the prediction result of the neural network model can basically represent actual incident duration.",
"title": ""
},
{
"docid": "58e2cba4f609dce3b17e945f58d90c08",
"text": "We develop a theory of financing of entrepreneurial ventures via an initial coin offering (ICO). Pre-selling a venture’s output by issuing tokens allows the entrepreneur to transfer part of the venture risk to diversified investors without diluting her control rights. This, however, leads to an agency conflict between the entrepreneur and investors that manifests itself in underinvestment. We show that an ICO can dominate traditional venture capital (VC) financing when VC investors are under-diversified, when the idiosyncratic component of venture risk is large enough, when the payoff distribution is sufficiently right-skewed, and when the degree of information asymmetry between the entrepreneur and ICO investors is not too large. Overall, our model suggests that an ICO can be a viable financing alternative for some but not all entrepreneurial ventures. An implication is that while regulating ICOs to reduce the information asymmetry associated with them is desirable, banning them outright is not.",
"title": ""
},
{
"docid": "e43814f288e1c5a84fb9d26b46fc7e37",
"text": "Achieving good performance in bytecoded language interpreters is difficult without sacrificing both simplicity and portability. This is due to the complexity of dynamic translation (\"just-in-time compilation\") of bytecodes into native code, which is the mechanism employed universally by high-performance interpreters.We demonstrate that a few simple techniques make it possible to create highly-portable dynamic translators that can attain as much as 70% the performance of optimized C for certain numerical computations. Translators based on such techniques can offer respectable performance without sacrificing either the simplicity or portability of much slower \"pure\" bytecode interpreters.",
"title": ""
},
{
"docid": "a95f77c59a06b2d101584babc74896fb",
"text": "Magnetic wall and ceiling climbing robots have been proposed in many industrial applications where robots must move over ferromagnetic material surfaces. The magnetic circuit design with magnetic attractive force calculation of permanent magnetic wheel plays an important role which significantly affects the system reliability, payload ability and power consumption of the robot. In this paper, a flexible wall and ceiling climbing robot with six permanent magnetic wheels is proposed to climb along the vertical wall and overhead ceiling of steel cargo containers as part of an illegal contraband inspection system. The permanent magnetic wheels are designed to apply to the wall and ceiling climbing robot, whilst finite element method is employed to estimate the permanent magnetic wheels with various wheel rims. The distributions of magnetic flux lines and magnetic attractive forces are compared on both plane and corner scenarios so that the robot can adaptively travel through the convex and concave surfaces of the cargo container. Optimisation of wheel rims is presented to achieve the equivalent magnetic adhesive forces along with the estimation of magnetic ring dimensions in the axial and radial directions. Finally, the practical issues correlated with the applications of the techniques are discussed and the conclusions are drawn with further improvement and prototyping.",
"title": ""
},
{
"docid": "292e3b8126ac517398060f9cdef4103b",
"text": "Since its inception about a decade ago, practitioners and researchers alike have been drawn to the blockchain technology vibe. Advocates of blockchain argue that the technology is taking us to truly ‘trust-free’ transactions. A long list of applications of blockchain has also been proposed in a relatively short period of time. However, a closer look into the literature reveals two shortcomings. To start with, the substantial proportion of the research on blockchain has focused on addressing the technical aspects of blockchain—design and features— as well as legal issues. However, there is a lack of knowledge on how blockchain technology can be used to solve practical problems faced by organizations in different sectors and industries—measurement and value, trust, management and organization. The state-of-the-art also shows that there is a dominance of conceptual and design-oriented research paradigms. To address this gap and respond to the calls for further research, this paper presents a research plan for a longitudinal case study to investigate whether blockchain technology can affect the way organizations conduct their business relationships.",
"title": ""
},
{
"docid": "78a54727232bbf9d755e84808d2cd792",
"text": "Nonlocal image representation has been successfully used in many image-related inverse problems including denoising, deblurring and deblocking. However, due to a majority of reconstruction methods only exploit the nonlocal self-similarity (NSS) prior of the degraded observation image, it is very challenging to reconstruct the latent clean image directly from the noisy observation. In this paper we propose a novel model for image denoising via group sparsity residual and external NSS prior. To boost the performance of image denoising, the concept of group sparsity residual is proposed, and thus the problem of image denoising is transformed into one that reduces the group sparsity residual. Due to the fact that the groups contain a large amount of NSS information of natural images, we obtain a good estimation of the group sparse coefficients of the original image by the external NSS prior based on Gaussian Mixture model (GMM) learning and the group sparse coefficients of noisy image are used to approximate the estimation. Experimental results demonstrate that the proposed approach not only outperforms many state-of-the-art methods, but also delivers the best qualitative denoising results with finer details and less ringing artifacts.",
"title": ""
},
{
"docid": "ccbe07453508d9c2d89ef6d2f6468619",
"text": "Much attention has been directed to the use of video games for learning in recent years, in part due to the staggering amounts of capital spent on games in the entertainment industry, but also because of their ability to captivate player attention and hold it for lengthy periods of time as players learn to master game complexities and accomplish objectives. This review of the literature on video game research focuses on publications analyzing educational game design, namely those that present design elements conducive to learning, the theoretical underpinnings of game design, and learning outcomes from video game play. Introduction Many articles have been published in the last 20 years on video games for learning, and several reviews of the literature on educational games have been completed within the last few years (Aguilera & Mendiz, 2003; O’Neil, Wainess, & Baker, 2005). However, these reviews focused on literature that addressed what players learn from video games rather than how video games can be designed to facilitate learning. This review focuses on publications addressing educational video game design, seeking to identify elements of game design that promote learning as well as the learning theories that conceptualize how video games foster learning. Research Focus and Search Methods A multiple database search using the search terms game design AND video or computer or PC AND educational or instructional, yielded nearly 100 publications from the following databases: • Academic Search Premier • ACM Digital Library • Communication and Mass Media Complete • Computer Source • ERIC • Information Science and Technology Abstracts • Internet and Personal Computing Abstracts • Library, Information Science, and Technology Abstracts • PsychARTICLES • Psychology and Behavioral Sciences Collection • PsychINFO • Science and Technology Collection • Social Sciences Abstracts Journal of Applied Educational Technology Volume 4, Number 1 Spring/Summer 2007 22 Search results were further limited to include only peer-reviewed journal articles, conference proceedings, and frequently cited books, criteria which culled the list to 56. Closer review of these publications revealed that several did not address issues related to game design. The resulting list contained 35 items spanning the last ten years, most of which were published in the last three (30 of 35 items). Results were not narrowed by specific game types, nor were design studies on game-like environments excluded as they apply design elements from video games to environments for learning and are consequently relevant to this review. Game-like environments included augmented and virtual reality, multi-user virtual environments, interactive learning environments, simulations, and simulation games. The publications reviewed are organized loosely into those that address characteristics of educational games, elements of effective video game design, learning theories for video games, learning outcomes from game play, and gender preferences in video game design. These categories provide an organizational framework for understanding significant design considerations revealed in the literature. Nevertheless, most of the publications reviewed do not fit neatly into a single category. Many of the studies contain findings that are relevant to several of the categories employed here, but may be reviewed fully only once and simply cited where otherwise appropriate. Elements of Effective Video Game Design Edutainment vs. 
Educational Games It is important to distinguish between educational and edutainment games prior to proceeding with a review focused on educational video game design. According to Denis and Jouvelot (2005), “The main characteristic that differentiates edutainment and video games is interactivity, because, the former being grounded on didactical and linear progressions, no place is left to wandering and alternatives” (p. 464). Edutainment games, then, are those which follow a skill and drill format in which players either practice repetitive skills or rehearse memorized facts. As such, “Edutainment often fails in transmitting non trivial (or previously assimilated) knowledge, calling again and again the same action patterns and not throwing the learning curve into relief” (Denis & Jouvelot, 2005, p. 464). In contrast, educational video games require strategizing, hypothesis testing, or problem-solving, usually with higher order thinking rather than rote memorization or simple comprehension. Characteristics of such games include a system of rewards and goals which motivate players, a narrative context which situates activity and establishes rules of engagement, learning content that is relevant to the narrative plot, and interactive cues that prompt learning and provide feedback. Nevertheless, even skill and drill games that employ such characteristics have demonstrated gains in learning. Lee, Luchini, Michael, Norris, and Soloway (2004) found that a math facts game for second graders deployed on handheld computers encouraged learners to complete a greater number of problems at an increased degree of difficulty. Learners playing the handheld game completed nearly three times the number of problems in 19 days as those using paper worksheets. Learners using the handheld game also voluntarily increased the level of difficulty in the game as they continued to play.",
"title": ""
},
{
"docid": "3d0103c34fcc6a65ad56c85a9fe10bad",
"text": "This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space ‘interest points’ may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.",
"title": ""
},
{
"docid": "6eda76a015e8cb9122ed89b491474248",
"text": "Beauty treatment for skin requires a high-intensity focused ultrasound (HIFU) transducer to generate coagulative necrosis in a small focal volume (e.g., 1 mm³) placed at a shallow depth (3-4.5 mm from the skin surface). For this, it is desirable to make the F-number as small as possible under the largest possible aperture in order to generate ultrasound energy high enough to induce tissue coagulation in such a small focal volume. However, satisfying both conditions at the same time is demanding. To meet the requirements, this paper, therefore, proposes a double-focusing technique, in which the aperture of an ultrasound transducer is spherically shaped for initial focusing and an acoustic lens is used to finally focus ultrasound on a target depth of treatment; it is possible to achieve the F-number of unity or less while keeping the aperture of a transducer as large as possible. In accordance with the proposed method, we designed and fabricated a 7-MHz double-focused ultrasound transducer. The experimental results demonstrated that the fabricated double-focused transducer had a focal length of 10.2 mm reduced from an initial focal length of 15.2 mm and, thus, the F-number changed from 1.52 to 1.02. Based on the results, we concluded that the proposed double-focusing method is suitable to decrease F-number while maintaining a large aperture size.",
"title": ""
},
{
"docid": "d41cd48a377afa6b95598d2df6a27b08",
"text": "Graph-based approaches have been most successful in semisupervised learning. In this paper, we focus on label propagation in graph-based semisupervised learning. One essential point of label propagation is that the performance is heavily affected by incorporating underlying manifold of given data into the input graph. The other more important point is that in many recent real-world applications, the same instances are represented by multiple heterogeneous data sources. A key challenge under this setting is to integrate different data representations automatically to achieve better predictive performance. In this paper, we address the issue of obtaining the optimal linear combination of multiple different graphs under the label propagation setting. For this problem, we propose a new formulation with the sparsity (in coefficients of graph combination) property which cannot be rightly achieved by any other existing methods. This unique feature provides two important advantages: 1) the improvement of prediction performance by eliminating irrelevant or noisy graphs and 2) the interpretability of results, i.e., easily identifying informative graphs on classification. We propose efficient optimization algorithms for the proposed approach, by which clear interpretations of the mechanism for sparsity is provided. Through various synthetic and two real-world data sets, we empirically demonstrate the advantages of our proposed approach not only in prediction performance but also in graph selection ability.",
"title": ""
},
{
"docid": "525ddfaae4403392e8817986f2680a68",
"text": "Documentation errors increase healthcare costs and cause unnecessary patient deaths. As the standard language for diagnoses and billing, ICD codes serve as the foundation for medical documentation worldwide. Despite the prevalence of electronic medical records, hospitals still witness high levels of ICD miscoding. In this paper, we propose to automatically document ICD codes with far-field speech recognition. Far-field speech occurs when the microphone is located several meters from the source, as is common with smart homes and security systems. Our method combines acoustic signal processing with recurrent neural networks to recognize and document ICD codes in real time. To evaluate our model, we collected a far-field speech dataset of ICD-10 codes and found our model to achieve 87% accuracy with a BLEU score of 85%. By sampling from an unsupervised medical language model, our method is able to outperform existing methods. Overall, this work shows the potential of automatic speech recognition to provide efficient, accurate, and cost-effective healthcare documentation.",
"title": ""
},
{
"docid": "fcc36e4c32953dd9deedd5fd11ca8a1a",
"text": "Effective human-robot cooperation requires robotic devices that understand human goals and intentions. We frame the problem of intent recognition as one of tracking and predicting human actions within the context of plan task sequences. A hybrid mode estimation approach, which estimates both discrete operating modes and continuous state, is used to accomplish this tracking based on possibly noisy sensor input. The operating modes correspond to plan tasks, hence, the ability to estimate and predict these provides a prediction of human actions and associated needs in the plan context. The discrete and continuous estimates interact in that the discrete mode selects continous dynamic models used in the continuous estimation, and the continuous state is used to evaluate guard conditions for mode transitions. Two applications: active prosthetic devices, and cooperative assembly, are described.",
"title": ""
}
] |
scidocsrr
|
abad36abc2d7f43fbe25cb291e56b3c3
|
Decoupled Novel Object Captioner
|
[
{
"docid": "775e3aa5bd4991f227d239e01faf7fad",
"text": "We describe METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machineproduced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference. We evaluate METEOR by measuring the correlation between the metric scores and human judgments of translation quality. We compute the Pearson R correlation value between its scores and human quality assessments of the LDC TIDES 2003 Arabic-to-English and Chinese-to-English datasets. We perform segment-bysegment correlation, and show that METEOR gets an R correlation value of 0.347 on the Arabic data and 0.331 on the Chinese data. This is shown to be an improvement on using simply unigramprecision, unigram-recall and their harmonic F1 combination. We also perform experiments to show the relative contributions of the various mapping modules.",
"title": ""
},
{
"docid": "97aab319e3d38d755860b141c5a4fa38",
"text": "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.",
"title": ""
}
] |
[
{
"docid": "f447c062b72bf4fcb559ba30621464be",
"text": "The acute fish test is an animal test whose ecotoxicological relevance is worthy of discussion. The primary aim of protection in ecotoxicology is the population and not the individual. Furthermore the concentration of pollutants in the environment is normally not in the lethal range. Therefore the acute fish test covers solely the situation after chemical spills. Nevertheless, acute fish toxicity data still belong to the base set used for the assessment of chemicals. The embryo test with the zebrafish Danio rerio (DarT) is recommended as a substitute for the acute fish test. For validation an international laboratory comparison test was carried out. A summary of the results is presented in this paper. Based on the promising results of testing chemicals and waste water the test design was validated by the DIN-working group \"7.6 Fischei-Test\". A normed test guideline for testing waste water with fish is available. The test duration is short (48 h) and within the test different toxicological endpoints can be examined. Endpoints from the embryo test are suitable for QSAR-studies. Besides the use in ecotoxicology the introduction as a toxicological model was investigated. Disturbance of pigmentation and effects on the frequency of heart-beat were examined. A further important application is testing of teratogenic chemicals. Based on the results DarT could be a screening test within preclinical studies.",
"title": ""
},
{
"docid": "869f492020b06dbd7795251858beb6f7",
"text": "Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications in scenarios from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, the wearable sensor data are less informative than the conventional data such as texts or images. In this paper, to improve the adaptability of such classification methods across different application domains, we turn this classification task into a game and apply a deep reinforcement learning scheme to deal with complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps to capture extra information from the signal and thus it is able to significantly improve the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines.",
"title": ""
},
{
"docid": "9a8133fbfe2c9422b6962dd88505a9e9",
"text": "The amino acid sequences of 301 glycosyl hydrolases and related enzymes have been compared. A total of 291 sequences corresponding to 39 EC entries could be classified into 35 families. Only ten sequences (less than 5% of the sample) could not be assigned to any family. With the sequences available for this analysis, 18 families were found to be monospecific (containing only one EC number) and 17 were found to be polyspecific (containing at least two EC numbers). Implications on the folding characteristics and mechanism of action of these enzymes and on the evolution of carbohydrate metabolism are discussed. With the steady increase in sequence and structural data, it is suggested that the enzyme classification system should perhaps be revised.",
"title": ""
},
{
"docid": "5dfd11e950200ff77d8173350f8ea86b",
"text": "Data streams mining has become a novel research topic of growing interest in knowledge discovery. Because of the high speed and huge size of data set in data streams, the traditional classification technologies are no longer applicable. In recent years a great deal of research has been done on this problem, most intends to efficiently solve the data streams mining problem with concept drift. This paper presents the state-of-the-art in this field with growing vitality and introduces the methods for detecting concept drift in data stream, then gives a critical summary of existing approaches to the problem, including Stagger, FLORA, MetaL(B), MetaL(IB), CD3, CD4, CD5, OLIN, CVFDT and different ensemble classifiers. At last, this paper explores the challenges and future work in this field.",
"title": ""
},
{
"docid": "1eb6514f825be9d6a4af9646b6a7a9e2",
"text": "Maritime tasks, such as surveillance and patrolling, aquaculture inspection, and wildlife monitoring, typically require large operational crews and expensive equipment. Only recently have unmanned vehicles started to be used for such missions. These vehicles, however, tend to be expensive and have limited coverage, which prevents large-scale deployment. In this paper, we propose a scalable robotics system based on swarms of small and inexpensive aquatic drones. We take advantage of bio-inspired artificial evolution techniques in order to synthesize scalable and robust collective behaviors for the drones. The behaviors are then combined hierarchically with preprogrammed control in an engineeredcentric approach, allowing the overall behavior for a particular mission to be quickly configured and tested in simulation before the aquatic drones are deployed. We demonstrate the scalability of our hybrid approach by successfully deploying up to 1,000 simulated drones to patrol a 20 km long strip for 24 hours.",
"title": ""
},
{
"docid": "8e180c13b925188f1925fee03c641669",
"text": "“Web applications have become increasingly complex and highly vulnerable,” says Peter Wood, member of the ISACA Security Advisory Group and CEO of First Base Technologies. “Social networking sites, consumer technologies – smartphones, tablets etc – and cloud services are all game changers this year. More enterprises are now requesting social engineering tests, which shows an increased awareness of threats beyond website attacks.”",
"title": ""
},
{
"docid": "b8b82691002e3d694d5766ea3269a78e",
"text": "This article presents a framework for improving the Software Configuration Management (SCM) process, that includes a maturity model to assess software organizations and an approach to guide the transition from diagnosis to action planning. The maturity model and assessment tool are useful to identify the degree of satisfaction for practices considered key for SCM. The transition approach is also important because the application of a model to produce a diagnosis is just a first step, organizations are demanding the generation of action plans to implement the recommendations. The proposed framework has been used to assess a number of software organizations and to generate the basis to build an action plan for improvement. In summary, this article shows that the maturity model and action planning approach are instrumental to reach higher SCM control and visibility, therefore producing higher quality software.",
"title": ""
},
{
"docid": "0d0eb6ed5dff220bc46ffbf87f90ee59",
"text": "Objectives. The aim of this review was to investigate whether alternating hot–cold water treatment is a legitimate training tool for enhancing athlete recovery. A number of mechanisms are discussed to justify its merits and future research directions are reported. Alternating hot–cold water treatment has been used in the clinical setting to assist in acute sporting injuries and rehabilitation purposes. However, there is overwhelming anecdotal evidence for it’s inclusion as a method for post exercise recovery. Many coaches, athletes and trainers are using alternating hot–cold water treatment as a means for post exercise recovery. Design. A literature search was performed using SportDiscus, Medline and Web of Science using the key words recovery, muscle fatigue, cryotherapy, thermotherapy, hydrotherapy, contrast water immersion and training. Results. The physiologic effects of hot–cold water contrast baths for injury treatment have been well documented, but its physiological rationale for enhancing recovery is less known. Most experimental evidence suggests that hot–cold water immersion helps to reduce injury in the acute stages of injury, through vasodilation and vasoconstriction thereby stimulating blood flow thus reducing swelling. This shunting action of the blood caused by vasodilation and vasoconstriction may be one of the mechanisms to removing metabolites, repairing the exercised muscle and slowing the metabolic process down. Conclusion. To date there are very few studies that have focussed on the effectiveness of hot–cold water immersion for post exercise treatment. More research is needed before conclusions can be drawn on whether alternating hot–cold water immersion improves recuperation and influences the physiological changes that characterises post exercise recovery. q 2003 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b752e7513d4acbd0a0cd8991022f093e",
"text": "One common strategy for dealing with large, complex models is to partition them into pieces that are easier to handle. While decomposition into convex components results in pieces that are easy to process, such decompositions can be costly to construct and often result in representations with an unmanageable number of components. In this paper, we propose an alternative partitioning strategy that decomposes a given polyhedron into “approximately convex” pieces. For many applications, the approximately convex components of this decomposition provide similar benefits as convex components, while the resulting decomposition is both significantly smaller and can be computed more efficiently. Indeed, for many models, an approximate convex decomposition can more accurately represent the important structural features of the model by providing a mechanism for ignoring insignificant features, such as wrinkles and other surface texture. We propose a simple algorithm to compute approximate convex decompositions of polyhedra of arbitrary genus to within a user specified tolerance. This algorithm measures the significance of the model’s features and resolves them in order of priority. As a by product, it also produces an elegant hierarchical representation of the model. We illustrate its utility in constructing an approximate skeleton of the model that results in significant performance gains over skeletons based on an exact convex decomposition. This research supported in part by NSF CAREER Award CCR-9624315, NSF Grants IIS-9619850, ACI-9872126, EIA-9975018, EIA-0103742, EIA-9805823, ACI-0113971, CCR-0113974, EIA-9810937, EIA-0079874, and by the Texas Higher Education Coordinating Board grant ARP-036327-017. Figure 1: Each component is approximately convex (concavity less than 10 by our measure). There are a total of 17 components.",
"title": ""
},
{
"docid": "41d9da8054cf6f80d43f177d84334470",
"text": "Educational data mining (EDM) is an up-coming interdisciplinary research field, in which data mining (DM) techniques applying in educational data. Its objective is to better understand how students gain knowledge and recognize the settings in which they learn to improve educational outcomes. Educational systems can store a huge amount of data that coming from multiple sources in different formats. Each particular educational problem requires different types of the mining techniques because traditional DM techniques cannot be applied directly to these types of data and problems. There are many general data mining tools are available but these are not designed for deal with educational data and an educator cannot use these tools without knowledge of data mining concepts. To overcome from this problem many authors provided educational data mining tools with different data mining techniques for different purposes. This paper surveys the EDM Tools from 2001 to 2016 and presents the consolidated list of tools in one place to help the EDM researchers. Then categorizes the tools based on data mining methods and explains the usage of EDM Tools.",
"title": ""
},
{
"docid": "ab677299ffa1e6ae0f65daf5de75d66c",
"text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.",
"title": ""
},
{
"docid": "95afd1d83b5641a7dff782588348d2ec",
"text": "Intensive repetitive therapy improves function and quality of life for stroke patients. Intense therapies to overcome upper extremity impairment are beneficial, however, they are expensive because, in part, they rely on individualized interaction between the patient and rehabilitation specialist. The development of a pneumatic muscle driven hand therapy device, the Mentor/spl trade/, reinforces the need for volitional activation of joint movement while concurrently offering knowledge of results about range of motion, muscle activity or resistance to movement. The device is well tolerated and has received favorable comments from stroke survivors, their caregivers, and therapists.",
"title": ""
},
{
"docid": "3ee6ad4099e8fe99042472207e6dac09",
"text": "The millimeter-wave (mmWave) band offers the potential for high-bandwidth communication channels in cellular networks. It is not clear, however, whether both high data rates and coverage in terms of signal-to-noise-plus-interference ratio can be achieved in interference-limited mmWave cellular networks due to the differences in propagation conditions and antenna topologies. This article shows that dense mmWave networks can achieve both higher data rates and comparable coverage relative to conventional microwave networks. Sum rate gains can be achieved using more advanced beamforming techniques that allow multiuser transmission. The insights are derived using a new theoretical network model that incorporates key characteristics of mmWave networks.",
"title": ""
},
{
"docid": "fec2b6b7cdef1ddf88dffd674fe7111a",
"text": "This paper introduces Dex, a reinforcement learning environment toolkit specialized for training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present the novel continual learning method of incremental learning, where a challenging environment is solved using optimal weight initialization learned from first solving a similar easier environment. We show that incremental learning can produce vastly superior results than standard methods by providing a strong baseline method across ten Dex environments. We finally develop a saliency method for qualitative analysis of reinforcement learning, which shows the impact incremental learning has on network attention.",
"title": ""
},
{
"docid": "cfbf63d92dfafe4ac0243acdff6cf562",
"text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named W ORDNETAFFECT) was developed starting from W ORDNET, through a selection and tagging of a subset of synsets representing the affective",
"title": ""
},
{
"docid": "4d387aff850619c9a805da9e3cca9604",
"text": "Application of sensor-based technology within activity monitoring systems is becoming a popular technique within the smart environment paradigm. Nevertheless, the use of such an approach generates complex constructs of data, which subsequently requires the use of intricate activity recognition techniques to automatically infer the underlying activity. This paper explores a cluster-based ensemble method as a new solution for the purposes of activity recognition within smart environments. With this approach activities are modelled as collections of clusters built on different subsets of features. A classification process is performed by assigning a new instance to its closest cluster from each collection. Two different sensor data representations have been investigated, namely numeric and binary. Following the evaluation of the proposed methodology it has been demonstrated that the cluster-based ensemble method can be successfully applied as a viable option for activity recognition. Results following exposure to data collected from a range of activities indicated that the ensemble method had the ability to perform with accuracies of 94.2% and 97.5% for numeric and binary data, respectively. These results outperformed a range of single classifiers considered as benchmarks.",
"title": ""
},
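A minimal sketch of the cluster-based ensemble idea described in the abstract above: each activity is modelled as a collection of clusters built on different feature subsets, and a new instance is assigned to the activity whose clusters it lies closest to overall. The feature subsets, cluster counts, and toy data below are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_activity_models(X, y, feature_subsets, n_clusters=3):
    """Build one collection of cluster models per activity label.

    For every activity and every feature subset, fit a small KMeans model;
    an activity is thus represented by several sets of cluster centroids.
    """
    models = {}
    for label in np.unique(y):
        X_label = X[y == label]
        models[label] = [
            KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_label[:, subset])
            for subset in feature_subsets
        ]
    return models

def predict_activity(x, models, feature_subsets):
    """Assign an instance to the activity with the smallest summed distance
    to its closest cluster in each feature subset."""
    scores = {}
    for label, kmeans_list in models.items():
        total = 0.0
        for km, subset in zip(kmeans_list, feature_subsets):
            dists = np.linalg.norm(km.cluster_centers_ - x[subset], axis=1)
            total += dists.min()
        scores[label] = total
    return min(scores, key=scores.get)

# Toy example with 6 numeric sensor features and two activities.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(3, 1, (50, 6))])
y = np.array([0] * 50 + [1] * 50)
subsets = [np.array([0, 1, 2]), np.array([3, 4, 5])]   # assumed feature subsets
models = fit_activity_models(X, y, subsets)
print(predict_activity(X[0], models, subsets))  # expected: 0
```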
{
"docid": "feb184ada1d0deb3c1798beb3da8ff53",
"text": "Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D scene flow from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.",
"title": ""
},
{
"docid": "1638f42ee75131459f659ece60f46874",
"text": "Cloud computing is a rapidly evolving information technology (IT) phenomenon. Rather than procure, deploy and manage a physical IT infrastructure to host their software applications, organizations are increasingly deploying their infrastructure into remote, virtualized environments, often hosted and managed by third parties. This development has significant implications for digital forensic investigators, equipment vendors, law enforcement, as well as corporate compliance and audit departments (among others). Much of digital forensic practice assumes careful control and management of IT assets (particularly data storage) during the conduct of an investigation. This paper summarises the key aspects of cloud computing and analyses how established digital forensic procedures will be invalidated in this new environment. Several new research challenges addressing this changing context are also identified and discussed.",
"title": ""
},
{
"docid": "ff5c993fd071b31b6f639d1f64ce28b0",
"text": "We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks. Our pragmatics-enabled models reason about why speakers produce certain instructions, and about how listeners will react upon hearing them. Like previous pragmatic models, we use learned base listener and speaker models to build a pragmatic speaker that uses the base listener to simulate the interpretation of candidate descriptions, and a pragmatic listener that reasons counterfactually about alternative descriptions. We extend these models to tasks with sequential structure. Evaluation of language generation and interpretation shows that pragmatic inference improves state-of-the-art listener models (at correctly interpreting human instructions) and speaker models (at producing instructions correctly interpreted by humans) in diverse settings.",
"title": ""
}
] |
scidocsrr
|
4950c8a2a9457e207b571cc87457bec2
|
Open and Secure: Amending the Security of the BSI Smart Metering Infrastructure to Smart Home Applications via the Smart Meter Gateway
|
[
{
"docid": "ed06226e548fac89cc06a798618622c6",
"text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree onthe inevitability of this massive transformation. It is a move that will not only affect their business processes but also their organization and technologies.",
"title": ""
}
] |
[
{
"docid": "fd5e6dcb20280daad202f34cd940e7ce",
"text": "Chapters cover topics in areas such as P and NP, space complexity, randomness, computational problems that are (or appear) infeasible to solve, pseudo-random generators, and probabilistic proof systems. The introduction nicely summarizes the material covered in the rest of the book and includes a diagram of dependencies between chapter topics. Initial chapters cover preliminary topics as preparation for the rest of the book. These are more than topical or historical summaries but generally not sufficient to fully prepare the reader for later material. Readers should approach this text already competent at undergraduate-level algorithms in areas such as basic analysis, algorithm strategies, fundamental algorithm techniques, and the basics for determining computability. Elective work in P versus NP or advanced analysis would be valuable but that isn‟t really required.",
"title": ""
},
{
"docid": "23f763524696940bca697d7a84029fa9",
"text": "The present study examined the impact of social factors on consumer behavior in evaluative criteria of the purchased home furnishing in Amman (Jordan). In the literature, there are a few previous studies which have explored the topics on consumer behavior and home furniture industry in Jordan. Furthermore, the objective of this study is to investigate of purchasing behavior of home furniture consumers in Jordan. This study then will evaluate the factors that have influences on furniture purchasing decision process. The findings will allow the researcher to be able to recommend to Jordan furniture manufacturers and retailers. Also, questionnaires were distributed and self-administered to 400 respondents. Descriptive analysis, factors analysis, test of reliability, correlation test, and regression analysis were used in this study. The study results demonstrated that there is a positive and significant relationship between reference group, family, price, quality, color, and purchasing decision. In addition, implications of this work and directions for future research are discussed.",
"title": ""
},
{
"docid": "aa9a447a4cebaea7995df6954a77cdb5",
"text": "Accurately representing the meaning of a piece of text, otherwise known as sentence modelling, is an important component in many natural language inference tasks. We survey the spectrum of these methods, which lie along two dimensions: input representation granularity and composition model complexity. Using this framework, we reveal in our quantitative and qualitative experiments the limitations of the current state-of-the-art model in the context of sentence similarity tasks.",
"title": ""
},
{
"docid": "448d70d9f5f8e5fcb8d04d355a02c8f9",
"text": "Structural health monitoring (SHM) using wireless sensor networks (WSNs) has gained research interest due to its ability to reduce the costs associated with the installation and maintenance of SHM systems. SHM systems have been used to monitor critical infrastructure such as bridges, high-rise buildings, and stadiums and has the potential to improve structure lifespan and improve public safety. The high data collection rate of WSNs for SHM pose unique network design challenges. This paper presents a comprehensive survey of SHM using WSNs outlining the algorithms used in damage detection and localization, outlining network design challenges, and future research directions. Solutions to network design problems such as scalability, time synchronization, sensor placement, and data processing are compared and discussed. This survey also provides an overview of testbeds and real-world deployments of WSNs for SH.",
"title": ""
},
{
"docid": "0315f0355168a78bdead8d06d5f571b4",
"text": "Machine learning techniques are increasingly being applied to clinical text that is already captured in the Electronic Health Record for the sake of delivering quality care. Applications for example include predicting patient outcomes, assessing risks, or performing diagnosis. In the past, good results have been obtained using classical techniques, such as bag-of-words features, in combination with statistical models. Recently however Deep Learning techniques, such as Word Embeddings and Recurrent Neural Networks, have shown to possibly have even greater potential. In this work, we apply several Deep Learning and classical machine learning techniques to the task of predicting violence incidents during psychiatric admission using clinical text that is already registered at the start of admission. For this purpose, we use a novel and previously unexplored dataset from the Psychiatry Department of the University Medical Center Utrecht in The Netherlands. Results show that predicting violence incidents with state-of-the-art performance is possible, and that using Deep Learning techniques provides a relatively small but consistent improvement in performance. We finally discuss the potential implication of our findings for the psychiatric practice.",
"title": ""
},
{
"docid": "5ea560095b752ca8e7fb6672f4092980",
"text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.",
"title": ""
},
{
"docid": "9441113599194d172b6f618058b2ba88",
"text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.",
"title": ""
},
{
"docid": "a44f0cbe9675be06439197053a96c277",
"text": "This paper presents a novel approach to utilizing high level knowledge for the problem of scene recognition in an active vision framework, which we call active scene recognition. In traditional approaches, high level knowledge is used in the post-processing to combine the outputs of the object detectors to achieve better classification performance. In contrast, the proposed approach employs high level knowledge actively by implementing an interaction between a reasoning module and a sensory module (Figure 1). Following this paradigm, we implemented an active scene recognizer and evaluated it with a dataset of 20 scenes and 100+ objects. We also extended it to the analysis of dynamic scenes for activity recognition with attributes. Experiments demonstrate the effectiveness of the active paradigm in introducing attention and additional constraints into the sensing process.",
"title": ""
},
{
"docid": "95350d45a65cb6932f26be4c4d417a30",
"text": "This paper presents a detailed performance comparison (including efficiency, EMC performance and component electrical stress) between boost and buck type PFC under critical conduction mode (CRM). In universal input (90–265Vac) applications, the CRM buck PFC has around 1% higher efficiency compared to its counterpart at low-line (90Vac) condition. Due to the low voltage swing of switch, buck PFC has a better CM EMI performance than boost PFC. It seems that the buck PFC is more attractive in low power applications which only need to meet the IEC61000-3-2 Class D standard based on the comparison. The experimental results from two 100-W prototypes are also presented for side by side comparison.",
"title": ""
},
{
"docid": "f3bed3ce038e087b08164b8468397dc4",
"text": "transection of the lumbosacral spinal roots innervating the bladder as well as the hypogastric nerves. The residual, low amplitude evoked contraction during L2 spinal root stimulation is likely due to low number of direct projections from the L2 ventral horn to the bladder.1 Recording results suggests hypogastric efferent fibers mainly contribute to bladder storage function.2 These refined electroneurogram recording methods may be suitable by monitoring sensory and motor activity in the transferred nerves after bladder reinnervation. Ekta Tiwari, Mary F. Barbe Michel A. Lemay, Danielle M. Salvadeo, Matthew M. Wood, Michael Mazzei, Luke V. Musser, Zdenka J. Delalic, Alan S. Braverman, and Michael R. Ruggieri, Sr.",
"title": ""
},
{
"docid": "e9dd8ef478e06eadd2ebd1c8367b8352",
"text": "While the evaluation of quantitative research frequently depends on judgements based on the “holy trinity” of objectivity, reliability and validity (Spencer, Ritchie, Lewis, & Dillon, 2003, p. 59), applying these traditional criteria to qualitative research is not always a “good fit” (Schofield, 2002). Instead, educational researchers who engage in qualitative research have suggested various sets of alternative criteria including: transferability, generalisability, ontological authenticity, reciprocity, dependability, confirmability, reflexivity, fittingness, vitality and, even, sacredness and goodness (Creswell, 2002; Garman, 1996; Guba & Lincoln, 1989; Patton, 2002; Spencer et al., 2003; Stige, Malterud, & Midtgarden, 2009). While over one hundred sets of qualitative research criteria have been identified (Stige et al., 2009), some researchers warn against the absolute application of any criteria to qualitative research which is, by its nature, wide‐ranging and varied, and does not necessarily lend itself to the straightforward application of any evaluation criteria. Nevertheless, whether or not criteria are applied at all in the research evaluation process, postgraduate students face a number of decisions associated with the process of evaluating qualitative research: 1) whether or not to adopt a set of appraisal criteria; 2) which criteria to select, if criteria are used; and 3) how to apply alternative approaches to criteria‐focused evaluation. These decisions often require a paradigm shift (Khun, 1962) in the way postgraduate students perceive and approach their research. The messiness and complexity associated with such decisions can be confronting. This paper examines a number of approaches used by researchers to evaluate qualitative investigations in educational research.",
"title": ""
},
{
"docid": "bb335297dae74b8c5f45666d8ccb1c6b",
"text": "The popularity of Twitter attracts more and more spammers. Spammers send unwanted tweets to Twitter users to promote websites or services, which are harmful to normal users. In order to stop spammers, researchers have proposed a number of mechanisms. The focus of recent works is on the application of machine learning techniques into Twitter spam detection. However, tweets are retrieved in a streaming way, and Twitter provides the Streaming API for developers and researchers to access public tweets in real time. There lacks a performance evaluation of existing machine learning-based streaming spam detection methods. In this paper, we bridged the gap by carrying out a performance evaluation, which was from three different aspects of data, feature, and model. A big ground-truth of over 600 million public tweets was created by using a commercial URL-based security tool. For real-time spam detection, we further extracted 12 lightweight features for tweet representation. Spam detection was then transformed to a binary classification problem in the feature space and can be solved by conventional machine learning algorithms. We evaluated the impact of different factors to the spam detection performance, which included spam to nonspam ratio, feature discretization, training data size, data sampling, time-related data, and machine learning algorithms. The results show the streaming spam tweet detection is still a big challenge and a robust detection technique should take into account the three aspects of data, feature, and model.",
"title": ""
},
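The abstract above frames streaming spam detection as binary classification over a small set of lightweight, per-tweet features. A hedged sketch of such a pipeline with scikit-learn is shown below; the feature names, the synthetic data, and the random-forest choice are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Lightweight per-tweet features (assumed names, one row per tweet), e.g.
# account_age, no_followers, no_followings, no_tweets, no_urls,
# no_hashtags, no_mentions, no_chars, ... twelve in total.
rng = np.random.default_rng(1)
X = rng.random((5000, 12))
y = (rng.random(5000) < 0.05).astype(int)   # ~5% spam: class imbalance matters

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=1)
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=1)
clf.fit(X_tr, y_tr)
print("F1 on held-out tweets:", f1_score(y_te, clf.predict(X_te)))
```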
{
"docid": "d9eed063ea6399a8f33c6cbda3a55a62",
"text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. r 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5e8bcb0cc7ec8fc4c254bb392d761dda",
"text": "With the rapid development of mobile games and the roaring growth of market size, mobile game addiction is becoming a public concern. Hence, understanding the reasons behind mobile game addiction is worthwhile. Based on previous studies and two salient features of mobile games (e.g., hedonic and sociality), a research model is developed to examine the antecedents of mobile game addiction. Our proposed model is tested using a survey from 234 mobile game users and the results confirm most of our hypotheses. Specifically, perceived visibility and perceived enjoyment are found to be positively associated with flow which in turn affects addiction. Besides the indirect effect of perceived visibility on addiction via flow, perceived visibility is found to have a direct effect on addiction too. The implications for theory and practice are also discussed.",
"title": ""
},
{
"docid": "0c0b099a2a4a404632a1f065cfa328c4",
"text": "Quantum computers are available to use over the cloud, but the recent explosion of quantum software platforms can be overwhelming for those deciding on which to use. In this paper, we provide a current picture of the rapidly evolving quantum computing landscape by comparing four software platforms—Forest (pyQuil), QISKit, ProjectQ, and the Quantum Developer Kit—that enable researchers to use real and simulated quantum devices. Our analysis covers requirements and installation, language syntax through example programs, library support, and quantum simulator capabilities for each platform. For platforms that have quantum computer support, we compare hardware, quantum assembly languages, and quantum compilers. We conclude by covering features of each and briefly mentioning other quantum computing software packages.",
"title": ""
},
{
"docid": "f4c8a1b91ac31e5a2f3acf1371c3fedc",
"text": "A goal of redirected walking (RDW) is to allow large virtual worlds to be explored within small tracking areas. Generalized steering algorithms, such as steer-to-center, simply move the user toward locations that are considered to be collision free in most cases. The algorithm developed here, FORCE, identifies collision-free paths by using a map of the tracking area's shape and obstacles, in addition to a multistep, probabilistic prediction of the user's virtual path through a known virtual environment. In the present implementation, the path predictions describe a user's possible movements through a virtual store with aisles. Based on both the user's physical and virtual location / orientation, a search-based optimization technique identifies the optimal steering instruction given the possible user paths. Path prediction uses the map of the virtual world; consequently, the search may propose steering instructions that put the user close to walls if the user's future actions eventually lead away from the wall. Results from both simulated and real users are presented. FORCE identifies collision-free paths in 55.0 percent of the starting conditions compared to 46.1 percent for generalized methods. When considering only the conditions that result in different outcomes, redirection based on FORCE produces collision-free path 94.5 percent of the time.",
"title": ""
},
{
"docid": "e6548454f46962b5ce4c5d4298deb8e7",
"text": "The use of SVM (Support Vector Machines) in detecting e-mail as spam or nonspam by incorporating feature selection using GA (Genetic Algorithm) is investigated. An GA approach is adopted to select features that are most favorable to SVM classifier, which is named as GA-SVM. Scaling factor is exploited to measure the relevant coefficients of feature to the classification task and is estimated by GA. Heavy-bias operator is introduced in GA to promote sparse in the scaling factors of features. So, feature selection is performed by eliminating irrelevant features whose scaling factor is zero. The experiment results on UCI Spam database show that comparing with original SVM classifier, the number of support vector decreases while better classification results are achieved based on GA-SVM.",
"title": ""
},
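The GA-SVM scheme above evolves per-feature scaling factors and drops features whose factor is driven to zero. The sketch below shows one way such a loop could look, using a simple evolutionary search around an SVM; the population size, mutation scheme, and the "heavy-bias" zeroing probability are assumptions made for illustration, not the paper's exact operators.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(scales):
    """Cross-validated accuracy of an SVM on the scaled features;
    a zero scale removes a feature entirely."""
    return cross_val_score(SVC(kernel="rbf"), X * scales, y, cv=3).mean()

rng = np.random.default_rng(0)
pop = rng.random((20, X.shape[1]))            # population of scaling-factor vectors
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]   # keep the better half
    children = parents + rng.normal(0, 0.1, parents.shape)
    # "Heavy-bias"-style operator: push a random subset of scales to zero.
    children[rng.random(children.shape) < 0.2] = 0.0
    pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.nonzero(best)[0])
```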
{
"docid": "065740786a7fcb2e63df4103ea0ede59",
"text": "Accumulating glycine betaine through the ButA transport system from an exogenous supply is a survival strategy employed by Tetragenococcus halophilus, a moderate halophilic lactic acid bacterium with crucial role in flavor formation of high-salt food fermentation, to achieve cellular protection. In this study, we firstly confirmed that butA expression was up-regulated under salt stress conditions by quantitative reverse transcription polymerase chain reaction (qRT-PCR). Subsequently, we discovered that recombinant Escherichia coli MKH13 strains with single- and double-copy butA complete expression box(es) showed typical growth curves while they differed in their salt adaption and tolerance. Meanwhile, high-performance liquid chromatography (HPLC) experiments confirmed results obtained from growth curves. In summary, our results indicated that regulation of butA expression was salt-induced and double-copy butA cassettes entrusted a higher ability of salt adaption and tolerance to E. coli MKH13, which implied the potential of muti-copies of butA gene in the genetic modification of T. halophilus for improvement of salt tolerance and better industrial application.",
"title": ""
},
{
"docid": "18ef278ef018576a7434b483431bfa9e",
"text": "Personality dimensions are associated with preferences for various recreational activities. The present study examined whether personality dimensions differed in their associations with preferences for particular aspects of gaming experiences. We examined the unique associations that personality dimensions had with gaming preferences reported by 359 community members who were active gamers. The basic personality dimensions captured by the HEXACO model of personality were found to be associated with gaming preferences. For example, extraversion was found to have a moderate association with the socializer gaming preference (i.e., enjoyed interacting with others while playing) and a weak association with the daredevil gaming preference (i.e., enjoyed the thrill of taking risks in games). Discussion focuses on the implications of these results for understanding the connection between personality and preferences for gaming experiences.",
"title": ""
},
{
"docid": "46458b795de728c64b3c6b302f6ae574",
"text": "Cloud computing is an area that, nowadays, has been attracting a lot of researches and is expanding not only for processing data, but also for robotics. Cloud robotics is becoming a well-known subject, but it only works in a way to find a faster manner of processing data, which is almost like the idea of cloud computing. In this paper we have created a way to use cloud not only for this kind of operation but, also, to create a framework that helps users to work with ROS in a remote master, giving the possibility to create several applications that may run remotely. Using SpaceBrew, we do not have to worry about finding the robots addresses, which makes this application easier to implement because programmers only have to code as if the application is local.",
"title": ""
}
] |
scidocsrr
|
f91ca68e9ad929c71f9bc198e64e9161
|
Content-Based Collaborative Filtering for News Topic Recommendation
|
[
{
"docid": "048ff79b90371eb86b9d62810cfea31f",
"text": "In October, 2006 Netflix released a dataset containing 100 million anonymous movie ratings and challenged the data mining, machine learning and computer science communities to develop systems that could beat the accuracy of its recommendation system, Cinematch. We briefly describe the challenge itself, review related work and efforts, and summarize visible progress to date. Other potential uses of the data are outlined, including its application to the KDD Cup 2007.",
"title": ""
},
{
"docid": "89e8c2f2722f7aaaad77c0a3099d629e",
"text": "In this paper we present a generative latent variable model for rating-based collaborative filtering called the User Rating Profile model (URP). The generative process which underlies URP is designed to produce complete user rating profiles, an assignment of one rating to each item for each user. Our model represents each user as a mixture of user attitudes, and the mixing proportions are distributed according to a Dirichlet random variable. The rating for each item is generated by selecting a user attitude for the item, and then selecting a rating according to the preference pattern associated with that attitude. URP is related to several models including a multinomial mixture model, the aspect model [7], and LDA [1], but has clear advantages over each.",
"title": ""
}
] |
[
{
"docid": "32ae89cf9f73fbc92de63cadba484cdb",
"text": "INTRODUCTION\nThe aim of this study was to evaluate and compare several physicochemical properties including working and setting times, flow, solubility, and water absorption of a recent calcium silicate-based sealer (MTA Fillapex; Angelus, Londrina, Brazil) and an epoxy resin-based sealer (AH Plus; Dentsply, Konstanz, Germany).\n\n\nMETHODS\nThe materials were handled following the manufacturer's instructions. The working time and flow were tested according to ISO 6876:2001 and the setting time according to American Society for Testing and Materials C266. For solubility and water absorption tests, the materials were placed into polyvinyl chloride molds (8 × 1.6 mm). The samples (n = 10 for each material and test) were placed in a cylindrical polystyrene-sealed container with 20 mL deionized water at 37°C. At 1, 7, 14, and 28 days, the samples were removed from the solutions and blotted dry for solubility and water absorption tests. The data were analyzed using 1-way analysis of variance with the Tukey test (P < .05).\n\n\nRESULTS\nMTA Fillapex showed the lowest values of flow, working and setting times, solubility, and water absorption (P < .05). The solubility and water absorption increased significantly over time for both materials in a 1- to 28-day period (P < .05).\n\n\nCONCLUSIONS\nMTA Fillapex showed suitable physical properties to be used as an endodontic sealer.",
"title": ""
},
{
"docid": "a076df910e5d61d07dacad420dadc242",
"text": "Recognizing objects in fine-grained domains can be extremely challenging due to the subtle differences between subcategories. Discriminative markings are often highly localized, leading traditional object recognition approaches to struggle with the large pose variation often present in these domains. Pose-normalization seeks to align training exemplars, either piecewise by part or globally for the whole object, effectively factoring out differences in pose and in viewing angle. Prior approaches relied on computationally-expensive filter ensembles for part localization and required extensive supervision. This paper proposes two pose-normalized descriptors based on computationally-efficient deformable part models. The first leverages the semantics inherent in strongly-supervised DPM parts. The second exploits weak semantic annotations to learn cross-component correspondences, computing pose-normalized descriptors from the latent parts of a weakly-supervised DPM. These representations enable pooling across pose and viewpoint, in turn facilitating tasks such as fine-grained recognition and attribute prediction. Experiments conducted on the Caltech-UCSD Birds 200 dataset and Berkeley Human Attribute dataset demonstrate significant improvements of our approach over state-of-art algorithms.",
"title": ""
},
{
"docid": "dbd2caa344b9a385e6a4b2704fb0d945",
"text": "This paper describes the analysis, design, and experimental characterization of three-phase tubular modular permanent-magnet machines equipped with quasi-Halbach magnetized magnets. It identifies feasible slot/pole number combinations and discusses their relative merits. It establishes an analytical expression for the open-circuit magnetic field distribution, formulated in the cylindrical coordinate system. The expression has been verified by finite-element analysis. The analytical solution allows the prediction of the thrust force and electromotive force in closed forms, and provides an effective tool for design optimization, as will be described in Part II of the paper.",
"title": ""
},
{
"docid": "047007485d6a995f6145aadbc07dca8f",
"text": "Commerce is a rapidly emerging application area of ubiquitous computing. In this paper, we discuss the market forces that make the deployment of ubiquitous commerce infrastructures a priority for grocery retailing. We then proceed to report on a study on consumer perceptions of MyGrocer, a recently developed ubiquitous commerce system. The emphasis of the discussion is on aspects of security, privacy protection and the development of trust; we report on the findings of this study. We adopt the enacted view of technology adoption to interpret some of our findings based on three principles for the development of trust. We expect that this interpretation can help to guide the development of appropriate strategies for the successful deployment of ubiquitous commerce systems.",
"title": ""
},
{
"docid": "243502b2b8ed80764a2f37cabd968300",
"text": "We describe the design, development, and API of ODIN (Open Domain INformer), a domainindependent, rule-based event extraction (EE) framework. The proposed EE approach is: simple (most events are captured with simple lexico-syntactic patterns), powerful (the language can capture complex constructs, such as events taking other events as arguments, and regular expressions over syntactic graphs), robust (to recover from syntactic parsing errors, syntactic patterns can be freely mixed with surface, token-based patterns), and fast (the runtime environment processes 110 sentences/second in a real-world domain with a grammar of over 200 rules). We used this framework to develop a grammar for the biochemical domain, which approached human performance. Our EE framework is accompanied by a web-based user interface for the rapid development of event grammars and visualization of matches. The ODIN framework and the domain-specific grammars are available as open-source code.",
"title": ""
},
{
"docid": "28b824d73a1efb48ee5628ac461d925e",
"text": "Automatic assessment of sentiment from visual content has gained considerable attention with the increasing tendency of expressing opinions on-line. In this paper, we solve the problem of visual sentiment analysis using the high-level abstraction in the recognition process. Existing methods based on convolutional neural networks learn sentiment representations from the holistic image appearance. However, different image regions can have a different influence on the intended expression. This paper presents a weakly supervised coupled convolutional network with two branches to leverage the localized information. The first branch detects a sentiment specific soft map by training a fully convolutional network with the cross spatial pooling strategy, which only requires image-level labels, thereby significantly reducing the annotation burden. The second branch utilizes both the holistic and localized information by coupling the sentiment map with deep features for robust classification. We integrate the sentiment detection and classification branches into a unified deep framework and optimize the network in an end-to-end manner. Extensive experiments on six benchmark datasets demonstrate that the proposed method performs favorably against the state-of-the-art methods for visual sentiment analysis.",
"title": ""
},
{
"docid": "7acd253de05b3eb27d0abccbcb45367e",
"text": "High school programming competitions often follow the traditional model of collegiate competitions, exemplified by the ACM International Collegiate Programming Contest (ICPC). This tradition has been reinforced by the nature of Advanced Placement Computer Science (AP CS A), for which ICPC-style problems are considered an excellent practice regimen. As more and more students in high school computer science courses approach the field from broader starting points, such as Exploring Computer Science (ECS), or the new AP CS Principles course, an analogous structure for high school outreach events becomes of greater importance.\n This paper describes our work on developing a Scratch-based alternative competition for high school students, that can be run in parallel with a traditional morning of ICPC-style problems.",
"title": ""
},
{
"docid": "7c1c0e74fcd2fb36c60915a6947fcdac",
"text": "Modern deep transfer learning approaches have mainly focused on learning generic feature vectors from one task that are transferable to other tasks, such as word embeddings in language and pretrained convolutional features in vision. However, these approaches usually transfer unary features and largely ignore more structured graphical representations. This work explores the possibility of learning generic latent relational graphs that capture dependencies between pairs of data units (e.g., words or pixels) from large-scale unlabeled data and transferring the graphs to downstream tasks. Our proposed transfer learning framework improves performance on various tasks including question answering, natural language inference, sentiment analysis, and image classification. We also show that the learned graphs are generic enough to be transferred to different embeddings on which the graphs have not been trained (including GloVe embeddings, ELMo embeddings, and task-specific RNN hidden units), or embedding-free units such as image pixels.",
"title": ""
},
{
"docid": "b406126afe4c47ee95cb90f291a36ca6",
"text": "Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify de ning characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization method, to entropy-based and purity-based methods, which are supervised algorithms. We found that the performance of the Naive-Bayes algorithm signi cantly improved when features were discretized using an entropy-based method. In fact, over the 16 tested datasets, the discretized version of Naive-Bayes slightly outperformed C4.5 on average. We also show that in some cases, the performance of the C4.5 induction algorithm signi cantly improved if features were discretized in advance; in our experiments, the performance never signi cantly degraded, an interesting phenomenon considering the fact that C4.5 is capable of locally discretizing features.",
"title": ""
},
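The study above reports that entropy-based discretization of continuous features markedly helps Naive Bayes. A compact sketch of recursive entropy (information-gain) splitting for a single feature is shown below; the stopping rule is simplified to a fixed recursion depth rather than the MDL criterion usually used, which is an assumption made for brevity.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def entropy_split_points(x, y, depth=2):
    """Recursively pick cut points on feature x that maximise information
    gain about class labels y (fixed recursion depth instead of MDL)."""
    if depth == 0 or len(np.unique(y)) < 2:
        return []
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    candidates = np.unique(x_sorted)[:-1]
    best_gain, best_cut = 0.0, None
    base = entropy(y_sorted)
    for cut in candidates:
        left, right = y_sorted[x_sorted <= cut], y_sorted[x_sorted > cut]
        w = len(left) / len(y_sorted)
        gain = base - (w * entropy(left) + (1 - w) * entropy(right))
        if gain > best_gain:
            best_gain, best_cut = gain, cut
    if best_cut is None:
        return []
    left_mask = x <= best_cut
    return sorted(entropy_split_points(x[left_mask], y[left_mask], depth - 1)
                  + [best_cut]
                  + entropy_split_points(x[~left_mask], y[~left_mask], depth - 1))

x = np.array([0.1, 0.2, 0.3, 2.0, 2.1, 2.2, 5.0, 5.1])
y = np.array([0, 0, 0, 1, 1, 1, 0, 0])
cuts = entropy_split_points(x, y)
print("cut points:", cuts)                     # around 0.3 and 2.2 for this toy data
print("discretized:", np.digitize(x, cuts))    # bin index per value, ready for Naive Bayes
```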
{
"docid": "367ba3305217805d6068d6117a693a11",
"text": "Many efforts have been devoted to training generative latent variable models with autoregressive decoders, such as recurrent neural networks (RNN). Stochastic recurrent models have been successful in capturing the variability observed in natural sequential data such as speech. We unify successful ideas from recently proposed architectures into a stochastic recurrent model: each step in the sequence is associated with a latent variable that is used to condition the recurrent dynamics for future steps. Training is performed with amortized variational inference where the approximate posterior is augmented with a RNN that runs backward through the sequence. In addition to maximizing the variational lower bound, we ease training of the latent variables by adding an auxiliary cost which forces them to reconstruct the state of the backward recurrent network. This provides the latent variables with a task-independent objective that enhances the performance of the overall model. We found this strategy to perform better than alternative approaches such as KL annealing. Although being conceptually simple, our model achieves state-of-the-art results on standard speech benchmarks such as TIMIT and Blizzard and competitive performance on sequential MNIST. Finally, we apply our model to language modeling on the IMDB dataset where the auxiliary cost helps in learning interpretable latent variables.",
"title": ""
},
{
"docid": "cf42b86cf4e42d31e2726c4247edf17a",
"text": "Global Navigation Satellite System (GNSS) will in effect be fully deployed and operational in a few years, even with the delays in Galileo as a consequence of European Union's financial difficulties. The vastly broadened GNSS spectra, spread densely across 1146-1616 MHz, versus the narrow Global Positioning System (GPS) L1 and L2 bands, together with a constellation of over 100 Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO) satellites versus GPS' 24 MEO satellites, are revolutionizing the design of GNSS receive antennas. For example, a higher elevation cutoff angle will be preferred. As a result, fundamental changes in antenna design, new features and applications, as well as cost structures are ongoing. Existing GNSS receive antenna technologies are reviewed and design challenges are discussed.",
"title": ""
},
{
"docid": "c83ec9a4ec6f58ea2fe57bf2e4fa0c37",
"text": "Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, letting alone the unsupervised retrieval task. We propose the selective convolutional descriptor aggregation (SCDA) method. The SCDA first localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and the dimensionality is reduced into a short feature vector using the best practices we found. The SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained data sets confirm the effectiveness of the SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA’s high-mean average precision in fine-grained retrieval. Moreover, on general image retrieval data sets, the SCDA achieves comparable retrieval results with the state-of-the-art general image retrieval approaches.",
"title": ""
},
{
"docid": "b4f88692e7f563857f678c646e90ddf9",
"text": "The aim of this study was to examine the effects of match location, quality of opposition, and match status on possession strategies in a professional Spanish football team. Twenty-seven matches from the 2005-2006 domestic league season were notated post-event using a computerized match analysis system. Matches were divided into episodes according to evolving match status. Linear regression analysis showed that possession of the ball was greater when losing than when winning (P < 0.01) or drawing (P < 0.05), and playing against strong opposition was associated with a decrease in time spent in possession (P < 0.01). In addition, weighted mean percentage time spent in different zones of the pitch (defensive third, middle third, attacking third) was influenced by match status (P < 0.01) and match location (P < 0.05). A combination of these variables and their interactions can be used to develop a model to predict future possession in football. The findings emphasize the need for match analysts and coaches to consider independent and interactive potential effects of match location, quality of opposition, and match status during assessments of technical and tactical components of football performance. In particular, the findings indicate that strategies in soccer are influenced by match variables and teams alter their playing style during the game accordingly.",
"title": ""
},
{
"docid": "03b08a01be48aaa76684411b73e5396c",
"text": "The goal of TREC 2015 Clinical Decision Support Track was to retrieve biomedical articles relevant for answering three kinds of generic clinical questions, namely diagnosis, test, and treatment. In order to achieve this purpose, we investigated three approaches to improve the retrieval of relevant articles: modifying queries, improving indexes, and ranking with ensembles. Our final submissions were a combination of several different configurations of these approaches. Our system mainly focused on the summary fields of medical reports. We built two different kinds of indexes – an inverted index on the free text and a second kind of indexes on the Unified Medical Language System (UMLS) concepts within the entire articles that were recognized by MetaMap. We studied the variations of including UMLS concepts at paragraph and sentence level and experimented with different thresholds of MetaMap matching scores to filter UMLS concepts. The query modification process in our system involved automatic query construction, pseudo relevance feedback, and manual inputs from domain experts. Furthermore, we trained a re-ranking sub-system based on the results of TREC 2014 Clinical Decision Support track using Indri’s Learning to Rank package, RankLib. Our experiments showed that the ensemble approach could improve the overall results by boosting the ranking of articles that are near the top of several single ranked lists.",
"title": ""
},
{
"docid": "2419e2750787b1ba2f00d1629e3bbdad",
"text": "Resilient transportation systems enable quick evacuation, rescue, distribution of relief supplies, and other activities for reducing the impact of natural disasters and for accelerating the recovery from them. The resilience of a transportation system largely relies on the decisions made during a natural disaster. We developed an agent-based traffic simulator for predicting the results of potential actions taken with respect to the transportation system to quickly make appropriate decisions. For realistic simulation, we govern the behavior of individual drivers of vehicles with foundational principles learned from probe-car data. For example, we used the probe-car data to estimate the personality of individual drivers of vehicles in selecting their routes, taking into account various metrics of routes such as travel time, travel distance, and the number of turns. This behavioral model, which was constructed from actual data, constitutes a special feature of our simulator. We built this simulator using the X10 language, which enables massively parallel execution for simulating traffic in a large metropolitan area. We report the use cases of the simulator in three major cities in the context of disaster recovery and resilient transportation.",
"title": ""
},
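The simulator described above governs each driver's route choice through a personal weighting of metrics such as travel time, distance, and number of turns, with the weights estimated from probe-car data. A hedged sketch of that kind of per-driver utility model is given below, using a softmax (logit) choice over candidate routes; the weights, routes, and temperature are invented for illustration and are not the authors' calibrated values.

```python
import numpy as np

def choose_route(routes, weights, rng, temperature=1.0):
    """Pick a route with probability proportional to exp(-cost / T), where
    cost is a driver-specific weighted sum of route metrics."""
    metrics = np.array([[r["time"], r["distance"], r["turns"]] for r in routes])
    costs = metrics @ weights                 # personalised generalised cost
    probs = np.exp(-costs / temperature)
    probs /= probs.sum()
    return rng.choice(len(routes), p=probs)

routes = [
    {"time": 12.0, "distance": 5.0, "turns": 2},   # assumed candidate routes
    {"time": 10.0, "distance": 7.5, "turns": 6},
    {"time": 15.0, "distance": 4.0, "turns": 1},
]
# Per-driver weights (in a real system, learned from probe-car data): this
# driver cares mostly about travel time, a little about distance, and dislikes turns.
driver_weights = np.array([0.6, 0.2, 0.5])
rng = np.random.default_rng(42)
picks = [choose_route(routes, driver_weights, rng) for _ in range(1000)]
print("route choice frequencies:", np.bincount(picks, minlength=3) / 1000)
```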
{
"docid": "eeb9eb624d2eaf0d4649d048bbbb20d3",
"text": "The well-known distinction between field-based and objectbased approaches to spatial information is generalised to arbitrar y locational frameworks, including in particular space, time and space-time. W systematically explore the different ways in which these approaches can be c ombined, and address the relative merits of a fully four-dimensional app roach as against a more conventional ‘three-plus-one’-dimensional approac h. We single out as especially interesting in this respect a class of phenomena , here calledmultiaspect phenomena , which seem to present different aspects when considered from different points of view. Such phenomena (e.g., floods, wildfires, processions) are proposed as the most natural candidates for tr eatment as fully four-dimensional entities (‘hyperobjects’), but it remai ns problematic how to model them so as to do justice to their multi-aspectual nat ure. The paper ends with a range of important researchable questions aimed at clearing up some of the difficulties raised.",
"title": ""
},
{
"docid": "937b74a9bd0bababf9367607a32b14f8",
"text": "Voltage sag is a serious power quality problem affecting domestic, industrial and commercial customers. Voltage sags may either decrease or increase in the magnitude of system voltage due to faults or change in loads. In this paper a switch mode AC BuckBoost regulator is proposed to maintain voltage across a medium size domestic appliance constant during long period of voltage deviation from the rated value. Such deviation may occur due to change in load or change in input voltage due to voltage sag of the system itself.",
"title": ""
},
{
"docid": "c79c4bdf28ca638161cb82ac9991d5e9",
"text": "This letter proposes a novel wideband circularly polarized magnetoelectric dipole antenna. In the proposed antenna, a pair of rotationally symmetric horizontal patches functions as an electric dipole, and two vertical patches with the ground act as an equivalent magnetic dipole. A Γ-shaped probe is used to excite the antenna, and a metallic cavity with two gaps is designed for wideband and good performance in radiation. A prototype was fabricated and measured. The experimental results show that the proposed antenna has an impedance bandwidth of 65% for SWR≤2 from 1.76 to 3.46 GHz, a 3-dB axial-ratio bandwidth of 71.5% from 1.68 to 3.55 GHz, and a stable gain of 8 ± 1 dBi. Good unidirectional radiation characteristic and low back-lobe level are achieved over the whole operating frequency band.",
"title": ""
},
{
"docid": "32fdac85341377f54eaa7f9c2c2ffad7",
"text": "Event pattern matching is a query technique where a sequence of input events is matched against a complex pattern that specifies constraints on extent, order, values, and quantification of matching events. The increasing importance of such query techniques is underpinned by a significant amount of research work, the availability of commercial products, and by a recent proposal to extend SQL for event pattern matching. The proposed SQL extension includes an operator PERMUTE, which allows to express patterns that match any permutation of a set of events. No implementation of this operator is known to the authors.\n In this paper, we study the sequenced event set pattern matching problem, which is the problem of matching a sequence of input events against a complex pattern that specifies a sequence of sets of events rather than a sequence of single events. Similar to the PERMUTE operator, events that match with a set specified in the pattern can occur in any permutation, whereas events that match with different sets have to be strictly consecutive, following the order of the sets in the pattern specification. We formally define the problem of sequenced event set pattern matching, propose an automaton-based evaluation algorithm, and provide a detailed analysis of its runtime complexity. An empirical evaluation with real-world data shows that our algorithm outperforms a brute force approach that uses existing techniques to solve the sequenced event set pattern matching problem, and it validates the results from our complexity analysis.",
"title": ""
},
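The paper above matches an input event sequence against a pattern given as a sequence of event sets: events within a set may occur in any permutation, but sets must be matched consecutively and in order. A small sketch of that semantics is shown below, using a direct set-by-set scan rather than the paper's automaton-based evaluation, which is an assumption made for brevity.

```python
def matches_sequenced_sets(events, pattern):
    """Return True if `events` is a concatenation of blocks, where the i-th
    block is exactly some permutation of the i-th set in `pattern`."""
    pos = 0
    for event_set in pattern:
        block = events[pos:pos + len(event_set)]
        if sorted(block) != sorted(event_set):   # any permutation allowed
            return False
        pos += len(event_set)
    return pos == len(events)                    # no trailing events

# Pattern: first the set {a, b} in any order, then the set {c, d, e} in any order.
pattern = [["a", "b"], ["c", "d", "e"]]
print(matches_sequenced_sets(["b", "a", "d", "c", "e"], pattern))  # True
print(matches_sequenced_sets(["a", "c", "b", "d", "e"], pattern))  # False
```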
{
"docid": "b9d74514b91ac160bce0b39e0872c0b2",
"text": "Human falls occur very rarely; this makes it difficult to employ supervised classification techniques. Moreover, the sensing modality used must preserve the identity of those being monitored. In this paper, we investigate the use of thermal camera for fall detection, since it effectively masks the identity of those being monitored. We formulate the fall detection problem as an anomaly detection problem and aim to use autoencoders to identify falls. We also present a new anomaly scoring method to combine the reconstruction score of a frame across different video sequences. Our experiments suggests that Convolutional LSTM autoencoders perform better than convolutional and deep autoencoders in detecting unseen falls.",
"title": ""
}
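The fall-detection work above scores anomalies by how poorly an autoencoder, trained only on normal activity, reconstructs new thermal frames. The sketch below uses a plain convolutional autoencoder in PyTorch as a stand-in for the paper's Convolutional LSTM autoencoder, and synthetic frames in place of thermal video; both simplifications are assumptions made to keep the example short.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Convolutional autoencoder over single 64x64 frames."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, frames):
    """Per-frame mean squared reconstruction error; higher means more anomalous."""
    with torch.no_grad():
        recon = model(frames)
    return ((frames - recon) ** 2).mean(dim=(1, 2, 3))

# Synthetic stand-in for 64x64 thermal frames of normal activity.
normal = torch.rand(256, 1, 64, 64)
model = FrameAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):
    loss = nn.functional.mse_loss(model(normal), normal)
    opt.zero_grad()
    loss.backward()
    opt.step()

test = torch.rand(10, 1, 64, 64)
print(anomaly_score(model, test))   # threshold these scores to flag candidate falls
```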
] |
scidocsrr
|
3a20dd1a163c1eee5f8b04df75e4bccc
|
Computational imaging using lightweight diffractive-refractive optics.
|
[
{
"docid": "0771cd99e6ad19deb30b5c70b5c98183",
"text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.",
"title": ""
}
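The abstract above describes a grouping / 3D-transform / shrinkage / aggregation pipeline. The sketch below shows a heavily simplified version of that idea (non-overlapping reference patches, a plain DCT as the 3D transform, a fixed hard threshold, and no Wiener stage); the parameters and simplifications are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_block_matching(img, patch=8, search=16, n_similar=8, thresh=0.1):
    """For each reference patch, stack its most similar neighbours into a 3D
    group, hard-threshold the group's DCT spectrum, and average the filtered
    patches back into the image."""
    h, w = img.shape
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            ref = img[i:i + patch, j:j + patch]
            # Collect candidate patches in a local search window (coarse grid).
            coords, dists = [], []
            for ii in range(max(0, i - search), min(h - patch, i + search) + 1, 2):
                for jj in range(max(0, j - search), min(w - patch, j + search) + 1, 2):
                    cand = img[ii:ii + patch, jj:jj + patch]
                    coords.append((ii, jj))
                    dists.append(np.sum((cand - ref) ** 2))
            best = np.argsort(dists)[:n_similar]
            group = np.stack([img[ii:ii + patch, jj:jj + patch]
                              for ii, jj in (coords[k] for k in best)])
            # 3D transform, hard thresholding, inverse 3D transform.
            spec = dctn(group, norm="ortho")
            spec[np.abs(spec) < thresh] = 0.0
            filtered = idctn(spec, norm="ortho")
            # Aggregate filtered patches back at their original positions.
            for (ii, jj), patch_est in zip((coords[k] for k in best), filtered):
                out[ii:ii + patch, jj:jj + patch] += patch_est
                weight[ii:ii + patch, jj:jj + patch] += 1.0
    return out / np.maximum(weight, 1e-8)

rng = np.random.default_rng(0)
noisy = np.clip(0.5 + 0.1 * rng.normal(size=(64, 64)), 0.0, 1.0)
print(denoise_block_matching(noisy).shape)
```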
] |
[
{
"docid": "383fc498068c5c64dc3635bb6d3c98b5",
"text": "Acupuncture is used for some conditions as an alternative to medication or surgical intervention. Several complications had been reported, and they are generally due to physical injury by the needle or transmission of diseases. We report a case of life-threatening necrotising fasciitis that developed after acupuncture treatment for osteoarthritis of the knee in a 55-year-old diabetic woman. She presented with multiple discharging sinuses over the right knee. As the patient did not respond to intravenous antibiotics, extensive debridement was performed. She made a good recovery. Since many old diabetic patients with degenerative joint diseases may consider this mode of treatment, guidelines on cleanliness and sterility of this procedure should be developed and practiced.",
"title": ""
},
{
"docid": "397f1c1a01655098d8b35b04011400c7",
"text": "Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.",
"title": ""
},
{
"docid": "ae287f0cce2d1652c7579c02b4692acf",
"text": "Recent studies have shown that multiple brain areas contribute to different stages and aspects of procedural learning. On the basis of a series of studies using a sequence-learning task with trial-and-error, we propose a hypothetical scheme in which a sequential procedure is acquired independently by two cortical systems, one using spatial coordinates and the other using motor coordinates. They are active preferentially in the early and late stages of learning, respectively. Both of the two systems are supported by loop circuits formed with the basal ganglia and the cerebellum, the former for reward-based evaluation and the latter for processing of timing. The proposed neural architecture would operate in a flexible manner to acquire and execute multiple sequential procedures.",
"title": ""
},
{
"docid": "eb35dd61ff58b6f20f91f0db321fd4e1",
"text": "Multiple sclerosis (MS) is a disease of central nervous system that causes the removal of fatty myelin sheath from axons of the brain and spinal cord. Autoimmunity plays an important role in this pathology outcome and body's own immune system attacks on the myelin sheath causing the damage. The etiology of the disease is partially understood and the response to treatment cannot easily be predicted. We presented the results obtained using 8 genetically predisposed randomly chosen individuals reproducing both the absence and presence of malfunctions of the Teff-Treg cross-balancing mechanisms at a local level. For simulating the absence of a local malfunction we supposed that both Teff and Treg populations had similar maximum duplication rates. Results presented here suggest that presence of a genetic predisposition is not always a sufficient condition for developing the disease. Other conditions such as a breakdown of the mechanisms that regulate and allow peripheral tolerance should be involved. The presented model allows to capture the essential dynamics of relapsing-remitting MS despite its simplicity. It gave useful insights that support the hypothesis of a breakdown of Teff-Treg cross balancing mechanisms.",
"title": ""
},
{
"docid": "269409fcebb523deb10cb3d827de389b",
"text": "Variable stiffness and variable damping can play an important role in robot movement, particularly for legged robots such as bipedal walkers. Variable impedance also introduces new control problems, since there are more degrees of freedom to control, and the resulting robot has more complex dynamics. In this paper, we introduce novel design and fabrication methodologies that are capable of producing cost effective hardware prototypes suitable for investigating the efficacy of impedance modulation. We present two variable impedance bipedal platforms produced using a combination of waterjet cutting and 3D printing, and a novel fused deposition modeling (FDM) 3D printing based method for producing hybrid plastic/metal parts. We evaluate walking trajectories at different speeds and stiffness levels. [DOI: 10.1115/1.4030388]",
"title": ""
},
{
"docid": "8da79078b61f53aca62b4057b36788c2",
"text": "Criminals use money laundering to make the proceeds from their illegal activities look legitimate in the eyes of the rest of society. Current countermeasures taken by financial organizations are based on legal requirements and very basic statistical analysis. Machine Learning offers a number of ways to detect anomalous transactions. These methods can be based on supervised and unsupervised learning algorithms that improve the performance of detection of such criminal activity. In this study we present an analysis of the difficulties and considerations of applying machine learning techniques to this problem. We discuss the pros and cons of using synthetic data and problems and advantages inherent in the generation of such a data set. We do this using a case study and suggest an approach based on Multi-Agent Based Simulations (MABS).",
"title": ""
},
{
"docid": "597ce6d64e8a65f20e605533f4602eba",
"text": "Detailed scanning of indoor scenes is tedious for humans. We propose autonomous scene scanning by a robot to relieve humans from such a laborious task. In an autonomous setting, detailed scene acquisition is inevitably coupled with scene analysis at the required level of detail. We develop a framework for object-level scene reconstruction coupled with object-centric scene analysis. As a result, the autoscanning and reconstruction will be object-aware, guided by the object analysis. The analysis is, in turn, gradually improved with progressively increased object-wise data fidelity. In realizing such a framework, we drive the robot to execute an iterative analyze-and-validate algorithm which interleaves between object analysis and guided validations.\n The object analysis incorporates online learning into a robust graph-cut based segmentation framework, achieving a global update of object-level segmentation based on the knowledge gained from robot-operated local validation. Based on the current analysis, the robot performs proactive validation over the scene with physical push and scan refinement, aiming at reducing the uncertainty of both object-level segmentation and object-wise reconstruction. We propose a joint entropy to measure such uncertainty based on segmentation confidence and reconstruction quality, and formulate the selection of validation actions as a maximum information gain problem. The output of our system is a reconstructed scene with both object extraction and object-wise geometry fidelity.",
"title": ""
},
{
"docid": "9d9afbd6168c884f54f72d3daea57ca7",
"text": "0167-8655/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.patrec.2009.06.012 * Corresponding author. Tel.: +82 2 705 8931; fax: E-mail addresses: [email protected] (S. Yoon), sa Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bd30e7918a0187ff3d01d3653258bf27",
"text": "Recursive neural network is one of the most successful deep learning models for natural language processing due to the compositional nature of text. The model recursively composes the vector of a parent phrase from those of child words or phrases, with a key component named composition function. Although a variety of composition functions have been proposed, the syntactic information has not been fully encoded in the composition process. We propose two models, Tag Guided RNN (TGRNN for short) which chooses a composition function according to the part-ofspeech tag of a phrase, and Tag Embedded RNN/RNTN (TE-RNN/RNTN for short) which learns tag embeddings and then combines tag and word embeddings together. In the fine-grained sentiment classification, experiment results show the proposed models obtain remarkable improvement: TG-RNN/TE-RNN obtain remarkable improvement over baselines, TE-RNTN obtains the second best result among all the top performing models, and all the proposed models have much less parameters/complexity than their counterparts.",
"title": ""
},
{
"docid": "1ff8d3270f4884ca9a9c3d875bdf1227",
"text": "This paper addresses the challenging problem of perceiving the hidden or occluded geometry of the scene depicted in any given RGBD image. Unlike other image labeling problems such as image segmentation where each pixel needs to be assigned a single label, layered decomposition requires us to assign multiple labels to pixels. We propose a novel \"Occlusion-CRF\" model that allows for the integration of sophisticated priors to regularize the solution space and enables the automatic inference of the layer decomposition. We use a generalization of the Fusion Move algorithm to perform Maximum a Posterior (MAP) inference on the model that can handle the large label sets needed to represent multiple surface assignments to each pixel. We have evaluated the proposed model and the inference algorithm on many RGBD images of cluttered indoor scenes. Our experiments show that not only is our model able to explain occlusions but it also enables automatic inpainting of occluded/ invisible surfaces.",
"title": ""
},
{
"docid": "213313382d4e5d24a065d551012887ed",
"text": "The authors present full wave simulations and experimental results of propagation of electromagnetic waves in shallow seawaters. Transmitter and receiver antennas are ten-turns loops placed on the seabed. Some propagation frameworks are presented and simulated. Finally, simulation results are compared with experimental ones.",
"title": ""
},
{
"docid": "d7ec815dd17e8366b5238022339a0a14",
"text": "V marketing is a form of peer-to-peer communication in which individuals are encouraged to pass on promotional messages within their social networks. Conventional wisdom holds that the viral marketing process is both random and unmanageable. In this paper, we deconstruct the process and investigate the formation of the activated digital network as distinct from the underlying social network. We then consider the impact of the social structure of digital networks (random, scale free, and small world) and of the transmission behavior of individuals on campaign performance. Specifically, we identify alternative social network models to understand the mediating effects of the social structures of these models on viral marketing campaigns. Next, we analyse an actual viral marketing campaign and use the empirical data to develop and validate a computer simulation model for viral marketing. Finally, we conduct a number of simulation experiments to predict the spread of a viral message within different types of social network structures under different assumptions and scenarios. Our findings confirm that the social structure of digital networks play a critical role in the spread of a viral message. Managers seeking to optimize campaign performance should give consideration to these findings before designing and implementing viral marketing campaigns. We also demonstrate how a simulation model is used to quantify the impact of campaign management inputs and how these learnings can support managerial decision making.",
"title": ""
},
{
"docid": "06c51b2a995d4ccbddd85898afa36ae8",
"text": "Denial of Service (DoS henceforth) attack is performed solely with the intention to deny the legitimate users to access services. Since DoS attack is usually performed by means of bots, automated software. These bots send a large number of fake requests to the server which exceeds server buffer capacity which results in DoS attack. In this paper we propose an idea to prevent DoS attack on web-sites which ask for user credentials before it allows them to access resources. Our approach is based on CAPTCHA verification. We verify CAPTCHA submitted by user before allowing the access to credentials page. The CAPTCHA would consist of variety of patterns that would be distinct in nature and are randomly generated during each visit to the webpage. Most of the current web sites use a common methodology to generate all its CAPTCHAs. The bots usually take advantage of this approach since bots are able to decipher those CAPTCHAs. A set of distinct CAPTCHA patterns prevents bots to decipher it and consequently helps to reduce the generation of illicit traffic. This preserves the server bandwidth to allow the legitimate users to access the site.",
"title": ""
},
{
"docid": "75d8a1f2f59b0b49d521efb8ce6f6440",
"text": "Graph embeddings have become a key and widely used technique within the field of graph mining, proving to be successful across a broad range of domains including social, citation, transportation and biological. Graph embedding techniques aim to automatically create a low-dimensional representation of a given graph, which captures key structural elements in the resulting embedding space. However, to date, there has been little work exploring exactly which topological structures are being learned in the embeddings process. In this paper, we investigate if graph embeddings are approximating something analogous with traditional vertex level graph features. If such a relationship can be found, it could be used to provide a theoretical insight into how graph embedding approaches function. We perform this investigation by predicting known topological features, using supervised and unsupervised methods, directly from the embedding space. If a mapping between the embeddings and topological features can be found, then we argue that the structural information encapsulated by the features is represented in the embedding space. To explore this, we present extensive experimental evaluation from five stateof-the-art unsupervised graph embedding techniques, across a range of empirical graph datasets, measuring a selection of topological features. We demonstrate that several topological features are indeed being approximated by the embedding space, allowing key insight into how graph embeddings create good representations.",
"title": ""
},
{
"docid": "31b161f4288fb2e60f2d72c384906d94",
"text": "This article presents a study that aims at constructing a teaching framework for software development methods in higher education. The research field is a capstone project-based course, offered by the Technion’s Department of Computer Science, in which Extreme Programming is introduced. The research paradigm is an Action Research that involves cycles of data collection, examination, evaluation, and application of results. The research uses several research tools for data gathering, as well as several research methods for data interpretation. The article describes in detail the research background, the research method, and the gradual emergence process of a framework for teaching software development methods. As part of the comprehensive teaching framework, a set of measures is developed to assess, monitor, and improve the teaching and the actual process of software development projects.",
"title": ""
},
{
"docid": "9e3263866208bbc6a9019b3c859d2a66",
"text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.",
"title": ""
},
{
"docid": "69a6cfb649c3ccb22f7a4467f24520f3",
"text": "We propose a two-stage neural model to tackle question generation from documents. First, our model estimates the probability that word sequences in a document are ones that a human would pick when selecting candidate answers by training a neural key-phrase extractor on the answers in a question-answering corpus. Predicted key phrases then act as target answers and condition a sequence-tosequence question-generation model with a copy mechanism. Empirically, our keyphrase extraction model significantly outperforms an entity-tagging baseline and existing rule-based approaches. We further demonstrate that our question generation system formulates fluent, answerable questions from key phrases. This twostage system could be used to augment or generate reading comprehension datasets, which may be leveraged to improve machine reading systems or in educational settings.",
"title": ""
},
{
"docid": "39838881287fd15b29c20f18b7e1d1eb",
"text": "In the software industry, a challenge firms often face is how to effectively commercialize innovations. An emerging business model increasingly embraced by entrepreneurs, called freemium, combines “free” and “premium” consumption in association with a product or service. In a nutshell, this model involves giving away for free a certain level or type of consumption while making money on premium consumption. We develop a unifying multi-period microeconomic framework with network externalities embedded into consumer learning in order to capture the essence of conventional for-fee models, several key freemium business models such as feature-limited or time-limited, and uniform market seeding models. Under moderate informativeness of word-of-mouth signals, we fully characterize conditions under which firms prefer freemium models, depending on consumer priors on the value of individual software modules, perceptions of crossmodule synergies, and overall value distribution across modules. Within our framework, we show that uniform seeding is always dominated by either freemium models or conventional for-fee models. We further discuss managerial and policy implications based on our analysis. Interestingly, we show that freemium, in one form or another, is always preferred from the social welfare perspective, and we provide guidance on when the firms need to be incentivized to align their interests with the society’s. Finally, we discuss how relaxing some of the assumptions of our model regarding costs or informativeness and heterogeneity of word of mouth may reduce the profit gap between seeding and the other models, and potentially lead to seeding becoming the preferred approach for the firm.",
"title": ""
},
{
"docid": "8108f8c3d53f44ca3824f4601aacdce1",
"text": "This paper presents a robust multi-class multi-object tracking (MCMOT) formulated by a Bayesian filtering framework. Multiobject tracking for unlimited object classes is conducted by combining detection responses and changing point detection (CPD) algorithm. The CPD model is used to observe abrupt or abnormal changes due to a drift and an occlusion based spatiotemporal characteristics of track states. The ensemble of convolutional neural network (CNN) based object detector and Lucas-Kanede Tracker (KLT) based motion detector is employed to compute the likelihoods of foreground regions as the detection responses of different object classes. Extensive experiments are performed using lately introduced challenging benchmark videos; ImageNet VID and MOT benchmark dataset. The comparison to state-of-the-art video tracking techniques shows very encouraging results.",
"title": ""
},
{
"docid": "ac529a455bcefa58abafa6c679bec2b4",
"text": "This article presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.",
"title": ""
}
] |
scidocsrr
|
3144cc709445f801f4096ed8d4281a88
|
Peer Assessment in MOOCs Using Preference Learning via Matrix Factorization
|
[
{
"docid": "9b942a1342eb3c4fd2b528601fa42522",
"text": "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.",
"title": ""
},
{
"docid": "264dbf645418fc301b3633a280c3ad0d",
"text": "Music prediction tasks range from predicting tags given a song or clip of audio, predicting the name of the artist, or predicting related songs given a song, clip, artist name or tag. That is, we are interested in every semantic relationship between the different musical concepts in our database. In realistically sized databases, the number of songs is measured in the hundreds of thousands or more, and the number of artists in the tens of thousands or more, providing a considerable challenge to standard machine learning techniques. In this work, we propose a method that scales to such datasets which attempts to capture the semantic similarities between the database items by modeling audio, artist names, and tags in a single low-dimensional semantic embedding space. This choice of space is learnt by optimizing the set of prediction tasks of interest jointly using multi-task learning. Our single model learnt by training on the joint objective function is shown experimentally to have improved accuracy over training on each task alone. Our method also outperforms the baseline methods tried and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where the semantic space captures well the similarities of interest.",
"title": ""
}
] |
[
{
"docid": "1a6ca0ec232b7bbfc4c103b69ec29996",
"text": "In this paper, we review different memristive threshold logic (MTL) circuits that are inspired from the synaptic action of the flow of neurotransmitters in the biological brain. The brainlike generalization ability and the area minimization of these threshold logic circuits aim toward crossing Moore’s law boundaries at device, circuits, and systems levels. Fast switching memory, signal processing, control systems, programmable logic, image processing, reconfigurable computing, and pattern recognition are identified as some of the potential applications of MTL systems. The physical realization of nanoscale devices with memristive behavior from materials, such as TiO2, ferroelectrics, silicon, and polymers, has accelerated research effort in these application areas, inspiring the scientific community to pursue the design of high-speed, low-cost, low-power, and high-density neuromorphic architectures.",
"title": ""
},
{
"docid": "857efb4909ada73ca849acf24d6e74db",
"text": "Owing to inevitable thermal/moisture instability for organic–inorganic hybrid perovskites, pure inorganic perovskite cesium lead halides with both inherent stability and prominent photovoltaic performance have become research hotspots as a promising candidate for commercial perovskite solar cells. However, it is still a serious challenge to synthesize desired cubic cesium lead iodides (CsPbI3) with superior photovoltaic performance for its thermodynamically metastable characteristics. Herein, polymer poly-vinylpyrrolidone (PVP)-induced surface passivation engineering is reported to synthesize extra-long-term stable cubic CsPbI3. It is revealed that acylamino groups of PVP induce electron cloud density enhancement on the surface of CsPbI3, thus lowering surface energy, conducive to stabilize cubic CsPbI3 even in micrometer scale. The cubic-CsPbI3 PSCs exhibit extra-long carrier diffusion length (over 1.5 μm), highest power conversion efficiency of 10.74% and excellent thermal/moisture stability. This result provides important progress towards understanding of phase stability in realization of large-scale preparations of efficient and stable inorganic PSCs. Inorganic cesium lead iodide perovskite is inherently more stable than the hybrid perovskites but it undergoes phase transition that degrades the solar cell performance. Here Li et al. stabilize it with poly-vinylpyrrolidone and obtain high efficiency of 10.74% with excellent thermal and moisture stability.",
"title": ""
},
{
"docid": "1c288a6b7446649ff77e40d5e366a142",
"text": "This paper describes the development of a new expressive robotic head for the bipedal humanoid robot. Facial expressions of our old robotic head have low facial expression recognition rate and in order to improve it we asked amateur cartoonists to create computer graphics (CG) images. To realize such expressions found in the CGs, the new head was provided with 24-DoFs and facial color. We designed compact mechanisms that fit into the head, which dimensions are based on average adult Japanese female size. We conducted a survey with pictures and videos to evaluate the expression ability. Results showed that facial expression recognition rates for the 6 basic emotions are increased compared to the old KOBIAN's head.",
"title": ""
},
{
"docid": "039231246a5aee3594b97d5037dfb010",
"text": "Reactive oxygen species (ROS) generation in tomato plants by Ralstonia solanacearum infection and the role of hydrogen peroxide (H2O2) and nitric oxide in tomato bacterial wilt control were demonstrated. During disease development of tomato bacterial wilt, accumulation of superoxide anion (O2 (-)) and H2O2 was observed and lipid peroxidation also occurred in the tomato leaf tissues. High doses of H2O2and sodium nitroprusside (SNP) nitric oxide donor showed phytotoxicity to detached tomato leaves 1 day after petiole feeding showing reduced fresh weight. Both H2O2and SNP have in vitro antibacterial activities against R. solanacearum in a dose-dependent manner, as well as plant protection in detached tomato leaves against bacterial wilt by 10(6) and 10(7) cfu/ml of R. solanacearum. H2O2- and SNP-mediated protection was also evaluated in pots using soil-drench treatment with the bacterial inoculation, and relative 'area under the disease progressive curve (AUDPC)' was calculated to compare disease protection by H2O2 and/or SNP with untreated control. Neither H2O2 nor SNP protect the tomato seedlings from the bacterial wilt, but H2O2+ SNP mixture significantly decreased disease severity with reduced relative AUDPC. These results suggest that H2O2 and SNP could be used together to control bacterial wilt in tomato plants as bactericidal agents.",
"title": ""
},
{
"docid": "ce32b34898427802abd4cc9c99eac0bc",
"text": "A circular polarizer is a single layer or multi-layer structure that converts linearly polarized waves into circularly polarized ones and vice versa. In this communication, a simple method based on transmission line circuit theory is proposed to model and design circular polarizers. This technique is more flexible than those previously presented in the way that it permits to design polarizers with the desired spacing between layers, while obtaining surfaces that may be easier to fabricate and less sensitive to fabrication errors. As an illustrating example, a modified version of the meander-line polarizer being twice as thin as its conventional counterpart is designed. Then, both polarizers are fabricated and measured. Results are shown and compared for normal and oblique incidence angles in the planes φ = 0° and φ = 90°.",
"title": ""
},
{
"docid": "2c04fd272c90a8c0a74a16980fcb5b03",
"text": "We propose a multimodal, decomposable model for articulated human pose estimation in monocular images. A typical approach to this problem is to use a linear structured model, which struggles to capture the wide range of appearance present in realistic, unconstrained images. In this paper, we instead propose a model of human pose that explicitly captures a variety of pose modes. Unlike other multimodal models, our approach includes both global and local pose cues and uses a convex objective and joint training for mode selection and pose estimation. We also employ a cascaded mode selection step which controls the trade-off between speed and accuracy, yielding a 5x speedup in inference and learning. Our model outperforms state-of-the-art approaches across the accuracy-speed trade-off curve for several pose datasets. This includes our newly-collected dataset of people in movies, FLIC, which contains an order of magnitude more labeled data for training and testing than existing datasets.",
"title": ""
},
{
"docid": "b4166b57419680e348d7a8f27fbc338a",
"text": "OBJECTIVES\nTreatments of female sexual dysfunction have been largely unsuccessful because they do not address the psychological factors that underlie female sexuality. Negative self-evaluative processes interfere with the ability to attend and register physiological changes (interoceptive awareness). This study explores the effect of mindfulness meditation training on interoceptive awareness and the three categories of known barriers to healthy sexual functioning: attention, self-judgment, and clinical symptoms.\n\n\nMETHODS\nForty-four college students (30 women) participated in either a 12-week course containing a \"meditation laboratory\" or an active control course with similar content or laboratory format. Interoceptive awareness was measured by reaction time in rating physiological response to sexual stimuli. Psychological barriers were assessed with self-reported measures of mindfulness and psychological well-being.\n\n\nRESULTS\nWomen who participated in the meditation training became significantly faster at registering their physiological responses (interoceptive awareness) to sexual stimuli compared with active controls (F(1,28) = 5.45, p = .03, η(p)(2) = 0.15). Female meditators also improved their scores on attention (t = 4.42, df = 11, p = .001), self-judgment, (t = 3.1, df = 11, p = .01), and symptoms of anxiety (t = -3.17, df = 11, p = .009) and depression (t = -2.13, df = 11, p < .05). Improvements in interoceptive awareness were correlated with improvements in the psychological barriers to healthy sexual functioning (r = -0.44 for attention, r = -0.42 for self-judgment, and r = 0.49 for anxiety; all p < .05).\n\n\nCONCLUSIONS\nMindfulness-based improvements in interoceptive awareness highlight the potential of mindfulness training as a treatment of female sexual dysfunction.",
"title": ""
},
{
"docid": "843ea8a700adf545288175c1062107bb",
"text": "Stress is a natural reaction to various stress-inducing factors which can lead to physiological and behavioral changes. If persists for a longer period, stress can cause harmful effects on our body. The body sensors along with the concept of the Internet of Things can provide rich information about one's mental and physical health. The proposed work concentrates on developing an IoT system which can efficiently detect the stress level of a person and provide a feedback which can assist the person to cope with the stressors. The system consists of a smart band module and a chest strap module which can be worn around wrist and chest respectively. The system monitors the parameters such as Electro dermal activity and Heart rate in real time and sends the data to a cloud-based ThingSpeak server serving as an online IoT platform. The computation of the data is performed using a ‘MATLAB Visualization’ application and the stress report is displayed. The authorized person can log in, view the report and take actions such as consulting a medical person, perform some meditation or yoga exercises to cope with the condition.",
"title": ""
},
{
"docid": "2bc6775efec2b59ad35b9f4841c7f3cf",
"text": "Cryptographic schemes for computing on encrypted data promise to be a fundamental building block of cryptography. The way one models such algorithms has a crucial effect on the efficiency and usefulness of the resulting cryptographic schemes. As of today, almost all known schemes for fully homomorphic encryption, functional encryption, and garbling schemes work by modeling algorithms as circuits rather than as Turing machines. As a consequence of this modeling, evaluating an algorithm over encrypted data is as slow as the worst-case running time of that algorithm, a dire fact for many tasks. In addition, in settings where an evaluator needs a description of the algorithm itself in some “encoded” form, the cost of computing and communicating such encoding is as large as the worst-case running time of this algorithm. In this work, we construct cryptographic schemes for computing Turing machines on encrypted data that avoid the worst-case problem. Specifically, we show: – An attribute-based encryption scheme for any polynomial-time Turing machine and Random Access Machine (RAM). – A (single-key and succinct) functional encryption scheme for any polynomialtime Turing machine. – A reusable garbling scheme for any polynomial-time Turing machine. These three schemes have the property that the size of a key or of a garbling for a Turing machine is very short: it depends only on the description of the Turing machine and not on its running time. Previously, the only existing constructions of such schemes were for depth-d circuits, where all the parameters grow with d. Our constructions remove this depth d restriction, have short keys, and moreover, avoid the worst-case running time. – A variant of fully homomorphic encryption scheme for Turing machines, where one can evaluate a Turing machine M on an encrypted input x in time that is dependent on the running time of M on input x as opposed to the worst-case runtime of M . Previously, such a result was known only for a restricted class of Turing machines and it required an expensive preprocessing phase (with worst-case runtime); our constructions remove both restrictions. Our results are obtained via a reduction from SNARKs (Bitanski et al) and an “extractable” variant of witness encryption, a scheme introduced by Garg et al.. We prove that the new assumption is secure in the generic group model. We also point out the connection between (the variant of) witness encryption and the obfuscation of point filter functions as defined by Goldwasser and Kalai in 2005.",
"title": ""
},
{
"docid": "e995adcdeb6c290eb484ad136d48e8a0",
"text": "Extended-connectivity fingerprints (ECFPs) are a novel class of topological fingerprints for molecular characterization. Historically, topological fingerprints were developed for substructure and similarity searching. ECFPs were developed specifically for structure-activity modeling. ECFPs are circular fingerprints with a number of useful qualities: they can be very rapidly calculated; they are not predefined and can represent an essentially infinite number of different molecular features (including stereochemical information); their features represent the presence of particular substructures, allowing easier interpretation of analysis results; and the ECFP algorithm can be tailored to generate different types of circular fingerprints, optimized for different uses. While the use of ECFPs has been widely adopted and validated, a description of their implementation has not previously been presented in the literature.",
"title": ""
},
{
"docid": "0185d09853600b950f5a1af27e0cdd91",
"text": "In this paper, the problem of matching pairs of correlated random graphs with multi-valued edge attributes is considered. Graph matching problems of this nature arise in several settings of practical interest including social network de-anonymization, study of biological data, and web graphs. An achievable region of graph parameters for successful matching is derived by analyzing a new matching algorithm that we refer to as typicality matching. The algorithm operates by investigating the joint typicality of the adjacency matrices of the two correlated graphs. Our main result shows that the achievable region depends on the mutual information between the variables corresponding to the edge probabilities of the two graphs. The result is based on bounds on the typicality of permutations of sequences of random variables that might be of independent interest.",
"title": ""
},
{
"docid": "80b514540933a9cc31136c8cb86ec9b3",
"text": "We tackle the problem of detecting occluded regions in a video stream. Under assumptions of Lambertian reflection and static illumination, the task can be posed as a variational optimization problem, and its solution approximated using convex minimization. We describe efficient numerical schemes that reach the global optimum of the relaxed cost functional, for any number of independently moving objects, and any number of occlusion layers. We test the proposed algorithm on benchmark datasets, expanded to enable evaluation of occlusion detection performance, in addition to optical flow.",
"title": ""
},
{
"docid": "b722f2fbdf20448e3a7c28fc6cab026f",
"text": "Alternative Mechanisms Rationale/Arguments/ Assumptions Connected Literature/Theory Resulting (Possible) Effect Support for/Against A1. Based on WTP and Exposure Theory A1a Light user segments (who are likely to have low WTP) are more likely to reduce (or even discontinue in extreme cases) their consumption of NYT content after the paywall implementation. Utility theory — WTP (Danaher 2002) Juxtaposing A1a and A1b leads to long tail effect due to the disproportionate reduction of popular content consumption (as a results of reduction of content consumption by light users). A1a. Supported (see the descriptive statistics in Table 11). A1b. Supported (see results from the postestimation of finite mixture model in Table 9) Since the resulting effects as well as both the assumptions (A1a and A1b) are supported, we suggest that there is support for this mechanism. A1b Light user segments are more likely to consume popular articles whereas the heavy user segment is more likely to consume a mix of niche articles and popular content. Exposure theory (McPhee 1963)",
"title": ""
},
{
"docid": "b19cbe5e99f2edb701ba22faa7406073",
"text": "There are many wireless monitoring and control applications for industrial and home markets which require longer battery life, lower data rates and less complexity than available from existing wireless standards. These standards provide higher data rates at the expense of power consumption, application complexity and cost. What these markets need, in many cases, is a standardsbased wireless technology having the performance characteristics that closely meet the requirements for reliability, security, low power and low cost. This standards-based, interoperable wireless technology will address the unique needs of low data rate wireless control and sensor-based networks.",
"title": ""
},
{
"docid": "d79252babce60e4353e2481feec57111",
"text": "A modification of stacked spiral inductors increases the self-resonance frequency by 100% with no additional processing steps, yielding values of 5 to 266 nH and self-resonance frequencies of 11.2 to 0.5 GHz. Closed-form expressions predicting the self-resonance frequency with less than 5% error have also been developed. Stacked transformers are also introduced that achieve voltage gains of 1.8 to 3 at multigigahertz frequencies. The structures have been fabricated in standard digital CMOS technologies with four and five metal layers.",
"title": ""
},
{
"docid": "cffe9e1a98238998c174e93c73785576",
"text": "๏ The experimental results show that the proposed model effectively generate more diverse and meaningful responses involving more accurate relevant entities compared with the state-of-the-art baselines. We collect a multi-turn conversation corpus which includes not only facts related inquiries but also knowledge-based chit-chats. The data is publicly available at https:// github.com/liushuman/neural-knowledge-diffusion. We obtain the element information of each movie from https://movie.douban.com/ and build the knowledge base K. The question-answering dialogues and knowledge related chit-chat are crawled from https://zhidao.baidu.com/ and https://www.douban.com/group/. The conversations are grounded on the knowledge using NER, string match, and artificial scoring and filtering rules. The total 32977 conversations consisting of 104567 utterances are divided into training (32177) and testing set (800). Overview",
"title": ""
},
{
"docid": "e5f24bd016377f49c2fc47d2c56f8128",
"text": "In this work we tackle the problem of semantic image segmentation with a combination of convolutional neural networks (CNNs) and conditional random fields (CRFs). The CRF takes contrast sensitive weights in a local neighborhood as input (pairwise interactions) to encourage consistency (smoothness) within the prediction and align our segmentation boundaries with visual edges. We model unary terms with a CNN which outperforms non data driven models. We approximate the CRF inference with a fixed number of iterations of a linearprogramming relaxation based approach. We experiment with training the combined model end-to-end using a discriminative formulation (structured support vector machine) and applying stochastic subgradient descend to it. Our proposed model achieves an intersection over union score of 62.4 in the test set of the cityscapes pixel-level semantic labeling task which is comparable to state-of-the-art models.",
"title": ""
},
{
"docid": "2d57ab9827a0dde1b35f0739588f1eee",
"text": "Probabilistic topic models could be used to extract lowdimension topics from document collections. However, such models without any human knowledge often produce topics that are not interpretable. In recent years, a number of knowledge-based topic models have been proposed, but they could not process fact-oriented triple knowledge in knowledge graphs. Knowledge graph embeddings, on the other hand, automatically capture relations between entities in knowledge graphs. In this paper, we propose a novel knowledge-based topic model by incorporating knowledge graph embeddings into topic modeling. By combining latent Dirichlet allocation, a widely used topic model with knowledge encoded by entity vectors, we improve the semantic coherence significantly and capture a better representation of a document in the topic space. Our evaluation results will demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "b4f06236b0babb6cd049c8914170d7bf",
"text": "We propose a simple and efficient method for exploiting synthetic images when training a Deep Network to predict a 3D pose from an image. The ability of using synthetic images for training a Deep Network is extremely valuable as it is easy to create a virtually infinite training set made of such images, while capturing and annotating real images can be very cumbersome. However, synthetic images do not resemble real images exactly, and using them for training can result in suboptimal performance. It was recently shown that for exemplar-based approaches, it is possible to learn a mapping from the exemplar representations of real images to the exemplar representations of synthetic images. In this paper, we show that this approach is more general, and that a network can also be applied after the mapping to infer a 3D pose: At run-time, given a real image of the target object, we first compute the features for the image, map them to the feature space of synthetic images, and finally use the resulting features as input to another network which predicts the 3D pose. Since this network can be trained very effectively by using synthetic images, it performs very well in practice, and inference is faster and more accurate than with an exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for 3D object pose estimation from color images, and the NYU dataset for 3D hand pose estimation from depth maps. We show that it allows us to outperform the state-of-the-art on both datasets.",
"title": ""
},
{
"docid": "a2f65eb4a81bc44ea810d834ab33d891",
"text": "This survey provides the basis for developing research in the area of mobile manipulator performance measurement, an area that has relatively few research articles when compared to other mobile manipulator research areas. The survey provides a literature review of mobile manipulator research with examples of experimental applications. The survey also provides an extensive list of planning and control references as this has been the major research focus for mobile manipulators which factors into performance measurement of the system. The survey then reviews performance metrics considered for mobile robots, robot arms, and mobile manipulators and the systems that measure their performance, including machine tool measurement systems through dynamic motion tracking systems. Lastly, the survey includes a section on research that has occurred for performance measurement of robots, mobile robots, and mobile manipulators beginning with calibration, standards, and mobile manipulator artifacts that are being considered for evaluation of mobile manipulator performance.",
"title": ""
}
] |
scidocsrr
|
093386ea12cba6053b1c06c9e79949ff
|
"Dave...I can assure you ...that it's going to be all right ..." A Definition, Case for, and Survey of Algorithmic Assurances in Human-Autonomy Trust Relationships
|
[
{
"docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa",
"text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking",
"title": ""
},
{
"docid": "71b5c8679979cccfe9cad229d4b7a952",
"text": "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.\n In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"title": ""
}
] |
[
{
"docid": "f530aab20b4650bf767cfd77d6676130",
"text": "Obfuscated malware has become popular because of pure benefits brought by obfuscation: low cost and readily availability of obfuscation tools accompanied with good result of evading signature based anti-virus detection as well as prevention of reverse engineer from understanding malwares' true nature. Regardless obfuscation methods, a malware must deobfuscate its core code back to clear executable machine code so that malicious portion will be executed. Thus, to analyze the obfuscation pattern before unpacking provide a chance for us to prevent malware from further execution. In this paper, we propose a heuristic detection approach that targets obfuscated windows binary files being loaded into memory - prior to execution. We perform a series of static check on binary file's PE structure for common traces of a packer or obfuscation, and gauge a binary's maliciousness with a simple risk rating mechanism. As a result, a newly created process, if flagged as possibly malicious by the static screening, will be prevented from further execution. This paper explores the foundation of this research, as well as the testing methodology and current results.",
"title": ""
},
{
"docid": "0f285bcef022d0c260b97b14be2a4af3",
"text": "These are expended notes of my talk at the summer institute in algebraic geometry (Seattle, July-August 2005), whose main purpose is to present a global overview on the theory of higher and derived stacks. This text is far from being exhaustive but is intended to cover a rather large part of the subject, starting from the motivations and the foundational material, passing through some examples and basic notions, and ending with some more recent developments and open questions.",
"title": ""
},
{
"docid": "be009b972c794d01061c4ebdb38cc720",
"text": "The existing efforts in computer assisted semen analysis have been focused on high speed imaging and automated image analysis of sperm motility. This results in a large amount of data, and it is extremely challenging for both clinical scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval and to summarize spatio-temporal motion characteristics using static visual representations. The design of the glyphs addresses the needs for (a) encoding some 20 variables using separable visual channels, (b) supporting scientific observation of the interrelationships between different measurements and comparison between different sperm cells and their flagella, and (c) facilitating the learning of the encoding scheme by making use of appropriate visual abstractions and metaphors. As a case study, we focus this work on video visualization for computer-aided semen analysis, which has a broad impact on both biological sciences and medical healthcare. We demonstrate that glyph-based visualization can serve as a means of external memorization of video data as well as an overview of a large set of spatiotemporal measurements. It enables domain scientists to make scientific observation in a cost-effective manner by reducing the burden of viewing videos repeatedly, while providing them with a new visual representation for conveying semen statistics.",
"title": ""
},
{
"docid": "e08914f566fde1dd91a5270d0e12d886",
"text": "Automation in agriculture system is very important these days. This paper proposes an automated system for irrigating the fields. ESP-8266 WIFI module chip is used to connect the system to the internet. Various types of sensors are used to check the content of moisture in the soil, and the water is supplied to the soil through the motor pump. IOT is used to inform the farmers of the supply of water to the soil through an android application. Every time water is given to the soil, the farmer will get to know about that.",
"title": ""
},
{
"docid": "73064f8fb25990982e74004ef0be1dca",
"text": "The objective of this study is to empirically compare the predictive power of the hedonic model with an artificial neural network model for house price prediction. A sample of 200 houses in Christchurch, New Zealand is randomly selected from the Harcourt website. Factors including house size, house age, house type, number of bedrooms, number of bathrooms, number of garages, amenities around the house and geographical location are considered. Empirical results support the potential of artificial neural network on house price prediction, although previous studies have commented on its black box nature and achieved different conclusions.",
"title": ""
},
{
"docid": "85d9b0ed2e9838811bf3b07bb31dbeb6",
"text": "In recent years, the medium which has negative index of refraction is widely researched. The medium has both the negative permittivity and the negative permeability. In this paper, we have researched the frequency range widening of negative permeability using split ring resonators.",
"title": ""
},
{
"docid": "42f345409f9e65b36040d75693192124",
"text": "Aspect-level sentiment classification aims at identifying the sentiment polarity of specific target in its context. Previous approaches have realized the importance of targets in sentiment classification and developed various methods with the goal of precisely modeling their contexts via generating target-specific representations. However, these studies always ignore the separate modeling of targets. In this paper, we argue that both targets and contexts deserve special treatment and need to be learned their own representations via interactive learning. Then, we propose the interactive attention networks (IAN) to interactively learn attentions in the contexts and targets, and generate the representations for targets and contexts separately. With this design, the IAN model can well represent a target and its collocative context, which is helpful to sentiment classification. Experimental results on SemEval 2014 Datasets demonstrate the effectiveness of our model.",
"title": ""
},
{
"docid": "a7addb99b27233e3b855af50d1f345d8",
"text": "Analog/mixed-signal machine learning (ML) accelerators exploit the unique computing capability of analog/mixed-signal circuits and inherent error tolerance of ML algorithms to obtain higher energy efficiencies than digital ML accelerators. Unfortunately, these analog/mixed-signal ML accelerators lack programmability, and even instruction set interfaces, to support diverse ML algorithms or to enable essential software control over the energy-vs-accuracy tradeoffs. We propose PROMISE, the first end-to-end design of a PROgrammable MIxed-Signal accElerator from Instruction Set Architecture (ISA) to high-level language compiler for acceleration of diverse ML algorithms. We first identify prevalent operations in widely-used ML algorithms and key constraints in supporting these operations for a programmable mixed-signal accelerator. Second, based on that analysis, we propose an ISA with a PROMISE architecture built with silicon-validated components for mixed-signal operations. Third, we develop a compiler that can take a ML algorithm described in a high-level programming language (Julia) and generate PROMISE code, with an IR design that is both language-neutral and abstracts away unnecessary hardware details. Fourth, we show how the compiler can map an application-level error tolerance specification for neural network applications down to low-level hardware parameters (swing voltages for each application Task) to minimize energy consumption. Our experiments show that PROMISE can accelerate diverse ML algorithms with energy efficiency competitive even with fixed-function digital ASICs for specific ML algorithms, and the compiler optimization achieves significant additional energy savings even for only 1% extra errors.",
"title": ""
},
{
"docid": "efd27e1838d48342b5331b1b504d6a69",
"text": "The microflora of Tibetan kefir grains was investigated by culture- independent methods. Denaturing gradient gel electrophoresis (DGGE) of partially amplified 16S rRNA for bacteria and 26S rRNA for yeasts, followed by sequencing of the most intense bands, showed that the dominant microorganisms were Pseudomonas sp., Leuconostoc mesenteroides, Lactobacillus helveticus, Lactobacillus kefiranofaciens, Lactococcus lactis, Lactobacillus kefiri, Lactobacillus casei, Kazachstania unispora, Kluyveromyces marxianus, Saccharomyces cerevisiae, and Kazachstania exigua. The bacterial communities between three kinds of Tibetan kefir grains showed 78-84% similarity, and yeasts 80-92%. The microflora is held together in the matrix of fibrillar material composed largely of a water-insoluble polysaccharide.",
"title": ""
},
{
"docid": "78976c627fb72db5393837169060a92a",
"text": "Although many variants of language models have been proposed for information retrieval, there are two related retrieval heuristics remaining \"external\" to the language modeling approach: (1) proximity heuristic which rewards a document where the matched query terms occur close to each other; (2) passage retrieval which scores a document mainly based on the best matching passage. Existing studies have only attempted to use a standard language model as a \"black box\" to implement these heuristics, making it hard to optimize the combination parameters.\n In this paper, we propose a novel positional language model (PLM) which implements both heuristics in a unified language model. The key idea is to define a language model for each position of a document, and score a document based on the scores of its PLMs. The PLM is estimated based on propagated counts of words within a document through a proximity-based density function, which both captures proximity heuristics and achieves an effect of \"soft\" passage retrieval. We propose and study several representative density functions and several different PLM-based document ranking strategies. Experiment results on standard TREC test collections show that the PLM is effective for passage retrieval and performs better than a state-of-the-art proximity-based retrieval model.",
"title": ""
},
{
"docid": "471eca6664d0ae8f6cdfb848bc910592",
"text": "Taxonomic relation identification aims to recognize the ‘is-a’ relation between two terms. Previous works on identifying taxonomic relations are mostly based on statistical and linguistic approaches, but the accuracy of these approaches is far from satisfactory. In this paper, we propose a novel supervised learning approach for identifying taxonomic relations using term embeddings. For this purpose, we first design a dynamic weighting neural network to learn term embeddings based on not only the hypernym and hyponym terms, but also the contextual information between them. We then apply such embeddings as features to identify taxonomic relations using a supervised method. The experimental results show that our proposed approach significantly outperforms other state-of-the-art methods by 9% to 13% in terms of accuracy for both general and specific domain datasets.",
"title": ""
},
{
"docid": "05e4168615c39071bb9640bd5aa6f3d9",
"text": "The intestinal microbiome plays an important role in the metabolism of chemical compounds found within food. Bacterial metabolites are different from those that can be generated by human enzymes because bacterial processes occur under anaerobic conditions and are based mainly on reactions of reduction and/or hydrolysis. In most cases, bacterial metabolism reduces the activity of dietary compounds; however, sometimes a specific product of bacterial transformation exhibits enhanced properties. Studies on the metabolism of polyphenols by the intestinal microbiota are crucial for understanding the role of these compounds and their impact on our health. This review article presents possible pathways of polyphenol metabolism by intestinal bacteria and describes the diet-derived bioactive metabolites produced by gut microbiota, with a particular emphasis on polyphenols and their potential impact on human health. Because the etiology of many diseases is largely correlated with the intestinal microbiome, a balance between the host immune system and the commensal gut microbiota is crucial for maintaining health. Diet-related and age-related changes in the human intestinal microbiome and their consequences are summarized in the paper.",
"title": ""
},
{
"docid": "27b8e6f3781bd4010c92a705ba4d5fcc",
"text": "Maximum power point tracking (MPPT) strategies in photovoltaic (PV) systems ensure efficient utilization of PV arrays. Among different strategies, the perturb and observe (P&O) algorithm has gained wide popularity due to its intuitive nature and simple implementation. However, such simplicity in P&O introduces two inherent issues, namely, an artificial perturbation that creates losses in steady-state operation and a limited ability to track transients in changing environmental conditions. This paper develops and discusses in detail an MPPT algorithm with zero oscillation and slope tracking to address those technical challenges. The strategy combines three techniques to improve steady-state behavior and transient operation: 1) idle operation on the maximum power point (MPP); 2) identification of the irradiance change through a natural perturbation; and 3) a simple multilevel adaptive tracking step. Two key elements, which form the foundation of the proposed solution, are investigated: 1) the suppression of the artificial perturb at the MPP; and 2) the indirect identification of irradiance change through a current-monitoring algorithm, which acts as a natural perturbation. The zero-oscillation adaptive step P&O strategy builds on these mechanisms to identify relevant information and to produce efficiency gains. As a result, the combined techniques achieve superior overall performance while maintaining simplicity of implementation. Simulations and experimental results are provided to validate the proposed strategy, and to illustrate its behavior in steady and transient operations.",
"title": ""
},
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "1d632c181e89e7d019595f2757f7ee66",
"text": "This study investigated the process by which employee perceptions of the organizational environment are related to job involvement, effort, and performance. The researchers developed an operational definition of psychological climate that was based on how employees perceive aspects of the organizational environment and interpret them in relation to their own well-being. Perceived psychological climate was then related to job involvement, effort, and performance in a path-analytic framework. Results showed that perceptions of a motivating and involving psychological climate were related to job involvement, which in turn was related to effort. Effort was also related to work performance. Results revealed that a modest but statistically significant effect of job involvement on performance became nonsignificant when effort was inserted into the model, indicating the mediating effect of effort on the relationship. The results cross-validated well across 2 samples of outside salespeople, indicating that relationships are generalizable across these different sales contexts.",
"title": ""
},
{
"docid": "ebeed0f16727adff1d6611ba4f48dde1",
"text": "The research reported here integrates computational, visual and cartographic methods to develop a geovisual analytic approach for exploring and understanding spatio-temporal and multivariate patterns. The developed methodology and tools can help analysts investigate complex patterns across multivariate, spatial and temporal dimensions via clustering, sorting and visualization. Specifically, the approach involves a self-organizing map, a parallel coordinate plot, several forms of reorderable matrices (including several ordering methods), a geographic small multiple display and a 2-dimensional cartographic color design method. The coupling among these methods leverages their independent strengths and facilitates a visual exploration of patterns that are difficult to discover otherwise. The visualization system we developed supports overview of complex patterns and through a variety of interactions, enables users to focus on specific patterns and examine detailed views. We demonstrate the system with an application to the IEEE InfoVis 2005 contest data set, which contains time-varying, geographically referenced and multivariate data for technology companies in the US",
"title": ""
},
{
"docid": "6ee1666761a78989d5b17bf0de21aa9a",
"text": "Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.",
"title": ""
},
{
"docid": "574aca6aa63dd17949fcce6a231cf2d3",
"text": "This paper presents an algorithm for segmenting the hair region in uncontrolled, real life conditions images. Our method is based on a simple statistical hair shape model representing the upper hair part. We detect this region by minimizing an energy which uses active shape and active contour. The upper hair region then allows us to learn the hair appearance parameters (color and texture) for the image considered. Finally, those parameters drive a pixel-wise segmentation technique that yields the desired (complete) hair region. We demonstrate the applicability of our method on several real images.",
"title": ""
},
{
"docid": "5d4797cffc06cbde079bf4019dc196db",
"text": "Automatically generating natural language descriptions of videos plays a fundamental challenge for computer vision community. Most recent progress in this problem has been achieved through employing 2-D and/or 3-D Convolutional Neural Networks (CNNs) to encode video content and Recurrent Neural Networks (RNNs) to decode a sentence. In this paper, we present Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA)—a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework, by training them in an end-to-end manner. The design of LSTM-TSA is highly inspired by the facts that 1) semantic attributes play a significant contribution to captioning, and 2) images and videos carry complementary semantics and thus can reinforce each other for captioning. To boost video captioning, we propose a novel transfer unit to model the mutually correlated attributes learnt from images and videos. Extensive experiments are conducted on three public datasets, i.e., MSVD, M-VAD and MPII-MD. Our proposed LSTM-TSA achieves to-date the best published performance in sentence generation on MSVD: 52.8% and 74.0% in terms of BLEU@4 and CIDEr-D. Superior results are also reported on M-VAD and MPII-MD when compared to state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
83fc714fe70d710f83dea67fab5613c1
|
User interest and social influence based emotion prediction for individuals
|
[
{
"docid": "0ee97a3afcc2471a05924a1171ac82cf",
"text": "A number of researchers around the world have built machines that recognize, express, model, communicate, and respond to emotional information, instances of ‘‘affective computing.’’ This article raises and responds to several criticisms of affective computing, articulating state-of-the art research challenges, especially with respect to affect in humancomputer interaction. r 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "cb5d0498db49c8421fef279aea69c367",
"text": "The growing commoditization of the underground economy has given rise to malware delivery networks, which charge fees for quickly delivering malware or unwanted software to a large number of hosts. A key method to provide this service is through the orchestration of silent delivery campaigns. These campaigns involve a group of downloaders that receive remote commands and then deliver their payloads without any user interaction. These campaigns can evade detection by relying on inconspicuous downloaders on the client side and on disposable domain names on the server side. We describe Beewolf, a system for detecting silent delivery campaigns from Internet-wide records of download events. The key observation behind our system is that the downloaders involved in these campaigns frequently retrieve payloads in lockstep. Beewolf identifies such locksteps in an unsupervised and deterministic manner, and can operate on streaming data. We utilize Beewolf to study silent delivery campaigns at scale, on a data set of 33.3 million download events. This investigation yields novel findings, e.g. malware distributed through compromised software update channels, a substantial overlap between the delivery ecosystems for malware and unwanted software, and several types of business relationships within these ecosystems. Beewolf achieves over 92% true positives and fewer than 5% false positives. Moreover, Beewolf can detect suspicious downloaders a median of 165 days ahead of existing anti-virus products and payload-hosting domains a median of 196 days ahead of existing blacklists.",
"title": ""
},
{
"docid": "a82a4d82b2713e0fe0a562ac09d40fef",
"text": "The advent of new cryptographic methods in recent years also includes schemes related to functional encryption. Within these schemes Attribute-based Encryption (ABE) became the most popular, including ciphertext-policy and key-policy ABE. ABE and related schemes are widely discussed within the mathematical community. Unfortunately, there are only a few implementations circulating within the computer science and the applied cryptography community. Hence, it is very difficult to include these new cryptographic methods in real-world applications. This article gives an overview of existing implementations and elaborates on their value in specific cloud computing and IoT application scenarios. This also includes a summary of the additions the authors made to current implementations such as the introduction of dynamic attributes. Keywords—Attribute-based Encryption, Applied Cryptography, Internet of Things, Cloud Computing Security",
"title": ""
},
{
"docid": "7c1ec9a154c18fad12318aaf6bc50c46",
"text": "A design methodology for the edge termination is proposed to achieve the same breakdown voltage of the non-uniform super-junction (SJ) trench metal-oxide semiconductor field-effect transistor (TMOSFET) cell structure. A simple analytical solution for the effect of charge imbalance on the termination region is suggested, and it is satisfied with the simulation of potential distribution. The doping concentration decreases linearly in the vertical direction from the N drift region at the bottom to the channel at the top. The structure modeling and the characteristic analyses for potential distribution and electric field are simulated by using of the SILVACO TCAD 2D device simulator, Atlas. As a result, the breakdown voltage of 132V is successfully achieved at the edge termination of non-uniform SJ TMOSFET, which has the better performance than the conventional structure in the breakdown voltage.",
"title": ""
},
{
"docid": "411ff0fe0be8a81459a9ba45bbcd48cb",
"text": "People tend to make straight and smooth hand movements when reaching for an object. These trajectory features are resistant to perturbation, and both proprioceptive as well as visual feedback may guide the adaptive updating of motor commands enforcing this regularity. How is information from the two senses combined to generate a coherent internal representation of how the arm moves? Here we show that eliminating visual feedback of hand-path deviations from the straight-line reach (constraining visual feedback of motion within a virtual, \"visual channel\") prevents compensation of initial direction errors induced by perturbations. Because adaptive reduction in direction errors occurred with proprioception alone, proprioceptive and visual information are not combined in this reaching task using a fixed, linear weighting scheme as reported for static tasks not requiring arm motion. A computer model can explain these findings, assuming that proprioceptive estimates of initial limb posture are used to select motor commands for a desired reach and visual feedback of hand-path errors brings proprioceptive estimates into registration with a visuocentric representation of limb position relative to its target. Simulations demonstrate that initial configuration estimation errors lead to movement direction errors as observed experimentally. Registration improves movement accuracy when veridical visual feedback is provided but is not invoked when hand-path errors are eliminated. However, the visual channel did not exclude adjustment of terminal movement features maximizing hand-path smoothness. Thus visual and proprioceptive feedback may be combined in fundamentally different ways during trajectory control and final position regulation of reaching movements.",
"title": ""
},
{
"docid": "85f67ab0e1adad72bbe6417d67fd4c81",
"text": "Data warehouses are used to store large amounts of data. This data is often used for On-Line Analytical Processing (OLAP). Short response times are essential for on-line decision support. Common approaches to reach this goal in read-mostly environments are the precomputation of materialized views and the use of index structures. In this paper, a framework is presented to evaluate different index structures analytically depending on nine parameters for the use in a data warehouse environment. The framework is applied to four different index structures to evaluate which structure works best for range queries. We show that all parameters influence the performance. Additionally, we show why bitmap index structures use modern disks better than traditional tree structures and why bitmaps will supplant the tree based index structures in the future.",
"title": ""
},
{
"docid": "7995b01668f93569f42a7afbc213635b",
"text": "OBJECTIVES\nThermal pulsation (LipiFlow) has been advocated for meibomian gland dysfunction (MGD) treatment and was found useful. We aimed to evaluate the efficacy and safety of thermal pulsation in Asian patients with different grades of meibomian gland loss.\n\n\nMETHODS\nA hospital-based interventional study comparing thermal pulsation to warm compresses for MGD treatment. Fifty patients were recruited from the dry eye clinic of a Singapore tertiary eye hospital. The ocular surface and symptom were evaluated before treatment, and one and three months after treatment. Twenty-five patients underwent thermal pulsation (single session), whereas 25 patients underwent warm compresses (twice daily) for 3 months. Meibomian gland loss was graded using infrared meibography, whereas function was graded using the number of glands with liquid secretion.\n\n\nRESULTS\nThe mean age (SD) of participants was 56.4 (11.4) years in the warm compress group and 55.6 (12.7) years in the thermal pulsation group. Seventy-six percent of the participants were female. Irritation symptom significantly improved over 3 months in both groups (P<0.01), whereas tear breakup time (TBUT) was modestly improved at 1 month in only the thermal pulsation group (P=0.048), without significant difference between both groups over the 3 months (P=0.88). There was also no significant difference in irritation symptom, TBUT, Schirmer test, and gland secretion variables between patients with different grades of gland loss or function at follow-ups.\n\n\nCONCLUSIONS\nA single session of thermal pulsation was similar in its efficacy and safety profile to 3 months of twice daily warm compresses in Asians. Treatment efficacy was not affected by pretreatment gland loss.",
"title": ""
},
{
"docid": "7487f889eae6a32fc1afab23e54de9b8",
"text": "Although many researchers have investigated the use of different powertrain topologies, component sizes, and control strategies in fuel-cell vehicles, a detailed parametric study of the vehicle types must be conducted before a fair comparison of fuel-cell vehicle types can be performed. This paper compares the near-optimal configurations for three topologies of vehicles: fuel-cell-battery, fuel-cell-ultracapacitor, and fuel-cell-battery-ultracapacitor. The objective function includes performance, fuel economy, and powertrain cost. The vehicle models, including detailed dc/dc converter models, are programmed in Matlab/Simulink for the customized parametric study. A controller variable for each vehicle type is varied in the optimization.",
"title": ""
},
{
"docid": "bd0e01675a12193752588e6bc730edd5",
"text": "Online safety is everyone's responsibility---a concept much easier to preach than to practice.",
"title": ""
},
{
"docid": "6325188ee21b6baf65dbce6855c19bc2",
"text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.",
"title": ""
},
{
"docid": "a3fe3b92fe53109888b26bb03c200180",
"text": "Using Artificial Neural Networh (A\".) in critical applications can be challenging due to the often experimental nature of A\" construction and the \"black box\" label that is fiequently attached to A\".. Wellaccepted process models exist for algorithmic sofhyare development which facilitate software validation and acceptance. The sojiware development process model presented herein is targeted specifically toward artificial neural networks in crik-al appliicationr. 7% model is not unwieldy, and could easily be used on projects without critical aspects. This should be of particular interest to organizations that use AMVs and need to maintain or achieve a Capability Maturity Model (CM&?I or IS0 sofhyare development rating. Further, while this model is aimed directly at neural network development, with minor moda&ations, the model could be applied to any technique wherein knowledge is extractedfiom existing &ka, such as other numeric approaches or knowledge-based systems.",
"title": ""
},
{
"docid": "4720910e3152bdd484e0c56d7eba8ce0",
"text": "Supplier evaluation has assumed a strategic role in determining competitiveness of large manufacturing companies. An increasing number of researches have been devoted to the development of different kind of methodologies to cope with this problem. Nevertheless, while the number of applications is growing, there is little empirical evidence of the practical usefulness of such tools with a dichotomy between theoretical approaches and empirical applications. Considering this evidence, the goal of this paper is to contribute to understand the above dichotomy by implementing, in a corporate environment, a model for supplier evaluation based on the Analytical Hierarchical Process (AHP), one of the most prominent methodologies used to address the problem. The analysis of the implementation process of the methodology allows the identification of strengths and weaknesses of using formalized supplier selection models to tackle the supplier evaluation problem, also highlighting potential barriers preventing firms to adopt such methods. Relevant issues arising from the application and managerial implications for both customer and suppliers are discussed. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2891f8baee4ab21d793d0832ce54c24f",
"text": "This paper is concerned with the task of bilingual lexicon induction using imagebased features. By applying features from a convolutional neural network (CNN), we obtain state-of-the-art performance on a standard dataset, obtaining a 79% relative improvement over previous work which uses bags of visual words based on SIFT features. The CNN image-based approach is also compared with state-of-the-art linguistic approaches to bilingual lexicon induction, even outperforming these for one of three language pairs on another standard dataset. Furthermore, we shed new light on the type of visual similarity metric to use for genuine similarity versus relatedness tasks, and experiment with using multiple layers from the same network in an attempt to improve performance.",
"title": ""
},
{
"docid": "eadc810575416fccea879c571ddfbfd2",
"text": "In this paper, we propose a zoom-out-and-in network for generating object proposals. A key observation is that it is difficult to classify anchors of different sizes with the same set of features. Anchors of different sizes should be placed accordingly based on different depth within a network: smaller boxes on high-resolution layers with a smaller stride while larger boxes on low-resolution counterparts with a larger stride. Inspired by the conv/deconv structure, we fully leverage the low-level local details and high-level regional semantics from two feature map streams, which are complimentary to each other, to identify the objectness in an image. A map attention decision (MAD) unit is further proposed to aggressively search for neuron activations among two streams and attend the most contributive ones on the feature learning of the final loss. The unit serves as a decision-maker to adaptively activate maps along certain channels with the solely purpose of optimizing the overall training loss. One advantage of MAD is that the learned weights enforced on each feature channel is predicted on-the-fly based on the input context, which is more suitable than the fixed enforcement of a convolutional kernel. Experimental results on three datasets demonstrate the effectiveness of our proposed algorithm over other state-of-the-arts, in terms of average recall for region proposal and average precision for object detection.",
"title": ""
},
{
"docid": "c9cd19c2e8ee4b07f969280672d521bf",
"text": "The owner and users of a sensor network may be different, which necessitates privacy-preserving access control. On the one hand, the network owner need enforce strict access control so that the sensed data are only accessible to users willing to pay. On the other hand, users wish to protect their respective data access patterns whose disclosure may be used against their interests. This paper presents DP2AC, a Distributed Privacy-Preserving Access Control scheme for sensor networks, which is the first work of its kind. Users in DP2AC purchase tokens from the network owner whereby to query data from sensor nodes which will reply only after validating the tokens. The use of blind signatures in token generation ensures that tokens are publicly verifiable yet unlinkable to user identities, so privacy-preserving access control is achieved. A central component in DP2AC is to prevent malicious users from reusing tokens, for which we propose a suite of distributed token reuse detection (DTRD) schemes without involving the base station. These schemes share the essential idea that a sensor node checks with some other nodes (called witnesses) whether a token has been used, but they differ in how the witnesses are chosen. We thoroughly compare their performance with regard to TRD capability, communication overhead, storage overhead, and attack resilience. The efficacy and efficiency of DP2AC are confirmed by detailed performance evaluations.",
"title": ""
},
{
"docid": "b8dfe30c07f0caf46b3fc59406dbf017",
"text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precisionat-10 and generates approximately 6.8 acceptable questions per 250 words of source text.",
"title": ""
},
{
"docid": "5c2d3a5acb8e4f861a1bbd1ba22f3a31",
"text": "Feature selection is used to improve efficiency of learning algorithms by finding an optimal subset of features. However, most feature selection techniques can handle only certain types of data. Additional limitations of existing methods include intensive computational requirements and inability to identify redundant variables. In this paper, we are presenting a novel, information-theoretic algorithm for feature selection, which finds an optimal set of attributes by removing both irrelevant and redundant features. The algorithm has a polynomial computational complexity and it is applicable to datasets of mixed nature. The method performance is evaluated on several benchmark datasets by using a standard classifier (C4.5).",
"title": ""
},
{
"docid": "66154317ab348562536ab44fa94d2520",
"text": "We describe a prototype dialogue response generation model for the customer service domain at Amazon. The model, which is trained in a weakly supervised fashion, measures the similarity between customer questions and agent answers using a dual encoder network, a Siamese-like neural network architecture. Answer templates are extracted from embeddings derived from past agent answers, without turn-by-turn annotations. Responses to customer inquiries are generated by selecting the best template from the final set of templates. We show that, in a closed domain like customer service, the selected templates cover >70% of past customer inquiries. Furthermore, the relevance of the model-selected templates is significantly higher than templates selected by a standard tf-idf baseline.",
"title": ""
},
{
"docid": "dd5fa68b788cc0816c4e16f763711560",
"text": "Over the last ten years the basic knowledge of brain structure and function has vastly expanded, and its incorporation into the developmental sciences is now allowing for more complex and heuristic models of human infancy. In a continuation of this effort, in this two-part work I integrate current interdisciplinary data from attachment studies on dyadic affective communications, neuroscience on the early developing right brain, psychophysiology on stress systems, and psychiatry on psychopathogenesis to provide a deeper understanding of the psychoneurobiological mechanisms that underlie infant mental health. In this article I detail the neurobiology of a secure attachment, an exemplar of adaptive infant mental health, and focus upon the primary caregiver’s psychobiological regulation of the infant’s maturing limbic system, the brain areas specialized for adapting to a rapidly changing environment. The infant’s early developing right hemisphere has deep connections into the limbic and autonomic nervous systems and is dominant for the human stress response, and in this manner the attachment relationship facilitates the expansion of the child’s coping capcities. This model suggests that adaptive infant mental health can be fundamentally defined as the earliest expression of flexible strategies for coping with the novelty and stress that is inherent in human interactions. This efficient right brain function is a resilience factor for optimal development over the later stages of the life cycle. RESUMEN: En los últimos diez an ̃os el conocimiento ba ́sico de la estructura y funcio ́n del cerebro se ha expandido considerablemente, y su incorporacio ́n mo parte de las ciencias del desarrollo permite ahora tener modelos de infancia humana ma ́s complejos y heurı ́sticos. Como una continuacio ́n a este esfuerzo, en este ensayo que contiene dos partes, se integra la actual informacio ́n interdisciplinaria que proviene de los estudios de la unio ́n afectiva en relacio ́n con comunicaciones afectivas en forma de dı ́adas, la neurociencia en el desarrollo inicial del lado derecho del cerebro, la sicofisiologı ́a de los sistemas de tensión emocional, ası ́ como la siquiatrı ́a en cuanto a la sicopatoge ́nesis, con el fin de presentar un conocimiento ma ́s profundo de los mecanismos siconeurobiolo ́gic s que sirven de base para la salud mental infantil. En este ensayo se explica con detalle la neurobiologı ́a de una relacio ́n afectiva segura, un modelo de salud mental infantil que se puede adaptar, y el enfoque del mismo se centra en la reglamentacio ́n sicobiológica que quien primariamente cuida del nin ̃o tiene del maduramiento del sistema lı́mbico del infante, o sea, las a ́reas del cerebro especialmente dedicadas a la adaptacio ́n un medio Direct correspondence to: Allan N. Schore, Department of Psychiatry and Biobehavioral Sciences, UCLA School of Medicine, 9817 Sylvia Avenue, Northridge, CA 91324; fax: (818) 349-4404; e-mail: [email protected]. 8 ● A.N. Schore IMHJ (Wiley) LEFT INTERACTIVE",
"title": ""
}
] |
scidocsrr
|
6b8ec8aebda4d191aeae9701889e1f1a
|
Procedural Arrangement of Furniture for Real-Time Walkthroughs
|
[
{
"docid": "464ef88cc9b43dc0af36e435681c5d61",
"text": "As the capability to render complex, realistic scenes in real time increases, many applications of computer graphics are hitting the problem of manual model creation. In virtual reality, researchers have found that creating and updating static models is a difficult and time-consuming task [Brooks 1999]. New games require massive amounts of content. Creating such content is such an expensive process that it may drive smaller companies out of business [Wright 2005]. Content creators cannot keep up with the increasing demand for many high-quality models; algorithms will have to be called upon to assist.",
"title": ""
},
{
"docid": "d7c44247c9ac5f686200b9eca9d8d4f0",
"text": "The computer game industry requires a skilled workforce and this combined with the complexity of modern games, means that production costs are extremely high. One of the most time consuming aspects is the creation of game geometry, the virtual world which the players inhabit. Procedural techniques have been used within computer graphics to create natural textures, simulate special effects and generate complex natural models including trees and waterfalls. It is these procedural techniques that we intend to harness to generate geometry and textures suitable for a game situated in an urban environment. Procedural techniques can provide many benefits for computer graphics applications when the correct algorithm is used. An overview of several commonly used procedural techniques including fractals, L-systems, Perlin noise, tiling systems and cellular basis is provided. The function of each technique and the resulting output they create are discussed to better understand their characteristics, benefits and relevance to the city generation problem. City generation is the creation of an urban area which necessitates the creation of buildings, situated along streets and arranged in appropriate patterns. Some research has already taken place into recreating road network patterns and generating buildings that can vary in function and architectural style. We will study the main body of existing research into procedural city generation and provide an overview of their implementations and a critique of their functionality and results. Finally we present areas in which further research into the generation of cities is required and outline our research goals for city generation.",
"title": ""
}
] |
[
{
"docid": "939d6473f0a607348cbba909a5248d6c",
"text": "This paper describes a thesis exploring how computer programs can collaborate as equals in the artistic creative process. The proposed system, CoCo Sketch, encodes some rudimentary stylistic rules of abstract sketching and music theory to contribute supplemental lines and music while the user sketches. We describe a three-part research method that includes defining rudimentary stylistic rules for abstract line drawing, exploring the interaction design for artistic improvisation with a computer, and evaluating how CoCo Sketch affects the artistic creative process. We report on the initial results of early investigations into artistic style that describe cognitive, perceptual, and behavioral processes used in abstract artists making.",
"title": ""
},
{
"docid": "e1441dc114bc74b83c9f07684ff74fbe",
"text": "The perturb and observe (P&O) maximum power point tracking (MPPT) algorithm is a simple and efficient tracking technique. However, the P&O tracking method suffers from drift in case of an increase in insolation (G), and this drift effect is severe in case of a rapid increase in insolation. Drift occurs due to the incorrect decision taken by the conventional P&O algorithm at the first step change in duty cycle during increase in insolation. A modified P&O technique is proposed to avoid the drift problem by incorporating the information of change in current (ΔI) in the decision process in addition to change in power (ΔP) and change in voltage (ΔV ). The drift phenomena and its effects are clearly demonstrated in this paper for conventional P&O algorithm with both fixed and adaptive step size technique. A single-ended primary inductance converter is considered to validate the proposed drift-free P&O MPPT using direct duty ratio control technique. MATLAB/Simulink is used for simulation studies, and for experimental validation, a microcontroller is used as a digital platform to implement the proposed algorithm. The simulation and experimental results showed that the proposed algorithm accurately tracks the maximum power and avoids the drift in fast changing weather conditions.",
"title": ""
},
{
"docid": "a7fc0958b0830e0a34a281ce0a293e6a",
"text": "Abstract Laboratory diagnostics (i.e., the total testing process) develops conventionally through a virtual loop, originally referred to as \"the brain to brain cycle\" by George Lundberg. Throughout this complex cycle, there is an inherent possibility that a mistake might occur. According to reliable data, preanalytical errors still account for nearly 60%-70% of all problems occurring in laboratory diagnostics, most of them attributable to mishandling procedures during collection, handling, preparing or storing the specimens. Although most of these would be \"intercepted\" before inappropriate reactions are taken, in nearly one fifth of the cases they can produce inappropriate investigations and unjustifiable increase in costs, while generating inappropriate clinical decisions and causing some unfortunate circumstances. Several steps have already been undertaken to increase awareness and establish a governance of this frequently overlooked aspect of the total testing process. Standardization and monitoring preanalytical variables is of foremost importance and is associated with the most efficient and well-organized laboratories, resulting in reduced operational costs and increased revenues. As such, this article is aimed at providing readers with significant updates on the total quality management of the preanalytical phase to endeavour further improvement for patient safety throughout this phase of the total testing process.",
"title": ""
},
{
"docid": "7a612161017a69e49370a4eef3c54d38",
"text": "We report that human walk patterns contain statistically similar features observed in Levy walks. These features include heavy-tail flight and pause-time distributions and the super-diffusive nature of mobility. Human walks are not random walks, but it is surprising that the patterns of human walks and Levy walks contain some statistical similarity. Our study is based on 226 daily GPS traces collected from 101 volunteers in five different outdoor sites. The heavy-tail flight distribution of human mobility induces the super-diffusivity of travel, but up to 30 min to 1 h due to the boundary effect of people's daily movement, which is caused by the tendency of people to move within a predefined (also confined) area of daily activities. These tendencies are not captured in common mobility models such as random way point (RWP). To evaluate the impact of these tendencies on the performance of mobile networks, we construct a simple truncated Levy walk mobility (TLW) model that emulates the statistical features observed in our analysis and under which we measure the performance of routing protocols in delay-tolerant networks (DTNs) and mobile ad hoc networks (MANETs). The results indicate the following. Higher diffusivity induces shorter intercontact times in DTN and shorter path durations with higher success probability in MANET. The diffusivity of TLW is in between those of RWP and Brownian motion (BM). Therefore, the routing performance under RWP as commonly used in mobile network studies and tends to be overestimated for DTNs and underestimated for MANETs compared to the performance under TLW.",
"title": ""
},
{
"docid": "32c44619bfd4013edaec5fc923cfd7a6",
"text": "Neural Machine Translation (NMT) is a new approach for autom atic translation of text from one human language into another. The basic concept in NMT is t o train a large Neural Network that maximizes the translation performance on a given p arallel corpus. NMT is gaining popularity in the research community because it outperform ed traditional SMT approaches in several translation tasks at WMT and other evaluation tas ks/benchmarks at least for some language pairs. However, many of the enhancements in SMT ove r the years have not been incorporated into the NMT framework. In this paper, we focus on one such enhancement namely domain adaptation. We propose an approach for adapting a NMT system to a new domain. The main idea behind domain adaptation is that the availability of large out-of-domain training data and a small in-domain training data. We report significant ga ins with our proposed method in both automatic metrics and a human subjective evaluation me tric on two language pairs. With our adaptation method, we show large improvement on the new d omain while the performance of our general domain only degrades slightly. In addition, o ur approach is fast enough to adapt an already trained system to a new domain within few hours wit hout the need to retrain the NMT model on the combined data which usually takes several da ys/weeks depending on the volume of the data.",
"title": ""
},
{
"docid": "5a82fe10b1c7e2f3d4838c91bba9e6a0",
"text": "The ability to assess an area of interest in 3 dimensions might benefit both novice and experienced clinicians alike. High-resolution limited cone-beam volumetric tomography (CBVT) has been designed for dental applications. As opposed to sliced-image data of conventional computed tomography (CT) imaging, CBVT captures a cylindrical volume of data in one acquisition and thus offers distinct advantages over conventional medical CT. These advantages include increased accuracy, higher resolution, scan-time reduction, and dose reduction. Specific endodontic applications of CBVT are being identified as the technology becomes more prevalent. CBVT has great potential to become a valuable tool in the modern endodontic practice. The objectives of this article are to briefly review cone-beam technology and its advantages over medical CT and conventional radiography, to illustrate current and future clinical applications of cone-beam technology in endodontic practice, and to discuss medicolegal considerations pertaining to the acquisition and interpretation of 3-dimensional data.",
"title": ""
},
{
"docid": "def650b2d565f88a6404997e9e93d34f",
"text": "Quality uncertainty and high search costs for identifying relevant information from an ocean of information may prevent customers from making purchases. Recognizing potential negative impacts of this search cost for quality information and relevant information, firms began to invest in creating a virtual community that enables consumers to share their opinions and experiences to reduce quality uncertainty, and in developing recommendation systems that help customers identify goods in which they might have an interest. However, not much is known regarding the effectiveness of these efforts. In this paper, we empirically investigate the impacts of recommendations and consumer feedbacks on sales based on data gathered from Amazon.com. Our results indicate that more recommendations indeed improve sales at Amazon.com; however, consumer ratings are not found to be related to sales. On the other hand, number of consumer reviews is positively associated with sales. We also find that recommendations work better for less-popular books than for more-popular books. This is consistent with the search cost argument: a consumer’s search cost for less-popular books may be higher, and thus they may rely more on recommendations to locate a product of interest.",
"title": ""
},
{
"docid": "c89ce1ded524ff65c1ebd3d20be155bc",
"text": "Actuarial risk assessment tools are used extensively to predict future violence, but previous studies comparing their predictive accuracies have produced inconsistent findings as a result of various methodological issues. We conducted meta-analyses of the effect sizes of 9 commonly used risk assessment tools and their subscales to compare their predictive efficacies for violence. The effect sizes were extracted from 28 original reports published between 1999 and 2008, which assessed the predictive accuracy of more than one tool. We used a within-subject design to improve statistical power and multilevel regression models to disentangle random effects of variation between studies and tools and to adjust for study features. All 9 tools and their subscales predicted violence at about the same moderate level of predictive efficacy with the exception of Psychopathy Checklist--Revised (PCL-R) Factor 1, which predicted violence only at chance level among men. Approximately 25% of the total variance was due to differences between tools, whereas approximately 85% of heterogeneity between studies was explained by methodological features (age, length of follow-up, different types of violent outcome, sex, and sex-related interactions). Sex-differentiated efficacy was found for a small number of the tools. If the intention is only to predict future violence, then the 9 tools are essentially interchangeable; the selection of which tool to use in practice should depend on what other functions the tool can perform rather than on its efficacy in predicting violence. The moderate level of predictive accuracy of these tools suggests that they should not be used solely for some criminal justice decision making that requires a very high level of accuracy such as preventive detention.",
"title": ""
},
{
"docid": "87f6ede5af3b95933d8db69c6551588e",
"text": "Circumcision remains the most common operation performed on males. Although, not technically difficult, it is accompanied by a rate of morbidity and can result in complications ranging from trivial to tragic. The reported incidence of complications varies from 0.1% to 35% the most common being infection, bleeding and failure to remove the appropriate amount of foreskin. Forty patients suffering from different degrees of circumcision complications and their treatment are presented. In all patients satisfactory functional and cosmetic results were achieved. Whether it is done for ritualistic, religious or medical reasons circumcision should be performed by a fully trained surgeon using a proper technique as follows 1) adequate use of antiseptic agents; 2) complete separation of inner preputial epithelium from the glans; 3) marking the skin to be removed at the beginning of operation; 4) careful attention to the baby’s voiding within the first 6 to 8 h after circumcision; 5) removal or replacement of the dressings on the day following circumcision.",
"title": ""
},
{
"docid": "374ee37f61ec6ff27e592c6a42ee687f",
"text": "Leaf vein forms the basis of leaf characterization and classification. Different species have different leaf vein patterns. It is seen that leaf vein segmentation will help in maintaining a record of all the leaves according to their specific pattern of veins thus provide an effective way to retrieve and store information regarding various plant species in database as well as provide an effective means to characterize plants on the basis of leaf vein structure which is unique for every species. The algorithm proposes a new way of segmentation of leaf veins with the use of Odd Gabor filters and the use of morphological operations for producing a better output. The Odd Gabor filter gives an efficient output and is robust and scalable as compared with the existing techniques as it detects the fine fiber like veins present in leaves much more efficiently.",
"title": ""
},
{
"docid": "4eff9bf8fdba5c4ae5fcefe107957789",
"text": "Shape matching is an important ingredient in shape retrieval, recognition and classification, alignment and registration, and approximation and simplification. This paper treats various aspects that are needed to solve shape matching problems: choosing the precise problem, selecting the properties of the similarity measure that are needed for the problem, choosing the specific similarity measure, and constructing the algorithm to compute the similarity. The focus is on methods that lie close to the field of computational geometry.",
"title": ""
},
{
"docid": "d8247467dfe5c3bf21d3588b7af0ff71",
"text": "Self-improving software has been a goal of computer scientists since the founding of the field of Artificial Intelligence. In this work we analyze limits on computation which might restrict recursive self-improvement. We also introduce Convergence Theory which aims to predict general behavior of RSI systems.",
"title": ""
},
{
"docid": "1f6bf9c06b7ee774bc08848293b5c94a",
"text": "The success of a virtual learning environment (VLE) depends to a considerable extent on student acceptance and use of such an e-learning system. After critically assessing models of technology adoption, including the Technology Acceptance Model (TAM), TAM2, and the Unified Theory of Acceptance and Usage of Technology (UTAUT), we build a conceptual model to explain the differences between individual students in the level of acceptance and use of a VLE. This model extends TAM2 and includes subjective norm, personal innovativeness in the domain of information technology, and computer anxiety. Data were collected from 45 Chinese participants in an Executive MBA program. After performing satisfactory reliability and validity checks, the structural model was tested with the use of PLS. Results indicate that perceived usefulness has a direct effect on VLE use. Perceived ease of use and subjective norm have only indirect effects via perceived usefulness. Both personal innovativeness and computer anxiety have direct effects on perceived ease of use only. Implications are that program managers in education should not only concern themselves with basic system design but also explicitly address individual differences between VLE users. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ef83b08619acd4b50be1d5d6ca537445",
"text": "During the past two decades, solid-state transformers (SSTs) have evolved quickly and have been considered for replacing conventional low-frequency (LF) transformers in applications such as traction, where weight and volume savings and substantial efficiency improvements can be achieved, or in smart grids because of their controllability. As shown in this article, all main modern SST topologies realize the common key characteristics of these transformers-medium-frequency (MF) isolation stage, connection to medium voltage (MV), and controllability-by employing combinations of a very few key concepts, which have been described or patented as early as the 1960s. But still, key research challenges concerning protection, isolation, and reliability remain.",
"title": ""
},
{
"docid": "9c47b068f7645dc5464328e80be24019",
"text": "In this paper we propose a highly effective and scalable framework for recognizing logos in images. At the core of our approach lays a method for encoding and indexing the relative spatial layout of local features detected in the logo images. Based on the analysis of the local features and the composition of basic spatial structures, such as edges and triangles, we can derive a quantized representation of the regions in the logos and minimize the false positive detections. Furthermore, we propose a cascaded index for scalable multi-class recognition of logos.\n For the evaluation of our system, we have constructed and released a logo recognition benchmark which consists of manually labeled logo images, complemented with non-logo images, all posted on Flickr. The dataset consists of a training, validation, and test set with 32 logo-classes. We thoroughly evaluate our system with this benchmark and show that our approach effectively recognizes different logo classes with high precision.",
"title": ""
},
{
"docid": "a64ef7969005d186e004c0d9d340567c",
"text": "The Mirai botnet and its variants and imitators are a wake-up call to the industry to better secure Internet of Things devices or risk exposing the Internet infrastructure to increasingly disruptive distributed denial-of-service attacks.",
"title": ""
},
{
"docid": "832bed06d844fedb2867750bb7ec3989",
"text": "Viral diffusion allows a piece of information to widely and quickly spread within the network of users through word-ofmouth. In this paper, we study the problem of modeling both item and user factors that contribute to viral diffusion in Twitter network. We identify three behaviorial factors, namely user virality, user susceptibility and item virality, that contribute to viral diffusion. Instead of modeling these factors independently as done in previous research, we propose a model that measures all the factors simultaneously considering their mutual dependencies. The model has been evaluated on both synthetic and real datasets. The experiments show that our model outperforms the existing ones for synthetic data with ground truth labels. Our model also performs well for predicting the hashtags that have higher retweet likelihood. We finally present case examples that illustrate how the models differ from one another.",
"title": ""
},
{
"docid": "a78e7a3ae5f13544b69efd65c1442a09",
"text": "Vertex similarity is a major problem in network science with a wide range of applications. In this work we provide novel perspectives on finding (dis)similar vertices within a network and across two networks with the same number of vertices (graph matching). With respect to the former problem, we propose to optimize a geometric objective which allows us to express each vertex uniquely as a convex combination of a few extreme types of vertices. Our method has the important advantage of supporting efficiently several types of queries such as “which other vertices are most similar to this vertex?” by the use of the appropriate data structures and of mining interesting patterns in the network. With respect to the latter problem (graph matching), we propose the generalized condition number –a quantity widely used in numerical analysis– κ(LG, LH) of the Laplacian matrix representations of G,H as a measure of graph similarity, where G,H are the graphs of interest. We show that this objective has a solid theoretical basis and propose a deterministic and a randomized graph alignment algorithm. We evaluate our algorithms on both synthetic and real data. We observe that our proposed methods achieve high-quality results and provide us with significant insights into the network structure.",
"title": ""
},
{
"docid": "7c0328e05e30a11729bc80255e09a5b8",
"text": "This paper presents a preliminary design for a moving-target defense (MTD) for computer networks to combat an attacker's asymmetric advantage. The MTD system reasons over a set of abstract models that capture the network's configuration and its operational and security goals to select adaptations that maintain the operational integrity of the network. The paper examines both a simple (purely random) MTD system as well as an intelligent MTD system that uses attack indicators to augment adaptation selection. A set of simulation-based experiments show that such an MTD system may in fact be able to reduce an attacker's success likelihood. These results are a preliminary step towards understanding and quantifying the impact of MTDs on computer networks.",
"title": ""
}
] |
scidocsrr
|
1842bf47a9f87ec1f44aaea20918afe9
|
DCFNet: Deep Neural Network with Decomposed Convolutional Filters
|
[
{
"docid": "8fd893ef59f788742de78d8a279496ca",
"text": "A wavelet scattering network computes a translation invariant image representation, which is stable to deformations and preserves high frequency information for classification. It cascades wavelet transform convolutions with non-linear modulus and averaging operators. The first network layer outputs SIFT-type descriptors whereas the next layers provide complementary invariant information which improves classification. The mathematical analysis of wavelet scattering networks explain important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having same Fourier power spectrum. State of the art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"title": ""
}
] |
[
{
"docid": "a5879d5e7934380913cd2683ba2525b9",
"text": "This paper deals with the design & development of a theft control system for an automobile, which is being used to prevent/control the theft of a vehicle. The developed system makes use of an embedded system based on GSM technology. The designed & developed system is installed in the vehicle. An interfacing mobile is also connected to the microcontroller, which is in turn, connected to the engine. Once, the vehicle is being stolen, the information is being used by the vehicle owner for further processing. The information is passed onto the central processing insurance system, where by sitting at a remote place, a particular number is dialed by them to the interfacing mobile that is with the hardware kit which is installed in the vehicle. By reading the signals received by the mobile, one can control the ignition of the engine; say to lock it or to stop the engine immediately. Again it will come to the normal condition only after entering a secured password. The owner of the vehicle & the central processing system will know this secured password. The main concept in this design is introducing the mobile communications into the embedded system. The designed unit is very simple & low cost. The entire designed unit is on a single chip. When the vehicle is stolen, owner of vehicle may inform to the central processing system, then they will stop the vehicle by just giving a ring to that secret number and with the help of SIM tracking knows the location of vehicle and informs to the local police or stops it from further movement.",
"title": ""
},
{
"docid": "39a36a96f354977d137ff736486c37a3",
"text": "In a class of games known as Stackelberg games, one agent (the leader) must commit to a strategy that can be observed by the other agent (the follower or adversary) before the adversary chooses its own strategy. We consider Bayesian Stackelberg games, in which the leader is uncertain about the types of adversary it may face. Such games are important in security domains, where, for example, a security agent (leader) must commit to a strategy of patrolling certain areas, and a robber (follower) has a chance to observe this strategy over time before choosing its own strategy of where to attack. This paper presents an efficient exact algorithm for finding the optimal strategy for the leader to commit to in these games. This algorithm, DOBSS, is based on a novel and compact mixed-integer linear programming formulation. Compared to the most efficient algorithm known previously for this problem, DOBSS is not only faster, but also leads to higher quality solutions, and does not suffer from problems of infeasibility that were faced by this previous algorithm. Note that DOBSS is at the heart of the ARMOR system that is currently being tested for security scheduling at the Los Angeles International Airport.",
"title": ""
},
{
"docid": "3510615d09b9cc7cf3be154d50da7e27",
"text": "We propose a non-parametric model for pedestrian motion based on Gaussian Process regression, in which trajectory data are modelled by regressing relative motion against current position. We show how the underlying model can be learned in an unsupervised fashion, demonstrating this on two databases collected from static surveillance cameras. We furthermore exemplify the use of model for prediction, comparing the recently proposed GP-Bayesfilters with a Monte Carlo method. We illustrate the benefit of this approach for long term motion prediction where parametric models such as Kalman Filters would perform poorly.",
"title": ""
},
{
"docid": "db3bb02dde6c818b173cf12c9c7440b7",
"text": "PURPOSE\nThe authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training? and (2) What challenges and opportunities specific to social media have educators encountered in implementing these interventions?\n\n\nMETHOD\nThe authors searched the MEDLINE, CINAHL, ERIC, Embase, PsycINFO, ProQuest, Cochrane Library, Web of Science, and Scopus databases (from the start of each through September 12, 2011) using keywords related to social media and medical education. Two authors independently reviewed the search results to select peer-reviewed, English-language articles discussing social media use in educational interventions at any level of physician training. They assessed study quality using the Medical Education Research Study Quality Instrument.\n\n\nRESULTS\nFourteen studies met inclusion criteria. Interventions using social media tools were associated with improved knowledge (e.g., exam scores), attitudes (e.g., empathy), and skills (e.g., reflective writing). The most commonly reported opportunities related to incorporating social media tools were promoting learner engagement (71% of studies), feedback (57%), and collaboration and professional development (both 36%). The most commonly cited challenges were technical issues (43%), variable learner participation (43%), and privacy/security concerns (29%). Studies were generally of low to moderate quality; there was only one randomized controlled trial.\n\n\nCONCLUSIONS\nSocial media use in medical education is an emerging field of scholarship that merits further investigation. Educators face challenges in adapting new technologies, but they also have opportunities for innovation.",
"title": ""
},
{
"docid": "ec26449b0d78b3f2b80404d340548d02",
"text": "A novel beam-forming phased array system using a substrate integrated waveguide (SIW) fed Yagi-Uda array antenna is presented. This phase array antenna employs an integrated waveguide structure lens as a beam forming network (BFN). A prototype phased array system is designed with 7 beam ports, 9 array ports, and 8 dummy ports. A 10 GHz SIW-fed Bow-tie linear array antenna is proposed with a nonplanar structure to scan over (-24°, +24°) with SIW lens.",
"title": ""
},
{
"docid": "bbf242fd4722abbba0bc993a636f50c2",
"text": "Since the publication of its first edition in 1995, Artificial Intelligence: A Modern Approach has become a classic in our field. Even researchers outside AI, working in other areas of computer science, are familiar with the text and have gained a better appreciation of our field thanks to the efforts of its authors, Stuart Russell of UC Berkeley and Peter Norvig of Google Inc. It has been adopted by over 1000 universities in over 100 countries, and has provided an excellent introduction to AI to several hundreds of thousands of students worldwide. The book not only stands out in the way it provides a clear and comprehensive introduction to almost all aspects of AI for a student entering our field, it also provides a tremendous resource for experienced AI researchers interested in a good introduction to subfields of AI outside of their own area of specialization. In fact, many researchers enjoy reading insightful descriptions of their own area, combined, of course, with the tense moment of checking the author index to see whether their own work made it into the book. Fortunately, due in part to the comprehensive nature of the text, almost all AI researchers who have been around for a few years can be proud to see their own work cited. Writing such a high-quality and erudite overview of our field, while distilling key aspects of literally thousands of research papers, is a daunting task that requires a unique talent; the Russell and Norvig author team has clearly handled this challenge exceptionally well. Given the impact of the first edition of the book, a challenge for the authors was to keep such a unique text up-to-date in the face of rapid developments in AI over the past decade and a half. Fortunately, the authors have succeeded admirably in this challenge by bringing out a second edition in 2003 and now a third edition in 2010. Each of these new editions involves major rewrites and additions to the book to keep it fully current. The revisions also provide an insightful overview of the evolution of AI in recent years. The text covers essentially all major areas of AI, while providing ample and balanced coverage of each of the subareas. For certain subfields, part of the text was provided by respective subject experts. In particular, Jitendra Malik and David Forsyth contributed the chapter on computer vision, Sebastian Thrun wrote the chapter on robotics, and Vibhu Mittal helped with the chapter on natural language. Nick Hay, Mehran Sahami, and Ernest Davis contributed to the engaging set of exercises for students. Overall, this book brings together deep knowledge of various facets of AI, from the authors as well as from many experts in various subfields. The topics covered in the book are woven together via the theme of a grand challenge in AI — that of creating an intelligent agent, one that takes “the best possible [rational] action in [any] situation.” Every aspect of AI is considered in the context of such an agent. For instance, the book discusses agents that solve problems through search, planning, and reasoning, agents that are situated in the physical world, agents that learn from observations, agents that interact with the world through vision and perception, and agents that manipulate the physical world through",
"title": ""
},
{
"docid": "95cf20e163a86f102edabdc699725630",
"text": "As we all know security is needed when we want to send data over any medium so this requires a secure medium to send data. That’s why steganography comes in mind whose aim is to send data securely without knowing of any hacker. In this paper, a new technique is projected whose aim is to keep secrete communication intact. The proposed method blends the advantage of 2 bit LSB and XOR operation. In this, first we are XORing the 8th, 1st bit of data and 7th, 2nd bit of data after this two bit are obtained. These obtained bits are replaced at the LSB position. However, with some way, any person get know about hidden message and it takes the LSB position bit then there are no chances of getting message as it is not the actual message. An experiment was performed with different dataset of images. Furthermore, it was observed that the proposed method promises good result as the PSNR and MSE are good. When the method was compared with other existing methods, it shows enhancement in the imperceptibility and message capacity. Keyword Steganography, XOR, Information Hiding, LSB",
"title": ""
},
{
"docid": "d3501679c9652df1faaaff4c391be567",
"text": "This paper presents a demonstration of how AI can be useful in the game design and development process of a modern board game. By using an artificial intelligence algorithm to play a substantial amount of matches of the Ticket to Ride board game and collecting data, we can analyze several features of the gameplay as well as of the game board. Results revealed loopholes in the game’s rules and pointed towards trends in how the game is played. We are then led to the conclusion that large scale simulation utilizing artificial intelligence can offer valuable information regarding modern board games and their designs that would ordinarily be prohibitively expensive or time-consuming to discover manually.",
"title": ""
},
{
"docid": "5638ba62bcbfd1bd5e46b4e0dccf0d94",
"text": "Sentiment analysis aims to automatically uncover the underlying attitude that we hold towards an entity. The aggregation of these sentiment over a population represents opinion polling and has numerous applications. Current text-based sentiment analysis rely on the construction of dictionaries and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is currently widely used for customer satisfaction assessment and brand perception analysis, among others. With the proliferation of social media, multimodal sentiment analysis is set to bring new opportunities with the arrival of complementary data streams for improving and going beyond text-based sentiment analysis. Since sentiment can be detected through affective traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches leverage emotion recognition and context inference to determine the underlying polarity and scope of an individual’s sentiment. In this survey, we define sentiment and the problem of multimodal sentiment analysis and review recent developments in multimodal sentiment analysis in different domains, including spoken reviews, images, video blogs, human-machine and human-human interaction. Challenges and opportunities of this emerging field are also discussed leading to our thesis that multimodal sentiment analysis holds a significant untapped potential.",
"title": ""
},
{
"docid": "06a91d87398ef65bbfa95ab860972fbe",
"text": "A novel variable reluctance (VR) resolver with nonoverlapping tooth-coil windings is proposed in this paper. It significantly simplifies the manufacturing process of multilayer windings in conventional products. Finite element (FE) analysis is used to illustrate the basic operating principle, followed by analytical derivation of main parameters and optimization of major dimensions, including air-gap length and slot opening width. Based on winding distributions and FE results, it is shown that identical stator and winding can be employed for a resolver with three different numbers of rotor poles. Further, other stator slot/rotor pole combinations based on the nonoverlapping tooth-coil windings are generalized. In addition, the influence of eccentricity and end-winding leakage on the proposed topology is investigated. Finally, a prototype is fabricated and tested to verify the analysis, including main parameters and electrical angle error.",
"title": ""
},
{
"docid": "834a0c043799097579441a0ca4713eea",
"text": "As users pan and zoom, display content can disappear into off-screen space, particularly on small-screen devices. The clipping of locations, such as relevant places on a map, can make spatial cognition tasks harder. Halo is a visualization technique that supports spatial cognition by showing users the location of off-screen objects. Halo accomplishes this by surrounding off-screen objects with rings that are just large enough to reach into the border region of the display window. From the portion of the ring that is visible on-screen, users can infer the off-screen location of the object at the center of the ring. We report the results of a user study comparing Halo with an arrow-based visualization technique with respect to four types of map-based route planning tasks. When using the Halo interface, users completed tasks 16-33% faster, while there were no significant differences in error rate for three out of four tasks in our study.",
"title": ""
},
{
"docid": "6ae78c5e82030e76c87ef9759ba8a464",
"text": "The European innovation project PERFoRM (Production harmonizEd Reconfiguration of Flexible Robots and Machinery) is aiming for a harmonized integration of research results in the area of flexible and reconfigurable manufacturing systems. Based on the cyber-physical system (CPS) paradigm, existing technologies and concepts are researched and integrated in an architecture which is enabling the application of these new technologies in real industrial environments. To implement such a flexible cyber-physical system, one of the core requirements for each involved component is a harmonized communication, which enables the capability to collaborate with each other in an intelligent way. But especially when integrating multiple already existing production components into such a cyber-physical system, one of the major issues is to deal with the various communication protocols and data representations coming with each individual cyber-physical component. To tackle this issue, the solution foreseen within PERFoRM's architecture is to use an integration platform, the PERFoRM Industrial Manufacturing Middleware, to enable all connected components to interact with each other through the Middleware and without having to implement new interfaces for each. This paper describes the basic requirements of such a Middleware and how it fits into the PERFoRM architecture and gives an overview about the internal design and functionality.",
"title": ""
},
{
"docid": "a691642e6d27c0df3508a2ab953e4392",
"text": "Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect for this progress are novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estima-",
"title": ""
},
{
"docid": "3f24f730953fb9719087cad6ffb3e494",
"text": "It is very difficult for human beings to manually summarize large documents of text. Text summarization solves this problem. Nowadays, Text summarization systems are among the most attractive research areas. Text summarization (TS) is used to provide a shorter version of the original text and keeping the overall meaning. There are various methods that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this review, we present a comparative study among almost algorithms based on Latent Semantic Analysis (LSA) approach.",
"title": ""
},
{
"docid": "ddcb206f6538cf5bd2804d12d65912df",
"text": "k-anonymity provides a measure of privacy protection by preventing re-identification of data to fewer than a group of k data items. While algorithms exist for producing k-anonymous data, the model has been that of a single source wanting to publish data. Due to privacy issues, it is common that data from different sites cannot be shared directly. Therefore, this paper presents a two-party framework along with an application that generates k-anonymous data from two vertically partitioned sources without disclosing data from one site to the other. The framework is privacy preserving in the sense that it satisfies the secure definition commonly defined in the literature of Secure Multiparty Computation.",
"title": ""
},
{
"docid": "ada6153aeeddcc385de538062f2f7e4c",
"text": "As analysts attempt to make sense of a collection of documents, such as intelligence analysis reports, they need to “connect the dots” between pieces of information that may initially seem unrelated. We conducted a user study to analyze the cognitive process by which users connect pairs of documents and how they spatialize connections. Users created conceptual stories that connected the dots using a range of organizational strategies and spatial representations. Insights from our study can drive the design of data mining algorithms and visual analytic tools to support analysts' complex cognitive processes.",
"title": ""
},
{
"docid": "f1dc40c02d162988ca118c6e4d15ad06",
"text": "Spheres are popular geometric primitives found in many manufactured objects. However, sphere fitting and extraction have not been investigated in depth. In this paper, a robust method is proposed to extract multiple spheres accurately and simultaneously from unorganized point clouds. Moreover, a novel validation step is presented to assess the quality of the detected spheres, which help remove the confusion between perfect spheres and sphere-like shapes such as ellipsoids and paraboloids. A novel sampling strategy is introduced to reduce computational burden for sphere extraction. Experiments on both synthetic and scanned point clouds with different levels of noise and outliers are conducted and the results compared to state-of-the-art methods. These experiments demonstrate the efficiency and robustness of the proposed sphere extraction method.",
"title": ""
},
{
"docid": "f18a0ae573711eb97b9b4150d53182f3",
"text": "The Electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves the efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.",
"title": ""
},
{
"docid": "9b575699e010919b334ac3c6bc429264",
"text": "Over the last decade, keyword search over relational data has attracted considerable attention. A possible approach to face this issue is to transform keyword queries into one or more SQL queries to be executed by the relational DBMS. Finding these queries is a challenging task since the information they represent may be modeled across different elements where the data of interest is stored, but also to find out how these elements are interconnected. All the approaches that have been proposed so far provide a monolithic solution. In this work, we, instead, divide the problem into three steps: the first one, driven by the user's point of view, takes into account what the user has in mind when formulating keyword queries, the second one, driven by the database perspective, considers how the data is represented in the database schema. Finally, the third step combines these two processes. We present the theory behind our approach, and its implementation into a system called QUEST (QUEry generator for STructured sources), which has been deeply tested to show the efficiency and effectiveness of our approach. Furthermore, we report on the outcomes of a number of experimental results that we",
"title": ""
}
] |
scidocsrr
|
f8fbe293eee4b933bed1685ed562d965
|
Convolutional recurrent neural networks for electrocardiogram classification
|
[
{
"docid": "a39f988fa6f7a55662f5a8821e9ad87c",
"text": "We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor. We build a dataset with more than 500 times the number of unique patients than previously studied corpora. On this dataset, we train a 34-layer convolutional neural network which maps a sequence of ECG samples to a sequence of rhythm classes. Committees of boardcertified cardiologists annotate a gold standard test set on which we compare the performance of our model to that of 6 other individual cardiologists. We exceed the average cardiologist performance in both recall (sensitivity) and precision (positive predictive value).",
"title": ""
},
{
"docid": "c9a2150bc7a0fe419249189eb5a5a53a",
"text": "One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to interand intra-subject differences, as well as to inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of mental load classification task. First, we transform EEG activities into a sequence of topologypreserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG which leads to finding features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field.",
"title": ""
}
] |
[
{
"docid": "5318baa10a6db98a0f31c6c30fdf6104",
"text": "In image analysis, the images are often represented by multiple visual features (also known as multiview features), that aim to better interpret them for achieving remarkable performance of the learning. Since the processes of feature extraction on each view are separated, the multiple visual features of images may include overlap, noise, and redundancy. Thus, learning with all the derived views of the data could decrease the effectiveness. To address this, this paper simultaneously conducts a hierarchical feature selection and a multiview multilabel (MVML) learning for multiview image classification, via embedding a proposed a new block-row regularizer into the MVML framework. The block-row regularizer concatenating a Frobenius norm (F-norm) regularizer and an l2,1-norm regularizer is designed to conduct a hierarchical feature selection, in which the F-norm regularizer is used to conduct a high-level feature selection for selecting the informative views (i.e., discarding the uninformative views) and the 12,1-norm regularizer is then used to conduct a low-level feature selection on the informative views. The rationale of the use of a block-row regularizer is to avoid the issue of the over-fitting (via the block-row regularizer), to remove redundant views and to preserve the natural group structures of data (via the F-norm regularizer), and to remove noisy features (the 12,1-norm regularizer), respectively. We further devise a computationally efficient algorithm to optimize the derived objective function and also theoretically prove the convergence of the proposed optimization method. Finally, the results on real image datasets show that the proposed method outperforms two baseline algorithms and three state-of-the-art algorithms in terms of classification performance.",
"title": ""
},
{
"docid": "7bea13124037f4e21b918f08c81b9408",
"text": "U.S. health care system is plagued by rising cost and limited access. While the cost of care is increasing faster than the rate of inflation, people living in rural areas have very limited access to quality health care due to a shortage of physicians and facilities in these areas. Information and communication technologies in general and telemedicine in particular offer great promise to extend quality care to underserved rural communities at an affordable cost. However, adoption of telemedicine among the various stakeholders of the health care system has not been very encouraging. Based on an analysis of the extant research literature, this study identifies critical factors that impede the adoption of telemedicine, and offers suggestions to mitigate these challenges.",
"title": ""
},
{
"docid": "15f51cbbb75d236a5669f613855312e0",
"text": "The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.",
"title": ""
},
{
"docid": "47de26ecd5f759afa7361c7eff9e9b25",
"text": "At many teaching hospitals, it is common practice for on-call radiology residents to interpret radiology examinations; such reports are later reviewed and revised by an attending physician before being used for any decision making. In case there are substantial problems in the resident’s initial report, the resident is called and the problems are reviewed to prevent similar future reporting errors. However, due to the large volume of reports produced, attending physicians rarely discuss the problems side by side with residents, thus missing an educational opportunity. In this work, we introduce a pipeline to discriminate between reports with significant discrepancies and those with non-significant discrepancies. The former contain severe errors or mis-interpretations, thus representing a great learning opportunity for the resident; the latter presents only minor differences (often stylistic) and have a minor role in the education of a resident. By discriminating between the two, the proposed system could flag those reports that an attending radiology should definitely review with residents under their supervision. We evaluated our approach on 350 manually annotated radiology reports sampled from a collection of tens of thousands. The proposed classifier achieves an Area Under the Curve (AUC) of 0.837, which represent a 14% improvement over the baselines. Furthermore, the classifier reduces the False Negative Rate (FNR) by 52%, a desirable performance metric for any recall-oriented task such as the one studied",
"title": ""
},
{
"docid": "2cca7bc6aad1da4146dea7b99987fcb4",
"text": "The telecare medicine information system (TMIS) allows patients and doctors to access medical services or medical information at remote sites. Therefore, it could bring us very big convenient. To safeguard patients’ privacy, authentication schemes for the TMIS attracted wide attention. Recently, Tan proposed an efficient biometrics-based authentication scheme for the TMIS and claimed their scheme could withstand various attacks. However, in this paper, we point out that Tan’s scheme is vulnerable to the Denial-of-Service attack. To enhance security, we also propose an improved scheme based on Tan’s work. Security and performance analysis shows our scheme not only could overcome weakness in Tan’s scheme but also has better performance.",
"title": ""
},
{
"docid": "18810138af571332e67d42c27816cf6b",
"text": "In this work we address the task of segmenting an object into its parts, or semantic part segmentation. We start by adapting a state-of-the-art semantic segmentation system to this task, and show that a combination of a fully-convolutional Deep CNN system coupled with Dense CRF labelling provides excellent results for a broad range of object categories. Still, this approach remains agnostic to highlevel constraints between object parts. We introduce such prior information by means of the Restricted Boltzmann Machine, adapted to our task and train our model in an discriminative fashion, as a hidden CRF, demonstrating that prior information can yield additional improvements. We also investigate the performance of our approach “in the wild”, without information concerning the objects’ bounding boxes, using an object detector to guide a multi-scale segmentation scheme. We evaluate the performance of our approach on the Penn-Fudan and LFW datasets for the tasks of pedestrian parsing and face labelling respectively. We show superior performance with respect to competitive methods that have been extensively engineered on these benchmarks, as well as realistic qualitative results on part segmentation, even for occluded or deformable objects. We also provide quantitative and extensive qualitative results on three classes from the PASCAL Parts dataset. Finally, we show that our multi-scale segmentation scheme can boost accuracy, recovering segmentations for finer parts.",
"title": ""
},
{
"docid": "b4ed57258b85ab4d81d5071fc7ad2cc9",
"text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.",
"title": ""
},
{
"docid": "641754ee9332e1032838d0dba7712607",
"text": "Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, new medical technology and numerous administration policies and procedures. Adverse events initiated by medication error are a crucial area to improve patient safety. This project looked at the complexity of the medication administration process at a regional hospital and the effect of two medication distribution systems. A reduction in work complexity and time spent gathering medication and supplies, was a goal of this work; but more importantly was determining what barriers to safety and efficiency exist in the medication administration process and the impact of barcode scanning and other technologies. The concept of mobile medication units is attractive to both managers and clinicians; however it is only one solution to the problems with medication administration. Introduction and Background Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, and the numerous policies and procedures created for their administration. Mayo and Duncan (2004) found that a “single [hospital] patient can receive up to 18 medications per day, and a nurse can administer as many as 50 medications per shift” (p. 209). While some researchers indicated that the solution is more nurse education or training (e.g. see Mayo & Duncan, 2004; and Tang, Sheu, Yu, Wei, & Chen, 2007), it does not appear that they have determined the feasibility of this solution and the increased time necessary to look up every unfamiliar medication. Most of the research which focuses on the causes of medication errors does not examine the processes involved in the administration of the medication. And yet, understanding the complexity in the nurses’ processes and workflow is necessary to develop safeguards and create more robust systems that reduce the probability of errors and adverse events. Current medication administration processes include many \\ tasks, including but not limited to, assessing the patient to obtain pertinent data, gathering medications, confirming the five rights (right dose, patient, route, medication, and time), administering the medications, documenting administration, and observing for therapeutic and untoward effects. In studies of the delivery of nursing care in acute care settings, Potter et al. (2005) found that nurses spent 16% their time preparing or administering medication. In addition to the amount of time that the nurses spent in preparing and administering medication, Potter et al found that a significant number of interruptions occurred during this critical process. Interruptions impact the cognitive workload of the nurse, and create an environment where medication errors are more likely to occur. A second environmental factor that affects the nurses’ workflow, is the distance traveled to administer care during a shift. Welker, Decker, Adam, & Zone-Smith (2006) found that on average, ward nurses who were assigned three patients walked just over 4.1 miles per shift while a nurse assigned to six patients walked over 4.8 miles. 
As a large number of interruptions (22%) occurred within the medication rooms, which were highly visible and in high traffic locations (Potter et al., 2005), and while collecting supplies or traveling to and from patient rooms (Ebright, Patterson, Chalko, & Render, 2003), reducing the distances and frequency of repeated travel could have the ability to decrease the number of interruptions and possibly errors in medication administration. Adding new technology, revising policies and procedures, and providing more education have often been the approaches taken to reduce medication errors. Unfortunately these new technologies, such as computerized order entry and electronic medical records / charting, and new procedures, for instance bar code scanning both the medicine and the patient, can add complexity to the nurse’s taskload. The added complexity in correspondence with the additional time necessary to complete the additional steps can lead to workarounds and variations in care. Given the problems in the current medication administration processes, this work focused on facilitating the nurse’s role in the medication administration process. This study expands on the Braswell and Duggar (2006) investigation and compares processes at baseline and postintroduction of a new mobile medication system. To do this, the current medication administration and distribution process was fully documented to determine a baseline in workload complexity. Then a new mobile medication center was installed to allow nurses easier access to patient medications while traveling on the floor, and the medication administration and distribution process was remapped to demonstrate where process complexities were reduced and nurse workflow is more efficient. A similar study showed that the time nurses spend gathering medications and supplies can be dramatically reduced through this type of system (see Braswell & Duggar, 2006); however, they did not directly investigate the impact on the nursing process. Thus, this research is presented to document the impact of this technology on the nursing workflow at a regional hospital, and as an expansion on the work begun by Braswell and Duggar.",
"title": ""
},
{
"docid": "63a27881760b8ca7cbf544b63df71ee0",
"text": "Self-bearing motors (SBM) use a single magnetic structure for rotational motoring as well as for noncontact levitation. They are sometimes referred to as bearingless motors or combined motor-bearings. In this paper, we propose a new type of self-bearing motors based on toroidally-wound brushless DC machines. This type of SBM can be made to be passively stable in the axial direction and for out-of-plane rotations. To achieve self-bearing operation, we derive a force-current model and show that the levitation force is decoupled from the rotational torque. To overcome the singularity problem in the force-current model, we propose a phase selection algorithm in which the phase that may cause singularity is counted out when inverting the force-current model. Through finite element analyses and experiments, we validate the force-current model and the phase selection algorithm.",
"title": ""
},
{
"docid": "96c99065e84f87c02e0625b9e700c7a9",
"text": "The financial crisis of2007 – 2009 began with a major failure in credit markets. The causes of this failure stretch far beyond inadequate mathematical mo deling (see Donnelly and Embrechts [2010] and Brigo et al. [2009] for detailed discussions from a athematical finance perspective). Nevertheless, it is clear that some of the more popular model s of credit risk were shown to be flawed. Many of these models were and are popular because they are mathematically tractable, allowing easy computation of various risk measures. More re alistic (and complex) models come at a significant computational cost, often requiring Monte Carlo methodsto estimate quantities of interest. The purpose of this chapter is to survey the Monte Carlo techn iques that are used in portfolio credit risk modeling. We discuss various approaches for mod eling the dependencies between individual components of a portfolio and focus on two princi pal risk measures: Value at Risk (VaR) and Expected Shortfall (ES). The efficient estimation of the credit risk measures is often computationally expensive, as it involves the estimation of small quantiles. Rare-event sim ulation techniques such as importance sampling can significantly reduce the computational burden , but the choice of a good importance sampling distribution can be a difficult mathematical probl em. Recent simulation techniques such as the cross-entropy met hod [Rubinstein and Kroese, 2004] have greatly enhanced the applicability of importanc e sampling techniques by adaptively choosing the importance sampling distribution, based on sa mples from the original simulation model. The remainder of this chapter is organized as follows. In Sec tion 2 we describe the general model framework for credit portfolio loss. Section 3 discus se the crude and importance sampling approaches to estimating risk measures via the Monte C arlo method. Various applications to specific models (including Bernoulli mixture models, fac tor models, copula models and intensity models) are given in Section 4. Many of these models capt ure empirical features of credit risk, such as default clustering, that are not captured by th e s andard Gaussian models. Finally, the Appendix contains the essentials on rare-event simulat ion nd adaptive importance sampling.",
"title": ""
},
{
"docid": "5a25af5b9c51b7b1a7b36f0c9b121add",
"text": "BACKGROUND\nCircumcision is a common procedure, but regional and societal attitudes differ on whether there is a need for a male to be circumcised and, if so, at what age. This is an important issue for many parents, but also pediatricians, other doctors, policy makers, public health authorities, medical bodies, and males themselves.\n\n\nDISCUSSION\nWe show here that infancy is an optimal time for clinical circumcision because an infant's low mobility facilitates the use of local anesthesia, sutures are not required, healing is quick, cosmetic outcome is usually excellent, costs are minimal, and complications are uncommon. The benefits of infant circumcision include prevention of urinary tract infections (a cause of renal scarring), reduction in risk of inflammatory foreskin conditions such as balanoposthitis, foreskin injuries, phimosis and paraphimosis. When the boy later becomes sexually active he has substantial protection against risk of HIV and other viral sexually transmitted infections such as genital herpes and oncogenic human papillomavirus, as well as penile cancer. The risk of cervical cancer in his female partner(s) is also reduced. Circumcision in adolescence or adulthood may evoke a fear of pain, penile damage or reduced sexual pleasure, even though unfounded. Time off work or school will be needed, cost is much greater, as are risks of complications, healing is slower, and stitches or tissue glue must be used.\n\n\nSUMMARY\nInfant circumcision is safe, simple, convenient and cost-effective. The available evidence strongly supports infancy as the optimal time for circumcision.",
"title": ""
},
{
"docid": "a9e3a6b4aefcc5396b72a37d3d250a3a",
"text": "Interactive visual applications often rely on animation to transition from one display state to another. There are multiple animation techniques to choose from, and it is not always clear which should produce the best visual correspondences between display elements. One major factor is whether the animation relies on staggering-an incremental delay in start times across the moving elements. It has been suggested that staggering may reduce occlusion, while also reducing display complexity and producing less overwhelming animations, though no empirical evidence has demonstrated these advantages. Work in perceptual psychology does show that reducing occlusion, and reducing inter-object proximity (crowding) more generally, improves performance in multiple object tracking. We ran simulations confirming that staggering can in some cases reduce crowding in animated transitions involving dot clouds (as found in, e.g., animated 2D scatterplots). We empirically evaluated the effect of two staggering techniques on tracking tasks, focusing on cases that should most favour staggering. We found that introducing staggering has a negligible, or even negative, impact on multiple object tracking performance. The potential benefits of staggering may be outweighed by strong costs: a loss of common-motion grouping information about which objects travel in similar paths, and less predictability about when any specific object would begin to move. Staggering may be beneficial in some conditions, but they have yet to be demonstrated. The present results are a significant step toward a better understanding of animation pacing, and provide direction for further research.",
"title": ""
},
{
"docid": "b40afca0ce1fa18ee5ad254548cae427",
"text": "Temporal noise sets the fundamental limit on image sensor performance, especially under low illumination and in video applications. In a CCD image sensor, temporal noise is primarily due to the photodetector shot noise and the output amplifier thermal and 1 noise. CMOS image sensors suffer from higher noise than CCDs due to the additional pixel and column amplifier transistor thermal and 1 noise. Noise analysis is further complicated by the time-varying circuit models, the fact that the reset transistor operates in subthreshold during reset, and the nonlinearity of the charge to voltage conversion, which is becoming more pronounced as CMOS technology scales. The paper presents a detailed and rigorous analysis of temporal noise due to thermal and shot noise sources in CMOS active pixel sensor (APS) that takes into consideration these complicating factors. Performing time-domain analysis, instead of the more traditional frequency-domain analysis, we find that the reset noise power due to thermal noise is at most half of its commonly quoted value. This result is corroborated by several published experimental data including data presented in this paper. The lower reset noise, however, comes at the expense of image lag. We find that alternative reset methods such as overdriving the reset transistor gate or using a pMOS transistor can alleviate lag, but at the expense of doubling the reset noise power. We propose a new reset method that alleviates lag without increasing reset noise.",
"title": ""
},
{
"docid": "9f5e4d52df5f13a80ccdb917a899bb9e",
"text": "This paper proposes a robust background model-based dense-visual-odometry (BaMVO) algorithm that uses an RGB-D sensor in a dynamic environment. The proposed algorithm estimates the background model represented by the nonparametric model from depth scenes and then estimates the ego-motion of the sensor using the energy-based dense-visual-odometry approach based on the estimated background model in order to consider moving objects. Experimental results demonstrate that the ego-motion is robustly obtained by BaMVO in a dynamic environment.",
"title": ""
},
{
"docid": "470a065b8389e4ba285099fd2deb37bf",
"text": "Kozlowski syndrome is the most common type of spondylometaphyseal dysplasia (SMD). It is characterized by short stature (130 to 150 cm), pectus carinatum, limited elbow and hip movement, mild bowleg deformity, and curvature of the spinal column. Children with Kozlowski dwarfism usually are not recognized at birth, since they have normal clinical features, weight, and size. This article reports the dental treatment and oral findings of a 14-year-old female patient with Kozlowski dwarfism.",
"title": ""
},
{
"docid": "907b99894de9bfeb4f20bf766a5fc87f",
"text": "Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exist a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice, it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large amount of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.",
"title": ""
},
{
"docid": "7189db9bf887827cb59823b7c084e80d",
"text": "With the growing volume of publications in the Computer Science (CS) discipline, tracking the research evolution and predicting the future research trending topics are of great importance for researchers to keep up with the rapid progress of research. Within a research area, there are many top conferences that publish the latest research results. These conferences mutually influence each other and jointly promote the development of the research area. To predict the trending topics of mutually influenced conferences, we propose a correlated neural influence model, which has the ability to capture the sequential properties of research evolution in each individual conference and discover the dependencies among different conferences simultaneously. The experiments conducted on a scientific dataset including conferences in artificial intelligence and data mining show that our model consistently outperforms the other state-of-the-art methods. We also demonstrate the interpretability and predictability of the proposed model by providing its answers to two questions of concern, i.e., what the next rising trending topics are and for each conference who the most influential peer is.",
"title": ""
},
{
"docid": "a62e6c9f37d4193eb5ec1f5f4a5af4e8",
"text": "Computer viruses have become the main threat of the safety and security of industry. Unfortunately, no mature products of anti-virus can protect computers effectively. This paper presents an approach of virus detection which is based on analysis and distilling of representative behavior characteristic and systemic description of the suspicious behaviors indicated by the sequences of APIs which called under Windows. Based on decompilation analysis, according to the determinant of Bayes Algorithm, and by the validation of abundant sample space, the technique implements the virus detection by suspicious behavior identification.",
"title": ""
},
{
"docid": "3429145583d25ba1d603b5ade11f4312",
"text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of which may substantially reduce the number of combinations to be examined. However, still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefixprojection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the -based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence",
"title": ""
},
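The prefix-projection idea described in the PrefixSpan abstract above can be illustrated with a short sketch. This is a minimal version, assuming sequences of single items, a toy database, and an invented min_support; the actual algorithm handles general itemset sequences and adds pseudo-projection optimizations not shown here.

```python
def prefixspan(projected_db, min_support, prefix, results):
    # Count the sequences in which each candidate item occurs.
    counts = {}
    for seq in projected_db:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, support in counts.items():
        if support < min_support:
            continue
        new_prefix = prefix + [item]
        results.append((new_prefix, support))
        # Project the database on the new prefix: keep only the suffix
        # following the first occurrence of `item` in each sequence.
        new_db = [seq[seq.index(item) + 1:] for seq in projected_db if item in seq]
        prefixspan(new_db, min_support, new_prefix, results)

db = [['a', 'b', 'c'], ['a', 'c', 'b', 'c'], ['b', 'c', 'a']]  # toy sequence database
patterns = []
prefixspan(db, min_support=2, prefix=[], results=patterns)
print(patterns)   # frequent sequential patterns with their supports
```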
{
"docid": "d3156f87367e8f55c3e62d376352d727",
"text": "The topic of deep-learning has recently received considerable attention in the machine learning research community, having great potential to liberate computer scientists from hand-engineering training datasets, because the method can learn the desired features automatically. This is particularly beneficial in medical research applications of machine learning, where getting good hand labelling of data is especially expensive. We propose application of a single-layer sparse-auto encoder to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for fully automatic classification of tissue types in a large unlabelled dataset with minimal human interference -- in a manner similar to data-mining. DCE-MRI analysis, looking at the change of the MR contrast-agent concentration over successively acquired images, is time-series analysis. We analyse the change of brightness (which is related to the contrast-agent concentration) of the DCE-MRI images over time to classify different tissue types in the images. Therefore our system is an application of an auto encoder to time-series analysis while the demonstrated result and further possible successive application areas are in computer vision. We discuss the important factors affecting performance of the system in applying the auto encoder to the time-series analysis of DCE-MRI medical image data.",
"title": ""
}
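As a rough illustration of the approach in the abstract above -- learning a compact code for per-voxel intensity curves with a single-layer sparse autoencoder -- here is a NumPy-only sketch. The layer sizes, sparsity target, and random stand-in curves are assumptions; the paper's exact configuration and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
T, hidden, n = 20, 8, 500                    # time points, code size, number of voxel curves
curves = rng.normal(size=(n, T))             # stand-in for per-voxel DCE-MRI intensity curves

W1 = 0.1 * rng.normal(size=(T, hidden)); b1 = np.zeros(hidden)
W2 = 0.1 * rng.normal(size=(hidden, T)); b2 = np.zeros(T)
lr, rho, beta = 0.01, 0.05, 0.1              # learning rate, target sparsity, penalty weight

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(200):
    H = sigmoid(curves @ W1 + b1)            # hidden code for every curve
    R = H @ W2 + b2                          # linear reconstruction
    err = R - curves
    rho_hat = H.mean(axis=0)                 # average activation of each hidden unit
    # Backprop of reconstruction error plus KL sparsity penalty.
    dH = (err @ W2.T + beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))) * H * (1 - H)
    W2 -= lr * H.T @ err / n
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * curves.T @ dH / n
    b1 -= lr * dH.mean(axis=0)

print(float(np.mean(err ** 2)))              # reconstruction error after training
```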
] |
scidocsrr
|
a8a16997b739cef96f2bc0d0db96f780
|
Performance of Double-Output Induction Generator for Wind Energy Conversion Systems
|
[
{
"docid": "8066246656f6a9a3060e42efae3b197f",
"text": "The paper describes the engineering and design of a doubly fed induction generator (DFIG), using back-to-back PWM voltage-source converters in the rotor circuit. A vector-control scheme for the supply-side PWM converter results in independent control of active and reactive power drawn from the supply, while ensuring sinusoidal supply currents. Vector control of the rotor-connected converter provides for wide speed-range operation; the vector scheme is embedded in control loops which enable optimal speed tracking for maximum energy capture from the wind. An experimental rig, which represents a 1.5 kW variable speed wind-energy generation system is described, and experimental results are given that illustrate the excellent performance characteristics of the system. The paper considers a grid-connected system; a further paper will describe a stand-alone system.",
"title": ""
}
] |
[
{
"docid": "3465c3bc8f538246be5d7f8c8d1292c2",
"text": "The minimal depth of a maximal subtree is a dimensionless order statistic measuring the predictiveness of a variable in a survival tree. We derive the distribution of the minimal depth and use it for high-dimensional variable selection using random survival forests. In big p and small n problems (where p is the dimension and n is the sample size), the distribution of the minimal depth reveals a “ceiling effect” in which a tree simply cannot be grown deep enough to properly identify predictive variables. Motivated by this limitation, we develop a new regularized algorithm, termed RSF-Variable Hunting. This algorithm exploits maximal subtrees for effective variable selection under such scenarios. Several applications are presented demonstrating the methodology, including the problem of gene selection using microarray data. In this work we focus only on survival settings, although our methodology also applies to other random forests applications, including regression and classification settings. All examples presented here use the R-software package randomSurvivalForest.",
"title": ""
},
{
"docid": "d51408ad40bdc9a3a846aaf7da907cef",
"text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.",
"title": ""
},
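The abstract above describes iterative truth discovery helped by a small amount of ground truth. The sketch below shows the general shape of such an iteration on a toy example; the simple voting and accuracy-based trust updates are assumptions made for illustration, not the paper's derived optimal solution or its MapReduce implementation.

```python
import collections

claims = {                                  # object -> {source: claimed value}
    'height_of_A': {'s1': 180, 's2': 175, 's3': 180},
    'capital_of_B': {'s1': 'X', 's2': 'Y', 's3': 'X'},
    'year_of_C':   {'s1': 1990, 's2': 1990, 's3': 1991},
}
ground_truth = {'capital_of_B': 'X'}        # tiny labelled subset

trust = {s: 0.5 for s in ('s1', 's2', 's3')}
for _ in range(10):
    # Value confidence = sum of the trust of the sources supporting it.
    truth = {}
    for obj, votes in claims.items():
        if obj in ground_truth:
            truth[obj] = ground_truth[obj]  # labelled objects anchor the iteration
            continue
        score = collections.defaultdict(float)
        for src, val in votes.items():
            score[val] += trust[src]
        truth[obj] = max(score, key=score.get)
    # Source trust = fraction of its claims that agree with the current truth.
    for src in trust:
        hits = sum(claims[obj][src] == truth[obj] for obj in claims)
        trust[src] = hits / len(claims)

print(truth)
print(trust)
```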
{
"docid": "55f95c7b59f17fb210ebae97dbd96d72",
"text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.",
"title": ""
},
{
"docid": "4b119a8c360f680ccdf0d1e72f0d8083",
"text": "This paper presents a review of the research status for real-time simulation of electric machines. The machine models considered are the lumped parameter models, including the phase-domain, <inline-formula> <tex-math notation=\"LaTeX\">$d$ </tex-math></inline-formula>–<inline-formula> <tex-math notation=\"LaTeX\">$q$ </tex-math></inline-formula>, and voltage-behind-reactance models, as well as the physics-based models, including the finite-element method and magnetic equivalent circuit models. These models are initially presented along with their relative advantages and disadvantages with respect to modeling fidelity and their computational intensity. A field-programmable gate array, a graphics processing unit, a chip multiprocessor, and computer clusters are the main hardware platforms for real-time simulations. An overview of such hardware platforms is presented and their comparative performances are evaluated with respect to real-time simulation of electric machines and drives on the basis of simulation acceleration, machine types, and modeling methodology.",
"title": ""
},
{
"docid": "bffbecf26ca3a6e5586b240e0131f325",
"text": "The development of high-resolution neuroimaging and multielectrode electrophysiological recording provides neuroscientists with huge amounts of multivariate data. The complexity of the data creates a need for statistical summary, but the local averaging standardly applied to this end may obscure the effects of greatest neuroscientific interest. In neuroimaging, for example, brain mapping analysis has focused on the discovery of activation, i.e., of extended brain regions whose average activity changes across experimental conditions. Here we propose to ask a more general question of the data: Where in the brain does the activity pattern contain information about the experimental condition? To address this question, we propose scanning the imaged volume with a \"searchlight,\" whose contents are analyzed multivariately at each location in the brain.",
"title": ""
},
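A searchlight analysis of the kind described above can be sketched in a few lines: for every voxel, gather the local sphere of voxels and score how well a multivariate classifier decodes the experimental condition from that local pattern. The toy volume, radius, and classifier below are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, shape = 40, (6, 6, 6)                    # toy volume of 6x6x6 voxels
data = rng.normal(size=(n_trials,) + shape)        # one volume per trial
labels = rng.integers(0, 2, size=n_trials)         # experimental condition per trial
radius = 1
info_map = np.zeros(shape)

coords = np.array(np.unravel_index(np.arange(np.prod(shape)), shape)).T
for centre in coords:
    # Voxels inside the searchlight sphere centred on this voxel.
    sphere = coords[np.linalg.norm(coords - centre, axis=1) <= radius]
    X = data[:, sphere[:, 0], sphere[:, 1], sphere[:, 2]]
    # Cross-validated decoding accuracy is the "information" at this location.
    info_map[tuple(centre)] = cross_val_score(LinearSVC(), X, labels, cv=4).mean()

print(info_map.max())
```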
{
"docid": "031562142f7a2ffc64156f9d09865604",
"text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.",
"title": ""
},
{
"docid": "8327cb7a8d39ce8f8f982aa38cdd517e",
"text": "Although many valuable visualizations have been developed to gain insights from large data sets, selecting an appropriate visualization for a specific data set and goal remains challenging for non-experts. In this paper, we propose a novel approach for knowledge-assisted, context-aware visualization recommendation. Both semantic web data and visualization components are annotated with formalized visualization knowledge from an ontology. We present a recommendation algorithm that leverages those annotations to provide visualization components that support the users’ data and task. We successfully proved the practicability of our approach by integrating it into two research prototypes. Keywords-recommendation, visualization, ontology, mashup",
"title": ""
},
{
"docid": "976ae17105f83e45a177c81441da3afa",
"text": "In the Google Play store, an introduction page is associated with every mobile application (app) for users to acquire its details, including screenshots, description, reviews, etc. However, it remains a challenge to identify what items influence users most when downloading an app. To explore users’ perspective, we conduct a survey to inquire about this question. The results of survey suggest that the participants pay most attention to the app description which gives users a quick overview of the app. Although there exist some guidelines about how to write a good app description to attract more downloads, it is hard to define a high quality app description. Meanwhile, there is no tool to evaluate the quality of app description. In this paper, we employ the method of crowdsourcing to extract the attributes that affect the app descriptions’ quality. First, we download some app descriptions from Google Play, then invite some participants to rate their quality with the score from one (very poor) to five (very good). The participants are also requested to explain every score’s reasons. By analyzing the reasons, we extract the attributes that the participants consider important during evaluating the quality of app descriptions. Finally, we train the supervised learning models on a sample of 100 app descriptions. In our experiments, the support vector machine model obtains up to 62% accuracy. In addition, we find that the permission, the number of paragraphs and the average number of words in one feature play key roles in defining a good app description.",
"title": ""
},
{
"docid": "cab6e4e40a4bfb8557b22f1ad88ec7df",
"text": "A common thread in various approaches for model reduction, clustering, feature extraction, classification, and blind source separation (BSS) is to represent the original data by a lower-dimensional approximation obtained via matrix or tensor (multiway array) factorizations or decompositions. The notion of matrix/tensor factorizations arises in a wide range of important applications and each matrix/tensor factorization makes different assumptions regarding component (factor) matrices and their underlying structures. So choosing the appropriate one is critical in each application domain. Approximate low-rank matrix and tensor factorizations play fundamental roles in enhancing the data and extracting latent (hidden) components.",
"title": ""
},
{
"docid": "c2c85e02b2eb3c73ece4e43aae42ff28",
"text": "The security of many computer systems hinges on the secrecy of a single word – if an adversary obtains knowledge of a password, they will gain access to the resources controlled by this password. Human users are the ‘weakest link’ in password control, due to our propensity to reuse passwords and to create weak ones. Policies which forbid such unsafe password practices are often violated, even if these policies are well-advertised. We have studied how users perceive their accounts and their passwords. Our participants mentally classified their accounts and passwords into a few groups, based on a small number of perceived similarities. Our participants used stronger passwords, and reused passwords less, in account groups which they considered more important. Our participants thus demonstrated awareness of the basic tenets of password safety, but they did not behave safely in all respects. Almost half of our participants reused at least one of the passwords in their high-importance accounts. Our findings add to the body of evidence that a typical computer user suffers from ‘password overload’. Our concepts of password and account grouping point the way toward more intuitive user interfaces for passwordand account-management systems. .",
"title": ""
},
{
"docid": "48d5952fa77f40b7b6a9dbb9f2a62b33",
"text": "BACKGROUND\nPhysical activity has long been considered as an important component of a healthy lifestyle. Although many efforts have been made to promote physical activity, there is no effective global intervention for physical activity promotion. Some researchers have suggested that Pokémon GO, a location-based augmented reality game, was associated with a short-term increase in players' physical activity on a global scale, but the details are far from clear.\n\n\nOBJECTIVE\nThe objective of our study was to study the relationship between Pokémon GO use and players' physical activity and how the relationship varies across players with different physical activity levels.\n\n\nMETHODS\nWe conducted a field study in Hong Kong to investigate if Pokémon GO use was associated with physical activity. Pokémon GO players were asked to report their demographics through a survey; data on their Pokémon GO behaviors and daily walking and running distances were collected from their mobile phones. Participants (n=210) were Hong Kong residents, aged 13 to 65 years, who played Pokémon GO using iPhone 5 or 6 series in 5 selected types of built environment. We measured the participants' average daily walking and running distances over a period of 35 days, from 14 days before to 21 days after game installation. Multilevel modeling was used to identify and examine the predictors (including Pokémon GO behaviors, weather, demographics, and built environment) of the relationship between Pokémon GO use and daily walking and running distances.\n\n\nRESULTS\nThe average daily walking and running distances increased by 18.1% (0.96 km, approximately 1200 steps) in the 21 days after the participants installed Pokémon GO compared with the average distances over the 14 days before installation (P<.001). However, this association attenuated over time and was estimated to disappear 24 days after game installation. Multilevel models indicated that Pokémon GO had a stronger and more lasting association among the less physically active players compared with the physically active ones (P<.001). Playing Pokémon GO in green space had a significant positive relationship with daily walking and running distances (P=.03). Moreover, our results showed that whether Pokémon GO was played, the number of days played, weather (total rainfall, bright sunshine, mean air temperature, and mean wind speed), and demographics (age, gender, income, education, and body mass index) were associated with daily walking and running distances.\n\n\nCONCLUSIONS\nPokémon GO was associated with a short-term increase in the players' daily walking and running distances; this association was especially strong among less physically active participants. Pokémon GO can build new links between humans and green space and encourage people to engage in physical activity. Our results show that location-based augmented reality games, such as Pokémon GO, have the potential to be a global public health intervention tool.",
"title": ""
},
{
"docid": "137449952a30730185552ed6fca4d8ba",
"text": "BACKGROUND\nPoor sleep quality and depression negatively impact the health-related quality of life of patients with type 2 diabetes, but the combined effect of the two factors is unknown. This study aimed to assess the interactive effects of poor sleep quality and depression on the quality of life in patients with type 2 diabetes.\n\n\nMETHODS\nPatients with type 2 diabetes (n = 944) completed the Diabetes Specificity Quality of Life scale (DSQL) and questionnaires on sleep quality and depression. The products of poor sleep quality and depression were added to the logistic regression model to evaluate their multiplicative interactions, which were expressed as the relative excess risk of interaction (RERI), the attributable proportion (AP) of interaction, and the synergy index (S).\n\n\nRESULTS\nPoor sleep quality and depressive symptoms both increased DSQL scores. The co-presence of poor sleep quality and depressive symptoms significantly reduced DSQL scores by a factor of 3.96 on biological interaction measures. The relative excess risk of interaction was 1.08. The combined effect of poor sleep quality and depressive symptoms was observed only in women.\n\n\nCONCLUSIONS\nPatients with both depressive symptoms and poor sleep quality are at an increased risk of reduction in diabetes-related quality of life, and this risk is particularly high for women due to the interaction effect. Clinicians should screen for and treat sleep difficulties and depressive symptoms in patients with type 2 diabetes.",
"title": ""
},
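For reference, the three additive-interaction measures named in this abstract are conventionally defined from the relative risks for exposure to each factor alone (RR10, RR01) and to both factors (RR11); whether the study estimated them from odds ratios is not stated here, so the following are the standard textbook formulas rather than the paper's exact estimators.

```latex
\begin{aligned}
\mathrm{RERI} &= RR_{11} - RR_{10} - RR_{01} + 1,\\
\mathrm{AP}   &= \frac{\mathrm{RERI}}{RR_{11}},\\
S             &= \frac{RR_{11} - 1}{(RR_{10} - 1) + (RR_{01} - 1)}.
\end{aligned}
```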
{
"docid": "10bc07996e9016d4de30e27d869b9da7",
"text": "Information extraction and knowledge discovery regarding adverse drug reaction (ADR) from large-scale clinical texts are very useful and needy processes. Two major difficulties of this task are the lack of domain experts for labeling examples and intractable processing of unstructured clinical texts. Even though most previous works have been conducted on these issues by applying semisupervised learning for the former and a word-based approach for the latter, they face with complexity in an acquisition of initial labeled data and ignorance of structured sequence of natural language. In this study, we propose automatic data labeling by distant supervision where knowledge bases are exploited to assign an entity-level relation label for each drug-event pair in texts, and then, we use patterns for characterizing ADR relation. The multiple-instance learning with expectation-maximization method is employed to estimate model parameters. The method applies transductive learning to iteratively reassign a probability of unknown drug-event pair at the training time. By investigating experiments with 50,998 discharge summaries, we evaluate our method by varying large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. Based on evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM with F1 score of 11.3%, 9.3%, and 6.5% improvement, respectively.",
"title": ""
},
{
"docid": "6e5792c73b34eacc7bef2c8777da5147",
"text": "Neural network machine translation systems have recently demonstrated encouraging results. We examine the performance of a recently proposed recurrent neural network model for machine translation on the task of Japanese-to-English translation. We observe that with relatively little training the model performs very well on a small hand-designed parallel corpus, and adapts to grammatical complexity with ease, given a small vocabulary. The success of this model on a small corpus warrants more investigation of its performance on a larger corpus.",
"title": ""
},
{
"docid": "f065684c26f71567c092ee6c85d5e831",
"text": "Various types of killings occur within family matrices. The news media highlight the dramatic components, and even novels now use it as a theme. 1 However, a psychiatric understanding remains elusive. Not all killings within a family are familicidal. For want of a better term, I have called the killing of more than one member of a family by another family member \"familicide.\" The destruction of the family unit appears to be the goal. Such behavior comes within the category of \"mass murders\" where a number of victims are killed in a short period of time by one person. However, in mass murders the victims are not exclusively family members. The case of one person committing a series of homicides over an extended period of time, such as months or years, also differs from familicide. The latter can result in the perpetrator getting killed or injured in the process, or subsequently attempting a suicidal act. However, neither injury, nor suicide, nor death of the perpetrator is an indispensable part of familicide. Fifteen different theories purport to explain physical violence within the nuclear family. 2 Varieties of killings within a family are subvarieties and familicide is yet a rarer event. Pedicide is the killing of a child by a parent. These are usually cases of one child being killed by one parent. If the child happens to be an infant, the act is infanticide. Many of the latter are situations where a mother kills her infant and is diagnosed schizophrenic or psychotic depressive. Child beating by a parent can result in inadvertent death. One sibling killing another is fratricide. A child killing a parent is parricide, or more specifically patricide or matricide. Uxoricide is one spouse killing another. Each of these behaviors has its own intrapsychic and interpersonal correlates. Such correlates often involve victimologic aspects. As a caveat, and based on this study, we should not assume that the perpetrators in familicide all bear one diagnosis even in a descriptive nosological sense. A distinction is needed between intra familial homicides related to psychiatric disturbance in one family member and collective types of violence in which families are destroyed. Extermination of families based on national, ethnic, racial or religious backgrounds are not",
"title": ""
},
{
"docid": "b2de917d74765e39562c60c74a88d7f3",
"text": "Computer-phobic university students are easy to find today especially when it come to taking online courses. Affect has been shown to influence users’ perceptions of computers. Although self-reported computer anxiety has declined in the past decade, it continues to be a significant issue in higher education and online courses. More importantly, anxiety seems to be a critical variable in relation to student perceptions of online courses. A substantial amount of work has been done on computer anxiety and affect. In fact, the technology acceptance model (TAM) has been extensively used for such studies where affect and anxiety were considered as antecedents to perceived ease of use. However, few, if any, have investigated the interplay between the two constructs as they influence perceived ease of use and perceived usefulness towards using online systems for learning. In this study, the effects of affect and anxiety (together and alone) on perceptions of an online learning system are investigated. Results demonstrate the interplay that exists between affect and anxiety and their moderating roles on perceived ease of use and perceived usefulness. Interestingly, the results seem to suggest that affect and anxiety may exist simultaneously as two weights on each side of the TAM scale.",
"title": ""
},
{
"docid": "c945ef3a4e223a70212413b4948fcbc0",
"text": "Text generation is a fundamental building block in natural language processing tasks. Existing sequential models performs autoregression directly over the text sequence and have difficulty generating long sentences of complex structures. This paper advocates a simple approach that treats sentence generation as a tree-generation task. By explicitly modelling syntactic structures in a constituent syntactic tree and performing topdown, breadth-first tree generation, our model fixes dependencies appropriately and performs implicit global planning. This is in contrast to transition-based depth-first generation process, which has difficulty dealing with incomplete texts when parsing and also does not incorporate future contexts in planning. Our preliminary results on two generation tasks and one parsing task demonstrate that this is an effective strategy.",
"title": ""
},
{
"docid": "4c49cebd579b2fef196d7ce600b1a044",
"text": "A GPU cluster is a cluster equipped with GPU devices. Excellent acceleration is achievable for computation-intensive tasks (e. g. matrix multiplication and LINPACK) and bandwidth-intensive tasks with data locality (e. g. finite-difference simulation). Bandwidth-intensive tasks such as large-scale FFTs without data locality are harder to accelerate, as the bottleneck often lies with the PCI between main memory and GPU device memory or the communication network between workstation nodes. That means optimizing the performance of FFT for a single GPU device will not improve the overall performance. This paper uses large-scale FFT as an example to show how to achieve substantial speedups for these more challenging tasks on a GPU cluster. Three GPU-related factors lead to better performance: firstly the use of GPU devices improves the sustained memory bandwidth for processing large-size data; secondly GPU device memory allows larger subtasks to be processed in whole and hence reduces repeated data transfers between memory and processors; and finally some costly main-memory operations such as matrix transposition can be significantly sped up by GPUs if necessary data adjustment is performed during data transfers. This technique of manipulating array dimensions during data transfer is the main technical contribution of this paper. These factors (as well as the improved communication library in our implementation) attribute to 24.3x speedup with respect to FFTW and 7x speedup with respect to Intel MKL for 4096 3D single-precision FFT on a 16-node cluster with 32 GPUs. Around 5x speedup with respect to both standard libraries are achieved for double precision.",
"title": ""
},
{
"docid": "7f9be60b9ee4565306a06b6b4d69e8d1",
"text": "In this paper, we propose a novel electromyographic (EMG) control interface to control motion and joints compliance of a supernumerary robotic finger. The supernumerary robotic fingers are a recently introduced class of wearable robotics that provides users additional robotic limbs in order to compensate or augment the existing abilities of natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and perform actions in synergy with the human limbs, the control principles of extra finger should have similar behavior as human's ones including the ability of regulating the compliance. So that, it is important to propose a control interface and to consider the actuators and sensing capabilities of the robotic extra finger compatible to implement stiffness regulation control techniques. We propose EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition to be associated with the motion control of the robotic device and surface one channel EMG electrodes interface to regulate the compliance of the robotic device. We also present an updated version of a robotic extra finger where the adduction/abduction motion is realized through ball bearing and spur gears mechanism. We have validated the proposed interface with two sets of experiments related to compensation and augmentation. In the first set of experiments, different bimanual tasks have been performed with the help of the robotic device and simulating a paretic hand since this novel wearable system can be used to compensate the missing grasping abilities in chronic stroke patients. In the second set, the robotic extra finger is used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface has been used. The obtained results demonstrate that the proposed control interface is intuitive and can successfully be used, not only to control the motion of a supernumerary robotic finger but also to regulate its compliance. The proposed approach can be exploited also for the control of different wearable devices that has to actively cooperate with the human limbs.",
"title": ""
},
{
"docid": "39ccd0efd846c2314da557b73a326e85",
"text": "We address the problem of recognizing situations in images. Given an image, the task is to predict the most salient verb (action), and fill its semantic roles such as who is performing the action, what is the source and target of the action, etc. Different verbs have different roles (e.g. attacking has weapon), and each role can take on many possible values (nouns). We propose a model based on Graph Neural Networks that allows us to efficiently capture joint dependencies between roles using neural networks defined on a graph. Experiments with different graph connectivities show that our approach that propagates information between roles significantly outperforms existing work, as well as multiple baselines. We obtain roughly 3-5% improvement over previous work in predicting the full situation. We also provide a thorough qualitative analysis of our model and influence of different roles in the verbs.",
"title": ""
}
] |
scidocsrr
|
aa7474ccb58694853e353cd597534649
|
Fuzzy Delphi and back-propagation model for sales forecasting in PCB industry
|
[
{
"docid": "f9824ae0b73ebecf4b3a893392e77d67",
"text": "This paper proposes genetic algorithms (GAs) approach to feature discretization and the determination of connection weights for artificial neural networks (ANNs) to predict the stock price index. Previous research proposed many hybrid models of ANN and GA for the method of training the network, feature subset selection, and topology optimization. In most of these studies, however, GA is only used to improve the learning algorithm itself. In this study, GA is employed not only to improve the learning algorithm, but also to reduce the complexity in feature space. GA optimizes simultaneously the connection weights between layers and the thresholds for feature discretization. The genetically evolved weights mitigate the well-known limitations of the gradient descent algorithm. In addition, globally searched feature discretization reduces the dimensionality of the feature space and eliminates irrelevant factors. Experimental results show that GA approach to the feature discretization model outperforms the other two conventional models. q 2000 Published by Elsevier Science Ltd.",
"title": ""
}
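The hybrid idea in the abstract above -- one GA chromosome encoding both per-feature discretization thresholds and the network's connection weights, with classification accuracy as fitness -- can be sketched as follows. The toy data, network size, and mutation-only evolution loop are assumptions; the paper's actual GA operators and topology may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_hidden = 4, 3
X = rng.normal(size=(200, n_feat))                       # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(float)                # toy up/down target

def decode(chrom):
    thr = chrom[:n_feat]                                 # one discretization cut per feature
    w1 = chrom[n_feat:n_feat + n_feat * n_hidden].reshape(n_feat, n_hidden)
    w2 = chrom[n_feat + n_feat * n_hidden:]
    return thr, w1, w2

def fitness(chrom):
    thr, w1, w2 = decode(chrom)
    Xd = (X > thr).astype(float)                         # GA-chosen feature discretization
    h = np.tanh(Xd @ w1)
    pred = (1 / (1 + np.exp(-(h @ w2))) > 0.5).astype(float)
    return (pred == y).mean()                            # classification accuracy as fitness

size = n_feat + n_feat * n_hidden + n_hidden
pop = rng.normal(size=(30, size))
for _ in range(50):                                      # simple mutation-only evolution
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = parents[rng.integers(0, 10, 20)] + 0.1 * rng.normal(size=(20, size))
    pop = np.vstack([parents, children])

print(max(fitness(c) for c in pop))
```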
] |
[
{
"docid": "e19445c2ea8e19002a85ec9ace463990",
"text": "In this paper we propose a system that takes attendance of student and maintaining its records in an academic institute automatically. Manually taking the attendance and maintaining it for a long time makes it difficult task as well as wastes a lot of time. For this reason an efficient system is designed. This system takes attendance with the help of a fingerprint sensor module and all the records are saved on a computer. Fingerprint sensor module and LCD screen are dynamic which can move in the room. In order to mark the attendance, student has to place his/her finger on the fingerprint sensor module. On identification of particular student, his attendance record is updated in the database and he/she is notified through LCD screen. In this system we are going to generate Microsoft excel attendance report on computer. This report will generate automatically after 15 days (depends upon user). This report will be sent to the respected HOD, teacher and student’s parents email Id.",
"title": ""
},
{
"docid": "b91b42da0e7ffe838bf9d7ab0bd54bea",
"text": "When creating line drawings, artists frequently depict intended curves using multiple, tightly clustered, or overdrawn, strokes. Given such sketches, human observers can readily envision these intended, aggregate, curves, and mentally assemble the artist's envisioned 2D imagery. Algorithmic stroke consolidation---replacement of overdrawn stroke clusters by corresponding aggregate curves---can benefit a range of sketch processing and sketch-based modeling applications which are designed to operate on consolidated, intended curves. We propose StrokeAggregator, a novel stroke consolidation method that significantly improves on the state of the art, and produces aggregate curve drawings validated to be consistent with viewer expectations. Our framework clusters strokes into groups that jointly define intended aggregate curves by leveraging principles derived from human perception research and observation of artistic practices. We employ these principles within a coarse-to-fine clustering method that starts with an initial clustering based on pairwise stroke compatibility analysis, and then refines it by analyzing interactions both within and in-between clusters of strokes. We facilitate this analysis by computing a common 1D parameterization for groups of strokes via common aggregate curve fitting. We demonstrate our method on a large range of line drawings, and validate its ability to generate consolidated drawings that are consistent with viewer perception via qualitative user evaluation, and comparisons to manually consolidated drawings and algorithmic alternatives.",
"title": ""
},
{
"docid": "ea1e84dfb1889826b0356dcd85182ec4",
"text": "With the support of the wearable devices, healthcare services started a new phase in serving patients need. The new technology adds more facilities and luxury to the healthcare services, Also changes patients' lifestyles from the traditional way of monitoring to the remote home monitoring. Such new approach faces many challenges related to security as sensitive data get transferred through different type of channels. They are four main dimensions in terms of security scope such as trusted sensing, computation, communication, privacy and digital forensics. In this paper we will try to focus on the security challenges of the wearable devices and IoT and their advantages in healthcare sectors.",
"title": ""
},
{
"docid": "a7b9505a029e58531f250c5728dbeef4",
"text": "This paper proposes an object recognition approach intended for extracting, analyzing and clustering of features from RGB image views from given objects. Extracted features are matched with features in learned object models and clustered in Hough-space to find a consistent object pose. Hypotheses for valid poses are verified by computing a homography from detected features. Using that homography features are back projected onto the input image and the resulting area is checked for possible presence of other objects. This approach is applied by our team homer[at]UniKoblenz in the RoboCup[at]Home league. Besides the proposed framework, this work offers the computer vision community with online programs available as open source software.",
"title": ""
},
{
"docid": "28b493b0f30c6605ff0c22ccea5d2ace",
"text": "A serious threat today is malicious executables. It is designed to damage computer system and some of them spread over network without the knowledge of the owner using the system. Two approaches have been derived for it i.e. Signature Based Detection and Heuristic Based Detection. These approaches performed well against known malicious programs but cannot catch the new malicious programs. Different researchers have proposed methods using data mining and machine learning for detecting new malicious programs. The method based on data mining and machine learning has shown good results compared to other approaches. This work presents a static malware detection system using data mining techniques such as Information Gain, Principal component analysis, and three classifiers: SVM, J48, and Naïve Bayes. For overcoming the lack of usual anti-virus products, we use methods of static analysis to extract valuable features of Windows PE file. We extract raw features of Windows executables which are PE header information, DLLs, and API functions inside each DLL of Windows PE file. Thereafter, Information Gain, calling frequencies of the raw features are calculated to select valuable subset features, and then Principal Component Analysis is used for dimensionality reduction of the selected features. By adopting the concepts of machine learning and data-mining, we construct a static malware detection system which has a detection rate of 99.6%.",
"title": ""
},
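A rough sketch of the described detection pipeline on placeholder data: feature selection (information gain is approximated here by mutual information), PCA for dimensionality reduction, then one of the classifiers (an SVM). The random binary matrix below stands in for PE-header / DLL / API-call features and is an assumption, not the paper's dataset.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 120)).astype(float)   # stand-in binary API/DLL indicators
y = rng.integers(0, 2, size=300)                        # 1 = malware, 0 = benign (placeholder)

clf = Pipeline([
    ('select', SelectKBest(mutual_info_classif, k=40)),  # feature selection step
    ('pca', PCA(n_components=10)),                       # dimensionality reduction
    ('svm', SVC(kernel='rbf')),                          # one of the three classifiers
])
print(cross_val_score(clf, X, y, cv=5).mean())
```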
{
"docid": "cff3b4f6db26e66893a9db95fb068ef1",
"text": "In this paper, we consider the task of text categorization as a graph classification problem. By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches.",
"title": ""
},
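A small sketch of the graph-of-words representation with k-core reduction described above, assuming a sliding window of size 3 and the networkx library; the frequent-subgraph feature extraction step is left out for brevity.

```python
import networkx as nx

def graph_of_words(tokens, window=3):
    g = nx.Graph()
    g.add_nodes_from(tokens)
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:      # co-occurrence within the sliding window
            if v != w:
                g.add_edge(w, v)
    return g

doc = "the cat sat on the mat while the dog sat on the rug".split()
g = graph_of_words(doc)
main_core = nx.k_core(g)                        # densest part of the graph (main core)
print(sorted(main_core.nodes()))
```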
{
"docid": "933af670e35c8271a483f795cadf62f9",
"text": "We perform modal analysis of short-term swing dynamics in multi-machine power systems. The analysis is based on the so-called Koopman operator, a linear, infinite-dimensional operator that is defined for any nonlinear dynamical system and captures full information of the system. Modes derived through spectral analysis of the Koopman operator, called Koopman modes, provide a nonlinear extension of linear oscillatory modes. Computation of the Koopman modes extracts single-frequency, spatial modes embedded in non-stationary data of short-term, nonlinear swing dynamics, and it provides a novel technique for identification of coherent swings and machines.",
"title": ""
},
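Koopman modes are commonly approximated from snapshot data with dynamic mode decomposition (DMD); the compact sketch below uses a synthetic two-oscillation signal in place of real swing measurements, and the rank truncation is an arbitrary assumption.

```python
import numpy as np

t = np.linspace(0, 10, 201)
space = np.arange(8)[:, None]                   # eight measurement points ("buses")
snapshots = np.sin(0.5 * space + 1.3 * t) + 0.5 * np.sin(1.5 * space + 3.7 * t)

X, Y = snapshots[:, :-1], snapshots[:, 1:]      # paired snapshot matrices
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 4                                           # truncation rank (assumption)
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1 / s)
eigvals, W = np.linalg.eig(A_tilde)
modes = Y @ Vh.conj().T @ np.diag(1 / s) @ W    # DMD (approximate Koopman) modes
freqs = np.angle(eigvals) / (t[1] - t[0])       # modal frequencies in rad/s
print(np.sort(np.abs(freqs)))
```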
{
"docid": "17db3273504bba730c9e43c8ea585250",
"text": "In this paper, License plate localization and recognition (LPLR) is presented. It uses image processing and character recognition technology in order to identify the license number plates of the vehicles automatically. This system is considerable interest because of its good application in traffic monitoring systems, surveillance devices and all kind of intelligent transport system. The objective of this work is to design algorithm for License Plate Localization and Recognition (LPLR) of Tanzanian License Plates. The plate numbers used are standard ones with black and yellow or black and white colors. Also, the letters and numbers are placed in the same row (identical vertical levels), resulting in frequent changes in the horizontal intensity. Due to that, the horizontal changes of the intensity have been easily detected, since the rows that contain the number plates are expected to exhibit many sharp variations. Hence, the edge finding method is exploited to find the location of the plate. To increase readability of the plate number, part of the image was enhanced, noise removal and smoothing median filter is used due to easy development. The algorithm described in this paper is implemented using MATLAB 7.11.0(R2010b).",
"title": ""
},
{
"docid": "700c016add5f44c3fbd560d84b83b290",
"text": "This paper describes a novel framework, called I<scp>n</scp>T<scp>ens</scp>L<scp>i</scp> (\"intensely\"), for producing fast single-node implementations of dense tensor-times-matrix multiply (T<scp>tm</scp>) of arbitrary dimension. Whereas conventional implementations of T<scp>tm</scp> rely on explicitly converting the input tensor operand into a matrix---in order to be able to use any available and fast general matrix-matrix multiply (G<scp>emm</scp>) implementation---our framework's strategy is to carry out the T<scp>tm</scp> <i>in-place</i>, avoiding this copy. As the resulting implementations expose tuning parameters, this paper also describes a heuristic empirical model for selecting an optimal configuration based on the T<scp>tm</scp>'s inputs. When compared to widely used single-node T<scp>tm</scp> implementations that are available in the Tensor Toolbox and Cyclops Tensor Framework (C<scp>tf</scp>), In-TensLi's in-place and input-adaptive T<scp>tm</scp> implementations achieve 4× and 13× speedups, showing Gemm-like performance on a variety of input sizes.",
"title": ""
},
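The operation being optimized can be illustrated in NumPy: a mode-1 tensor-times-matrix product computed directly with einsum (no explicit matricization) versus the conventional unfold-GEMM-fold route. Shapes are arbitrary, and this is only a functional illustration, not the paper's tuned implementation.

```python
import numpy as np

X = np.random.rand(30, 40, 50)                      # dense order-3 tensor
M = np.random.rand(25, 40)                          # matrix applied along mode 1

# Direct, "in-place" style contraction of mode 1: no explicit matricization.
Y_direct = np.einsum('jr,irk->ijk', M, X)

# Conventional route: unfold along mode 1, GEMM, then fold back.
X1 = np.moveaxis(X, 1, 0).reshape(40, -1)           # mode-1 unfolding
Y_fold = np.moveaxis((M @ X1).reshape(25, 30, 50), 0, 1)

print(np.allclose(Y_direct, Y_fold))                # True: both compute the same TTM
```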
{
"docid": "7a1e32dc80550704207c5e0c7e73da26",
"text": "Stock markets are affected by many uncertainties and interrelated economic and political factors at both local and global levels. The key to successful stock market forecasting is achieving best results with minimum required input data. To determine the set of relevant factors for making accurate predictions is a complicated task and so regular stock market analysis is very essential. More specifically, the stock market’s movements are analyzed and predicted in order to retrieve knowledge that could guide investors on when to buy and sell. It will also help the investor to make money through his investment in the stock market. This paper surveys large number of resources from research papers, web-sources, company reports and other available sources.",
"title": ""
},
{
"docid": "db2cd0762b560faf3aaf5e27ad3e13a1",
"text": "Soil is an excellent niche of growth of many microorganisms: protozoa, fungi, viruses, and bacteria. Some microorganisms are able to colonize soil surrounding plant roots, the rhizosphere, making them come under the influence of plant roots (Hiltner 1904; Kennedy 2005). These bacteria are named rhizobacteria. Rhizobacteria are rhizosphere competent bacteria able to multiply and colonize plant roots at all stages of plant growth, in the presence of a competing microflora (Antoun and Kloepper 2001) where they are in contact with other microorganisms. This condition is wildly encountered in natural, non-autoclaved soils. Generally, interactions between plants and microorganisms can be classified as pathogenic, saprophytic, and beneficial (Lynch 1990). Beneficial interactions involve plant growth promoting rhizobacteria (PGPR), generally refers to a group of soil and rhizosphere free-living bacteria colonizing roots in a competitive environment and exerting a beneficial effect on plant growth (Kloepper and Schroth 1978; Lazarovits and Nowak 1997; Kloepper et al. 1989; Kloepper 2003; Bakker et al. 2007). However, numerous researchers tend to enlarge this restrictive definition of rhizobacteria as any root-colonizing bacteria and consider endophytic bacteria in symbiotic association: Rhizobia with legumes and the actinomycete Frankia associated with some phanerogams as PGPR genera. Among PGPRs are representatives of the following genera: Acinetobacter, Agrobacterium, Arthrobacter, Azoarcus, Azospirillum, Azotobacter, Bacillus, Burkholderia, Enterobacter, Klebsiella, Pseudomonas, Rhizobium, Serratia, and Thiobacillus. Some of these genera such as Azoarcus spp., Herbaspirillum, and Burkholderia include endophytic species.",
"title": ""
},
{
"docid": "0b2f0b36bb458221b340b5e4a069fe2b",
"text": "The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, and abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. The abstraction process was facilitated through close collaboration with laboratorybased immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population based algorithm, with each agent in the system represented as an ‘artificial DC’. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.",
"title": ""
},
{
"docid": "19a43980ea19d374c5fd4ea6c7dd8221",
"text": "Many authors have speculated about a close relationship between vocal expression of emotions and musical expression of emotions. but evidence bearing on this relationship has unfortunately been lacking. This review of 104 studies of vocal expression and 41 studies of music performance reveals similarities between the 2 channels concerning (a) the accuracy with which discrete emotions were communicated to listeners and (b) the emotion-specific patterns of acoustic cues used to communicate each emotion. The patterns are generally consistent with K. R. Scherer's (1986) theoretical predictions. The results can explain why music is perceived as expressive of emotion, and they are consistent with an evolutionary perspective on vocal expression of emotions. Discussion focuses on theoretical accounts and directions for future research.",
"title": ""
},
{
"docid": "ef640dfcbed4b93413b03cd5c2ec3859",
"text": "MaxStream is a federated stream processing system that seamlessly integrates multiple autonomous and heterogeneous Stream Processing Engines (SPEs) and databases. In this paper, we propose to demonstrate the key features of MaxStream using two application scenarios, namely the Sales Map & Spikes business monitoring scenario and the Linear Road Benchmark, each with a different set of requirements. More specifically, we will show how the MaxStream Federator can translate and forward the application queries to two different commercial SPEs (Coral8 and StreamBase), as well as how it does so under various persistency requirements.",
"title": ""
},
{
"docid": "e6a332a8dab110262beb1fc52b91945c",
"text": "Models are crucial in the engineering design process because they can be used for both the optimization of design parameters and the prediction of performance. Thus, models can significantly reduce design, development and optimization costs. This paper proposes a novel equivalent electrical model for Darrieus-type vertical axis wind turbines (DTVAWTs). The proposed model was built from the mechanical description given by the Paraschivoiu double-multiple streamtube model and is based on the analogy between mechanical and electrical circuits. This work addresses the physical concepts and theoretical formulations underpinning the development of the model. After highlighting the working principle of the DTVAWT, the step-by-step development of the model is presented. For assessment purposes, simulations of aerodynamic characteristics and those of corresponding electrical components are performed and compared.",
"title": ""
},
{
"docid": "6982c79b6fa2cda4f0323421f8e3b4be",
"text": "We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task – predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.",
"title": ""
},
{
"docid": "759bb2448f1d34d3742fec38f273135e",
"text": "Although below-knee prostheses have been commercially available for some time, today's devices are completely passive, and consequently, their mechanical properties remain fixed with walking speed and terrain. A lack of understanding of the ankle-foot biomechanics and the dynamic interaction between an amputee and a prosthesis is one of the main obstacles in the development of a biomimetic ankle-foot prosthesis. In this paper, we present a novel ankle-foot emulator system for the study of human walking biomechanics. The emulator system is comprised of a high performance, force-controllable, robotic ankle-foot worn by an amputee interfaced to a mobile computing unit secured around his waist. We show that the system is capable of mimicking normal ankle-foot walking behaviour. An initial pilot study supports the hypothesis that the emulator may provide a more natural gait than a conventional passive prosthesis",
"title": ""
},
{
"docid": "cf42ab9460b2665b6537d6172b4ef3fb",
"text": "Small drones are being utilized in monitoring, transport, safety and disaster management, and other domains. Envisioning that drones form autonomous networks incorporated into the air traffic, we describe a high-level architecture for the design of a collaborative aerial system consisting of drones with on-board sensors and embedded processing, sensing, coordination, and networking capabilities. We implement a multi-drone system consisting of quadcopters and demonstrate its potential in disaster assistance, search and rescue, and aerial monitoring. Furthermore, we illustrate design challenges and present potential solutions based on the lessons learned so far.",
"title": ""
},
{
"docid": "afbb8cf8f580d3100fe495066bc09349",
"text": "D2D communication has been proposed as a transmission approach to improve the resource efficiency and lighten the heavy load of the base station(BS) in LTE-Advanced system. In D2D communication, resource efficiency is seriously influenced by the resource allocation scheme. Additionally, the selfish behaviors of user equipment(UE), such as Unknown Channel Quality(UCQ) Problem potentially harm the system efficiency. For instance, we observed that UEs may report their experienced D2D quality untruthfully in order to gain unfair advantages in resource allocation. The so-called UCQ issue imposes a negative impact on the resource utilization efficiency. In this paper, we propose to use Game Theory to analyze this issue. First, we studied the resource allocating scheme concerning the benefit of the BS, and then analyze the system efficiency and the equilibrium. Second, we discussed the UCQ problem in D2D communication. We proposed a contract-based mechanism to resolve the UCQ problem by eliminating the incentive of UEs to report untruthfully with designed service contracts. The simulation results represents the feasibility and the effectiveness of our approach.",
"title": ""
},
{
"docid": "ed66f53511c404a2c8281ec74b86a1d4",
"text": "This paper is to present method to mitigate arcing defect encountered at pad etch. The problem was detected during wafer disposition due to equipment alarm. Based on observation, this burnt- like defect material, is observed to have inhibited the wafer surface and exposing the top metal line. The inclination was observed mainly during main etch step. The wafer is believed to have encountered plasma instability during transition from Main Etch (ME) to Over Etch (OE) step. However, this is only detected during backside helium leak alarm. This arcing defect was caused by several factors, of which were related to recipe, wafer condition, processing tool and product design. The approach taken was to mitigate these issues where recipe optimization and tighter equipment parameter control were implemented. The design of experiment was presented to find the optimal setting for backside helium flow and chucking voltage. Apart from that, chamber mix run also plays an important role.",
"title": ""
}
] |
scidocsrr
|
524e09572d7d88989ee2f2d8375170e9
|
Applying deep learning to classify pornographic images and videos
|
[
{
"docid": "c7dd6824c8de3e988bb7f58141458ef9",
"text": "We present a method to classify images into different categories of pornographic content to create a system for filtering pornographic images from network traffic. Although different systems for this application were presented in the past, most of these systems are based on simple skin colour features and have rather poor performance. Recent advances in the image recognition field in particular for the classification of objects have shown that bag-of-visual-words-approaches are a good method for many image classification problems. The system we present here, is based on this approach, uses a task-specific visual vocabulary and is trained and evaluated on an image database of 8500 images from different categories. It is shown that it clearly outperforms earlier systems on this dataset and further evaluation on two novel web-traffic collections shows the good performance of the proposed system.",
"title": ""
}
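A bag-of-visual-words pipeline in the spirit of the system above: local descriptors are quantized against a learned visual vocabulary and the resulting per-image histograms feed an SVM. Random descriptors stand in for real local features, and the vocabulary size is an arbitrary assumption; the paper's task-specific vocabulary and image database are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, descr_per_image, dim, vocab_size = 60, 100, 32, 50

descriptors = rng.normal(size=(n_images, descr_per_image, dim))  # stand-in local descriptors
labels = rng.integers(0, 2, size=n_images)                       # e.g. pornographic vs. benign

# Learn the visual vocabulary by clustering all training descriptors.
vocab = KMeans(n_clusters=vocab_size, n_init=5, random_state=0)
vocab.fit(descriptors.reshape(-1, dim))

def bovw_histogram(img_descr):
    words = vocab.predict(img_descr)                              # quantize to visual words
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d) for d in descriptors])
clf = SVC(kernel='rbf').fit(X, labels)
print(clf.score(X, labels))                                       # training accuracy on toy data
```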
] |
[
{
"docid": "2191ed336872593e0abcbfea60b0502b",
"text": "The modern mobile communication systems requires high gain, large bandwidth and minimal size antenna's that are capable of providing better performance over a wide range of frequency spectrum. This requirement leads to the design of Microstrip patch antenna. This paper proposes the design of 4-Element microstrip patch antenna array which uses the corporate feed technique for excitation. Low dielectric constant substrates are generally preferred for maximum radiation. Thus it prefers Taconic as a dielectric substrate. Desired patch antenna design is initially simulated by using high frequency simulation software SONNET and FEKO and patch antenna is designed as per requirements. Antenna dimensions such as Length (L), Width (W) and substrate Dielectric Constant (εr) and parameters like Return Loss, Gain and Impedance are calculated using high frequency simulation software. The antenna has been designed for the range 9-11 GHz. Hence this antenna is highly suitable for X-band applications.",
"title": ""
},
{
"docid": "b9f2639c0cda5c98865d9c6fc9003104",
"text": "We propose a statistical model applicable to character level language modeling and show that it is a good fit for both, program source code and English text. The model is parameterized by a program from a domain-specific language (DSL) that allows expressing non-trivial data dependencies. Learning is done in two phases: (i) we synthesize a program from the DSL, essentially learning a good representation for the data, and (ii) we learn parameters from the training data – the process is done via counting, as in simple language models such as n-gram. Our experiments show that the precision of our model is comparable to that of neural networks while sharing a number of advantages with n-gram models such as fast query time and the capability to quickly add and remove training data samples. Further, the model is parameterized by a program that can be manually inspected, understood and updated, addressing a major problem of neural networks.",
"title": ""
},
{
"docid": "0b146cb20ed80b17f607251fba7e25d7",
"text": "Presence is widely accepted as the key concept to be considered in any research involving human interaction with Virtual Reality (VR). Since its original description, the concept of presence has developed over the past decade to be considered by many researchers as the essence of any experience in a virtual environment. The VR generating systems comprise two main parts: a technological component and a psychological experience. The different relevance given to them produced two different but coexisting visions of presence: the rationalist and the psychological/ecological points of view. The rationalist point of view considers a VR system as a collection of specific machines with the necessity of the inclusion of the concept of presence. The researchers agreeing with this approach describe the sense of presence as a function of the experience of a given medium (Media Presence). The main result of this approach is the definition of presence as the perceptual illusion of non-mediation produced by means of the disappearance of the medium from the conscious attention of the subject. At the other extreme, there is the psychological or ecological perspective (Inner Presence). Specifically, this perspective considers presence as a neuropsychological phenomenon, evolved from the interplay of our biological and cultural inheritance, whose goal is the control of the human activity. Given its key role and the rate at which new approaches to understanding and examining presence are appearing, this chapter draws together current research on presence to provide an up to date overview of the most widely accepted approaches to its understanding and measurement.",
"title": ""
},
{
"docid": "4c7e66c0447f7eb527396c369dfdeb19",
"text": "What are the functions of curiosity? What are the mechanisms of curiosity-driven learning? We approach these questions about the living using concepts and tools from machine learning and developmental robotics. We argue that curiosity-driven learning enables organisms to make discoveries to solve complex problems with rare or deceptive rewards. By fostering exploration and discovery of a diversity of behavioural skills, and ignoring these rewards, curiosity can be efficient to bootstrap learning when there is no information, or deceptive information, about local improvement towards these problems. We also explain the key role of curiosity for efficient learning of world models. We review both normative and heuristic computational frameworks used to understand the mechanisms of curiosity in humans, conceptualizing the child as a sense-making organism. These frameworks enable us to discuss the bi-directional causal links between curiosity and learning, and to provide new hypotheses about the fundamental role of curiosity in self-organizing developmental structures through curriculum learning. We present various developmental robotics experiments that study these mechanisms in action, both supporting these hypotheses to understand better curiosity in humans and opening new research avenues in machine learning and artificial intelligence. Finally, we discuss challenges for the design of experimental paradigms for studying curiosity in psychology and cognitive neuroscience.",
"title": ""
},
{
"docid": "714d0183b4c18611d31ea32005886ca6",
"text": "This paper explores the issue of automatically generated ungrammatical data and its use in error detection, with a focus on the task of classifying a sentence as grammatical or ungrammatical. We present an error generation tool called GenERRate and show how GenERRate can be used to improve the performance of a classifier on learner data. We describe initial attempts to replicate Cambridge Learner Corpus errors using GenERRate.",
"title": ""
},
{
"docid": "590e4b3726aa1f92232451432fb7a36b",
"text": "Necrophagous insects are important in the decomposition of cadavers. The close association between insects and corpses and the use of insects in medicocriminal investigations is the subject of forensic entomology. The present paper reviews the historical background of this discipline, important postmortem processes, and discusses the scientific basis underlying attempts to determine the time interval since death. Using medical techniques, such as the measurement of body temperature or analysing livor and rigor mortis, time since death can only be accurately measured for the first two or three days after death. In contrast, by calculating the age of immature insect stages feeding on a corpse and analysing the necrophagous species present, postmortem intervals from the first day to several weeks can be estimated. These entomological methods may be hampered by difficulties associated with species identification, but modern DNA techniques are contributing to the rapid and authoritative identification of necrophagous insects. Other uses of entomological data include the toxicological examination of necrophagous larvae from a corpse to identify and estimate drugs and toxicants ingested by the person when alive and the proof of possible postmortem manipulations. Forensic entomology may even help in investigations dealing with people who are alive but in need of care, by revealing information about cases of neglect.",
"title": ""
},
{
"docid": "a5d16384d928da7bcce7eeac45f59e2e",
"text": "Innovative rechargeable batteries that can effectively store renewable energy, such as solar and wind power, urgently need to be developed to reduce greenhouse gas emissions. All-solid-state batteries with inorganic solid electrolytes and electrodes are promising power sources for a wide range of applications because of their safety, long-cycle lives and versatile geometries. Rechargeable sodium batteries are more suitable than lithium-ion batteries, because they use abundant and ubiquitous sodium sources. Solid electrolytes are critical for realizing all-solid-state sodium batteries. Here we show that stabilization of a high-temperature phase by crystallization from the glassy state dramatically enhances the Na(+) ion conductivity. An ambient temperature conductivity of over 10(-4) S cm(-1) was obtained in a glass-ceramic electrolyte, in which a cubic Na(3)PS(4) crystal with superionic conductivity was first realized. All-solid-state sodium batteries, with a powder-compressed Na(3)PS(4) electrolyte, functioned as a rechargeable battery at room temperature.",
"title": ""
},
{
"docid": "0396940ea3ced8d79ba3eda1fae2c469",
"text": "Adblocking tools like Adblock Plus continue to rise in popularity, potentially threatening the dynamics of advertising revenue streams. In response, a number of publishers have ramped up efforts to develop and deploy mechanisms for detecting and/or counter-blocking adblockers (which we refer to as anti-adblockers), effectively escalating the online advertising arms race. In this paper, we develop a scalable approach for identifying third-party services shared across multiple websites and use it to provide a first characterization of antiadblocking across the Alexa Top-5K websites. We map websites that perform anti-adblocking as well as the entities that provide anti-adblocking scripts. We study the modus operandi of these scripts and their impact on popular adblockers. We find that at least 6.7% of websites in the Alexa Top-5K use anti-adblocking scripts, acquired from 12 distinct entities – some of which have a direct interest in nourishing the online advertising industry.",
"title": ""
},
{
"docid": "b98c34a4be7f86fb9506a6b1620b5d3e",
"text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.",
"title": ""
},
{
"docid": "ab0f8feac4000464d406369bea87955a",
"text": "Modern operating system kernels employ address space layout randomization (ASLR) to prevent control-flow hijacking attacks and code-injection attacks. While kernel security relies fundamentally on preventing access to address information, recent attacks have shown that the hardware directly leaks this information. Strictly splitting kernel space and user space has recently been proposed as a theoretical concept to close these side channels. However, this is not trivially possible due to architectural restrictions of the x86 platform. In this paper we present KAISER, a system that overcomes limitations of x86 and provides practical kernel address isolation. We implemented our proof-of-concept on top of the Linux kernel, closing all hardware side channels on kernel address information. KAISER enforces a strict kernel and user space isolation such that the hardware does not hold any information about kernel addresses while running in user mode. We show that KAISER protects against double page fault attacks, prefetch side-channel attacks, and TSX-based side-channel attacks. Finally, we demonstrate that KAISER has a runtime overhead of only 0.28%.",
"title": ""
},
{
"docid": "4ddad3c97359faf4b927167800fe77be",
"text": "Micro-expressions are facial expressions which are fleeting and reveal genuine emotions that people try to conceal. These are important clues for detecting lies and dangerous behaviors and therefore have potential applications in various fields such as the clinical field and national security. However, recognition through the naked eye is very difficult. Therefore, researchers in the field of computer vision have tried to develop micro-expression detection and recognition algorithms but lack spontaneous micro-expression databases. In this study, we attempted to create a database of spontaneous micro-expressions which were elicited from neutralized faces. Based on previous psychological studies, we designed an effective procedure in lab situations to elicit spontaneous micro-expressions and analyzed the video data with care to offer valid and reliable codings. From 1500 elicited facial movements filmed under 60fps, 195 micro-expressions were selected. These samples were coded so that the first, peak and last frames were tagged. Action units (AUs) were marked to give an objective and accurate description of the facial movements. Emotions were labeled based on psychological studies and participants' self-report to enhance the validity.",
"title": ""
},
{
"docid": "fa240a48947a43b9130ee7f48c3ad463",
"text": "Content distribution on today's Internet operates primarily in two modes: server-based and peer-to-peer (P2P). To leverage the advantages of both modes while circumventing their key limitations, a third mode: peer-to-server/peer (P2SP) has emerged in recent years. Although P2SP can provide efficient hybrid server-P2P content distribution, P2SP generally works in a closed manner by only utilizing its private owned servers to accelerate its private organized peer swarms. Consequently, P2SP still has its limitations in both content abundance and server bandwidth. To this end, the fourth mode (or says a generalized mode of P2SP) has appeared as \"open-P2SP\" that integrates various third-party servers, contents, and data transfer protocols all over the Internet into a large, open, and federated P2SP platform. In this paper, based on a large-scale commercial open-P2SP system named \"QQXuanfeng\" , we investigate the key challenging problems, practical designs and real-world performances of open-P2SP. Such \"white-box\" study of open-P2SP provides solid experiences and helpful heuristics to the designers of similar systems.",
"title": ""
},
{
"docid": "e5e3cbe942723ef8e3524baf56121bf5",
"text": "Requirements prioritization is recognized as an important activity in product development. In this paper, we describe the current state of requirements prioritization practices in two case companies and present the practical challenges involved. Our study showed that requirements prioritization is an ambiguous concept and current practices in the companies are informal. Requirements prioritization requires complex context-specific decision-making and must be performed iteratively in many phases during development work. Practitioners are seeking more systematic ways to prioritize requirements but they find it difficult to pay attention to all the relevant factors that have an effect on priorities and explicitly to draw different stakeholder views together. In addition, practitioners need more information about real customer preferences.",
"title": ""
},
{
"docid": "a377b31c0cb702c058f577ca9c3c5237",
"text": "Problem statement: Extensive research efforts in the area of Natural L anguage Processing (NLP) were focused on developing reading comprehens ion Question Answering systems (QA) for Latin based languages such as, English, French and German . Approach: However, little effort was directed towards the development of such systems for bidirec tional languages such as Arabic, Urdu and Farsi. In general, QA systems are more sophisticated and more complex than Search Engines (SE) because they seek a specific and somewhat exact answer to the query. Results: Existing Arabic QA system including the most recent described excluded one or both types of questions (How and Why) from their work because of the difficulty of handling these questions. In this study, we present a new approach and a new questio nanswering system (QArabPro) for reading comprehensi on texts in Arabic. The overall accuracy of our system is 84%. Conclusion/Recommendations: These results are promising compared to existing systems. Our system handles all types of questions including (How and why).",
"title": ""
},
{
"docid": "280c39aea4584e6f722607df68ee28dc",
"text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.",
"title": ""
},
{
"docid": "231d8ef95d02889d70000d70d8743004",
"text": "Last decade witnessed a lot of research in the field of sentiment analysis. Understanding the attitude and the emotions that people express in written text proved to be really important and helpful in sociology, political science, psychology, market research, and, of course, artificial intelligence. This paper demonstrates a rule-based approach to clause-level sentiment analysis of reviews in Ukrainian. The general architecture of the implemented sentiment analysis system is presented, the current stage of research is described and further work is explained. The main emphasis is made on the design of rules for computing sentiments.",
"title": ""
},
{
"docid": "422adf480622a0b6011c8d0941767ba9",
"text": "The paper presents a method for the calculus of the currents in the elementary conductors and the additional winding losses for high power a.c. machines. The accuracy method estimation and the results for a hydro-generator of 216 MW validate the proposed method for the design of the Roebel bars.",
"title": ""
},
{
"docid": "9665d430c2483451ee705f0263c151a0",
"text": "Radio spectrum needed for applications such as mobile telephony, digital video broadcasting (DVB), wireless local area networks (WiFi), wireless sensor networks (ZigBee), and internet of things is enormous and continues to grow exponentially. Since spectrum is limited and the current usage can be inefficient, cognitive radio paradigm has emerged to exploit the licensed and/or underutilized spectrum much more effectively. In this article, we present the motivation for and details of cognitive radio. A critical requirement for cognitive radio is the accurate, real-time estimation of spectrum usage. We thus review various spectrum sensing techniques, propagation effects, interference modeling, spatial randomness, upper layer details, and several existing cognitive radio standards.",
"title": ""
},
{
"docid": "0d22f929d72e44c6bf2902a753c8d79b",
"text": "Gödel's theorem may be demonstrated using arguments having an information-theoretic flavor. In such an approach it is possible to argue that if a theorem contains more information than a given set of axioms, then it is impossible for the theorem to be derived from the axioms. In contrast with the traditional proof based on the paradox of the liar, this new viewpoint suggests that the incompleteness phenomenon discovered by Gödel is natural and widespread rather than pathological and unusual.",
"title": ""
},
{
"docid": "dc72881043c7aa01ecec7bb7edfa8daf",
"text": "Image colorization is the task to color a grayscale image with limited color cues. In this work, we present a novel method to perform image colorization using sparse representation. Our method first trains an over-complete dictionary in YUV color space. Then taking a grayscale image and a small subset of color pixels as inputs, our method colorizes overlapping image patches via sparse representation; it is achieved by seeking sparse representations of patches that are consistent with both the grayscale image and the color pixels. After that, we aggregate the colorized patches with weights to get an intermediate result. This process iterates until the image is properly colorized. Experimental results show that our method leads to high-quality colorizations with small number of given color pixels. To demonstrate one of the applications of the proposed method, we apply it to transfer the color of one image onto another to obtain a visually pleasing image.",
"title": ""
}
] |
scidocsrr
|
ffb70857cc49a30ecb2ebb35f81a1b77
|
Predicting the Optimal Spacing of Study: A Multiscale Context Model of Memory
|
[
{
"docid": "3ade96c73db1f06d7e0c1f48a0b33387",
"text": "To achieve enduring retention, people must usually study information on multiple occasions. How does the timing of study events affect retention? Prior research has examined this issue only in a spotty fashion, usually with very short time intervals. In a study aimed at characterizing spacing effects over significant durations, more than 1,350 individuals were taught a set of facts and--after a gap of up to 3.5 months--given a review. A final test was administered at a further delay of up to 1 year. At any given test delay, an increase in the interstudy gap at first increased, and then gradually reduced, final test performance. The optimal gap increased as test delay increased. However, when measured as a proportion of test delay, the optimal gap declined from about 20 to 40% of a 1-week test delay to about 5 to 10% of a 1-year test delay. The interaction of gap and test delay implies that many educational practices are highly inefficient.",
"title": ""
}
] |
[
{
"docid": "115d3bc01e9b7fe41bdd9fc987c8676c",
"text": "A novel switching median filter incorporating with a powerful impulse noise detection method, called the boundary discriminative noise detection (BDND), is proposed in this paper for effectively denoising extremely corrupted images. To determine whether the current pixel is corrupted, the proposed BDND algorithm first classifies the pixels of a localized window, centering on the current pixel, into three groups-lower intensity impulse noise, uncorrupted pixels, and higher intensity impulse noise. The center pixel will then be considered as \"uncorrupted,\" provided that it belongs to the \"uncorrupted\" pixel group, or \"corrupted.\" For that, two boundaries that discriminate these three groups require to be accurately determined for yielding a very high noise detection accuracy-in our case, achieving zero miss-detection rate while maintaining a fairly low false-alarm rate, even up to 70% noise corruption. Four noise models are considered for performance evaluation. Extensive simulation results conducted on both monochrome and color images under a wide range (from 10% to 90%) of noise corruption clearly show that our proposed switching median filter substantially outperforms all existing median-based filters, in terms of suppressing impulse noise while preserving image details, and yet, the proposed BDND is algorithmically simple, suitable for real-time implementation and application.",
"title": ""
},
{
"docid": "9864bce09ff74218fb817aab62e70081",
"text": "Nowadays, sentiment analysis methods become more and more popular especially with the proliferation of social media platform users number. In the same context, this paper presents a sentiment analysis approach which can faithfully translate the sentimental orientation of Arabic Twitter posts, based on a novel data representation and machine learning techniques. The proposed approach applied a wide range of features: lexical, surface-form, syntactic, etc. We also made use of lexicon features inferred from two Arabic sentiment words lexicons. To build our supervised sentiment analysis system, we use several standard classification methods (Support Vector Machines, K-Nearest Neighbour, Naïve Bayes, Decision Trees, Random Forest) known by their effectiveness over such classification issues.\n In our study, Support Vector Machines classifier outperforms other supervised algorithms in Arabic Twitter sentiment analysis. Via an ablation experiments, we show the positive impact of lexicon based features on providing higher prediction performance.",
"title": ""
},
{
"docid": "5046b1c5f72fd28026005f4a80f864a6",
"text": "The BAF is a corpus of English and French translations, hand-aligned at the sentence level, which was developed by the University of Montreal's RALI laboratory, within the \"Action de recherche concertée\" (ARC) A2, a cooperative research project initiated and financed by the AUPELF-UREF. The corpus, which totals approximately 800 000 words, is primarily intended as an evaluation tool in the development of automatic bilingual text alignment method. In this paper, we discuss why this corpus was assembled, how it was produced, and what it contains. We also describe some of the computer tools that were developed and used in the process.",
"title": ""
},
{
"docid": "c9299fb17f4c7f5cc4471b22a39c8231",
"text": "Touch-based tablet UIs provide few shortcut mechanisms for rapid command selection; as a result, command selection on tablets often requires slow traversal of menus. We developed a new selection technique for multi-touch tablets, called FastTap, that uses thumb-and-finger touches to show and choose from a spatially-stable grid-based overlay interface. FastTap allows novices to view and inspect the full interface, but once item locations are known, FastTap allows people to select commands with a single quick thumb-and-finger tap. The interface helps users develop expertise, since the motor actions carried out as a novice rehearse the expert behavior. A controlled study showed that FastTap was significantly faster (by 33% per selection overall) than marking menus, both for novices and experts, and without reduction in accuracy or subjective preference. Our work introduces a new and efficient selection mechanism that supports rapid command execution on touch tablets, for both novices and experts.",
"title": ""
},
{
"docid": "46fdb284160db9b9b10fed2745cd1f59",
"text": "The TCB shall be found resistant to penetration. Near flawless penetration testing is a requirement for high-rated secure systems — those rated above B1 based on the Trusted Computer System Evaluation Criteria (TCSEC) and its Trusted Network and Database Interpretations (TNI and TDI). Unlike security functional testing, which demonstrates correct behavior of the product's advertised security controls, penetration testing is a form of stress testing which exposes weaknesses — that is, flaws — in the trusted computing base (TCB). This essay describes the Flaw Hypothesis Methodology (FHM), the earliest comprehensive and widely used method for conducting penetrations testing. It reviews motivation for penetration testing and penetration test planning, which establishes the goals, ground rules, and resources available for testing. The TCSEC defines \" flaw \" as \" an error of commission, omission, or oversight in a system that allows protection mechanisms to be bypassed. \" This essay amplifies the definition of a flaw as a demonstrated unspecified capability that can be exploited to violate security policy. The essay provides an overview of FHM and its analogy to a heuristic-based strategy game. The 10 most productive ways to generate hypothetical flaws are described as part of the method, as are ways to confirm them. A review of the results and representative generic flaws discovered over the past 20 years is presented. The essay concludes with the assessment that FHM is applicable to the European ITSEC and with speculations about future methods of penetration analysis using formal methods, that is, mathematically 270 Information Security specified design, theorems, and proofs of correctness of the design. One possible development could be a rigorous extension of FHM to be integrated into the development process. This approach has the potential of uncovering problems early in the design , enabling iterative redesign. A security threat exists when there are the opportunity, motivation, and technical means to attack: the when, why, and how. FHM deals only with the \" how \" dimension of threats. It is a requirement for high-rated secure systems (for example, TCSEC ratings above B1) that penetration testing be completed without discovery of security flaws in the evaluated product, as part of a product or system evaluation [DOD85, NCSC88b, NCSC92]. Unlike security functional testing, which demonstrates correct behavior of the product's advertised security controls, penetration testing is a form of stress testing, which exposes weaknesses or flaws in the trusted computing base (TCB). It has …",
"title": ""
},
{
"docid": "dd7f7d18b12cb71ed4c3acecf6383462",
"text": "Identifying malicious software executables is made difficult by the constant adaptations introduced by miscreants in order to evade detection by antivirus software. Such changes are akin to mutations in biological sequences. Recently, high-throughput methods for gene sequence classification have been developed by the bioinformatics and computational biology communities. In this paper, we apply methods designed for gene sequencing to detect malware in a manner robust to attacker adaptations. Whereas most gene classification tools are optimized for and restricted to an alphabet of four letters (nucleic acids), we have selected the Strand gene sequence classifier for malware classification. Strand’s design can easily accommodate unstructured data with any alphabet, including source code or compiled machine code. To demonstrate that gene sequence classification tools are suitable for classifying malware, we apply Strand to approximately 500 GB of malware data provided by the Kaggle Microsoft Malware Classification Challenge (BIG 2015) used for predicting nine classes of polymorphic malware. Experiments show that, with minimal adaptation, the method achieves accuracy levels well above 95% requiring only a fraction of the training times used by the winning team’s method.",
"title": ""
},
{
"docid": "bbf764205f770481b787e76db5a3b614",
"text": "A∗ is a popular path-finding algorithm, but it can only be applied to those domains where a good heuristic function is known. Inspired by recent methods combining Deep Neural Networks (DNNs) and trees, this study demonstrates how to train a heuristic represented by a DNN and combine it with A∗ . This new algorithm which we call א∗ can be used efficiently in domains where the input to the heuristic could be processed by a neural network. We compare א∗ to N-Step Deep QLearning (DQN Mnih et al. 2013) in a driving simulation with pixel-based input, and demonstrate significantly better performance in this scenario.",
"title": ""
},
{
"docid": "99ea03e76dccbca67efd8ea1963f9ec2",
"text": "This paper treats the correspondence between the reference type of NPs (i.e., mass nouns, count nouns, measure constructions, plurals) and the temporal constitution of verbal predicates (i.e., activities, accomplishments). A theory will be developed that handles the well known influence of the reference type of NPs in argument positions on the temporal constitution of the verbal expressions, assuming an event semantics with lattice structures and thematic roles as primitive relations between events and objects. Some consequences for the theory of thematic roles will be discussed, and the effect of partitive case marking on the verbal aspect, as in Finnish, and of aspectual marking on the definiteness of NPs, like in Slavic, will be explained.",
"title": ""
},
{
"docid": "3724a800d0c802203835ef9f68a87836",
"text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and messagesignaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.",
"title": ""
},
{
"docid": "6d5d1788907e7b903fef9434ef069f35",
"text": "We introduce a generative model, we call Tensorial Mixture Models (TMMs) based on mixtures of basic component distributions over local structures (e.g. patches in an image) where the dependencies between the local-structures are represented by a ”priors tensor” holding the prior probabilities of assigning a component distribution to each local-structure. In their general form, TMMs are intractable as the prior tensor is typically of exponential size. However, when the priors tensor is decomposed it gives rise to an arithmetic circuit which in turn transforms the TMM into a Convolutional Arithmetic Circuit (ConvAC). A ConvAC corresponds to a shallow (single hidden layer) network when the priors tensor is decomposed by a CP (sum of rank-1) approach and corresponds to a deep network when the decomposition follows the Hierarchical Tucker (HT) model. The ConvAC representation of a TMM possesses several attractive properties. First, the inference is tractable and is implemented by a forward pass through a deep network. Second, the architectural design of the model follows the deep networks community design, i.e., the structure of TMMs is determined by just two easily understood factors: size of pooling windows and number of channels. Finally, we demonstrate the effectiveness of our model when tackling the problem of classification with missing data, leveraging TMMs unique ability of tractable marginalization which leads to optimal classifiers regardless of the missingness distribution.",
"title": ""
},
{
"docid": "d4b696766e698fdf6f29f8bdd38fc8d2",
"text": "It has been observed that the degrees of the topologies of several communication networks follow heavy tailed statistics. What is the impact of such heavy tailed statistics on the performance of basic communication tasks that a network is presumed to support? How does performance scale with the size of the network? We study routing in families of sparse random graphs whose degrees follow heavy tailed distributions. Instantiations of such random graphs have been proposed as models for the topology of the Internet at the level of Autonomous Systems as well as at the level of routers. Let n be the number of nodes. Suppose that for each pair of nodes with degrees du and dv we have O(du dv) units of demand. Thus the total demand is O(n2). We argue analytically and experimentally that in the considered random graph model such demand patterns can be routed so that the flow through each link is at most O(n log2 n). This is to be compared with a bound O(n2) that holds for arbitrary graphs. Similar results were previously known for sparse random regular graphs, a.k.a. \"expander graphs.\" The significance is that Internet-like topologies, which grow in a dynamic, decentralized fashion and appear highly inhomogeneous, can support routing with performance characteristics comparable to those of their regular counterparts, at least under the assumption of uniform demand and capacities. Our proof uses approximation algorithms for multicommodity flow and establishes strong bounds of a generalization of \"expansion,\" namely \"conductance.\" Besides routing, our bounds on conductance have further implications, most notably on the gap between first and second eigenvalues of the stochastic normalization of the adjacency matrix of the graph.",
"title": ""
},
{
"docid": "7bb24a2e4ab62eaba90876462a9e527e",
"text": "The concept of Smart City IoTisa comprehensive and layered framework that caters to the needs of multiple facets of projects related to smart city and thus allowing cities to utilise urban networking in order to increase economic prowess, and build more efficient, unique technological solutions to deal with the numerous challenges of the city. Smart City is the product of advanced development of the new era of information technology and smart economy, based on the mesh networking of the Internet, telecommunications network, broadcast network, wireless networking and other end-to-end sensor networking where Internet of Things technology (IoT) as its heart. The Internet of Things is modular approach to integrate sensors (RFID, IR, GPS, laser scanners, etc.) Into everyday objects, and inter connecting them over the internet through specific protocols for exchange of information and communications, which leads to achieving intelligent recognition, location tracking, monitoring and management. Along with the technological support from IoT, smart cities quintessentially need to conquer three features of being; instrumented, interconnected and intelligent. A Smart City can only be formed by interconnecting all of these intelligent features at their advanced stage of IOT development. The objective of this paper is how Internet and Sensors can help to develop a city to smart.",
"title": ""
},
{
"docid": "78bc13c6b86ea9a8fda75b66f665c39f",
"text": "We propose a stochastic answer network (SAN) to explore multi-step inference strategies in Natural Language Inference. Rather than directly predicting the results given the inputs, the model maintains a state and iteratively refines its predictions. Our experiments show that SAN achieves the state-of-the-art results on three benchmarks: Stanford Natural Language Inference (SNLI) dataset, MultiGenre Natural Language Inference (MultiNLI) dataset and Quora Question Pairs dataset.",
"title": ""
},
{
"docid": "4540d9e955e1c38de8997424465a9b2b",
"text": "The Internet Movie Database (IMDB) is one of the largest online resources for general movie information combined with a forum in which users can rate movies. We investigate the extent to which a movie’s average user rating can be predicted after learning the relationship between the rating and a movie’s various attributes from a training set. Two methods are evaluated: kernel regression and model trees. Modifications to standard algorithms for training these two regressors lead to better prediction accuracy on this data set.",
"title": ""
},
{
"docid": "36ae895829fda8c8b58bf49eaa607695",
"text": "In this paper, we describe SymDiff, a language-agnostic tool for equivalence checking and displaying semantic (behavioral) differences over imperative programs. The tool operates on an intermediate verification language Boogie, for which translations exist from various source languages such as C, C# and x86. We discuss the tool and the front-end interface to target various source languages. Finally, we provide a brief description of the front-end for C programs.",
"title": ""
},
{
"docid": "3176f0a4824b2dd11d612d55b4421881",
"text": "This article reviews some of the criticisms directed towards the eclectic paradigm of international production over the past decade, and restates its main tenets. The second part of the article considers a number of possible extensions of the paradigm and concludes by asserting that it remains \"a robust general framework for explaining and analysing not only the economic rationale of economic production but many organisational nd impact issues in relation to MNE activity as well.\"",
"title": ""
},
{
"docid": "14b06c786127363d5bdaee4602b15a42",
"text": "Instant messaging applications continue to grow in popularity as a means of communicating and sharing multimedia files. The information contained within these applications can prove invaluable to law enforcement in the investigation of crimes. Kik messenger is a recently introduced instant messaging application that has become very popular in a short period of time, especially among young users. The novelty of Kik means that there has been little forensic examination conducted on this application. This study addresses this issue by investigating Kik messenger on Apple iOS devices. The goal was to locate and document artefacts created or modified by Kik messenger on devices installed with the latest version of iOS, as well as in iTunes backup files. Once achieved, the secondary goal was to analyse the artefacts to decode and interpret their meaning and by doing so, be able to answer the typical questions faced by forensic investigators. A detailed description of artefacts created or modified by Kik messenger is provided. Results from experiments showed that deleted images are not only recoverable from the device, but can also be located and downloaded from Kik servers. A process to link data from multiple database tables producing accurate chat histories is explained. These outcomes can be used by law enforcement to investigate crimes and by software developers to create tools to recover evidence. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ab390e0bee6b8fb33cda52821c7787ff",
"text": "Zero-day polymorphic worms pose a serious threat to the Internet security. With their ability to rapidly propagate, these worms increasingly threaten the Internet hosts and services. Not only can they exploit unknown vulnerabilities but can also change their own representations on each new infection or can encrypt their payloads using a different key per infection. They have many variations in the signatures of the same worm thus, making their fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey to outline the research efforts in relation to detection of modern zero-day malware in form of zero-day polymorphic worms.",
"title": ""
},
{
"docid": "7c56d7bd2ca8e03ba828343dbb6f38bd",
"text": "The goal of Spoken Term Detection (STD) technology is to allow open vocabulary search over large collections of speech content. In this paper, we address cases where search term(s) of interest (queries) are acoustic examples. This is provided either by identifying a region of interest in a speech stream or by speaking the query term. Queries often relate to named-entities and foreign words, which typically have poor coverage in the vocabulary of Large Vocabulary Continuous Speech Recognition (LVCSR) systems. Throughout this paper, we focus on query-by-example search for such out-of-vocabulary (OOV) query terms. We build upon a finite state transducer (FST) based search and indexing system [1] to address the query by example search for OOV terms by representing both the query and the index as phonetic lattices from the output of an LVCSR system. We provide results comparing different representations and generation mechanisms for both queries and indexes built with word and combined word and subword units [2]. We also present a two-pass method which uses query-by-example search using the best hit identified in an initial pass to augment the STD search results. The results demonstrate that query-by-example search can yield a significantly better performance, measured using Actual Term-Weighted Value (ATWV), of 0.479 when compared to a baseline ATWV of 0.325 that uses reference pronunciations for OOVs. Further improvements can be obtained with the proposed two pass approach and filtering using the expected unigram counts from the LVCSR system's lexicon.",
"title": ""
},
{
"docid": "73d4a47d4aba600b4a3bcad6f7f3588f",
"text": "Humans can easily perform tasks that use vision and language jointly, such as describing a scene and answering questions about objects in the scene and how they are related. Image captioning and visual question & answer are two popular research tasks that have emerged from advances in deep learning and the availability of datasets that specifically address these problems. However recent work has shown that deep learning based solutions to these tasks are just as brittle as solutions for only vision or only natural language tasks. Image captioning is vulnerable to adversarial perturbations; novel objects, which are not described in training data, and contextual biases in training data can degrade performance in surprising ways. For these reasons, it is important to find ways in which general-purpose knowledge can guide connectionist models. We investigate challenges to integrate existing ontologies and knowledge bases with deep learning solutions, and possible approaches for overcoming such challenges. We focus on geo-referenced data such as geo-tagged images and videos that capture outdoor scenery. Geo-knowledge bases are domain specific knowledge bases that contain concepts and relations that describe geographic objects. This work proposes to increase the robustness of automatic scene description and inference by leveraging geo-knowledge bases along with the strengths of deep learning for visual object detection and classification.",
"title": ""
}
] |
scidocsrr
|
7b6bbd1e6a831fe4770bb4de6765c024
|
Internet scale string attribute publish/subscribe data networks
|
[
{
"docid": "d3e35963e85ade6e3e517ace58cb3911",
"text": "In this paper, we present the design and evaluation of PeerDB, a peer-to-peer (P2P) distributed data sharing system. PeerDB distinguishes itself from existing P2P systems in several ways. First, it is a full-fledge data management system that supports fine-grain content-based searching. Second, it facilitates sharing of data without shared schema. Third, it combines the power of mobile agents into P2P systems to perform operations at peers’ sites. Fourth, PeerDB network is self-configurable, i.e., a node can dynamically optimize the set of peers that it can communicate directly with based on some optimization criterion. By keeping peers that provide most information or services in close proximity (i.e, direct communication), the network bandwidth can be better utilized and system performance can be optimized. We implemented and evaluated PeerDB on a cluster of 32 Pentium II PCs. Our experimental results show that PeerDB can effectively exploit P2P technologies for distributed data sharing.",
"title": ""
},
{
"docid": "6513c4ca4197e9ff7028e527a621df0a",
"text": "The development of complex distributed systems demands for the creation of suitable architectural styles (or paradigms) and related run-time infrastructures. An emerging style that is receiving increasing attention is based on the notion of event. In an event-based architecture, distributed software components interact by generating and consuming events. An event is the occurrence of some state change in a component of a software system, made visible to the external world. The occurrence of an event in a component is asynchronously notified to any other component that has declared some interest in it. This paradigm (usually called “publish/subscribe” from the names of the two basic operations that regulate the communication) holds the promise of supporting a flexible and effective interaction among highly reconfigurable, distributed software components. In the past two years, we have developed an object-oriented infrastructure called JEDI (Java Event-based Distributed Infrastructure). JEDI supports the development and operation of event-based systems and has been used to implement a significant example of distributed system, namely, the OPSS workflow management system (WFMS). The paper illustrates JEDI main features and how we have used them to implement OPSS. Moreover, the paper provides an initial evaluation of our experiences in using the event-based architectural style and a classification of some of the event-based infrastructures presented in the literature.",
"title": ""
}
] |
[
{
"docid": "4c030e022b3b44b8bbae801c1f6e721a",
"text": "This paper presents DAMESRL1, a flexible and open source framework for deep semantic role labeling (SRL). DAMESRL aims to facilitate easy exploration of model structures for multiple languages with different characteristics. It provides flexibility in its model construction in terms of word representation, sequence representation, output modeling, and inference styles and comes with clear output visualization. Additionally, it handles various input and output formats and comes with clear output visualization. The framework is available under the Apache 2.0 license.",
"title": ""
},
{
"docid": "ffb7b58d947aa15cd64efbadb0f9543d",
"text": "A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are ‘optimal’ in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "bd6f23972644f6239ab1a40e9b20aa1e",
"text": "This paper presents a machine-learning software solution that performs a multi-dimensional prediction of QoE (Quality of Experience) based on network-related SIFs (System Influence Factors) as input data. The proposed solution is verified through experimental study based on video streaming emulation over LTE (Long Term Evolution) which allows the measurement of network-related SIF (i.e., delay, jitter, loss), and subjective assessment of MOS (Mean Opinion Score). Obtained results show good performance of proposed MOS predictor in terms of mean prediction error and thereby can serve as an encouragement to implement such solution in all-IP (Internet Protocol) real environment.",
"title": ""
},
{
"docid": "47f9724fd9dc25eda991854074ac0afa",
"text": "This paper reviews the state of the art in piezoelectric energy harvesting. It presents the basics of piezoelectricity and discusses materials choice. The work places emphasis on material operating modes and device configurations, from resonant to non-resonant devices and also to rotational solutions. The reviewed literature is compared based on power density and bandwidth. Lastly, the question of power conversion is addressed by reviewing various circuit solutions.",
"title": ""
},
{
"docid": "ddd275168d4e066df5e5937790a93986",
"text": " The Jyros (JR) and the Advancing The Standard (ATS) valves were compared with the St. Jude Medical (SJM) valve in the mitral position to study the effects of design differences, installed valve orientation to the flow, and closing sounds using particle tracking velocimetry and particle image velocimetry methods utilizing a high-speed video flow visualization technique to map the velocity field. Sound measurements were made to confirm the claims of the manufacturers. Based on the experimental data, the following general conclusions can be made: On the vertical measuring plane which passes through the centers of the aortic and the mitral valves, the SJM valve shows a distinct circulatory flow pattern when the valve is installed in the antianatomical orientation; the SJM valve maintains the flow through the central orifice quite well; the newer curved leaflet JR valve and the ATS valve, which does not fully open during the peak flow phase, generates a higher but divergent flow close to the valve location when the valve was installed anatomically. The antianatomically installed JR valve showed diverse and less distinctive flow patterns and slower velocity on the central measuring plane than the SJM valve did, with noticeably lower valve closing noise. On the velocity field directly below the mitral valve that is normal to the previous measuring plane, the three valves show symmetrical twin circulations due to the divergent nature of the flow generated by the two inclined half discs; the SJM valve with centrally downward circulation is contrasted by the two other valves with peripherally downward circulation. These differences may have an important role in generation of the valve closing sound.",
"title": ""
},
{
"docid": "d58425a613f9daea2677d37d007f640e",
"text": "Recently the improved bag of features (BoF) model with locality-constrained linear coding (LLC) and spatial pyramid matching (SPM) achieved state-of-the-art performance in image classification. However, only adopting SPM to exploit spatial information is not enough for satisfactory performance. In this paper, we use hierarchical temporal memory (HTM) cortical learning algorithms to extend this LLC & SPM based model. HTM regions consist of HTM cells are constructed to spatial pool the LLC codes. Each cell receives a subset of LLC codes, and adjacent subsets are overlapped so that more spatial information can be captured. Additionally, HTM cortical learning algorithms have two processes: learning phase which make the HTM cell only receive most frequent LLC codes, and inhibition phase which ensure that the output of HTM regions is sparse. The experimental results on Caltech 101 and UIUC-Sport dataset show the improvement on the original LLC & SPM based model.",
"title": ""
},
{
"docid": "e27da58188be54b71187d3489fa6b4e7",
"text": "In a prospective-longitudinal study of a representative birth cohort, we tested why stressful experiences lead to depression in some people but not in others. A functional polymorphism in the promoter region of the serotonin transporter (5-HT T) gene was found to moderate the influence of stressful life events on depression. Individuals with one or two copies of the short allele of the 5-HT T promoter polymorphism exhibited more depressive symptoms, diagnosable depression, and suicidality in relation to stressful life events than individuals homozygous for the long allele. This epidemiological study thus provides evidence of a gene-by-environment interaction, in which an individual's response to environmental insults is moderated by his or her genetic makeup.",
"title": ""
},
{
"docid": "9da6883a9fe700aeb84208efbf0a56a3",
"text": "With the increasing demand for more energy efficient buildings, the construction industry is faced with the challenge to ensure that the energy efficiency predicted during the design is realised once a building is in use. There is, however, significant evidence to suggest that buildings are not performing as well as expected and initiatives such as PROBE and CarbonBuzz aim to illustrate the extent of this so called „Performance Gap‟. This paper discusses the underlying causes of discrepancies between detailed energy modelling predictions and in-use performance of occupied buildings (after the twelve month liability period). Many of the causal factors relate to the use of unrealistic input parameters regarding occupancy behaviour and facilities management in building energy models. In turn, this is associated with the lack of feedback to designers once a building has been constructed and occupied. This paper aims to demonstrate how knowledge acquired from Post-Occupancy Evaluation (POE) can be used to produce more accurate energy performance models. A case study focused specifically on lighting, small power and catering equipment in a high density office building is presented. Results show that by combining monitored data with predictive energy modelling, it was possible to increase the accuracy of the model to within 3% of actual electricity consumption values. Future work will seek to use detailed POE data to develop a set of evidence based benchmarks for energy consumption in office buildings. It is envisioned that these benchmarks will inform designers on the impact of occupancy and management on the actual energy consumption of buildings. Moreover, it should enable the use of more realistic input parameters in energy models, bringing the predicted figures closer to reality.",
"title": ""
},
{
"docid": "a28267004a26f08550d2b2b129fff860",
"text": "Falls accounted for 5.9% of the childhood deaths due to trauma in a review of the medical examiner's files in a large urban county. Falls represented the seventh leading cause of traumatic death in all children 15 years of age or younger, but the third leading cause of death in children 1 to 4 years old. The mean age of those with accidental falls was 2.3 years, which is markedly younger than that seen in hospital admission series, suggesting that infants are much more likely to die from a fall than older children. Forty-one per cent of the deaths occurred from \"minor\" falls such as falls from furniture or while playing; 50% were falls from a height of one story or greater; the remainder were falls down stairs. Of children falling from less than five stories, death was due to a lethal head injury in 86%. Additionally, 61.3% of the children with head injuries had mass lesions which would have required acute neurosurgical intervention. The need for an organized pediatric trauma system is demonstrated as more than one third of the children were transferred to another hospital, with more than half of these deteriorating during the delay. Of the patients with \"minor\" falls, 38% had parental delay in seeking medical attention, with deterioration of all. The trauma system must also incorporate the education of parents and medical personnel to the potential lethality of \"minor\" falls in infants and must legislate injury prevention programs.",
"title": ""
},
{
"docid": "19a1f9c9f3dec6f90d08479f0669d0dc",
"text": "We present a multi-stream bi-directional recurrent neural network for fine-grained action detection. Recently, twostream convolutional neural networks (CNNs) trained on stacked optical flow and image frames have been successful for action recognition in videos. Our system uses a tracking algorithm to locate a bounding box around the person, which provides a frame of reference for appearance and motion and also suppresses background noise that is not within the bounding box. We train two additional streams on motion and appearance cropped to the tracked bounding box, along with full-frame streams. Our motion streams use pixel trajectories of a frame as raw features, in which the displacement values corresponding to a moving scene point are at the same spatial position across several frames. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. We show that our bi-directional LSTM network utilizes about 8 seconds of the video sequence to predict an action label. We test on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we introduce and make available to the community with this paper. The results demonstrate that our method significantly outperforms state-of-the-art action detection methods on both datasets.",
"title": ""
},
{
"docid": "3d3101e08720513e1b7891cddead8967",
"text": "Conclusion A new model with the self-adaptive attention temperature for the softness of attention distribution; Improved results on the datasets and showed that attention temperature differs for decoding diverse words; Try to figure out better demonstration for the effects of temperature. Abstract A new NMT model with self-adaptive attention temperature; Attention varies at each time step based on the temperature; Improved results on the benchmark datasets; Analysis shows that temperatures vary when translating words of different types.",
"title": ""
},
{
"docid": "c274b4396b73d076e38cb79a0799c943",
"text": "This paper addresses the development of a model that reproduces the dynamic behaviour of a redundant, 7 degrees of freedom robotic manipulator, namely the Kuka Lightweight Robot IV, in the Robotic Surgery Laboratory of the Instituto Superior Técnico. For this purpose, the control architecture behind the Lightweight Robot (LWR) is presented, as well as, the joint and the Cartesian level impedance control aspects. Then, the manipulator forward and inverse kinematic models are addressed, in which the inverse kinematics relies on the Closed Loop Inverse Kinematic method (CLIK). Redundancy resolution methods are used to ensure that the joint angle values remain bounded considering their physical limits. The joint level model is the first presented, followed by the Cartesian level model. The redundancy inherent to the Cartesian model is compensated by a null space controller, developed by employing the impedance superposition method. Finally, the effect of possible faults occurring in the system are simulated using the derived model.",
"title": ""
},
{
"docid": "45cee79008d25916e8f605cd85dd7f3a",
"text": "In exploring the emotional climate of long-term marriages, this study used an observational coding system to identify specific emotional behaviors expressed by middle-aged and older spouses during discussions of a marital problem. One hundred and fifty-six couples differing in age and marital satisfaction were studied. Emotional behaviors expressed by couples differed as a function of age, gender, and marital satisfaction. In older couples, the resolution of conflict was less emotionally negative and more affectionate than in middle-aged marriages. Differences between husbands and wives and between happy and unhappy marriages were also found. Wives were more affectively negative than husbands, whereas husbands were more defensive than wives, and unhappy marriages involved greater exchange of negative affect than happy marriages.",
"title": ""
},
{
"docid": "b7378cf12d2ca44a6142b2f6eab2d3a6",
"text": "Most of the cytotoxic chemotherapeutic agents have poor aqueous solubility. These molecules are associated with poor physicochemical and biopharmaceutical properties, which makes the formulation difficult. An important approach in this regard is the use of combination of cyclodextrin and nanotechnology in delivery system. This paper provides an overview of limitations associated with anticancer drugs, their complexation with cyclodextrins, loading/encapsulating the complexed drugs into carriers, and various approaches used for the delivery. The present review article aims to assess the utility of cyclodextrin-based carriers like liposomes, niosomes, nanoparticles, micelles, millirods, and siRNA for delivery of antineoplastic agents. These systems based on cyclodextrin complexation and nanotechnology will camouflage the undesirable properties of drug and lead to synergistic or additive effect. Cyclodextrin-based nanotechnology seems to provide better therapeutic effect and sustain long life of healthy and recovered cells. Still, considerable study on delivery system and administration routes of cyclodextrin-based carriers is necessary with respect to their pharmacokinetics and toxicology to substantiate their safety and efficiency. In future, it would be possible to resolve the conventional and current issues associated with the development and commercialization of antineoplastic agents.",
"title": ""
},
{
"docid": "e2b153aba78b2831a7f1ecc1b26e0fc9",
"text": "Recent gene expression profiling of breast cancer has identified specific subtypes with clinical, biologic, and therapeutic implications. The basal-like group of tumors is characterized by an expression signature similar to that of the basal/myoepithelial cells of the breast and is reported to have transcriptomic characteristics similar to those of tumors arising in BRCA1 germline mutation carriers. They are associated with aggressive behavior and poor prognosis, and typically do not express hormone receptors or HER-2 (\"triple-negative\" phenotype). Therefore, patients with basal-like cancers are unlikely to benefit from currently available targeted systemic therapy. Although basal-like tumors are characterized by distinctive morphologic, genetic, immunophenotypic, and clinical features, neither an accepted consensus on routine clinical identification and definition of this aggressive subtype of breast cancer nor a way of systematically classifying this complex group of tumors has been described. Different definitions are, therefore, likely to produce variable and contradictory results that may hamper consistent identification and development of treatment strategies for these tumors. In this review, we discuss definition, heterogeneity, morphologic spectrum, relation to BRCA1, and clinical significance of this important class of breast cancer.",
"title": ""
},
{
"docid": "0a7e755387f037cab0a51472763e620f",
"text": "Introduction: Nowadays, one of the most important questions in teaching and learning involves increasing the degree of students’ engagement in learning. According to Astin’s Theory of Student engagement, the best learning environment is one in which it is possible to increase students’ engagement. The current study investigates the influences that using these networks for educational purposes may have on learners’ engagement, motivation, and learning.",
"title": ""
},
{
"docid": "dabfcb6d1b2df628113a8f68ed0555a5",
"text": "With the fast-growing demand of location-based services in indoor environments, indoor positioning based on fingerprinting has attracted significant interest due to its high accuracy. In this paper, we present a novel deep-learning-based indoor fingerprinting system using channel state information (CSI), which is termed DeepFi. Based on three hypotheses on CSI, the DeepFi system architecture includes an offline training phase and an online localization phase. In the offline training phase, deep learning is utilized to train all the weights of a deep network as fingerprints. Moreover, a greedy learning algorithm is used to train the weights layer by layer to reduce complexity. In the online localization phase, we use a probabilistic method based on the radial basis function to obtain the estimated location. Experimental results are presented to confirm that DeepFi can effectively reduce location error, compared with three existing methods in two representative indoor environments.",
"title": ""
},
{
"docid": "0b79fc06afe7782e7bdcdbd96cc1c1a0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/annals.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "2aa5f065e63a9bc0e24f74d4a37a7ea6",
"text": "Dataflow programming models are suitable to express multi-core streaming applications. The design of high-quality embedded systems in that context requires static analysis to ensure the liveness and bounded memory of the application. However, many streaming applications have a dynamic behavior. The previously proposed dataflow models for dynamic applications do not provide any static guarantees or only in exchange of significant restrictions in expressive power or automation. To overcome these restrictions, we propose the schedulable parametric dataflow (SPDF) model. We present static analyses and a quasi-static scheduling algorithm. We demonstrate our approach using a video decoder case study.",
"title": ""
}
] |
scidocsrr
|
4fbdc06c462980a0830a1778bdb8ad8c
|
Probabilistic programming in Python using PyMC3
|
[
{
"docid": "8439309414a9999abbd0e0be95a25fb8",
"text": "Cython is a Python language extension that allows explicit type declarations and is compiled directly to C. As such, it addresses Python's large overhead for numerical loops and the difficulty of efficiently using existing C and Fortran code, which Cython can interact with natively.",
"title": ""
},
{
"docid": "809d03fd69aebc7573463756a535de18",
"text": "We describe Venture, an interactive virtual machine for probabilistic programming that aims to be sufficiently expressive, extensible, and efficient for general-purpose use. Like Church, probabilistic models and inference problems in Venture are specified via a Turing-complete, higher-order probabilistic language descended from Lisp. Unlike Church, Venture also provides a compositional language for custom inference strategies, assembled from scalable implementations of several exact and approximate techniques. Venture is thus applicable to problems involving widely varying model families, dataset sizes and runtime/accuracy constraints. We also describe four key aspects of Venture’s implementation that build on ideas from probabilistic graphical models. First, we describe the stochastic procedure interface (SPI) that specifies and encapsulates primitive random variables, analogously to conditional probability tables in a Bayesian network. The SPI supports custom control flow, higher-order probabilistic procedures, partially exchangeable sequences and “likelihood-free” stochastic simulators, all with custom proposals. It also supports the integration of external models that dynamically create, destroy and perform inference over latent variables hidden from Venture. Second, we describe probabilistic execution traces (PETs), which represent execution histories of Venture programs. Like Bayesian networks, PETs capture conditional dependencies, but PETs also represent existential dependencies and exchangeable coupling. Third, we describe partitions of execution histories called scaffolds that can be efficiently constructed from PETs and that factor global inference problems into coherent sub-problems. Finally, we describe a family of stochastic regeneration algorithms for efficiently modifying PET fragments contained within scaffolds without visiting conditionally independent random choices. Stochastic regeneration insulates inference algorithms from the complexities introduced by changes in execution structure, with runtime that scales linearly in cases where previous approaches often scaled quadratically and were therefore impractical. We show how to use stochastic regeneration and the SPI to implement general-purpose inference strategies such as Metropolis-Hastings, Gibbs sampling, and blocked proposals based on hybrids with both particle Markov chain Monte Carlo and mean-field variational inference techniques.",
"title": ""
},
{
"docid": "83fda0277ebcdb6aeae216a38553db9c",
"text": "Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it di cult for non-experts to use. We propose an automatic variational inference algorithm, automatic di erentiation variational inference ( ); we implement it in Stan (code available), a probabilistic programming system. In the user provides a Bayesian model and a dataset, nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We compare to sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With we can use variational inference on any model we write in Stan.",
"title": ""
}
] |
[
{
"docid": "15c0f63bb4ab47e47d2bb9789cf404f4",
"text": "This review provides an account of the Study of Mathematically Precocious Youth (SMPY) after 35 years of longitudinal research. Findings from recent 20-year follow-ups from three cohorts, plus 5- or 10-year findings from all five SMPY cohorts (totaling more than 5,000 participants), are presented. SMPY has devoted particular attention to uncovering personal antecedents necessary for the development of exceptional math-science careers and to developing educational interventions to facilitate learning among intellectually precocious youth. Along with mathematical gifts, high levels of spatial ability, investigative interests, and theoretical values form a particularly promising aptitude complex indicative of potential for developing scientific expertise and of sustained commitment to scientific pursuits. Special educational opportunities, however, can markedly enhance the development of talent. Moreover, extraordinary scientific accomplishments require extraordinary commitment both in and outside of school. The theory of work adjustment (TWA) is useful in conceptualizing talent identification and development and bridging interconnections among educational, counseling, and industrial psychology. The lens of TWA can clarify how some sex differences emerge in educational settings and the world of work. For example, in the SMPY cohorts, although more mathematically precocious males than females entered math-science careers, this does not necessarily imply a loss of talent because the women secured similar proportions of advanced degrees and high-level careers in areas more correspondent with the multidimensionality of their ability-preference pattern (e.g., administration, law, medicine, and the social sciences). By their mid-30s, the men and women appeared to be happy with their life choices and viewed themselves as equally successful (and objective measures support these subjective impressions). Given the ever-increasing importance of quantitative and scientific reasoning skills in modern cultures, when mathematically gifted individuals choose to pursue careers outside engineering and the physical sciences, it should be seen as a contribution to society, not a loss of talent.",
"title": ""
},
{
"docid": "1fcaa9ebde2922c13ce42f8f90c9c6ba",
"text": "Despite advances in HIV treatment, there continues to be great variability in the progression of this disease. This paper reviews the evidence that depression, stressful life events, and trauma account for some of the variation in HIV disease course. Longitudinal studies both before and after the advent of highly active antiretroviral therapies (HAART) are reviewed. To ensure a complete review, PubMed was searched for all English language articles from January 1990 to July 2007. We found substantial and consistent evidence that chronic depression, stressful events, and trauma may negatively affect HIV disease progression in terms of decreases in CD4 T lymphocytes, increases in viral load, and greater risk for clinical decline and mortality. More research is warranted to investigate biological and behavioral mediators of these psychoimmune relationships, and the types of interventions that might mitigate the negative health impact of chronic depression and trauma. Given the high rates of depression and past trauma in persons living with HIV/AIDS, it is important for healthcare providers to address these problems as part of standard HIV care.",
"title": ""
},
{
"docid": "98571cb7f32b389683e8a9e70bd87339",
"text": "We identify two issues with the family of algorithms based on the Adversarial Imitation Learning framework. The first problem is implicit bias present in the reward functions used in these algorithms. While these biases might work well for some environments, they can also lead to sub-optimal behavior in others. Secondly, even though these algorithms can learn from few expert demonstrations, they require a prohibitively large number of interactions with the environment in order to imitate the expert for many real-world applications. In order to address these issues, we propose a new algorithm called Discriminator-Actor-Critic that uses off-policy Reinforcement Learning to reduce policy-environment interaction sample complexity by an average factor of 10. Furthermore, since our reward function is designed to be unbiased, we can apply our algorithm to many problems without making any task-specific adjustments.",
"title": ""
},
{
"docid": "25d8f623dbb39e9f34648bb8cd5b3eba",
"text": "In the present paper a band pass microwave filter with spoof surface plasmon polaritions (SPPs) structure is designed and simulated using FDTD method. The filter is composed of four sections, where in the third section, parallel and symmetrically arranged corrugated metallic strip is the spoof SPPs transmission line, in which, periodically arranged rectangular slot is designed to fulfill the spoof SPPs effect. In the fourth section, the spoof SPPs lines are coupled with each other to realize the transmission zero in the up stop band range. The transmission and reflection properties of the spoof SPPs filter in microwave frequency region are elaborately investigated by FDTD method. Through adjusting the geometrical dimensions of the plasmonic coupling structure as well as the rectangular slots structure, the bandwidth and suppression characteristics of the filter can be precisely controlled and the filter has great resistant ability to space electromagnetic interference.",
"title": ""
},
{
"docid": "834af0b828702aae0482a2e31e3f8a40",
"text": "We routinely hear vendors claim that their systems are “secure.” However, without knowing what assumptions are made by the vendor, it is hard to justify such a claim. Prior to claiming the security of a system, it is important to identify the threats to the system in question. Enumerating the threats to a system helps system architects develop realistic and meaningful security requirements. In this paper, we investigate how threat modeling can be used as foundations for the specification of security requirements. Although numerous works have been published on threat modeling, there is a lack of integrated, systematic approach toward threat modeling for complex systems. We examine the differences between modeling software products and complex systems, and outline our approach for identifying threats of networked systems. We also present three case studies of threat modeling: Software-Defined Radio, a network traffic monitoring tool (VisFlowConnect), and a cluster security monitoring tool (NVisionCC).",
"title": ""
},
{
"docid": "640f9ca0bec934786b49f7217e65780b",
"text": "Social Networking has become today’s lifestyle and anyone can easily receive information about everyone in the world. It is very useful if a personal identity can be obtained from the mobile device and also connected to social networking. Therefore, we proposed a face recognition system on mobile devices by combining cloud computing services. Our system is designed in the form of an application developed on Android mobile devices which utilized the Face.com API as an image data processor for cloud computing services. We also applied the Augmented Reality as an information viewer to the users. The result of testing shows that the system is able to recognize face samples with the average percentage of 85% with the total computation time for the face recognition system reached 7.45 seconds, and the average augmented reality translation time is 1.03 seconds to get someone’s information.",
"title": ""
},
{
"docid": "669ea8d461b46927793714d622f253b7",
"text": "Solution of inverse kinematic equations is complex problem, the complexity comes from the nonlinearity of joint space and Cartesian space mapping and having multiple solution. In this work, four adaptive neurofuzzy networks ANFIS are implemented to solve the inverse kinematics of 4-DOF SCARA manipulator. The implementation of ANFIS is easy, and the simulation of it shows that it is very fast and give acceptable error.",
"title": ""
},
{
"docid": "026408a6ad888ea0bcf298a23ef77177",
"text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.",
"title": ""
},
{
"docid": "116b5f129e780a99a1d78ec02a1fb092",
"text": "We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting Cast, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that Cast family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.",
"title": ""
},
{
"docid": "8ddf6f978cfa3e4352c607a8e4d6d66a",
"text": "Due to the ability of encoding and mapping semantic information into a highdimensional latent feature space, neural networks have been successfully used for detecting events to a certain extent. However, such a feature space can be easily contaminated by spurious features inherent in event detection. In this paper, we propose a self-regulated learning approach by utilizing a generative adversarial network to generate spurious features. On the basis, we employ a recurrent network to eliminate the fakes. Detailed experiments on the ACE 2005 and TAC-KBP 2015 corpora show that our proposed method is highly effective and adaptable.",
"title": ""
},
{
"docid": "203312195c3df688a594d0c05be72b5a",
"text": "Convolutional Neural Networks (CNNs) have been recently introduced in the domain of session-based next item recommendation. An ordered collection of past items the user has interacted with in a session (or sequence) are embedded into a 2-dimensional latent matrix, and treated as an image. The convolution and pooling operations are then applied to the mapped item embeddings. In this paper, we first examine the typical session-based CNN recommender and show that both the generative model and network architecture are suboptimal when modeling long-range dependencies in the item sequence. To address the issues, we introduce a simple, but very effective generative model that is capable of learning high-level representation from both short- and long-range item dependencies. The network architecture of the proposed model is formed of a stack of holed convolutional layers, which can efficiently increase the receptive fields without relying on the pooling operation. Another contribution is the effective use of residual block structure in recommender systems, which can ease the optimization for much deeper networks. The proposed generative model attains state-of-the-art accuracy with less training time in the next item recommendation task. It accordingly can be used as a powerful recommendation baseline to beat in future, especially when there are long sequences of user feedback.",
"title": ""
},
{
"docid": "eeff8eeb391e789a40cb8f900fa241e3",
"text": "We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes. This development allows us to define a Stick-Breaking Variational Autoencoder (SB-VAE), a Bayesian nonparametric version of the variational autoencoder that has a latent representation with stochastic dimensionality. We experimentally demonstrate that the SB-VAE, and a semisupervised variant, learn highly discriminative latent representations that often outperform the Gaussian VAE’s.",
"title": ""
},
{
"docid": "342c39b533e6a94edd72530ca3d57a54",
"text": "Graph-embedding along with its linearization and kernelization provides a general framework that unifies most traditional dimensionality reduction algorithms. From this framework, we propose a new manifold learning technique called discriminant locally linear embedding (DLLE), in which the local geometric properties within each class are preserved according to the locally linear embedding (LLE) criterion, and the separability between different classes is enforced by maximizing margins between point pairs on different classes. To deal with the out-of-sample problem in visual recognition with vector input, the linear version of DLLE, i.e., linearization of DLLE (DLLE/L), is directly proposed through the graph-embedding framework. Moreover, we propose its multilinear version, i.e., tensorization of DLLE, for the out-of-sample problem with high-order tensor input. Based on DLLE, a procedure for gait recognition is described. We conduct comprehensive experiments on both gait and face recognition, and observe that: 1) DLLE along its linearization and tensorization outperforms the related versions of linear discriminant analysis, and DLLE/L demonstrates greater effectiveness than the linearization of LLE; 2) algorithms based on tensor representations are generally superior to linear algorithms when dealing with intrinsically high-order data; and 3) for human gait recognition, DLLE/L generally obtains higher accuracy than state-of-the-art gait recognition algorithms on the standard University of South Florida gait database.",
"title": ""
},
{
"docid": "0e4c0ffb4c6f036fc872b2a5fd9eeaf4",
"text": "This paper proposes a fast and simple mapping method for lens distortion correction. Typical correction methods use a distortion model defined on distorted coordinates. They need inverse mapping for distortion correction. Inverse mapping of distortion equations is not trivial; approximation must be taken for real time applications. We propose a distortion model defined on ideal undistorted coordinates, so that we can reduce computation time and maintain the high accuracy. We verify accuracy and efficiency of the proposed method from experiments.",
"title": ""
},
{
"docid": "10f726ffc8ee1727b1c905f67fc80686",
"text": "Previous monocular depth estimation methods take a single view and directly regress the expected results. Though recent advances are made by applying geometrically inspired loss functions during training, the inference procedure does not explicitly impose any geometrical constraint. Therefore these models purely rely on the quality of data and the effectiveness of learning to generalize. This either leads to suboptimal results or the demand of huge amount of expensive ground truth labelled data to generate reasonable results. In this paper, we show for the first time that the monocular depth estimation problem can be reformulated as two sub-problems, a view synthesis procedure followed by stereo matching, with two intriguing properties, namely i) geometrical constraints can be explicitly imposed during inference; ii) demand on labelled depth data can be greatly alleviated. We show that the whole pipeline can still be trained in an end-to-end fashion and this new formulation plays a critical role in advancing the performance. The resulting model outperforms all the previous monocular depth estimation methods as well as the stereo block matching method in the challenging KITTI dataset by only using a small number of real training data. The model also generalizes well to other monocular depth estimation benchmarks. We also discuss the implications and the advantages of solving monocular depth estimation using stereo methods.",
"title": ""
},
{
"docid": "54cd27447dffe93350eba701e5c89a10",
"text": "From recalling long forgotten experiences based on a familiar scent or on a piece of music, to lip reading aided conversation in noisy environments or travel sickness caused by mismatch of the signals from vision and the vestibular system, the human perception manifests countless examples of subtle and effortless joint adoption of the multiple senses provided to us by evolution. Emulating such multisensory (or multimodal, i.e., comprising multiple types of input modes or modalities) processing computationally offers tools for more effective, efficient, or robust accomplishment of many multimedia tasks using evidence from the multiple input modalities. Information from the modalities can also be analyzed for patterns and connections across them, opening up interesting applications not feasible with a single modality, such as prediction of some aspects of one modality based on another. In this dissertation, multimodal analysis techniques are applied to selected video tasks with accompanying modalities. More specifically, all the tasks involve some type of analysis of videos recorded by non-professional videographers using mobile devices. Fusion of information from multiple modalities is applied to recording environment classification from video and audio as well as to sport type classification from a set of multi-device videos, corresponding audio, and recording device motion sensor data. The environment classification combines support vector machine (SVM) classifiers trained on various global visual low-level features with audio event histogram based environment classification using k nearest neighbors (k-NN). Rule-based fusion schemes with genetic algorithm (GA)-optimized modality weights are compared to training a SVM classifier to perform the multimodal fusion. A comprehensive selection of fusion strategies is compared for the task of classifying the sport type of a set of recordings from a common event. These include fusion prior to, simultaneously with, and after classification; various approaches for using modality quality estimates; and fusing soft confidence scores as well as crisp single-class predictions. Additionally, different strategies are examined for aggregating the decisions of single videos to a collective prediction from the set of videos recorded concurrently with multiple devices. In both tasks multimodal analysis shows clear advantage over separate classification of the modalities. Another part of the work investigates cross-modal pattern analysis and audio-based video editing. This study examines the feasibility of automatically timing shot cuts of multi-camera concert recordings according to music-related cutting patterns learnt from professional concert videos. Cut timing is a crucial part of automated creation of multicamera mashups, where shots from multiple recording devices from a common event are alternated with the aim at mimicing a professionally produced video. In the framework, separate statistical models are formed for typical patterns of beat-quantized cuts in short segments, differences in beats between consecutive cuts, and relative deviation of cuts from exact beat times. Based on music meter and audio change point analysis of a new",
"title": ""
},
{
"docid": "19acedd03589d1fd1173dd1565d11baf",
"text": "This is the first report on the microbial diversity of xaj-pitha, a rice wine fermentation starter culture through a metagenomics approach involving Illumine-based whole genome shotgun (WGS) sequencing method. Metagenomic DNA was extracted from rice wine starter culture concocted by Ahom community of Assam and analyzed using a MiSeq® System. A total of 2,78,231 contigs, with an average read length of 640.13 bp, were obtained. Data obtained from the use of several taxonomic profiling tools were compared with previously reported microbial diversity studies through the culture-dependent and culture-independent method. The microbial community revealed the existence of amylase producers, such as Rhizopus delemar, Mucor circinelloides, and Aspergillus sp. Ethanol producers viz., Meyerozyma guilliermondii, Wickerhamomyces ciferrii, Saccharomyces cerevisiae, Candida glabrata, Debaryomyces hansenii, Ogataea parapolymorpha, and Dekkera bruxellensis, were found associated with the starter culture along with a diverse range of opportunistic contaminants. The bacterial microflora was dominated by lactic acid bacteria (LAB). The most frequent occurring LAB was Lactobacillus plantarum, Lactobacillus brevis, Leuconostoc lactis, Weissella cibaria, Lactococcus lactis, Weissella para mesenteroides, Leuconostoc pseudomesenteroides, etc. Our study provided a comprehensive picture of microbial diversity associated with rice wine fermentation starter and indicated the superiority of metagenomic sequencing over previously used techniques.",
"title": ""
},
{
"docid": "8055b2c65d5774000fe4fa81ff83efb7",
"text": "Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charged-coupled device ( C C D ) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras. Index T e m s C C D cameras, computer vision, camera calibration, noise estimation, reflectance variation, sensor modeling.",
"title": ""
},
{
"docid": "afcde1fb33c3e36f35890db09c548a1f",
"text": "Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold; to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an “advanced risk analysis system” that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content. In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.",
"title": ""
},
{
"docid": "e573d85271e3f3cc54b774de8a5c6dd9",
"text": "This paper explores the use of a learned classifier for post-OCR text correction. Experiments with the Arabic language show that this approach, which integrates a weighted confusion matrix and a shallow language model, improves the vast majority of segmentation and recognition errors, the most frequent types of error on our dataset.",
"title": ""
}
] |
scidocsrr
|
aaa68f7ae2a7fb025c06dccd69d00a96
|
Relaying Protocols for Wireless Energy Harvesting and Information Processing
|
[
{
"docid": "2b540b2e48d5c381e233cb71c0cf36fe",
"text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.",
"title": ""
},
{
"docid": "8836fddeb496972fa38005fd2f8a4ed4",
"text": "Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.",
"title": ""
},
{
"docid": "c23dc5fdb8c2d3b7314d895bbcb13832",
"text": "Wireless power transfer (WPT) is a promising new solution to provide convenient and perpetual energy supplies to wireless networks. In practice, WPT is implementable by various technologies such as inductive coupling, magnetic resonate coupling, and electromagnetic (EM) radiation, for short-/mid-/long-range applications, respectively. In this paper, we consider the EM or radio signal enabled WPT in particular. Since radio signals can carry energy as well as information at the same time, a unified study on simultaneous wireless information and power transfer (SWIPT) is pursued. Specifically, this paper studies a multiple-input multiple-output (MIMO) wireless broadcast system consisting of three nodes, where one receiver harvests energy and another receiver decodes information separately from the signals sent by a common transmitter, and all the transmitter and receivers may be equipped with multiple antennas. Two scenarios are examined, in which the information receiver and energy receiver are separated and see different MIMO channels from the transmitter, or co-located and see the identical MIMO channel from the transmitter. For the case of separated receivers, we derive the optimal transmission strategy to achieve different tradeoffs for maximal information rate versus energy transfer, which are characterized by the boundary of a so-called rate-energy (R-E) region. For the case of co-located receivers, we show an outer bound for the achievable R-E region due to the potential limitation that practical energy harvesting receivers are not yet able to decode information directly. Under this constraint, we investigate two practical designs for the co-located receiver case, namely time switching and power splitting, and characterize their achievable R-E regions in comparison to the outer bound.",
"title": ""
}
] |
[
{
"docid": "d90d40a59f91b59bd63a3c52a8d715a4",
"text": "The paradigm shift from planar (two dimensional (2D)) to vertical (three-dimensional (3D)) models has placed the NAND flash technology on the verge of a design evolution that can handle the demands of next-generation storage applications. However, it also introduces challenges that may obstruct the realization of such 3D NAND flash. Specifically, we observed that the fast threshold drift (fast-drift) in a charge-trap flash-based 3D NAND cell can make it lose a critical fraction of the stored charge relatively soon after programming and generate errors.\n In this work, we first present an elastic read reference (VRef) scheme (ERR) for reducing such errors in ReveNAND—our fast-drift aware 3D NAND design. To address the inherent limitation of the adaptive VRef, we introduce a new intra-block page organization (hitch-hike) that can enable stronger error correction for the error-prone pages. In addition, we propose a novel reinforcement-learning-based smart data refill scheme (iRefill) to counter the impact of fast-drift with minimum performance and hardware overhead. Finally, we present the first analytic model to characterize fast-drift and evaluate its system-level impact. Our results show that, compared to conventional 3D NAND design, our ReveNAND can reduce fast-drift errors by 87%, on average, and can lower the ECC latency and energy overheads by 13× and 10×, respectively.",
"title": ""
},
{
"docid": "4f3d2b869322125a8fad8a39726c99f8",
"text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).",
"title": ""
},
{
"docid": "da694b74b3eaae46d15f589e1abef4b8",
"text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model), in simulating runoff and soil erosion in a 48 km watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R 1⁄4 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R 1⁄4 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R 1⁄4 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0e1 t ha 1 y ), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha 1 y . Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify ‘‘hot spots’’ on the landscape. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1e4daa242bfee88914b084a1feb43212",
"text": "In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.",
"title": ""
},
{
"docid": "7b82678399bf90fd3b08e85f5a3fc39d",
"text": "Language and vision provide complementary information. Integrating both modalities in a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple and effective method that learns a language-to-vision mapping and uses its output visual predictions to build multimodal representations. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently reconstructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped (or imagined) vectors not only help to fuse multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more human-like judgments. Ultimately, the present work sheds light on fundamental questions of natural language understanding concerning the fusion of vision and language such as the plausibility of more associative and reconstructive approaches.",
"title": ""
},
{
"docid": "1884e92beb10bb653af5b8efa967e92d",
"text": "Presents an overview of current design techniques for operational amplifiers implemented in CMOS and NMOS technology at a tutorial level. Primary emphasis is placed on CMOS amplifiers because of their more widespread use. Factors affecting voltage gain, input noise, offsets, common mode and power supply rejection, power dissipation, and transient response are considered for the traditional bipolar-derived two-stage architecture. Alternative circuit approaches for optimization of particular performance aspects are summarized, and examples are given.",
"title": ""
},
{
"docid": "ece965df2822fa177a87bb1d41405d52",
"text": "Sexual murders and sexual serial killers have always been of popular interest with the public. Professionals are still mystified as to why sexual killers commit the “ultimate crime” of both sexual assault and homicide. Questions emerge as to why some sexual offenders kill one time vs in a serial manner. It is understood that the vast majority of sexual offenders such as pedophiles and adult rapists do NOT kill their victims. The purpose of this chapter is to explore serial sexual murder in terms of both theoretical and clinical parameters in an attempt to understand why they commit the “ultimate crime.” We will also examine the similarities and differences between serial sexual murderers and typical rape offenders who do not kill their victims. Using real-life examples of wellknown serial killers, we will compare the “theoretical” with the “practical;” what happened, why it happened, and what we may be able to do about it. The authors of this chapter present two perspectives: (1) A developmental motivational view as to why serial killers commit these homicides, and (2) Implications for treatment of violent offenders. To adequately present these perspectives, we must look at four distinct areas: (1) Differentiating between the two types of “lust” murderers i.e. rapists and sexual serial killers, (2) Examining personality or lifestyle themes, (3) Exploration of the mind-body developmental process, and (4) treatment applications for violent offenders.",
"title": ""
},
{
"docid": "f9a3f69cf26b279fa8600fd2ebbc3426",
"text": "We introduce Interactive Question Answering (IQA), the task of answering questions that require an autonomous agent to interact with a dynamic visual environment. IQA presents the agent with a scene and a question, like: \"Are there any apples in the fridge?\" The agent must navigate around the scene, acquire visual understanding of scene elements, interact with objects (e.g. open refrigerators) and plan for a series of actions conditioned on the question. Popular reinforcement learning approaches with a single controller perform poorly on IQA owing to the large and diverse state space. We propose the Hierarchical Interactive Memory Network (HIMN), consisting of a factorized set of controllers, allowing the system to operate at multiple levels of temporal abstraction. To evaluate HIMN, we introduce IQUAD V1, a new dataset built upon AI2-THOR [35], a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQUAD V1 has 75,000 questions, each paired with a unique scene configuration. Our experiments show that our proposed model outperforms popular single controller based methods on IQUAD V1. For sample questions and results, please view our video: https://youtu.be/pXd3C-1jr98.",
"title": ""
},
{
"docid": "a4aab340255c068137d3b3a1daaf97b5",
"text": "We present here SEMILAR, a SEMantic simILARity toolkit. SEMILAR implements a number of algorithms for assessing the semantic similarity between two texts. It is available as a Java library and as a Java standalone application offering GUI-based access to the implemented semantic similarity methods. Furthermore, it offers facilities for manual semantic similarity annotation by experts through its component SEMILAT (a SEMantic simILarity Annotation Tool).",
"title": ""
},
{
"docid": "6ccdfa4cc3bfbb8bf8f488aaf0c0fc1e",
"text": "Strings are ubiquitous in computer systems and hence string processing has attracted extensive research effort from computer scientists in diverse areas. One of the most important problems in string processing is to efficiently evaluate the similarity between two strings based on a specified similarity measure. String similarity search is a fundamental problem in information retrieval, database cleaning, biological sequence analysis, and more. While a large number of dissimilarity measures on strings have been proposed, edit distance is the most popular choice in a wide spectrum of applications. Existing indexing techniques for similarity search queries based on edit distance, e.g., approximate selection and join queries, rely mostly on n-gram signatures coupled with inverted list structures. These techniques are tailored for specific query types only, and their performance remains unsatisfactory especially in scenarios with strict memory constraints or frequent data updates. In this paper\n we propose the Bed-tree, a B+-tree based index structure for evaluating all types of similarity queries on edit distance and normalized edit distance. We identify the necessary properties of a mapping from the string space to the integer space for supporting searching and pruning for these queries. Three transformations are proposed that capture different aspects of information inherent in strings, enabling efficient pruning during the search process on the tree. Compared to state-of-the-art methods on string similarity search, the Bed-tree is a complete solution that meets the requirements of all applications, providing high scalability and fast response time.",
"title": ""
},
{
"docid": "56bd18820903da1917ca5d194b520413",
"text": "The problem of identifying subtle time-space clustering of dis ease, as may be occurring in leukemia, is described and reviewed. Published approaches, generally associated with studies of leuke mia, not dependent on knowledge of the underlying population for their validity, are directed towards identifying clustering by establishing a relationship between the temporal and the spatial separations for the n(n —l)/2 possible pairs which can be formed from the n observed cases of disease. Here it is proposed that statistical power can be improved by applying a reciprocal trans form to these separations. While a permutational approach can give valid probability levels for any observed association, for reasons of practicability, it is suggested that the observed associa tion be tested relative to its permutational variance. Formulas and computational procedures for doing so are given. While the distance measures between points represent sym metric relationships subject to mathematical and geometric regu larities, the variance formula developed is appropriate for ar bitrary relationships. Simplified procedures are given for the ease of symmetric and skew-symmetric relationships. The general pro cedure is indicated as being potentially useful in other situations as, for example, the study of interpersonal relationships. Viewing the procedure as a regression approach, the possibility for extend ing it to nonlinear and mult ¡variatesituations is suggested. Other aspects of the problem and of the procedure developed are discussed.",
"title": ""
},
{
"docid": "a354949d97de673e71510618a604e264",
"text": "Fast Magnetic Resonance Imaging (MRI) is highly in demand for many clinical applications in order to reduce the scanning cost and improve the patient experience. This can also potentially increase the image quality by reducing the motion artefacts and contrast washout. However, once an image field of view and the desired resolution are chosen, the minimum scanning time is normally determined by the requirement of acquiring sufficient raw data to meet the Nyquist–Shannon sampling criteria. Compressive Sensing (CS) theory has been perfectly matched to the MRI scanning sequence design with much less required raw data for the image reconstruction. Inspired by recent advances in deep learning for solving various inverse problems, we propose a conditional Generative Adversarial Networks-based deep learning framework for de-aliasing and reconstructing MRI images from highly undersampled data with great promise to accelerate the data acquisition process. By coupling an innovative content loss with the adversarial loss our de-aliasing results are more realistic. Furthermore, we propose a refinement learning procedure for training the generator network, which can stabilise the training with fast convergence and less parameter tuning. We demonstrate that the proposed framework outperforms state-of-the-art CS-MRI methods, in terms of reconstruction error and perceptual image quality. In addition, our method can reconstruct each image in 0.22ms–0.37ms, which is promising for real-time applications.",
"title": ""
},
{
"docid": "abd57e2ae1e88332c0076a4ec2da167b",
"text": "In the last decade, developers have been increasingly sharing their questions with each other through Question and Answer (Q&A) websites. As a result, these websites have become valuable knowledge repositories, covering a wealth of topics related to particular programming languages. This knowledge is even more useful as the developer community evaluates both questions and answers through a voting mechanism. As votes accumulate, the developer community recognizes reputed members and further trusts their answers. In this paper, we analyze the community's questions and answers to determine the developers' personality traits, using the Linguistic Inquiry and Word Count (LIWC). We explore the personality traits of Stack Overflow authors by categorizing them into different categories based on their reputation. Through textual analysis of Stack Overflow posts, we found that the top reputed authors are more extroverted compared to medium and low reputed users. Moreover, authors of up-voted posts express significantly less negative emotions than authors of down-voted posts.",
"title": ""
},
{
"docid": "a2dfa8007b3a13da31a768fe07393d15",
"text": "Predicting the time and effort for a software problem has long been a difficult task. We present an approach that automatically predicts the fixing effort, i.e., the person-hours spent on fixing an issue. Our technique leverages existing issue tracking systems: given a new issue report, we use the Lucene framework to search for similar, earlier reports and use their average time as a prediction. Our approach thus allows for early effort estimation, helping in assigning issues and scheduling stable releases. We evaluated our approach using effort data from the JBoss project. Given a sufficient number of issues reports, our automatic predictions are close to the actual effort; for issues that are bugs, we are off by only one hour, beating na¨ýve predictions by a factor of four.",
"title": ""
},
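A rough sketch of the nearest-reports idea from the abstract above: predict the effort of a new issue as the average effort of its most similar earlier reports. A TF-IDF cosine search (scikit-learn) stands in for the Lucene retrieval the authors use, and the issue texts and hour values are invented examples.

```python
# Sketch: predict fixing effort for a new issue as the mean effort of the k most
# similar historical reports (TF-IDF cosine similarity used instead of Lucene).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical historical issue reports and their known fixing effort (hours).
reports = [
    "NullPointerException when saving user profile",
    "Deployment fails with out of memory error on startup",
    "UI button label truncated in settings dialog",
    "Crash on saving profile with empty email field",
]
effort_hours = np.array([3.0, 8.0, 1.0, 4.0])

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(reports)

def predict_effort(new_report, k=2):
    sims = cosine_similarity(vectorizer.transform([new_report]), matrix).ravel()
    top_k = sims.argsort()[::-1][:k]          # indices of the k most similar reports
    return effort_hours[top_k].mean()

print(predict_effort("Exception thrown while saving the user profile"))
```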
{
"docid": "2a0577aa61ca1cbde207306fdb5beb08",
"text": "In recent years, researchers have shown that unwanted web tracking is on the rise, as advertisers are trying to capitalize on users' online activity, using increasingly intrusive and sophisticated techniques. Among these, browser fingerprinting has received the most attention since it allows trackers to uniquely identify users despite the clearing of cookies and the use of a browser's private mode. In this paper, we investigate and quantify the fingerprintability of browser extensions, such as, AdBlock and Ghostery. We show that an extension's organic activity in a page's DOM can be used to infer its presence, and develop XHound, the first fully automated system for fingerprinting browser extensions. By applying XHound to the 10,000 most popular Google Chrome extensions, we find that a significant fraction of popular browser extensions are fingerprintable and could thus be used to supplement existing fingerprinting methods. Moreover, by surveying the installed extensions of 854 users, we discover that many users tend to install different sets of fingerprintable browser extensions and could thus be uniquely, or near-uniquely identifiable by extension-based fingerprinting. We use XHound's results to build a proof-of-concept extension-fingerprinting script and show that trackers can fingerprint tens of extensions in just a few seconds. Finally, we describe why the fingerprinting of extensions is more intrusive than the fingerprinting of other browser and system properties, and sketch two different approaches towards defending against extension-based fingerprinting.",
"title": ""
},
{
"docid": "0496af98bbef3d4d6f5e7a67e9ef5508",
"text": "Cancer is second only to heart disease as a cause of death in the US, with a further negative economic impact on society. Over the past decade, details have emerged which suggest that different glycosylphosphatidylinositol (GPI)-anchored proteins are fundamentally involved in a range of cancers. This post-translational glycolipid modification is introduced into proteins via the action of the enzyme GPI transamidase (GPI-T). In 2004, PIG-U, one of the subunits of GPI-T, was identified as an oncogene in bladder cancer, offering a direct connection between GPI-T and cancer. GPI-T is a membrane-bound, multi-subunit enzyme that is poorly understood, due to its structural complexity and membrane solubility. This review is divided into three sections. First, we describe our current understanding of GPI-T, including what is known about each subunit and their roles in the GPI-T reaction. Next, we review the literature connecting GPI-T to different cancers with an emphasis on the variations in GPI-T subunit over-expression. Finally, we discuss some of the GPI-anchored proteins known to be involved in cancer onset and progression and that serve as potential biomarkers for disease-selective therapies. Given that functions for only one of GPI-T's subunits have been robustly assigned, the separation between healthy and malignant GPI-T activity is poorly defined.",
"title": ""
},
{
"docid": "fb63c9a8bc15bc7dc490b316d35c24e5",
"text": "Our objective was to document variations in the topography of pelvic floor nerves (PFN) and describe a nerve-free zone adjacent to the sacrospinous ligament (SSL). Pelvic floor dissections were performed on 15 female cadavers. The course of the PFN was described in relation to the ischial spine (IS) and the SSL. The pudendal nerve (PN) passed medial to the IS and posterior to the SSL at a mean distance of 0.6 cm (SD = ±0.4) in 80% of cadavers. In 40% of cadavers, an inferior rectal nerve (IRN) variant pierced the SSL at a distance of 1.9 cm (SD = ±0.7) medial to the IS. The levator ani nerve (LAN), coursed over the superior surface of the SSL–coccygeus muscle complex at a mean distance of 2.5 cm (SD = ±0.7) medial to the IS. Anatomic variations were found which challenge the classic description of PFN. A nerve-free zone is situated in the medial third of the SSL.",
"title": ""
},
{
"docid": "93c84b6abfe30ff7355e4efc310b440b",
"text": "Parallel file systems (PFS) are widely-used in modern computing systems to mask the ever-increasing performance gap between computing and data access. PFSs favor large requests, and do not work well for small requests, especially small random requests. Newer Solid State Drives (SSD) have excellent performance on small random data accesses, but also incur a high monetary cost. In this study, we propose a hybrid architecture named the Smart Selective SSD Cache (S4D-Cache), which employs a small set of SSD-based file servers as a selective cache of conventional HDD-based file servers. A novel scheme is introduced to identify performance-critical data, and conduct selective cache admission to fully utilize the hybrid architecture in terms of data-access parallelism and randomness. We have implemented an S4D-Cache under the MPI-IO and PVFS2 parallel file system. Our experiments show that S4D-Cache can significantly improve I/O throughput, and is a promising approach for parallel applications.",
"title": ""
},
{
"docid": "aabae18789f9aab997ea7e1a92497de7",
"text": "We develop, in this paper, a representation of time and events that supports a range of reasoning tasks such as monitoring and detection of event patterns which may facilitate the explanation of root cause(s) of faults. We shall compare two approaches to event definition: the active database approach in which events are defined in terms of the conditions for their detection at an instant, and the knowledge representation approach in which events are defined in terms of the conditions for their occurrence over an interval. We shall show the shortcomings of the former definition and employ a three-valued temporal first order nonmonotonic logic, extended with events, in order to integrate both definitions.",
"title": ""
}
] |
scidocsrr
|
4dc5046c59ea36a55c7ea33e79e9e8f7
|
Voice Impersonation Using Generative Adversarial Networks
|
[
{
"docid": "81a1c561f60f281187ec6ae4c9f42129",
"text": "In this paper, we describe a novel spectral conversion method for voice conversion (VC). A Gaussian mixture model (GMM) of the joint probability density of source and target features is employed for performing spectral conversion between speakers. The conventional method converts spectral parameters frame by frame based on the minimum mean square error. Although it is reasonably effective, the deterioration of speech quality is caused by some problems: 1) appropriate spectral movements are not always caused by the frame-based conversion process, and 2) the converted spectra are excessively smoothed by statistical modeling. In order to address those problems, we propose a conversion method based on the maximum-likelihood estimation of a spectral parameter trajectory. Not only static but also dynamic feature statistics are used for realizing the appropriate converted spectrum sequence. Moreover, the oversmoothing effect is alleviated by considering a global variance feature of the converted spectra. Experimental results indicate that the performance of VC can be dramatically improved by the proposed method in view of both speech quality and conversion accuracy for speaker individuality.",
"title": ""
},
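A compact sketch of the conventional frame-by-frame joint-density GMM conversion that the abstract above takes as its starting point (the trajectory-based and global-variance refinements it proposes are not reproduced here). The feature dimensionality and the synthetic "source"/"target" frames are arbitrary placeholders.

```python
# Sketch of baseline joint-density GMM spectral conversion: fit a GMM on stacked
# [source; target] frames, then map a source frame to the weighted conditional
# mean E[y | x] (the minimum-mean-square-error estimate the abstract improves on).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
d = 4                                          # toy spectral-feature dimension
x = rng.normal(size=(500, d))                  # synthetic "source speaker" frames
y = 0.8 * x + 0.1 * rng.normal(size=(500, d))  # synthetic aligned "target" frames

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(np.hstack([x, y]))                     # joint density p(x, y)

def convert(frame):
    """MMSE conversion of one source frame using the joint GMM."""
    # Posterior responsibility of each mixture component given the source frame.
    resp = np.zeros(gmm.n_components)
    for m in range(gmm.n_components):
        mu_x = gmm.means_[m, :d]
        cov_xx = gmm.covariances_[m][:d, :d]
        resp[m] = gmm.weights_[m] * multivariate_normal.pdf(frame, mu_x, cov_xx)
    resp /= resp.sum()
    # Weighted conditional mean E[y | x, m] = mu_y + Cov_yx Cov_xx^{-1} (x - mu_x).
    out = np.zeros(d)
    for m in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[m, :d], gmm.means_[m, d:]
        cov = gmm.covariances_[m]
        cov_xx, cov_yx = cov[:d, :d], cov[d:, :d]
        out += resp[m] * (mu_y + cov_yx @ np.linalg.solve(cov_xx, frame - mu_x))
    return out

print(convert(x[0]))
```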
{
"docid": "708fbc1eff4d96da2f3adaa403db3090",
"text": "We propose a new system for generating art. The system generates art by looking at art and learning about style; and becomes creative by increasing the arousal potential of the generated art by deviating from the learned styles. We build over Generative Adversarial Networks (GAN), which have shown the ability to learn to generate novel images simulating a given distribution. We argue that such networks are limited in their ability to generate creative products in their original design. We propose modifications to its objective to make it capable of generating creative art by maximizing deviation from established styles and minimizing deviation from art distribution. We conducted experiments to compare the response of human subjects to the generated art with their response to art created by artists. The results show that human subjects could not distinguish art generated by the proposed system from art generated by contemporary artists and shown in top art",
"title": ""
}
] |
[
{
"docid": "4186e2c50355516bf8860a7fea4415cc",
"text": "Performing approximate data matching has always been an intriguing problem for both industry and academia. This task becomes even more challenging when the requirement of data privacy rises. In this paper, we propose a novel technique to address the problem of efficient privacy-preserving approximate record linkage. The secure framework we propose consists of two basic components. First, we utilize a secure blocking component based on phonetic algorithms statistically enhanced to improve security. Second, we use a secure matching component where actual approximate matching is performed using a novel private approach of the Levenshtein Distance algorithm. Our goal is to combine the speed of private blocking with the increased accuracy of approximate secure matching. Category: Ubiquitous computing; Security and privacy",
"title": ""
},
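A toy sketch of the two-stage structure described in the abstract above, phonetic blocking followed by edit-distance matching. It uses plain Soundex and Levenshtein implementations without any of the privacy-preserving machinery the paper adds, and the record names are invented.

```python
# Toy two-stage record linkage: Soundex blocking on surnames, then Levenshtein
# matching within each block (privacy-preserving variants omitted).
from collections import defaultdict

def soundex(name):
    """Classic 4-character Soundex code used for phonetic blocking."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    out, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "HW":          # H and W do not reset the previous code
            prev = code
    return (out + "000")[:4]

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

records_a = ["Jon Smith", "Maria Garcia", "Robert Brown"]
records_b = ["John Smyth", "Mariah Garcia", "Roberta Braun"]

blocks = defaultdict(list)
for r in records_b:
    blocks[soundex(r.split()[-1])].append(r)   # block on how the surname sounds

for r in records_a:
    for candidate in blocks.get(soundex(r.split()[-1]), []):
        if levenshtein(r.lower(), candidate.lower()) <= 3:
            print(f"match: {r!r} ~ {candidate!r}")
```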
{
"docid": "2710599258f440d27efe958ed2cfb576",
"text": "In this paper, we present an evaluation of learning algorithms of a novel rule evaluation support method for postprocessing of mined results with rule evaluation models based on objective indices. Post-processing of mined results is one of the key processes in a data mining process. However, it is difficult for human experts to completely evaluate several thousands of rules from a large dataset with noises. To reduce the costs in such rule evaluation task, we have developed the rule evaluation support method with rule evaluation models, which learn from objective indices for mined classification rules and evaluations by a human expert for each rule. To enhance adaptability of rule evaluation models, we introduced a constructive meta-learning system to choose proper learning algorithms. Then, we have done the case study on the meningitis data mining as an actual problem",
"title": ""
},
{
"docid": "481bcce9e339ae72e7c1908692f01e3c",
"text": "Elastic deformation of the track effects on EMS maglev vehicles safety and comfort, brought challenges to the design of the suspension control system. In order to solve this problem, a nonlinear dynamic model of single magnet EMS maglev vehicles is built, discussed the model reference adaptive control method in the application of maglev vehicles suspension control; designed the controller based on Lyapunov stability theory. A numerical simulation is performed with MATLAB, and the effectiveness of the control method was verified.",
"title": ""
},
{
"docid": "f14e31f4aac9abb197d102d6fe235eaf",
"text": "In research of time series forecasting, a lot of uncertainty is still related to the task of selecting an appropriate forecasting method for a problem. It is not only the individual algorithms that are available in great quantities; combination approaches have been equally popular in the last decades. Alone the question of whether to choose the most promising individual method or a combination is not straightforward to answer. Usually, expert knowledge is needed to make an informed decision, however, in many cases this is not feasible due to lack of resources like time, money and manpower. This work identifies an extensive feature set describing both the time series and the pool of individual forecasting methods. The applicability of different meta-learning approaches are investigated, first to gain knowledge on which model works best in which situation, later to improve forecasting performance. Results show the superiority of a rankingbased combination of methods over simple model selection approaches.",
"title": ""
},
{
"docid": "c1694750a148296c8b907eb6d1a86074",
"text": "A field experiment was carried out to implement a remote sensing energy balance (RSEB) algorithm for estimating the incoming solar radiation (Rsi), net radiation (Rn), sensible heat flux (H), soil heat flux (G) and latent heat flux (LE) over a drip-irrigated olive (cv. Arbequina) orchard located in the Pencahue Valley, Maule Region, Chile (35 ̋251S; 71 ̋441W; 90 m above sea level). For this study, a helicopter-based unmanned aerial vehicle (UAV) was equipped with multispectral and infrared thermal cameras to obtain simultaneously the normalized difference vegetation index (NDVI) and surface temperature (Tsurface) at very high resolution (6 cm ˆ 6 cm). Meteorological variables and surface energy balance components were measured at the time of the UAV overpass (near solar noon). The performance of the RSEB algorithm was evaluated using measurements of H and LE obtained from an eddy correlation system. In addition, estimated values of Rsi and Rn were compared with ground-truth measurements from a four-way net radiometer while those of G were compared with soil heat flux based on flux plates. Results indicated that RSEB algorithm estimated LE and H with errors of 7% and 5%, respectively. Values of the root mean squared error (RMSE) and mean absolute error (MAE) for LE were 50 and 43 W m ́2 while those for H were 56 and 46 W m ́2, respectively. Finally, the RSEB algorithm computed Rsi, Rn and G with error less than 5% and with values of RMSE and MAE less than 38 W m ́2. Results demonstrated that multispectral and thermal cameras placed on an UAV could provide an excellent tool to evaluate the intra-orchard spatial variability of Rn, G, H, LE, NDVI and Tsurface over the tree canopy and soil surface between rows.",
"title": ""
},
{
"docid": "e903c6b579037d023f5c5bf1b64cc32c",
"text": "In order to increase the fault tolerance of a motor drive, multiphase systems are adopted. Since custom solutions are expensive, machines with dual three-phase windings supplied by two parallel converters seem to be more convenient. In the event of a fault, one of the two three-phase windings (the faulty winding) is disconnected, and the motor is operated by means of the healthy winding only. A fractional-slot permanent-magnet (PM) motor with 12 slots and 10 poles is considered with two different rotor topologies: the interior PM (IPM) rotor and the surface-mounted PM rotor. Various winding configurations of dual three-phase windings are taken into account, comparing average torque, torque ripple, mutual coupling among phases, overload capability, and short-circuit behavior. Considerations are given regarding the winding arrangements so as to avoid excessive torque ripple and unbalanced radial forces in faulty operating conditions. An IPM motor prototype has been built, and experimental results are carried out in order to verify the numerical predictions.",
"title": ""
},
{
"docid": "7fa92e07f76bcefc639ae807147b8d7b",
"text": "We present a novel method for discovering parallel sentences in comparable, non-parallel corpora. We train a maximum entropy classifier that, given a pair of sentences, can reliably determine whether or not they are translations of each other. Using this approach, we extract parallel data from large Chinese, Arabic, and English non-parallel newspaper corpora. We evaluate the quality of the extracted data by showing that it improves the performance of a state-of-the-art statistical machine translation system. We also show that a good-quality MT system can be built from scratch by starting with a very small parallel corpus (100,000 words) and exploiting a large non-parallel corpus. Thus, our method can be applied with great benefit to language pairs for which only scarce resources are available.",
"title": ""
},
{
"docid": "51f50e82a592be23c522b1c8500663f3",
"text": "Interpretable machine learning tackles the important problem that humans cannot understand the behaviors of complex machine learning models and how these classifiers arrive at a particular decision. Although many approaches have been proposed, a comprehensive understanding of the achievements and challenges is still lacking. This paper provides a survey covering existing techniques and methods to increase the interpretability of machine learning models and also discusses the crucial issues to consider in future work such as interpretation design principles and evaluation metrics in order to push forward the area of interpretable machine learning.",
"title": ""
},
{
"docid": "19be2e9b4e97620b5f6422c45a3b43f6",
"text": "Human beings have developed a diverse food culture. Many factors like ingredients, visual appearance, courses (e.g., breakfast and lunch), flavor and geographical regions affect our food perception and choice. In this work, we focus on multi-dimensional food analysis based on these food factors to benefit various applications like summary and recommendation. For that solution, we propose a delicious recipe analysis framework to incorporate various types of continuous and discrete attribute features and multi-modal information from recipes. First, we develop a Multi-Attribute Theme Modeling (MATM) method, which can incorporate arbitrary types of attribute features to jointly model them and the textual content. We then utilize a multi-modal embedding method to build the correlation between the learned textual theme features from MATM and visual features from the deep learning network. By learning attribute-theme relations and multi-modal correlation, we are able to fulfill different applications, including (1) flavor analysis and comparison for better understanding the flavor patterns from different dimensions, such as the region and course, (2) region-oriented multi-dimensional food summary with both multi-modal and multi-attribute information and (3) multi-attribute oriented recipe recommendation. Furthermore, our proposed framework is flexible and enables easy incorporation of arbitrary types of attributes and modalities. Qualitative and quantitative evaluation results have validated the effectiveness of the proposed method and framework on the collected Yummly dataset.",
"title": ""
},
{
"docid": "5cd70dede0014f4a58c0dc8460ba8513",
"text": "In this paper the Model Predictive Control (MPC) strategy is used to solve the mobile robot trajectory tracking problem, where controller must ensure that robot follows pre-calculated trajectory. The so-called explicit optimal controller design and implementation are described. The MPC solution is calculated off-line and expressed as a piecewise affine function of the current state of a mobile robot. A linearized kinematic model of a differential drive mobile robot is used for the controller design purpose. The optimal controller, which has a form of a look-up table, is tested in simulation and experimentally.",
"title": ""
},
{
"docid": "f950b6c682948d1787bf17824a4a1d9f",
"text": "Historically, mailing lists have been the preferred means for coordinating development and user support activities. With the emergence and popularity growth of social Q&A sites such as the StackExchange network (e.g., StackOverflow), this is beginning to change. Such sites offer different socio-technical incentives to their participants than mailing lists do, e.g., rich web environments to store and manage content collaboratively, or a place to showcase their knowledge and expertise more vividly to peers or potential recruiters. A key difference between StackExchange and mailing lists is gamification, i.e., StackExchange participants compete to obtain reputation points and badges. In this paper, we use a case study of R (a widely-used tool for data analysis) to investigate how mailing list participation has evolved since the launch of StackExchange. Our main contribution is the assembly of a joint data set from the two sources, in which participants in both the texttt{r-help} mailing list and StackExchange are identifiable. This permits their activities to be linked across the two resources and also over time. With this data set we found that user support activities show a strong shift away from texttt{r-help}. In particular, mailing list experts are migrating to StackExchange, where their behaviour is different. First, participants active both on texttt{r-help} and on StackExchange are more active than those who focus exclusively on only one of the two. Second, they provide faster answers on StackExchange than on texttt{r-help}, suggesting they are motivated by the emph{gamified} environment. To our knowledge, our study is the first to directly chart the changes in behaviour of specific contributors as they migrate into gamified environments, and has important implications for knowledge management in software engineering.",
"title": ""
},
{
"docid": "90125582272e3f16a34d5d0c885f573a",
"text": "RNAs have been shown to undergo transfer between mammalian cells, although the mechanism behind this phenomenon and its overall importance to cell physiology is not well understood. Numerous publications have suggested that RNAs (microRNAs and incomplete mRNAs) undergo transfer via extracellular vesicles (e.g., exosomes). However, in contrast to a diffusion-based transfer mechanism, we find that full-length mRNAs undergo direct cell-cell transfer via cytoplasmic extensions characteristic of membrane nanotubes (mNTs), which connect donor and acceptor cells. By employing a simple coculture experimental model and using single-molecule imaging, we provide quantitative data showing that mRNAs are transferred between cells in contact. Examples of mRNAs that undergo transfer include those encoding GFP, mouse β-actin, and human Cyclin D1, BRCA1, MT2A, and HER2. We show that intercellular mRNA transfer occurs in all coculture models tested (e.g., between primary cells, immortalized cells, and in cocultures of immortalized human and murine cells). Rapid mRNA transfer is dependent upon actin but is independent of de novo protein synthesis and is modulated by stress conditions and gene-expression levels. Hence, this work supports the hypothesis that full-length mRNAs undergo transfer between cells through a refined structural connection. Importantly, unlike the transfer of miRNA or RNA fragments, this process of communication transfers genetic information that could potentially alter the acceptor cell proteome. This phenomenon may prove important for the proper development and functioning of tissues as well as for host-parasite or symbiotic interactions.",
"title": ""
},
{
"docid": "c2b111e9c4e408a6660a4e73a0286858",
"text": "Software-defined networking (SDN) has recently gained unprecedented attention from industry and research communities, and it seems unlikely that this will be attenuated in the near future. The ideas brought by SDN, although often described as a “revolutionary paradigm shift” in networking, are not completely new since they have their foundations in programmable networks and control-data plane separation projects. SDN promises simplified network management by enabling network automation, fostering innovation through programmability, and decreasing CAPEX and OPEX by reducing costs and power consumption. In this paper, we aim at analyzing and categorizing a number of relevant research works toward realizing SDN promises. We first provide an overview on SDN roots and then describe the architecture underlying SDN and its main components. Thereafter, we present existing SDN-related taxonomies and propose a taxonomy that classifies the reviewed research works and brings relevant research directions into focus. We dedicate the second part of this paper to studying and comparing the current SDN-related research initiatives and describe the main issues that may arise due to the adoption of SDN. Furthermore, we review several domains where the use of SDN shows promising results. We also summarize some foreseeable future research challenges.",
"title": ""
},
{
"docid": "cbe70e9372d1588f075d2037164b3077",
"text": "Regularization is one of the crucial ingredients of deep learning, yet the term regularization has various definitions, and regularization methods are often studied separately from each other. In our work we present a systematic, unifying taxonomy to categorize existing methods. We distinguish methods that affect data, network architectures, error terms, regularization terms, and optimization procedures. We do not provide all details about the listed methods; instead, we present an overview of how the methods can be sorted into meaningful categories and sub-categories. This helps revealing links and fundamental similarities between them. Finally, we include practical recommendations both for users and for developers of new regularization methods.",
"title": ""
},
{
"docid": "be5419a2175c5b21c8b7b1930a5a23f5",
"text": "Disambiguation to Wikipedia (D2W) is the task of linking mentions of concepts in text to their corresponding Wikipedia entries. Most previous work has focused on linking terms in formal texts (e.g. newswire) to Wikipedia. Linking terms in short informal texts (e.g. tweets) is difficult for systems and humans alike as they lack a rich disambiguation context. We first evaluate an existing Twitter dataset as well as the D2W task in general. We then test the effects of two tweet context expansion methods, based on tweet authorship and topic-based clustering, on a state-of-the-art D2W system and evaluate the results. TITLE AND ABSTRACT IN BASQUE Testuinguruaren Hedapenaren Analisia eta Hobekuntza Mikroblogak Wikifikatzeko Esanahia Wikipediarekiko Argitzea (D2W) deritzo testuetan aurkitutako kontzeptuen aipamenak Wikipedian dagozkien sarrerei lotzeari. Aurreko lan gehienek testu formalak (newswire, esate baterako) lotu dituzte Wikipediarekin. Testu informalak (tweet-ak, esate baterako) lotzea, ordea, zaila da bai sistementzat eta baita gizakiontzat ere, argipena erraztuko luketen testuingururik ez dutelako. Lehenik eta behin, Twitter-en gainean sortutako datu-sorta bat, eta D2W ataza bera ebaluatzen ditugu. Ondoren, egungo D2W sistema baten gainean testuingurua hedatzeko bi teknika aztertu eta ebaluatzen ditugu. Bi teknika hauek tweet-aren egilean eta gaikako multzokatze metodo batean oinarritzen dira.",
"title": ""
},
{
"docid": "11f895a889ec366745a9568c012d959b",
"text": "In undergoing this life, many people always try to do and get the best. New knowledge, experience, lesson, and everything that can improve the life will be done. However, many people sometimes feel confused to get those things. Feeling the limited of experience and sources to be better is one of the lacks to own. However, there is a very simple thing that can be done. This is what your teacher always manoeuvres you to do this one. Yeah, reading is the answer. Reading a book as this transaction processing concepts and techniques and other references can enrich your life quality. How can it be?",
"title": ""
},
{
"docid": "38499d78ab2b66f87e8314d75ff1c72f",
"text": "We investigated large-scale systems organization of the whole human brain using functional magnetic resonance imaging (fMRI) data acquired from healthy volunteers in a no-task or 'resting' state. Images were parcellated using a prior anatomical template, yielding regional mean time series for each of 90 regions (major cortical gyri and subcortical nuclei) in each subject. Significant pairwise functional connections, defined by the group mean inter-regional partial correlation matrix, were mostly either local and intrahemispheric or symmetrically interhemispheric. Low-frequency components in the time series subtended stronger inter-regional correlations than high-frequency components. Intrahemispheric connectivity was generally related to anatomical distance by an inverse square law; many symmetrical interhemispheric connections were stronger than predicted by the anatomical distance between bilaterally homologous regions. Strong interhemispheric connectivity was notably absent in data acquired from a single patient, minimally conscious following a brainstem lesion. Multivariate analysis by hierarchical clustering and multidimensional scaling consistently defined six major systems in healthy volunteers-- corresponding approximately to four neocortical lobes, medial temporal lobe and subcortical nuclei- - that could be further decomposed into anatomically and functionally plausible subsystems, e.g. dorsal and ventral divisions of occipital cortex. An undirected graph derived by thresholding the healthy group mean partial correlation matrix demonstrated local clustering or cliquishness of connectivity and short mean path length compatible with prior data on small world characteristics of non-human cortical anatomy. Functional MRI demonstrates a neurophysiological architecture of the normal human brain that is anatomically sensible, strongly symmetrical, disrupted by acute brain injury, subtended predominantly by low frequencies and consistent with a small world network topology.",
"title": ""
},
{
"docid": "98297ca2e4ae71f9e20daf96b248bc08",
"text": "The smart devices have been used in the most major domain like the healthcare, transportation, smart home, smart city and more. However, this technology has been exposed to many vulnerabilities, which may lead to cybercrime through the devices. With the IoT constraints and low-security mechanisms applied, the device could be easily been attacked, treated and exploited by cyber criminals where the smart devices could provide wrong data where it can lead to wrong interpretation and actuation to the legitimate users. To comply with the IoT characteristics, two approaches towards of having the investigation for IoT forensic is proposed by emphasizing the pre-investigation phase and implementing the real-time investigation to ensure the data and potential evidence is collected and preserved throughout the investigation.",
"title": ""
},
{
"docid": "d74c287c60b404961fc1775ddffc7d46",
"text": "Numerous dietary compounds, ubiquitous in fruits, vegetables and spices have been isolated and evaluated during recent years for their therapeutic potential. These compounds include flavonoid and non-flavonoid polyphenols, which describe beneficial effects against a variety of ailments. The notion that these plant products have health promoting effects emerged because their intake was related to a reduced incidence of cancer, cardiovascular, neurological, respiratory, and age-related diseases. Exposure of the body to a stressful environment challenges cell survival and increases the risk of chronic disease developing. The polyphenols afford protection against various stress-induced toxicities through modulating intercellular cascades which inhibit inflammatory molecule synthesis, the formation of free radicals, nuclear damage and induce antioxidant enzyme expression. These responses have the potential to increase life expectancy. The present review article focuses on curcumin, resveratrol, and flavonoids and seeks to summarize their anti-inflammatory, cytoprotective and DNA-protective properties.",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
}
] |
scidocsrr
|
477717b583d7b33aa37bdb9a169c2a01
|
Mutual Component Analysis for Heterogeneous Face Recognition
|
[
{
"docid": "08e03ec7a26e00c92f799dfb6c07174e",
"text": "Heterogeneous face recognition (HFR) involves matching two face images from alternate imaging modalities, such as an infrared image to a photograph or a sketch to a photograph. Accurate HFR systems are of great value in various applications (e.g., forensics and surveillance), where the gallery databases are populated with photographs (e.g., mug shot or passport photographs) but the probe images are often limited to some alternate modality. A generic HFR framework is proposed in which both probe and gallery images are represented in terms of nonlinear similarities to a collection of prototype face images. The prototype subjects (i.e., the training set) have an image in each modality (probe and gallery), and the similarity of an image is measured against the prototype images from the corresponding modality. The accuracy of this nonlinear prototype representation is improved by projecting the features into a linear discriminant subspace. Random sampling is introduced into the HFR framework to better handle challenges arising from the small sample size problem. The merits of the proposed approach, called prototype random subspace (P-RS), are demonstrated on four different heterogeneous scenarios: 1) near infrared (NIR) to photograph, 2) thermal to photograph, 3) viewed sketch to photograph, and 4) forensic sketch to photograph.",
"title": ""
},
{
"docid": "64f2091b23a82fae56751a78d433047c",
"text": "Aging variation poses a serious problem to automatic face recognition systems. Most of the face recognition studies that have addressed the aging problem are focused on age estimation or aging simulation. Designing an appropriate feature representation and an effective matching framework for age invariant face recognition remains an open problem. In this paper, we propose a discriminative model to address face matching in the presence of age variation. In this framework, we first represent each face by designing a densely sampled local feature description scheme, in which scale invariant feature transform (SIFT) and multi-scale local binary patterns (MLBP) serve as the local descriptors. By densely sampling the two kinds of local descriptors from the entire facial image, sufficient discriminatory information, including the distribution of the edge direction in the face image (that is expected to be age invariant) can be extracted for further analysis. Since both SIFT-based local features and MLBP-based local features span a high-dimensional feature space, to avoid the overfitting problem, we develop an algorithm, called multi-feature discriminant analysis (MFDA) to process these two local feature spaces in a unified framework. The MFDA is an extension and improvement of the LDA using multiple features combined with two different random sampling methods in feature and sample space. By random sampling the training set as well as the feature space, multiple LDA-based classifiers are constructed and then combined to generate a robust decision via a fusion rule. Experimental results show that our approach outperforms a state-of-the-art commercial face recognition engine on two public domain face aging data sets: MORPH and FG-NET. We also compare the performance of the proposed discriminative model with a generative aging model. A fusion of discriminative and generative models further improves the face matching accuracy in the presence of aging.",
"title": ""
},
{
"docid": "804cee969d47d912d8bdc40f3a3eeb32",
"text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracys when matching viewed sketches.",
"title": ""
},
{
"docid": "60cb22e89255e33d5f06ee90627731a7",
"text": "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many computer vision-related tasks. We propose the multispectral neural networks (MSNN) to learn features from multicolumn deep neural networks and embed the penultimate hierarchical discriminative manifolds into a compact representation. The low-dimensional embedding explores the complementary property of different views wherein the distribution of each view is sufficiently smooth and hence achieves robustness, given few labeled training data. Our experiments show that spectrally embedding several deep neural networks can explore the optimum output from the multicolumn networks and consistently decrease the error rate compared with a single deep network.",
"title": ""
}
] |
[
{
"docid": "c8d5a8d13d3cd9e150537bd8957a4512",
"text": "Classroom interactivity has a number of significant benefits: it promotes an active learning environment, provides greater feedback for lecturers, increases student motivation, and enables a learning community (Bishop, Dinkins, & Dominick, 2003; Mazur, 1998; McConnell et al., 2006). On the other hand, interactive activities for large classes (over 100 students) have proven to be quite difficult and, often, inefficient (Freeman & Blayney, 2005).",
"title": ""
},
{
"docid": "fa396377fbec310c9d4b9792cc66f9b9",
"text": "Attention-based deep learning model as a human-centered smart technology has become the state-of-the-art method in addressing relation extraction, while implementing natural language processing. How to effectively improve the computational performance of that model has always been a research focus in both academic and industrial communities. Generally, the structures of model would greatly affect the final results of relation extraction. In this article, a deep learning model with a novel structure is proposed. In our model, after incorporating the highway network into a bidirectional gated recurrent unit, the attention mechanism is additionally utilized in an effort to assign weights of key issues in the network structure. Here, the introduction of highway network could enable the proposed model to capture much more semantic information. Experiments on a popular benchmark data set are conducted, and the results demonstrate that the proposed model outperforms some existing relation extraction methods. Furthermore, the performance of our method is also tested in the analysis of geological data, where the relation extraction in Chinese geological field is addressed and a satisfactory display result is achieved.",
"title": ""
},
{
"docid": "9239ff0e4c8849498f4b8eaae6826d8e",
"text": "High employee turnover rate in Malaysia’s retail industry has become a major issue that needs to be addressed. This study determines the levels of job satisfaction, organizational commitment, and turnover intention of employees in a retail company in Malaysia. The relationships between job satisfaction and organizational commitment on turnover intention are also investigated. A questionnaire was developed using Job Descriptive Index, Organizational Commitment Questionnaire, and Lee and Mowday’s turnover intention items and data were collected from 62 respondents. The findings suggested that the respondents were moderately satisfied with job satisfaction facets such as promotion, work itself, co-workers, and supervisors but were unsatisfied with salary. They also had moderate commitment level with considerably high intention to leave the organization. All satisfaction facets (except for co-workers) and organizational commitment were significantly and negatively related to turnover intention. Based on the findings, retention strategies of retail employees were proposed. Keywords—Job satisfaction, organizational commitment, retail employees, turnover intention.",
"title": ""
},
{
"docid": "23ae026d482a0d4805cac3bb0762aed0",
"text": "Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this similarity hints at structure which has been conserved for some reason and may therefore be of interest. Since the formalism of time series motifs in 2002, dozens of researchers have used them for diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and entomological telemetry data mining.",
"title": ""
},
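A brief sketch of the brute-force quadratic motif search that the abstract above takes as its baseline; the exact algorithm with pruning introduced in the paper is not reproduced here. The subsequence length and the toy series with a planted repeated pattern are arbitrary choices.

```python
# Brute-force motif discovery: find the pair of non-overlapping subsequences of
# length m with the smallest z-normalized Euclidean distance (O(n^2) pair checks).
import numpy as np

def znorm(x):
    return (x - x.mean()) / (x.std() + 1e-8)

def brute_force_motif(series, m):
    n = len(series) - m + 1
    subs = [znorm(series[i:i + m]) for i in range(n)]
    best = (np.inf, None, None)
    for i in range(n):
        for j in range(i + m, n):          # skip trivial (overlapping) matches
            d = np.linalg.norm(subs[i] - subs[j])
            if d < best[0]:
                best = (d, i, j)
    return best

rng = np.random.default_rng(1)
ts = rng.normal(size=300)
ts[40:60] += np.sin(np.linspace(0, 3 * np.pi, 20))    # plant a repeated pattern
ts[200:220] += np.sin(np.linspace(0, 3 * np.pi, 20))

print(brute_force_motif(ts, m=20))   # expected to recover indices near 40 and 200
```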
{
"docid": "fe06ac2458e00c5447a255486189f1d1",
"text": "The design and control of robots from the perspective of human safety is desired. We propose a mechanical compliance control system as a new pneumatic arm control system. However, safety against collisions with obstacles in an unpredictable environment is difficult to insure in previous system. The main feature of the proposed system is that the two desired pressure values are calculated by using two other desired values, the end compliance of the arm and the end position and posture of the arm.",
"title": ""
},
{
"docid": "2515c04775dc0a1e1d96692da208c257",
"text": "We present a computational method for extracting simple descriptions of high dimensional data sets in the form of simplicial complexes. Our method, called Mapper, is based on the idea of partial clustering of the data guided by a set of functions defined on the data. The proposed method is not dependent on any particular clustering algorithm, i.e. any clustering algorithm may be used with Mapper. We implement this method and present a few sample applications in which simple descriptions of the data present important information about its structure.",
"title": ""
},
{
"docid": "c90b05657b7673257db617b62d0ed80c",
"text": "Automated tongue image segmentation, in Chinese medicine, is difficult due to two special factors: 1) there are many pathological details on the surface of the tongue, which have a large influence on edge extraction; 2) the shapes of the tongue bodies captured from various persons (with different diseases) are quite different, so they are impossible to describe properly using a predefined deformable template. To address these problems, in this paper, we propose an original technique that is based on a combination of a bi-elliptical deformable template (BEDT) and an active contour model, namely the bi-elliptical deformable contour (BEDC). The BEDT captures gross shape features by using the steepest decent method on its energy function in the parameter space. The BEDC is derived from the BEDT by substituting template forces for classical internal forces, and can deform to fit local details. Our algorithm features fully automatic interpretation of tongue images and a consistent combination of global and local controls via the template force. We apply the BEDC to a large set of clinical tongue images and present experimental results.",
"title": ""
},
{
"docid": "800aa2ecdf0a29c7fa7860c6b0618a6b",
"text": "This paper presents three topological classes of dc-to-dc converters, totaling nine converters (each class with three buck, boost, and buck-boost voltage transfer function topologies), which offer continuous input and output energy flow, applicable and mandatory for renewable energy source, maximum power point tracking and maximum source energy extraction. A current sourcing output caters for converter module output parallel connection. The first class of three topologies employs both series input and output inductance, while anomalously the other two classes of six related topologies employ only either series input (three topologies) or series output (three topologies) inductance. All nine converter topologies employ the same elements, while additional load shunting capacitance creates a voltage sourcing output. Converter time-domain simulations and experimental results for the converters support and extol the concepts and analysis presented.",
"title": ""
},
{
"docid": "fb67e237688deb31bd684c714a49dca5",
"text": "In order to mitigate investments, stock price forecasting has attracted more attention in recent years. Aiming at the discreteness, non-normality, high-noise in high-frequency data, a support vector machine regression (SVR) algorithm is introduced in this paper. However, the characteristics in different periods of the same stock, or the same periods of different stocks are significantly different. So, SVR with fixed parameters is difficult to satisfy with the constantly changing data flow. To tackle this problem, an adaptive SVR was proposed for stock data at three different time scales, including daily data, 30-min data, and 5-min data. Experiments show that the improved SVR with dynamic optimization of learning parameters by particle swarm optimization can get a better result than compared methods including SVR and back-propagation neural network.",
"title": ""
},
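A small sketch in the spirit of the abstract above: an SVR for one-step-ahead price prediction whose hyperparameters are re-tuned on each rolling window so the model adapts to the changing data flow. A grid search stands in for the particle swarm optimizer the authors use, and the price series, lag count and parameter grid are invented placeholders.

```python
# Sketch: rolling-window SVR for one-step-ahead prediction, re-tuning C and
# gamma on each window (grid search used here in place of PSO).
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 400)) + 100      # synthetic price series
lags = 5

def make_xy(p):
    """Build lagged feature matrix X and next-step targets y from a price window."""
    X = np.array([p[i:i + lags] for i in range(len(p) - lags)])
    return X, p[lags:]

window = 200
errors = []
for start in range(0, len(prices) - window - 1, 50):
    train = prices[start:start + window]
    X_train, y_train = make_xy(train)
    search = GridSearchCV(
        SVR(kernel="rbf"),
        {"C": [1, 10, 100], "gamma": [0.001, 0.01, 0.1]},
        cv=TimeSeriesSplit(n_splits=3),
    )
    search.fit(X_train, y_train)                     # re-tune parameters per window
    x_next = prices[start + window - lags:start + window].reshape(1, -1)
    pred = search.predict(x_next)[0]
    errors.append(abs(pred - prices[start + window]))

print(f"mean absolute one-step error: {np.mean(errors):.3f}")
```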
{
"docid": "4ba81ce5756f2311dde3fa438f81e527",
"text": "To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.",
"title": ""
},
{
"docid": "5473962c6c270df695b965cbcc567369",
"text": "Medical professionals need a reliable prediction methodology to diagnose cancer and distinguish between the different stages in cancer. Classification is a data mining function that assigns items in a collection to target groups or classes. C4.5 classification algorithm has been applied to SEER breast cancer dataset to classify patients into either “Carcinoma in situ” (beginning or pre-cancer stage) or “Malignant potential” group. Pre-processing techniques have been applied to prepare the raw dataset and identify the relevant attributes for classification. Random test samples have been selected from the pre-processed data to obtain classification rules. The rule set obtained was tested with the remaining data. The results are presented and discussed. Keywords— Breast Cancer Diagnosis, Classification, Clinical Data, SEER Dataset, C4.5 Algorithm",
"title": ""
},
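An illustrative sketch of the classification step described in the abstract above. scikit-learn's entropy-based decision tree is used as a stand-in for C4.5, and the feature columns and label rule are invented placeholders rather than the actual SEER attributes.

```python
# Sketch: train an entropy-based decision tree (a C4.5-like learner) to separate
# "carcinoma in situ" from "malignant potential" records, then print the rules.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical stand-ins for pre-processed SEER attributes.
n = 500
tumor_size_mm = rng.normal(25, 10, n).clip(1, 90)
positive_nodes = rng.poisson(2, n)
age = rng.normal(58, 12, n).clip(25, 95)
X = np.column_stack([tumor_size_mm, positive_nodes, age])
# Synthetic label rule purely for illustration: 1 = "malignant potential".
y = (tumor_size_mm + 5 * positive_nodes + rng.normal(0, 10, n) > 35).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(f"test accuracy: {tree.score(X_test, y_test):.2f}")
print(export_text(tree, feature_names=["tumor_size_mm", "positive_nodes", "age"]))
```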
{
"docid": "c6e6099599be3cd2d1d87c05635f4248",
"text": "PURPOSE\nThe Food Cravings Questionnaires are among the most often used measures for assessing the frequency and intensity of food craving experiences. However, there is a lack of studies that have examined specific cut-off scores that may indicate pathologically elevated levels of food cravings.\n\n\nMETHODS\nReceiver-Operating-Characteristic analysis was used to determine sensitivity and specificity of scores on the Food Cravings Questionnaire-Trait-reduced (FCQ-T-r) for discriminating between individuals with (n = 43) and without (n = 389) \"food addiction\" as assessed with the Yale Food Addiction Scale 2.0.\n\n\nRESULTS\nA cut-off score of 50 on the FCQ-T-r discriminated between individuals with and without \"food addiction\" with high sensitivity (85%) and specificity (93%).\n\n\nCONCLUSIONS\nFCQ-T-r scores of 50 and higher may indicate clinically relevant levels of trait food craving.\n\n\nLEVEL OF EVIDENCE\nLevel V, descriptive study.",
"title": ""
},
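A minimal sketch of the Receiver-Operating-Characteristic analysis reported in the abstract above, applied to simulated FCQ-T-r scores. The simulated score distributions and group sizes are invented; the snippet only illustrates how sensitivity and specificity at a cutoff of 50 would be computed.

```python
# Sketch: ROC-style analysis of a questionnaire score against a binary "food
# addiction" label, reporting sensitivity/specificity at the cutoff of 50.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Simulated trait-craving scores: a small "food addiction" group scoring higher.
scores_fa = rng.normal(65, 10, 43)       # hypothetical FA group
scores_no = rng.normal(35, 10, 389)      # hypothetical non-FA group
scores = np.concatenate([scores_fa, scores_no])
labels = np.concatenate([np.ones(43), np.zeros(389)])

cutoff = 50
pred = (scores >= cutoff).astype(int)
sensitivity = pred[labels == 1].mean()        # true-positive rate in the FA group
specificity = 1 - pred[labels == 0].mean()    # true-negative rate in the non-FA group
print(f"AUC = {roc_auc_score(labels, scores):.2f}, "
      f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```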
{
"docid": "8589ec481e78d14fbeb3e6e4205eee50",
"text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
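A simplified sketch of the Rotation Forest + AdaBoost combination named in the abstract above: each ensemble member rotates the feature space with PCAs fit on random disjoint feature subsets, then runs AdaBoost with decision trees on the rotated data. The dataset, subset counts and ensemble sizes are placeholders, and several Rotation Forest details (e.g., class subsampling) are omitted.

```python
# Simplified RotBoost-style ensemble: PCA-based feature rotation per member,
# AdaBoost with decision trees on the rotated data, majority vote at test time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def fit_rotation(X, n_subsets=3, rng=rng):
    """Block-diagonal rotation built from PCAs on disjoint random feature subsets."""
    d = X.shape[1]
    subsets = np.array_split(rng.permutation(d), n_subsets)
    R = np.zeros((d, d))
    for idx in subsets:
        # PCA fit on a 75% subsample of rows, restricted to this feature subset.
        rows = rng.choice(len(X), size=int(0.75 * len(X)), replace=False)
        pca = PCA().fit(X[rows][:, idx])
        R[np.ix_(idx, idx)] = pca.components_.T
    return R

members = []
for _ in range(10):
    R = fit_rotation(X_train)
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=20, random_state=0)
    clf.fit(X_train @ R, y_train)
    members.append((R, clf))

votes = np.mean([clf.predict(X_test @ R) for R, clf in members], axis=0)
accuracy = np.mean((votes >= 0.5).astype(int) == y_test)
print(f"RotBoost-style ensemble accuracy: {accuracy:.3f}")
```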
{
"docid": "ee8a54ee9cd0b3c9a57d8c5ae2b237c2",
"text": "Relatively little is known about how commodity consumption amongst African-Americans affirms issues of social organization within society. Moreover, the lack of primary documentation on the attitudes of African-American (A-A) commodity consumers contributes to the distorting image of A-A adolescents who actively engage in name-brand sneaker consumption; consequently maintaining the stigma of A-A adolescents being ‘addicted to brands’ (Chin, 2001). This qualitative study sought to employ the attitudes of African-Americans from an urban/metropolitan high school in dialogue on the subject of commodity consumption; while addressing the concepts of structure and agency with respect to name-brand sneaker consumption. Additionally, this study integrated three theoretical frameworks that were used to assess the participants’ engagement as consumers of name-brand sneakers. Through a focus group and analysis of surveys, it was discovered that amongst the African-American adolescent population, sneaker consumption imparted a means of attaining a higher socio-economic status, while concurrently providing an outlet for ‘acting’ as agents within the constraints of a constructed social structure. This study develops a practical method of analyzing several issues within commodity consumption, specifically among African-American adolescents. Prior to an empirical application of several theoretical frameworks, the researcher assessed the role of sneaker production as it predates sneaker consumption. Labor-intensive production of name-brand footwear is almost exclusively located in Asia (Vanderbilt, 1998), and has become the formula for efficient, profitable production in name-brand sneaker factories. Moreover, the production of such footwear is controlled by the demand for commodified products in the global economy. Southeast Asian manufacturing facilities owned by popular athletic footwear companies generate between $830 million and $5 billion a year from sneaker consumption (Vanderbilt, 1998). The researcher asks, What are the characteristics that determine the role of African-American consumers within the name-brand sneaker industry? The manner in which athletic name-brand footwear is consumed is a process that is directly associated with the social satisfaction of the consumer (Stabile, 2000). In this study, the researcher investigated the attitudes of adolescents towards name-brand sneaker consumption and production in order to determine how their perceived socioeconomic status affected by their consumption. Miller (2002) suggests that the consumption practices of young African-Americans present a central understanding of the act of consumption itself. While an analysis of consumption is vital in determining how and to whom a product is marketed Chin (2001), whose argument will be discussed further into this study, McNair ScholarS JourNal • VoluMe 8 111 explicates that (commodity) consumption is significant because it provides an understanding of the socially constructed society in which economically disadvantaged children are a part of.",
"title": ""
},
{
"docid": "967aae790b938ccb219ecf68965c5b02",
"text": "This paper describes the control algorithms of the high speed mobile robot Kurt3D. Kurt3D drives up to 4 m/s autonomously and reliably in an unknown office environment. We present the reliable hardware, fast control cycle algorithms and a novel set value computation scheme for achieving these velocities. In addition we sketch a real-time capable laser based position tracking method that is well suited for driving with these velocities.",
"title": ""
},
{
"docid": "68f74c4fc9d1afb00ac2ec0221654410",
"text": "Most algorithms in 3-D Computer Vision rely on the pinhole camera model because of its simplicity, whereas video optics, especially low-cost wide-angle or fish-eye lens, generate a lot of non-linear distortion which can be critical. To find the distortion parameters of a camera, we use the following fundamental property: a camera follows the pinhole model if and only if the projection of every line in space onto the camera is a line. Consequently, if we find the transformation on the video image so that every line in space is viewed in the transformed image as a line, then we know how to remove the distortion from the image. The algorithm consists of first doing edge extraction on a possibly distorted video sequence, then doing polygonal approximation with a large tolerance on these edges to extract possible lines from the sequence, and then finding the parameters of our distortion model that best transform these edges to segments. Results are presented on real video images, compared with distortion calibration obtained by a full camera calibration method which uses a calibration grid.",
"title": ""
},
{
"docid": "2e42ab12b43022d22b9459cfaea6f436",
"text": "Treemaps provide an interesting solution for representing hierarchical data. However, most studies have mainly focused on layout algorithms and paid limited attention to the interaction with treemaps. This makes it difficult to explore large data sets and to get access to details, especially to those related to the leaves of the trees. We propose the notion of zoomable treemaps (ZTMs), an hybridization between treemaps and zoomable user interfaces that facilitates the navigation in large hierarchical data sets. By providing a consistent set of interaction techniques, ZTMs make it possible for users to browse through very large data sets (e.g., 700,000 nodes dispatched amongst 13 levels). These techniques use the structure of the displayed data to guide the interaction and provide a way to improve interactive navigation in treemaps.",
"title": ""
},
{
"docid": "f9090b6e113445a268fc02894f7f846b",
"text": "Reducing inventory levels is a major supply chain management challenge in automobile industries. With the development of information technology new cooperative supply chain contracts emerge such as Vendor-Managed Inventory (VMI). This research aims to look at the literature of information management of VMI and the Internet of Things, then analyzes information flow model of VMI system. The paper analyzes information flow management of VMI system in automobile parts inbound logistics based on the environment of Internet of Things.",
"title": ""
},
{
"docid": "5339bd241f053214673ead767476077d",
"text": "----------------------------------------------------------------------ABSTRACT----------------------------------------------------------This paper is a general survey of all the security issues existing in the Internet of Things (IoT) along with an analysis of the privacy issues that an end-user may face as a consequence of the spread of IoT. The majority of the survey is focused on the security loopholes arising out of the information exchange technologies used in Internet of Things. No countermeasure to the security drawbacks has been analyzed in the paper.",
"title": ""
},
{
"docid": "cda6f812328d1a883b0c5938695981fe",
"text": "This paper investigates the problem of weakly-supervised semantic segmentation, where image-level labels are used as weak supervision. Inspired by the successful use of Convolutional Neural Networks (CNNs) for fully-supervised semantic segmentation, we choose to directly train the CNNs over the oversegmented regions of images for weakly-supervised semantic segmentation. Although there are a few studies on CNNs-based weakly-supervised semantic segmentation, they have rarely considered the noise issue, i.e., the initial weak labels (e.g., social tags) may be noisy. To cope with this issue, we thus propose graph-boosted CNNs (GB-CNNs) for weakly-supervised semantic segmentation. In our GB-CNNs, the graph-based model provides the initial supervision for training the CNNs, and then the outcomes of the CNNs are used to retrain the graph-based model. This training procedure is iteratively implemented to boost the results of semantic segmentation. Experimental results demonstrate that the proposed model outperforms the state-of-the-art weakly-supervised methods. More notably, the proposed model is shown to be more robust in the noisy setting for weakly-supervised semantic segmentation.",
"title": ""
}
] |
scidocsrr
|
dfe59d5e8af8c2568308796d8e767666
|
Perceived Effect of Personality Traits on Information Seeking Behaviour of Postgraduate Students in Universities in Benue State , Nigeria
|
[
{
"docid": "76cedf5536bd886b5838c2a5e027de79",
"text": "This article reports a meta-analysis of personality-academic performance relationships, based on the 5-factor model, in which cumulative sample sizes ranged to over 70,000. Most analyzed studies came from the tertiary level of education, but there were similar aggregate samples from secondary and tertiary education. There was a comparatively smaller sample derived from studies at the primary level. Academic performance was found to correlate significantly with Agreeableness, Conscientiousness, and Openness. Where tested, correlations between Conscientiousness and academic performance were largely independent of intelligence. When secondary academic performance was controlled for, Conscientiousness added as much to the prediction of tertiary academic performance as did intelligence. Strong evidence was found for moderators of correlations. Academic level (primary, secondary, or tertiary), average age of participant, and the interaction between academic level and age significantly moderated correlations with academic performance. Possible explanations for these moderator effects are discussed, and recommendations for future research are provided.",
"title": ""
},
{
"docid": "e0a314eb1fe221791bc08094d0c04862",
"text": "The present study was undertaken with the objective to explore the influence of the five personality dimensions on the information seeking behaviour of the students in higher educational institutions. Information seeking behaviour is defined as the sum total of all those activities that are usually undertaken by the students of higher education to collect, utilize and process any kind of information needed for their studies. Data has been collected from 600 university students of the three broad disciplines of studies from the Universities of Eastern part of India (West Bengal). The tools used for the study were General Information schedule (GIS), Information Seeking Behaviour Inventory (ISBI) and NEO-FFI Personality Inventory. Product moment correlation has been worked out between the scores in ISBI and those in NEO-FFI Personality Inventory. The findings indicated that the five personality traits are significantly correlated to all the dimensions of information seeking behaviour of the university students.",
"title": ""
},
{
"docid": "0d0fd1c837b5e45b83ee590017716021",
"text": "General intelligence and personality traits from the Five-Factor model were studied as predictors of academic achievement in a large sample of Estonian schoolchildren from elementary to secondary school. A total of 3618 students (1746 boys and 1872 girls) from all over Estonia attending Grades 2, 3, 4, 6, 8, 10, and 12 participated in this study. Intelligence, as measured by the Raven’s Standard Progressive Matrices, was found to be the best predictor of students’ grade point average (GPA) in all grades. Among personality traits (measured by self-reports on the Estonian Big Five Questionnaire for Children in Grades 2 to 4 and by the NEO Five Factor Inventory in Grades 6 to 12), Openness, Agreeableness, and Conscientiousness correlated positively and Neuroticism correlated negatively with GPA in almost every grade. When all measured variables were entered together into a regression model, intelligence was still the strongest predictor of GPA, being followed by Agreeableness in Grades 2 to 4 and Conscientiousness in Grades 6 to 12. Interactions between predictor variables and age accounted for only a small percentage of variance in GPA, suggesting that academic achievement relies basically on the same mechanisms through the school years. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "bf29ab51f0f2bba9b96e8afb963635e7",
"text": "ÐThis paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is to say, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions. Commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm. Our second contribution is to cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows us to efficiently recover correspondence matches using singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods. Index TermsÐInexact graph matching, EM algorithm, matrix factorization, mixture models, Delaunay triangulations.",
"title": ""
},
{
"docid": "8557c77501fbdc29a4cd0f161224ca8c",
"text": "We present a preliminary analysis of the fundamental viability of meta-learning, revisiting the No Free Lunch (NFL) theorem. The analysis shows that given some simple and very basic assumptions, the NFL theorem is of little relevance to research in Machine Learning. We augment the basic NFL framework to illustrate that the notion of an Ultimate Learning Algorithm is well defined. We show that, although cross-validation still is not a viable way to construct general-purpose learning algorithms, meta-learning offers a natural alternative. We still have to pay for our lunch, but the cost is reasonable: the necessary fundamental assumptions are ones we all make anyway.",
"title": ""
},
{
"docid": "fc7c7828428a4018a8aaddaff4eb5b3f",
"text": "Data mining is comprised of many data analysis techniques. Its basic objective is to discover the hidden and useful data pattern from very large set of data. Graph mining, which has gained much attention in the last few decades, is one of the novel approaches for mining the dataset represented by graph structure. Graph mining finds its applications in various problem domains, including: bioinformatics, chemical reactions, Program flow structures, computer networks, social networks etc. Different data mining approaches are used for mining the graph-based data and performing useful analysis on these mined data. In literature various graph mining approaches have been proposed. Each of these approaches is based on either classification; clustering or decision trees data mining techniques. In this study, we present a comprehensive review of various graph mining techniques. These different graph mining techniques have been critically evaluated in this study. This evaluation is based on different parameters. In our future work, we will provide our own classification based graph mining technique which will efficiently and accurately perform mining on the graph structured data.",
"title": ""
},
{
"docid": "7c611108aa760808e6558b86394a5318",
"text": "Single-cell RNA sequencing (scRNA-seq) is a fast growing approach to measure the genome-wide transcriptome of many individual cells in parallel, but results in noisy data with many dropout events. Existing methods to learn molecular signatures from bulk transcriptomic data may therefore not be adapted to scRNA-seq data, in order to automatically classify individual cells into predefined classes. We propose a new method called DropLasso to learn a molecular signature from scRNA-seq data. DropLasso extends the dropout regularisation technique, popular in neural network training, to estimate sparse linear models. It is well adapted to data corrupted by dropout noise, such as scRNA-seq data, and we clarify how it relates to elastic net regularisation. We provide promising results on simulated and real scRNA-seq data, suggesting that DropLasso may be better adapted than standard regularisations to infer molecular signatures from scRNA-seq data. DropLasso is freely available as an R package at https://github.com/jpvert/droplasso",
"title": ""
},
{
"docid": "22bd367cdda112e715f7c5535bc72ebb",
"text": "This paper introduces a complete side channel analysis toolbox, inclusive of the analog capture hardware, target device, capture software, and analysis software. The highly modular design allows use of the hardware and software with a variety of existing systems. The hardware uses a synchronous capture method which greatly reduces the required sample rate, while also reducing the data storage requirement, and improving synchronization of traces. The synchronous nature of the hardware lends itself to fault injection, and a module to generate glitches of programmable width is also provided. The entire design (hardware and software) is open-source, and maintained in a publicly available repository. Several long example capture traces are provided for researchers looking to evaluate standard cryptographic implementations.",
"title": ""
},
{
"docid": "e913a4d2206be999f0278d48caa4708a",
"text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.",
"title": ""
},
{
"docid": "cdb252ec09b2cca79e1d4efa11722bd3",
"text": "Energy efficient communication is a fundamental problem in wireless ad-hoc and sensor networks. In this paper, we explore the feasibility of a distributed beamforming approach to this problem, with a cluster of distributed transmitters emulating a centralized antenna array so as to transmit a common message signal coherently to a distant base station. The potential SNR gains from beamforming are well-known. However, realizing these gains requires synchronization of the individual carrier signals in phase and frequency. In this paper we show that a large fraction of the beamforming gains can be realised even with imperfect synchronization corresponding to phase errors with moderately large variance. We present a master-slave architecture where a designated master transmitter coordinates the synchronization of other (slave) transmitters for beamforming. We observe that the transmitters can achieve distributed beamforming with minimal coordination with the base station using channel reciprocity. Thus, inexpensive local coordination with a master transmitter makes the expensive communication with a distant base station receiver more efficient. However, the duplexing constraints of the wireless channel place a fundamental limitation on the achievable accuracy of synchronization. We present a stochastic analysis that demonstrates the robustness of beamforming gains with imperfect synchronization, and demonstrate a tradeoff between synchronization overhead and beamforming gains. We also present simulation results for the phase errors that validate the analysis",
"title": ""
},
{
"docid": "a795ee8c4c50bd348b21191456604453",
"text": "The need for organizations to innovate and furthermore to ceaselessly innovate is stressed throughout the modern management literature on innovation. This need comes from increasing competition and customer demands and new market areas. Closely linked, but not synonymous, with innovation is the body of knowledge referred to collectively as knowledge management. Within this discourse knowledge is considered as a potential key competitive advantage, by helping to increase innovation within the organization. This paper focuses on the role of knowledge management in sustaining and enhancing innovation in organizations. In particular the paper seeks to establish a knowledge management model within which the principles of innovation can be incorporated. First, there is a brief review of the innovation and knowledge management literature and their respective synergies. From this literature a possible knowledge management model which incorporates innovation is suggested. Second, a research study is discussed which seeks to further examine and develop the model using an inductive grounded theory approach. The study involved socially constructed workshops representing 25 organizations, each of which constructed meanings in regard to innovation and the key areas of knowledge management as outlined in the model. Overall it was found that effective systematic knowledge management can incorporate innovation drivers in key areas which will result in both increased business and employee bene®ts. Copyright # 2000 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "fbb48416c34d4faee1a87ac2efaf466d",
"text": "Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell et al., 2018), a strong, linguisticallyinformed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo outperforms typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still outperform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.",
"title": ""
},
{
"docid": "972c4fdae7e5c9598e47ec3e342dbaca",
"text": "A evolução e o futuro da logística e do gerenciamento da cadeia de suprimentos Abstract This article will be divided into three sections: past, present, and future. The past section will trace major events that created business logistics as it is practiced today. In particular, do the events portend the future of business logistics and supply chain management? The present section will attempt to summarize the state of business logistics. How business logistics relates to supply chain management will be addressed. The future section will make some predictions as to the issues that need to be addressed and the events that will likely take place in the near term.",
"title": ""
},
{
"docid": "ce3ac7716734e2ebd814900d77ca3dfb",
"text": "The large pose discrepancy between two face images is one of the fundamental challenges in automatic face recognition. Conventional approaches to pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes a Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator enables DR-GAN to learn a representation that is both generative and discriminative, which can be used for face image synthesis and pose-invariant face recognition. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified identity representation along with an arbitrary number of synthetic face images. Extensive quantitative and qualitative evaluation on a number of controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art in both learning representations and rotating large-pose face images.",
"title": ""
},
{
"docid": "3abd8454fc91eb28e2911872ae8bf3af",
"text": "Graphene sheets—one-atom-thick two-dimensional layers of sp2-bonded carbon—are predicted to have a range of unusual properties. Their thermal conductivity and mechanical stiffness may rival the remarkable in-plane values for graphite (∼3,000 W m-1 K-1 and 1,060 GPa, respectively); their fracture strength should be comparable to that of carbon nanotubes for similar types of defects; and recent studies have shown that individual graphene sheets have extraordinary electronic transport properties. One possible route to harnessing these properties for applications would be to incorporate graphene sheets in a composite material. The manufacturing of such composites requires not only that graphene sheets be produced on a sufficient scale but that they also be incorporated, and homogeneously distributed, into various matrices. Graphite, inexpensive and available in large quantity, unfortunately does not readily exfoliate to yield individual graphene sheets. Here we present a general approach for the preparation of graphene-polymer composites via complete exfoliation of graphite and molecular-level dispersion of individual, chemically modified graphene sheets within polymer hosts. A polystyrene–graphene composite formed by this route exhibits a percolation threshold of ∼0.1 volume per cent for room-temperature electrical conductivity, the lowest reported value for any carbon-based composite except for those involving carbon nanotubes; at only 1 volume per cent, this composite has a conductivity of ∼0.1 S m-1, sufficient for many electrical applications. Our bottom-up chemical approach of tuning the graphene sheet properties provides a path to a broad new class of graphene-based materials and their use in a variety of applications.",
"title": ""
},
{
"docid": "2831d24ae1b76a9a8204c9f79aec27e1",
"text": "Spittlebugs from the genus Aeneolamia are important pests of sugarcane. Although the use of the entomopathogenic fungus Metarhizum anisopliae s.l. for control of this pest is becoming more common in Mexico, fundamental information regarding M. anisopliae in sugarcane plantations is practically non-existent. Using phylogenetic analysis, we determined the specific diversity of Metarhizium spp. infecting adult spittlebugs in sugarcane plantations from four Mexican states. We obtained 29 isolates of M. anisopliae s.str. Haplotype network analysis revealed the existence of eight haplotypes. Eight selected isolates, representing the four Mexican states, were grown at different temperatures in vitro; isolates from Oaxaca achieved the greatest growth followed by isolates from Veracruz, San Luis Potosi and Tabasco. No relationship was found between in vitro growth and haplotype diversity. Our results represent a significant contribution to the better understanding of the ecology of Metarhizum spp. in the sugarcane agroecosystem.",
"title": ""
},
{
"docid": "e6cae5bec5bb4b82794caca85d3412a2",
"text": "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-ofthe-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.",
"title": ""
},
{
"docid": "b99944ad31c5ad81d0e235c200a332b4",
"text": "This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end, deep neural network that directly uses audio waveforms as input versus a pipelined approach that performs ASR (Automatic Speech Recognition) on the question, followed by text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question and find both methods to be tolerate noise at similar levels.",
"title": ""
},
{
"docid": "8ce33eef3eaa1f89045d916869813d5d",
"text": "This paper introduces a deep neural network model for subband-based speech 1 synthesizer. The model benefits from the short bandwidth of the subband signals 2 to reduce the complexity of the time-domain speech generator. We employed 3 the multi-level wavelet analysis/synthesis to decompose/reconstruct the signal to 4 subbands in time domain. Inspired from the WaveNet, a convolutional neural 5 network (CNN) model predicts subband speech signals fully in time domain. Due 6 to the short bandwidth of the subbands, a simple network architecture is enough to 7 train the simple patterns of the subbands accurately. In the ground truth experiments 8 with teacher forcing, the subband synthesizer outperforms the fullband model 9 significantly. In addition, by conditioning the model on the phoneme sequence 10 using a pronunciation dictionary, we have achieved the first fully time-domain 11 neural text-to-speech (TTS) system. The generated speech of the subband TTS 12 shows comparable quality as the fullband one with a slighter network architecture 13 for each subband. 14",
"title": ""
},
{
"docid": "ab8af5f48be6b0b7769b8875e528be84",
"text": "A feedback vertex set of a graph is a subset of vertices that contains at least one vertex from every cycle in the graph. The problem considered is that of finding a minimum feedback vertex set given a weighted and undirected graph. We present a simple and efficient approximation algorithm with performance ratio of at most 2, improving previous best bounds for either weighted or unweighted cases of the problem. Any further improvement on this bound, matching the best constant factor known for the vertex cover problem, is deemed challenging. The approximation principle, underlying the algorithm, is based on a generalized form of the classical local ratio theorem, originally developed for approximation of the vertex cover problem, and a more flexible style of its application.",
"title": ""
},
{
"docid": "b3ffe7b94b8965be5fb4f702c4ce5f3d",
"text": "BACKGROUND\nThe ability to rise from sitting to standing is critical to an individual's quality of life, as it is a prerequisite for functional independence. The purpose of the current study was to examine the hypothesis that test durations as assessed with the instrumented repeated Sit-To-Stand (STS) show stronger associations with health status, functional status and daily physical activity of older adults than manually recorded test durations.\n\n\nMETHODS\nIn 63 older participants (mean age 83 ±6.9 years, 51 female), health status was assessed using the European Quality of Life questionnaire and functional status was assessed using the physical function index of the of the RAND-36. Physical performance was measured using a wearable sensor-based STS test. From this test, durations, sub-durations and kinematics of the STS movements were estimated and analysed. In addition, physical activity was measured for one week using an activity monitor and episodes of lying, sitting, standing and locomotion were identified. Associations between STS parameters with health status, functional status and daily physical activity were assessed.\n\n\nRESULTS\nThe manually recorded STS times were not significantly associated with health status (p = 0.457) and functional status (p = 0.055), whereas the instrumented STS times were (both p = 0.009). The manually recorded STS durations showed a significant association to daily physical activity for mean sitting durations (p = 0.042), but not for mean standing durations (p = 0.230) and mean number of locomotion periods (p = 0.218). Furthermore, durations of the dynamic sit-to-stand phase of the instrumented STS showed more significant associations with health status, functional status and daily physical activity (all p = 0.001) than the static phases standing and sitting (p = 0.043-0.422).\n\n\nCONCLUSIONS\nAs hypothesized, instrumented STS durations were more strongly associated with participant health status, functional status and physical activity than manually recorded STS durations in older adults. Furthermore, instrumented STS allowed assessment of the dynamic phases of the test, which were likely more informative than the static sitting and standing phases.",
"title": ""
},
{
"docid": "badb04b676d3dab31024e8033fc8aec4",
"text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor. This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.",
"title": ""
},
{
"docid": "a5a36d7d267e299088d05dafa1ce2b6c",
"text": "Agent-based modelling is a bottom-up approach to understanding systems which provides a powerful tool for analysing complex, non-linear markets. The method involves creating artificial agents designed to mimic the attributes and behaviours of their real-world counterparts. The system’s macro-observable properties emerge as a consequence of these attributes and behaviours and the interactions between them. The simulation output may be potentially used for explanatory, exploratory and predictive purposes. The aim of this paper is to introduce the reader to some of the basic concepts and methods behind agent-based modelling and to present some recent business applications of these tools, including work in the telecoms and media markets.",
"title": ""
}
] |
scidocsrr
|
7224a08fdb3c91848e3c8f2864d9421b
|
Social norms and energy conservation
|
[
{
"docid": "d143e0dadb1b145bb4293024b46c2c8e",
"text": "Firms spend billions of dollars developing advertising content, yet there is little field evidence on how much or how it affects demand. We analyze a direct mail field experiment in South Africa implemented by a consumer lender that randomized advertising content, loan price, and loan offer deadlines simultaneously. We find that advertising content significantly affects demand. Although it was difficult to predict ex ante which specific advertising features would matter most in this context, the features that do matter have large effects. Showing fewer example loans, not suggesting a particular use for the loan, or including a photo of an attractive woman increases loan demand by about as much as a 25% reduction in the interest rate. The evidence also suggests that advertising content persuades by appealing “peripherally” to intuition rather than reason. Although the advertising content effects point to an important role for persuasion and related psychology, our deadline results do not support the psychological prediction that shorter deadlines may help overcome time-management problems; instead, demand strongly increases with longer deadlines. Gender Connection Gender Informed Analysis Gender Outcomes Gender disaggregated access to credit IE Design Randomized Control Trial Intervention The study uses a large-scale direct-mail field experiment to study the effects of advertising content on real decisions, involving nonnegligible sums, among experienced decision makers. A consumer lender in South Africa randomized advertising content and the interest rate in actual offers to 53,000 former clients. The variation in advertising content comes from eight “features” that varied the presentation of the loan offer. We worked together with the lender to create six features relevant to the extensive literature (primarily from laboratory experiments in psychology and decision sciences) on how “frames” and “cues” may affect choices. Specifically, mailers varied in whether they included a person’s photograph on the letter, suggestions for how to use the loan proceeds, a large or small table of example loans, information about the interest rate as well as the monthly payments, a comparison to competitors’ interest rates, and mention of a promotional raffle for a cell phone. Mailers also included two features that were the lender’s choice, rather than motivated by a body of psychological evidence: reference to the interest rate as “special” or P ub lic D is cl os ur e A ut ho riz ed P ub lic D is cl os ur e A ut ho riz ed P ub lic D is cl os ur e A ut ho riz ed P ub lic D is cl os ur e A ut ho riz ed P ub lic D is cl os ur e A ut ho riz ed P ub lic D is cl os ur e A ut ho riz ed P ub lic D is cl os ur e A ut ho riz ed P ub lic D is cl os ur e A ut ho riz ed enGender Impact: The World Bank’s Gender Impact Evaluation Database Last updated: 14 August 2013 2 “low,” and mention of speaking the local language. Our research design enables us to estimate demand sensitivity to advertising content and to compare it directly to price sensitivity. An additional randomization of the offer expiration date also allows us to study demand sensitivity to deadlines. Intervention Period The bank offered loans with repayment periods ranging from 4 to 18 months. Deadlines for response were randomly allocated from 2 weeks to 6 weeks. Sample population 5194 formers clients who had borrowed from the money-lender in the previous 24 months. 
Comparison conditions There are six different features of the pamphlet that were randomized. There was no control group. Unit of analysis Individual borrower Evaluation Period The study evaluates responses to the mail advertising experiment. Results Simplifying the loan description led to a significant increase in takeup of the loan equivalent to a 200 basis point reduction in interest rates. Including a comparison feature in the letter had no impact on takeup. The race of the person featured on the photo had no impact on takeup of the loan. The gender of the person featured led to a significant increase of takeup when a woman was featuredthe effect size was also similar to a 200 basis point reduction in the interest rate. Male clients were much more likely to takeup the loan when a woman was featured. Featuring a man did not affect the decision making of female clients. Including a promotional giveaway and a suggestion phone call both significantly increased takeup. Primary study limitations Because of the large amount of variations, the sample size only allowed for the identification of economically large effects. Funding Source National Science Foundation, The Bill and Melinda Gates Foundation, USAID/BASIS Reference(s) Bertrand, M., Karlan, D., Mullainathan, S., Shafir, E., & Zinman, J. (2010) \"What's advertising content worth? Evidence from a consumer credit marketing field experiment,\" The Quarterly Journal of Economics, 125(1), 263-306. Link to Studies http://qje.oxfordjournals.org/content/125/1/263.short Microdata",
"title": ""
}
] |
[
{
"docid": "68bec5db1d6c897bbd1571771d5c92cf",
"text": "Circle hairs (CH) represent a body hair growth disorder characterized by asymptomatic presence of hairs with typical circular or spiraliform arrangement, not associated with follicular or inflammatory abnormalities. Although this condition is rarely reported, it is probably underestimated, as a medical consultation for CH only is rare in practice. Trichoscopic and histopathological findings of CH have never been reported and this article will present and discuss six cases along with literature review.",
"title": ""
},
{
"docid": "b6f9d5015fddbf92ab44ae6ce2f7d613",
"text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.",
"title": ""
},
{
"docid": "c411fc52d40cf1f67ddad0c448c6235a",
"text": "Intel’s Software Guard Extensions (SGX) is a set of extensions to the Intel architecture that aims to provide integrity and confidentiality guarantees to securitysensitive computation performed on a computer where all the privileged software (kernel, hypervisor, etc) is potentially malicious. This paper analyzes Intel SGX, based on the 3 papers [14, 79, 139] that introduced it, on the Intel Software Developer’s Manual [101] (which supersedes the SGX manuals [95, 99]), on an ISCA 2015 tutorial [103], and on two patents [110, 138]. We use the papers, reference manuals, and tutorial as primary data sources, and only draw on the patents to fill in missing information. This paper does not reflect the information available in two papers [74, 109] that were published after the first version of this paper. This paper’s contributions are a summary of the Intel-specific architectural and micro-architectural details needed to understand SGX, a detailed and structured presentation of the publicly available information on SGX, a series of intelligent guesses about some important but undocumented aspects of SGX, and an analysis of SGX’s security properties.",
"title": ""
},
{
"docid": "bc384d12513dc76bf76f11acd04d39f4",
"text": "Traffic sign detection is an important task in traffic sign recognition systems. Chinese traffic signs have their unique features compared with traffic signs of other countries. Convolutional neural networks (CNNs) have achieved a breakthrough in computer vision tasks and made great success in traffic sign classification. In this paper, we present a Chinese traffic sign detection algorithm based on a deep convolutional network. To achieve real-time Chinese traffic sign detection, we propose an end-to-end convolutional network inspired by YOLOv2. In view of the characteristics of traffic signs, we take the multiple 1 × 1 convolutional layers in intermediate layers of the network and decrease the convolutional layers in top layers to reduce the computational complexity. For effectively detecting small traffic signs, we divide the input images into dense grids to obtain finer feature maps. Moreover, we expand the Chinese traffic sign dataset (CTSD) and improve the marker information, which is available online. All experimental results evaluated according to our expanded CTSD and German Traffic Sign Detection Benchmark (GTSDB) indicate that the proposed method is the faster and more robust. The fastest detection speed achieved was 0.017 s per image.",
"title": ""
},
{
"docid": "c61efe1758f6599e5cc069185bb02d48",
"text": "Modeling the face aging process is a challenging task due to large and non-linear variations present in different stages of face development. This paper presents a deep model approach for face age progression that can efficiently capture the non-linear aging process and automatically synthesize a series of age-progressed faces in various age ranges. In this approach, we first decompose the long-term age progress into a sequence of short-term changes and model it as a face sequence. The Temporal Deep Restricted Boltzmann Machines based age progression model together with the prototype faces are then constructed to learn the aging transformation between faces in the sequence. In addition, to enhance the wrinkles of faces in the later age ranges, the wrinkle models are further constructed using Restricted Boltzmann Machines to capture their variations in different facial regions. The geometry constraints are also taken into account in the last step for more consistent age-progressed results. The proposed approach is evaluated using various face aging databases, i.e. FGNET, Cross-Age Celebrity Dataset (CACD) and MORPH, and our collected large-scale aging database named AginG Faces in the Wild (AGFW). In addition, when ground-truth age is not available for input image, our proposed system is able to automatically estimate the age of the input face before aging process is employed.",
"title": ""
},
{
"docid": "e14d1f7f7e4f7eaf0795711fb6260264",
"text": "In this paper, we treat tracking as a learning problem of estimating the location and the scale of an object given its previous location, scale, as well as current and previous image frames. Given a set of examples, we train convolutional neural networks (CNNs) to perform the above estimation task. Different from other learning methods, the CNNs learn both spatial and temporal features jointly from image pairs of two adjacent frames. We introduce multiple path ways in CNN to better fuse local and global information. A creative shift-variant CNN architecture is designed so as to alleviate the drift problem when the distracting objects are similar to the target in cluttered environment. Furthermore, we employ CNNs to estimate the scale through the accurate localization of some key points. These techniques are object-independent so that the proposed method can be applied to track other types of object. The capability of the tracker of handling complex situations is demonstrated in many testing sequences.",
"title": ""
},
{
"docid": "6cbcd5288423895c4aeff8524ca5ac6c",
"text": "We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel wellformed sentences. We also introduce a new corpus-based method for assessing the precision and recall of an automatically acquired generative grammar without recourse to human judgment. The present work sets the stage for the eventual development of more powerful unsupervised algorithms for language acquisition, which would make use of the coordination structures present in natural child-directed speech.",
"title": ""
},
{
"docid": "2d43992a8eb6e97be676c04fc9ebd8dd",
"text": "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or retaliation.\n Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem.",
"title": ""
},
{
"docid": "25deed9855199ef583524a2eef0456f0",
"text": "We introduce a method for creating very dense reconstructions of datasets, particularly turn-table varieties. The method takes in initial reconstructions (of any origin) and makes them denser by interpolating depth values in two-dimensional image space within a superpixel region and then optimizing the interpolated value via image consistency analysis across neighboring images in the dataset. One of the core assumptions in this method is that depth values per pixel will vary gradually along a gradient for a given object. As such, turntable datasets, such as the dinosaur dataset, are particularly easy for our method. Our method modernizes some existing techniques and parallelizes them on a GPU, which produces results faster than other densification methods.",
"title": ""
},
{
"docid": "711b3ed2cb9da33199dcc18f8b3fc98d",
"text": "In this paper, we propose two ways of improving image classification based on bag-of-words representation [25]. Two shortcomings of this representation are the loss of the spatial information of visual words and the presence of noisy visual words due to the coarseness of the vocabulary building process. On the one hand, we propose a new representation of images that goes further in the analogy with textual data: visual sentences, that allows us to \"read\" visual words in a certain order, as in the case of text. We can therefore consider simple spatial relations between words. We also present a new image classification scheme that exploits these relations. It is based on the use of language models, a very popular tool from speech and text analysis communities. On the other hand, we propose new techniques to eliminate useless words, one based on geometric properties of the keypoints, the other on the use of probabilistic Latent Semantic Analysis (pLSA). Experiments show that our techniques can significantly improve image classification, compared to a classical Support Vector Machine-based classification.",
"title": ""
},
{
"docid": "99b00dcd6097f4d49f61886b7013252c",
"text": "The analysis of various parameters of metal oxides and the search of criteria, which could be used during material selection for solid-state gas sensor pplications, were the main objectives of this review. For these purposes the correlation between electro-physical (band gap, electroconductivity, ype of conductivity, oxygen diffusion), thermodynamic, surface, electronic, structural properties, catalytic activity and gas-sensing characteristics f metal oxides designed for solid-state sensors was established. It has been discussed the role of metal oxide manufacturability, chemical activity, nd parameter’s stability in sensing material choice as well. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "50795998e83dafe3431c3509b9b31235",
"text": "In this study, the daily movement directions of three frequently traded stocks (GARAN, THYAO and ISCTR) in Borsa Istanbul were predicted using deep neural networks. Technical indicators obtained from individual stock prices and dollar-gold prices were used as features in the prediction. Class labels indicating the movement direction were found using daily close prices of the stocks and they were aligned with the feature vectors. In order to perform the prediction process, the type of deep neural network, Convolutional Neural Network, was trained and the performance of the classification was evaluated by the accuracy and F-measure metrics. In the experiments performed, using both price and dollar-gold features, the movement directions in GARAN, THYAO and ISCTR stocks were predicted with the accuracy rates of 0.61, 0.578 and 0.574 respectively. Compared to using the price based features only, the use of dollar-gold features improved the classification performance.",
"title": ""
},
{
"docid": "188a0cad004be51f62968c55f9551ba2",
"text": "This paper investigates the control of an uninterruptible power supply (UPS) using a combined measurement of capacitor and load currents in the same current sensor arrangement. The purpose of this combined measurement is, on one hand, to reach a similar performance as that obtained in the inductor current controller with load current feedforward and, on the other hand, to easily obtain an estimate of the inductor current for overcurrent protection capability. Based on this combined current measurement, a voltage controller based on resonant harmonic filters is investigated in order to compensate for unbalance and harmonic distortion on the load. Adaptation is included to cope with uncertainties in the system parameters. It is shown that after transformations the proposed controller gets a simple and practical form that includes a bank of resonant filters, which is in agreement with the internal model principle and corresponds to similar approaches proposed recently. The controller is based on a frequency-domain description of the periodic disturbances, which include both symmetric components, namely, the negative and positive sequence. Experimental results on the output stage of a three-phase three-wire UPS are presented to assess the performance of the proposed algorithm",
"title": ""
},
{
"docid": "bfae60b46b97cf2491d6b1136c60f6a6",
"text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.",
"title": ""
},
{
"docid": "22c6ae71c708d5e2d1bc7e5e085c4842",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "32b8f971302926fd75f418df0aef91a3",
"text": "Cartoon-to-photo facial translation could be widely used in different applications, such as law enforcement and anime remaking. Nevertheless, current general-purpose imageto-image models usually produce blurry or unrelated results in this task. In this paper, we propose a Cartoon-to-Photo facial translation with Generative Adversarial Networks (CP-GAN) for inverting cartoon faces to generate photo-realistic and related face images. In order to produce convincing faces with intact facial parts, we exploit global and local discriminators to capture global facial features and three local facial regions, respectively. Moreover, we use a specific content network to capture and preserve face characteristic and identity between cartoons and photos. As a result, the proposed approach can generate convincing high-quality faces that satisfy both the characteristic and identity constraints of input cartoon faces. Compared with recent works on unpaired image-to-image translation, our proposed method is able to generate more realistic and correlative images.",
"title": ""
},
{
"docid": "53981a65161ff4cc6c892b986b9720d2",
"text": "Leadership is an important aspect of social organization that affects the processes of group formation, coordination, and decision-making in human societies, as well as in the social system of many other animal species. The ability to identify leaders based on their behavior and the subsequent reactions of others opens opportunities to explore how group decisions are made. Understanding who exerts influence provides key insights into the structure of social organizations. In this paper, we propose a simple yet powerful leadership inference framework extracting group coordination periods and determining leadership based on the activity of individuals within a group. We are able to not only identify a leader or leaders but also classify the type of leadership model that is consistent with observed patterns of group decision-making. The framework performs well in differentiating a variety of leadership models (e.g. dictatorship, linear hierarchy, or local influence). We propose five simple features that can be used to categorize characteristics of each leadership model, and thus make model classification possible. The proposed approach automatically (1) identifies periods of coordinated group activity, (2) determines the identities of leaders, and (3) classifies the likely mechanism by which the group coordination occurred. We demonstrate our framework on both simulated and real-world data: GPS tracks of a baboon troop and video-tracking of fish schools, as well as stock market closing price data of the NASDAQ index. The results of our leadership model are consistent with ground-truthed biological data and the framework finds many known events in financial data which are not otherwise reflected in the aggregate NASDAQ index. Our approach is easily generalizable to any coordinated activity data from interacting entities.",
"title": ""
},
{
"docid": "dcaa36372cdc34b12ae26875b90c5d56",
"text": "This paper presents two different implementations of four Quadrant CMOS Analog Multiplier Circuits. The Multipliers are designed in current mode. Current squarer and translinear loops are the basic blocks for both the structures in realization of mathematical equations. The structures have simplicity in implementation. The proposed multiplier structures are designed in implementing in 180 nm CMOS technology with a supply of 1.8 V & 1.2 V resp. The structures have frequency bandwidth of 493 MHz & 75 MHz with a power consumption of 146.78μW & 36.08μW respectively.",
"title": ""
},
{
"docid": "874f1c0584e0a364b92673c1c94f358f",
"text": "SUMMARY\nALOHOMORA is a software tool designed to facilitate genome-wide linkage studies performed with high-density single nucleotide polymorphism (SNP) marker panels such as the Affymetrix GeneChip(R) Human Mapping 10K Array. Genotype data are converted into appropriate formats for a number of common linkage programs and subjected to standard quality control routines before linkage runs are started. ALOHOMORA is written in Perl and may be used to perform state-of-the-art linkage scans in small and large families with any genetic model. Options for using different genetic maps or ethnicity-specific allele frequencies are implemented. Graphic outputs of whole-genome multipoint LOD score values are provided for the entire dataset as well as for individual families.\n\n\nAVAILABILITY\nALOHOMORA is available free of charge for non-commercial research institutions. For more details, see http://gmc.mdc-berlin.de/alohomora/",
"title": ""
},
{
"docid": "e88cab4c5e93b96fd39d63cd35de00fa",
"text": "Visual recognition algorithms are required today to exhibit adaptive abilities. Given a deep model trained on a specific, given task, it would be highly desirable to be able to adapt incrementally to new tasks, preserving scalability as the number of new tasks increases, while at the same time avoiding catastrophic forgetting issues. Recent work has shown that masking the internal weights of a given original conv-net through learned binary variables is a promising strategy. We build upon this intuition and take into account more elaborated affine transformations of the convolutional weights that include learned binary masks. We show that with our generalization it is possible to achieve significantly higher levels of adaptation to new tasks, enabling the approach to compete with fine tuning strategies by requiring slightly more than 1 bit per network parameter per additional task. Experiments on two popular benchmarks showcase the power of our approach, that achieves the new state of the art on the Visual Decathlon Challenge.",
"title": ""
}
] |
scidocsrr
|
2a8322bd3b9ee283ec064e0a754f6646
|
Digital forensics investigations in the Cloud
|
[
{
"docid": "7ed58e8ec5858bdcb5440123aea57bb1",
"text": "The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. This paper proposes new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows, Mac, iPhone, and Android smartphone.",
"title": ""
},
{
"docid": "a6defeca542d1586e521a56118efc56f",
"text": "We expose and explore technical and trust issues that arise in acquiring forensic evidence from infrastructure-as-aservice cloud computing and analyze some strategies for addressing these challenges. First, we create a model to show the layers of trust required in the cloud. Second, we present the overarching context for a cloud forensic exam and analyze choices available to an examiner. Third, we provide for the first time an evaluation of popular forensic acquisition tools including Guidance EnCase and AccesData Forensic Toolkit, and show that they can successfully return volatile and non-volatile data from the cloud. We explain, however, that with those techniques judge and jury must accept a great deal of trust in the authenticity and integrity of the data from many layers of the cloud model. In addition, we explore four other solutions for acquisition—Trusted Platform Modules, the management plane, forensics as a service, and legal solutions, which assume less trust but require more cooperation from the cloud service provider. Our work lays a foundation for future development of new acquisition methods for the cloud that will be trustworthy and forensically sound. Our work also helps forensic examiners, law enforcement, and the court evaluate confidence in evidence from the cloud.",
"title": ""
}
] |
[
{
"docid": "4c54ccdc2c6219e185b701c75eb9e5b4",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Perceived development of psychological characteristics in Male and Female elite gymnasts Claire Calmels, Fabienne D’Arripe-Longueville, Magaly Hars, Nadine Debois",
"title": ""
},
{
"docid": "37342f65a722eaca7359aacbfbe61091",
"text": "Video-surveillance and traffic analysis systems can be heavily improved using vision-based techniques to extract, manage and track objects in the scene. However, problems arise due to shadows. In particular, moving shadows can affect the correct localization, measurements and detection of moving objects. This work aims to present a technique for shadow detection and suppression used in a system for moving visual object detection and tracking. The major novelty of the shadow detection technique is the analysis carried out in the HSV color space to improve the accuracy in detecting shadows. This paper exploits comparison of shadow suppression using RGB and HSV color space in moving object detection and results in this paper are more encouraging using HSV colour space over RGB colour space. Keywords— Shadow detection; HSV color space; RGB color space.",
"title": ""
},
{
"docid": "14db67557fabc058fe61a7b45af54ecf",
"text": "“Primal heuristics” are a key contributor to the improved performance of exact branch-and-bound solvers for combinatorial optimization and integer programming. Perhaps the most crucial question concerning primal heuristics is that of at which nodes they should run, to which the typical answer is via hard-coded rules or fixed solver parameters tuned, offline, by trial-and-error. Alternatively, a heuristic should be run when it is most likely to succeed, based on the problem instance’s characteristics, the state of the search, etc. In this work, we study the problem of deciding at which node a heuristic should be run, such that the overall (primal) performance of the solver is optimized. To our knowledge, this is the first attempt at formalizing and systematically addressing this problem. Central to our approach is the use of Machine Learning (ML) for predicting whether a heuristic will succeed at a given node. We give a theoretical framework for analyzing this decision-making process in a simplified setting, propose a ML approach for modeling heuristic success likelihood, and design practical rules that leverage the ML models to dynamically decide whether to run a heuristic at each node of the search tree. Experimentally, our approach improves the primal performance of a stateof-the-art Mixed Integer Programming solver by up to 6% on a set of benchmark instances, and by up to 60% on a family of hard Independent Set instances.",
"title": ""
},
{
"docid": "34a91cfa5fb869d85b6b0a355025a228",
"text": "The wide-variety of real-time software systems, including telecontrol/telepresence systems, robotic systems, and mission planning systems, can entail dynamic code synthesis based on runtime mission-specific requirements and operating conditions. This necessitates the need for dynamic dependability assessment to ensure that these systems perform as specified and not fail in catastrophic ways. One approach in achieving this is to dynamically assess the modules in the synthesized code using software defect prediction techniques. Statistical models; such as stepwise multi-linear regression models and multivariate models, and machine learning approaches, such as artificial neural networks, instance-based reasoning, Bayesian-belief networks, decision trees, and rule inductions, have been investigated for predicting software quality. However, there is still no consensus about the best predictor model for software defects. In this paper; we evaluate different predictor models on four different real-time software defect data sets. The results show that a combination of IR and instance-based learning along with the consistency-based subset evaluation technique provides a relatively better consistency in accuracy prediction compared to other models. The results also show that \"size\" and \"complexity\" metrics are not sufficient for accurately predicting real-time software defects.",
"title": ""
},
{
"docid": "0b6693195ef302e2c160d65956d80eea",
"text": "Let f : Sd−1 × Sd−1 → R be a function of the form f(x,x′) = g(〈x,x′〉) for g : [−1, 1] → R. We give a simple proof that shows that poly-size depth two neural networks with (exponentially) bounded weights cannot approximate f whenever g cannot be approximated by a low degree polynomial. Moreover, for many g’s, such as g(x) = sin(πdx), the number of neurons must be 2 . Furthermore, the result holds w.r.t. the uniform distribution on Sd−1 × Sd−1. As many functions of the above form can be well approximated by poly-size depth three networks with polybounded weights, this establishes a separation between depth two and depth three networks w.r.t. the uniform distribution on Sd−1 × Sd−1.",
"title": ""
},
{
"docid": "05520d9ec32fca131dab3a7a0fbea2f1",
"text": "Non-Orthogonal Multiple Access (NOMA) is considered as a promising downlink Multiple Access (MA) scheme for future radio access. In this paper two power allocation strategies for NOMA are proposed. The first strategy is based on channel state information experienced by NOMA users. The other strategy is based on pre-defined QoS per NOMA user. In this paper we develop mathematical models for the proposed strategies. Also we clarify the potential gains of NOMA using proposed power allocation strategies over Orthogonal Multiple Access (OMA). Simulation results showed that NOMA performance using the proposed strategies achieves superior performance compared to that for OMA.",
"title": ""
},
{
"docid": "0d59ab6748a16bf4deedfc8bd79e4d71",
"text": "Paget's disease (PD) is a chronic progressive disease of the bone characterized by abnormal bone metabolism affecting either a single bone (monostotic) or many bones (polyostotic) with uncertain etiology. We report a case of PD in a 70-year-old male, which was initially identified as osteonecrosis of the maxilla. Non-drug induced osteonecrosis in PD is rare and very few cases have been reported in the literature.",
"title": ""
},
{
"docid": "d479a9db29c28ab81695a67bca103256",
"text": "To compare the efficacy of chlorhexidine-gluconate versus povidone iodine in preoperative skin preparation in the prevention of surgical site infections (SSIs) in clean-contaminated upper abdominal surgeries. This was a prospective randomized controlled trial conducted on patients undergoing clean-contaminated upper abdominal surgeries. A total of 351 patients 18–70 years old were randomized into two groups; chlorhexidine and povidone iodine skin preparation before surgery. The incidence of SSIs in the chlorhexidine group was 10.8 %, in comparison to 17.9 % in the povidone iodine group. The odds ratio was 0.6 in favor of chlorhexidine use, but the results were not statistically significant (P = 0.06). In the first postoperative week, SSIs developed in 7 % of patients in the chlorhexidine group and 14.1 % in the povidone iodine group (P = 0.03), and in the second postoperative week, SSIs were present in 4.1 % of the patients in the chlorhexidine group and 4.4 % in the povidone iodine group, which was not statistically significant (P = 0.88). The incidence of SSIs after clean-contaminated upper abdominal surgeries was lower with the use of chlorhexidine skin preparation than with povidone iodine preparation, although the results were not statistically significant. However, the odds ratio between the two groups favored the use of chlorhexidine over povidone iodine for preventing SSIs.",
"title": ""
},
{
"docid": "cb70ab2056242ca739adde4751fbca2c",
"text": "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-ofwords and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations. 1",
"title": ""
},
{
"docid": "20606412557c925d003330265cbcc6f2",
"text": "Ideally, an electricity supply should invariably show a perfectly sinusoidal voltage signal at every customer location. However, for a number of reasons, utilities often find it hard to preserve such desirable conditions. The deviation of the voltage and Current waveforms from sinusoidal is expressed as harmonic distortion. The harmonic distortion in the power system is increasing with wide use of nonlinear loads. Thus, it is important to analyze and evaluate the various harmonic problems in the power system and introduce the appropriate solution techniques. A single-tuned filter design is illustrated and implemented in a model which created in Electromagnetic Transient Analysis Program (ETAP). The system was constructed for this simulation and does not represent one particular real life system Furthermore, the model was used to test the effects of injecting harmonic current by variable speed drive (VFD) into 3 situations. Firstly, the scheme with capacitor banks only, then, using a single-tuned filter and finally both schemes coherently.",
"title": ""
},
{
"docid": "b7f5f3aeab3fc15c9f1b18c45089301b",
"text": "In this paper we assess developments in the mobile payments in light of new technologies (e.g., NFC) and solution frameworks (e.g., Android). With the recent announcements of Google and other players to enter the mobile payment markets the already dynamic arena of mobile payments is getting more complex and competitive. Everyone is jockeying to new positions by announcing novel payment solutions based e.g. on NFC systems. We argue, however, that significant and recurrent social, institutional, and business challenges remain to be solved for successful mobile payment platforms to emerge. In order to get these multi-sided platforms to diffuse multiple stakeholders have to be simultaneously convinced for related value propositions and more workable economic arrangements need to be forged.",
"title": ""
},
{
"docid": "65ac52564041b0c2e173560d49ec762f",
"text": "Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally-meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called “educational data mining” or “learning analytics” (henceforth, EDM) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inference about learning and learners, while providing EDM researchers with many interesting new research questions and challenges. In recent years, project-based, student-centered approaches to education have gained prominence, due in part to an increased demand for higher-level skills in the job market (Levi and Murname, 2004), positive research findings on the effectiveness of such approaches (Barron, Pearson, et al., 2008), and a broader acceptance in public policy circles, as shown, for example, by the Next Generation Science Standards (NGSS Lead States, 2013). While several approaches for this type of learning exist, Constructionism is one of the most popular and well-developed ones (Papert, 1980). In this paper, we investigate the relevance of a set of approaches called “educational data mining” or “learning analytics” (henceforth abbreviated as ‘EDM’) (R. Baker & Yacef, 2009; Romero & Ventura, 2010a; R. Baker & Siemens, in press) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. As such, EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition and to the wider community of learning scientists and policymakers. EDM, broadly, is a set of methods that apply data mining and machine learning techniques such as prediction, classification, and discovery of latent structural regularities to rich, voluminous, and idiosyncratic educational data, potentially similar to those data generated by many constructionist learning environments which allows students to explore and build their own artifacts, computer programs, and media pieces. As such, we identify four axes in which EDM methods may be helpful for constructionist research: 1. EDM methods do not require constructionists to abandon deep qualitative analysis for simplistic summative or confirmatory quantitative analysis; 2. EDM methods can generate different and complementary new analyses to support qualitative research; 3. By enabling precise formative assessments of complex constructs, EDM methods can support an increase in methodological rigor and replicability; 4. EDM can be used to present comprehensible and actionable data to learners and teachers in situ. 
In order to investigate those axes, we start by describing our perspective on compatibilities and incompatibilities between constructionism and EDM. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn that complex content in connected, meaningful ways. Constructionist projects often emphasize making those artifacts (and often data) public, socially relevant, and personally meaningful to learners, and encourage working in social spaces such that learners engage each other to accelerate the learning process. diSessa and Cobb (2004) argue that constructionism serves a framework for action, as it describes its own praxis (i.e., how it matches theory to practice). The learning theory supporting constructionism is classically constructivist, combining concepts from Piaget and Vygotsky (Fosnot, 2005). As constructionism matures as a constructivist framework for action and expands in scale, constructionist projects are becoming both more complex (Reynolds & Caperton, 2011), more scalable (Resnick, Maloney, et al., 2009), and more affordable for schools following significant development in low cost “construction” technologies such as robotics and 3D printers. As such, there have been increasing opportunities to learn more about how students learn in constructionist contexts, advancing the science of learning. These discoveries will have the potential to improve the quality of all constructivist learning experiences. For example, Wilensky and Reisman (2006) have shown how constructionist modeling and simulation can make science learning more accessible, Resnick (1998) has shown how constructionism can reframe programming as art at scale, Buechley & Eisenberg (2008) have used e-textiles to engage female students in robotics, Eisenberg (2011) and Blikstein (2013, 2014) use constructionist digital fabrication to successfully teach programming, engineering, and electronics in a novel, integrated way. The findings of these research and design projects have the potential to be useful to a wide external community of teachers, researchers, practitioners, and other stakeholders. However, connecting findings from the constructionist tradition to the goals of policymakers can be challenging, due to the historical differences in methodology and values between these communities. The resources needed to study such interventions at scale are considerable, given the need to carefully document, code, and analyze each student’s work processes and artifacts. The designs of constructionist research often result in findings that do not map to what researchers, outside interests, and policymakers are expecting, in contrast to conventional controlled studies, which are designed to (more conclusively) answer a limited set of sharply targeted research questions. Due the lack of a common ground to discuss benefits and scalability of constructionist and project-based designs, these designs have been too frequently sidelined to niche institutions such as private schools, museums, or atypical public schools. To understand what the role EDM methods can play in constructionist research, we must frame what we mean by constructionist research more precisely. We follow Papert and Harel (1991) in their situating of constructionism, but they do not constrain the term to one formal definition. 
The definition is further complicated by the fact that constructionism has many overlaps with other research and design traditions, such as constructivism and socio-constructivism themselves, as well as project-based pedagogies and inquiry-based designs. However, we believe that it is possible to define the subset of constructionism amenable to EDM, a focus we adopt in this article for brevity. In this paper, we focus on the constructionist literature dealing with students learning to construct understandings by constructing (physical or virtual) artifacts, where the students' learning environments are designed and constrained such that building artifacts in/with that environment is designed to help students construct their own understandings. In other words, we are focusing on creative work done in computational environments designed to foster creative and transformational learning, such as NetLogo (Wilensky, 1999), Scratch (Resnick, Maloney, et al., 2009), or LEGO Mindstorms. This sub-category of constructionism can and does generate considerable formative and summative data. It also has the benefit of having a history of success in the classroom. From Papert’s seminal (1972) work through today, constructionist learning has been shown to promote the development of deep understanding of relatively complex content, with many examples ranging from mathematics (Harel, 1990; Wilensky, 1996) to history (Zahn, Krauskopf, Hesse, & Pea, 2010). However, constructionist learning environments, ideas, and findings have yet to reach the majority of classrooms and have had incomplete influence in the broader education research community. There are several potential reasons for this. One of them may be a lack of demonstration that findings are generalizable across populations and across specific content. Another reason is that constructionist activities are seen to be timeconsuming for teachers (Warschauer & Matuchniak, 2010), though, in practice, it has been shown that supporting understanding through project-based work could actually save time (Fosnot, 2005) and enable classroom dynamics that may streamline class preparation (e.g., peer teaching or peer feedback). A last reason is that constructionists almost universally value more deep understanding of scientific principles than facts or procedural skills even in contexts (e.g., many classrooms) in which memorization of facts and procedural skills is the target to be evaluated (Abelson & diSessa, 1986; Papert & Harel, 1991). Therefore, much of what is learned in constructionist environments does not directly translate to test scores or other established metrics. Constructionist research can be useful and convincing to audiences that do not yet take full advantage of the scientific findings of this community, but it requires careful consideration of framing and evidence to reach them. Educational data mining methods pose the potential to both enhance constructionist research, and to support constructionist researchers in communicating their findings in a fashion that other researchers consider valid. Blikstein (2011, p. 110) made ",
"title": ""
},
{
"docid": "312bfca90e57468622e6b3cd2b48a10b",
"text": "Faciogenital dysplasia or Aarskog–Scott syndrome (AAS) is a genetically heterogeneous developmental disorder. The X-linked form of AAS has been ascribed to mutations in the FGD1 gene. However, although AAS may be considered as a relatively frequent clinical diagnosis, mutations have been established in few patients. Genetic heterogeneity and the clinical overlap with a number of other syndromes might explain this discrepancy. In this study, we have conducted a single-strand conformation polymorphism (SSCP) analysis of the entire coding region of FGD1 in 46 AAS patients and identified eight novel mutations, including one insertion, four deletions and three missense mutations (19.56% detection rate). One mutation (528insC) was found in two independent families. The mutations are scattered all along the coding sequence. Phenotypically, all affected males present with the characteristic AAS phenotype. FGD1 mutations were not associated with severe mental retardation. However, neuropsychiatric disorders, mainly behavioural and learning problems in childhood, were observed in five out of 12 mutated individuals. The current study provides further evidence that mutations of FGD1 may cause AAS and expands the spectrum of disease-causing mutations. The importance of considering the neuropsychological phenotype of AAS patients is discussed.",
"title": ""
},
{
"docid": "7c89f5f0e7f3db92c1a2df21f957154d",
"text": "INTRODUCTION\nIn this study, we describe and depict unexpected sequelae of adult medical male circumcision (MMC) using the PrePex device.\n\n\nMATERIALS AND METHODS\nThe PrePex system is an elastic compression device for adult MMC. The device is well studied, has been pre-qualified by the World Health Organization (WHO), and its use is being scaled-up in African countries targeted by WHO. We conducted a PrePex implementation study in routine service delivery among 427 men in the age range of 18-49 in western Kenya. We captured penile photographs to create a record of adverse events (AEs) and to monitor healing. Several unexpected AEs ensued, including some that have not been reported in other PrePex studies. We describe and depict those unexpected complications and resulting treatments to alert circumcision providers in the relevant areas.\n\n\nRESULTS\nWe observed 5 device displacements (1.2%); 3 cases of early sloughing of foreskin tissue (0.7%) among men with long foreskins; 2 cases of a long foreskin obstructing urine flow, as it became dry and necrotic (0.5%); and 2 cases of insufficient foreskin removal caused by invagination for which surgical completion was necessary (0.5%). All of the participants healed completely by day 42 post-circumcision or shortly thereafter.\n\n\nCONCLUSION\nThe potential for these complications should be incorporated into PrePex training programs. Integration of devices into MMC programs in medically underserved areas requires the availability of prompt surgical intervention for some sequelae, particularly displacement events.",
"title": ""
},
{
"docid": "ddaf60e511051f3b7e521c4a90f3f9cf",
"text": "The objective of this study was to determine the effects of formulation excipients and physical characteristics of inhalation particles on their in vitro aerosolization performance, and thereby to maximize their respirable fraction. Dry powders were produced by spray-drying using excipients that are FDA-approved for inhalation as lactose, materials that are endogenous to the lungs as albumin and dipalmitoylphosphatidylcholine (DPPC); and/or protein stabilizers as trehalose or mannitol. Dry powders suitable for deep lung deposition, i.e. with an aerodynamic diameter of individual particles <3 microm, were prepared. They presented 0.04--0.25 g/cm(3) bulk tap densities, 3--5 microm geometric particle sizes, up to 90% emitted doses and 50% respirable fractions in the Andersen cascade impactor using a Spinhaler inhaler device. The incorporation of lactose, albumin and DPPC in the formulation all improved the aerosolization properties, in contrast to trehalose and the mannitol which decreased powder flowability. The relative proportion of the excipients affected aerosol performance as well. The lower the bulk powder tap density, the higher the respirable fraction. Optimization of in vitro aerosolization properties of inhalation dry powders can be achieved by appropriately selecting composition and physical characteristics of the particles.",
"title": ""
},
{
"docid": "ce18f78a9285a68016e7d793122d3079",
"text": "Civic technology, or civic tech, encompasses a rich body of work, inside and outside HCI, around how we shape technology for, and in turn how technology shapes, how we govern, organize, serve, and identify matters of concern for communities. This study builds on previous work by investigating how civic leaders in a large US city conceptualize civic tech, in particular, how they approach the intersection of data, design and civics. We encountered a range of overlapping voices, from providers, to connectors, to volunteers of civic services and resources. Through this account, we identified different conceptions and expectation of data, design and civics, as well as several shared issues around pressing problems and strategic aspirations. Reflecting on this set of issues produced guiding questions, in particular about the current and possible roles for design, to advance civic tech.",
"title": ""
},
{
"docid": "2da1279270b3e8925100f281447bfb6b",
"text": "Consideration of confounding is fundamental to the design and analysis of studies of causal effects. Yet, apart from confounding in experimental designs, the topic is given little or no discussion in most statistics texts. We here provide an overview of confounding and related concepts based on a counterfactual model for causation. Special attention is given to definitions of confounding, problems in control of confounding, the relation of confounding to exchangeability and collapsibility, and the importance of distinguishing confounding from noncollapsibility.",
"title": ""
},
{
"docid": "db207eb0d5896c2aad1f8485bc597e45",
"text": "One of the serious obstacles to the applications of speech emotion recognition systems in real-life settings is the lack of generalization of the emotion classifiers. Many recognition systems often present a dramatic drop in performance when tested on speech data obtained from different speakers, acoustic environments, linguistic content, and domain conditions. In this letter, we propose a novel unsupervised domain adaptation model, called Universum autoencoders, to improve the performance of the systems evaluated in mismatched training and test conditions. To address the mismatch, our proposed model not only learns discriminative information from labeled data, but also learns to incorporate the prior knowledge from unlabeled data into the learning. Experimental results on the labeled Geneva Whispered Emotion Corpus database plus other three unlabeled databases demonstrate the effectiveness of the proposed method when compared to other domain adaptation methods.",
"title": ""
},
{
"docid": "a1f60b03cf3a7dde3090cbf0a926a7e9",
"text": "Secondary analyses of Revised NEO Personality Inventory data from 26 cultures (N = 23,031) suggest that gender differences are small relative to individual variation within genders; differences are replicated across cultures for both college-age and adult samples, and differences are broadly consistent with gender stereotypes: Women reported themselves to be higher in Neuroticism, Agreeableness, Warmth, and Openness to Feelings, whereas men were higher in Assertiveness and Openness to Ideas. Contrary to predictions from evolutionary theory, the magnitude of gender differences varied across cultures. Contrary to predictions from the social role model, gender differences were most pronounced in European and American cultures in which traditional sex roles are minimized. Possible explanations for this surprising finding are discussed, including the attribution of masculine and feminine behaviors to roles rather than traits in traditional cultures.",
"title": ""
},
{
"docid": "7c4cb5f52509ad5a3795e9ce59980fec",
"text": "Line-of-sight stabilization against various disturbances is an essential property of gimbaled imaging systems mounted on mobile platforms. In recent years, the importance of target detection from higher distances has increased. This has raised the need for better stabilization performance. For that reason, stabilization loops are designed such that they have higher gains and larger bandwidths. As these are required for good disturbance attenuation, sufficient loop stability is also needed. However, model uncertainties around structural resonances impose strict restrictions on sufficient loop stability. Therefore, to satisfy high stabilization performance in the presence of model uncertainties, robust control methods are required. In this paper, a robust controller design in LQG/LTR, H∞ , and μ -synthesis framework is described for a two-axis gimbal. First, the performance criteria and weights are determined to minimize the stabilization error with moderate control effort under known platform disturbance profile. Second, model uncertainties are determined by considering locally linearized models at different operating points. Next, robust LQG/LTR, H∞ , and μ controllers are designed. Robust stability and performance of the three designs are investigated and compared. The paper finishes with the experimental performances to validate the designed robust controllers.",
"title": ""
}
] |
scidocsrr
|
b841e662d83c9173e6c9acbea4a34e7d
|
Real-time eSports Match Result Prediction
|
[
{
"docid": "fe17c22b98cd2319628ff513f32b54a0",
"text": "In this paper, we tried using logistic regression to predict the winning side of DotA2 games based on hero lineups. We collected data using API provided by the game developer. We find out that only based on hero lineup to predict the game result is not good enough. We also tried to select feature using stepwise regression and the result is better than using all the heroes and hero combos as features.",
"title": ""
},
{
"docid": "b638e384285bbb03bdc71f2eb2b27ff8",
"text": "In this paper, we present two win predictors for the popular online game Dota 2. The first predictor uses full post-match data and the second predictor uses only hero selection data. We will explore and build upon existing work on the topic as well as detail the specifics of both algorithms including data collection, exploratory analysis, feature selection, modeling, and results.",
"title": ""
}
] |
[
{
"docid": "7a1083d9d292ba3f240c17df0d149a52",
"text": "0377-2217/$ see front matter 2012 Elsevier B.V. A doi:10.1016/j.ejor.2012.01.019 ⇑ Corresponding author. Tel.: +31 50 363 8617; fax E-mail addresses: [email protected] (W. Rom (R. Teunter), [email protected] (W. van Jaarsvel 1 Tel.: +31 50 363 7020; fax: +31 53 489 2032. 2 Tel.: +31 10 408 1472; fax: +31 10 408 9640. Forecasting spare parts demand is notoriously difficult, as demand is typically intermittent and lumpy. Specialized methods such as that by Croston are available, but these are not based on the repair operations that cause the intermittency and lumpiness of demand. In this paper, we do propose a method that, in addition to the demand for spare parts, considers the type of component repaired. This two-step forecasting method separately updates the average number of parts needed per repair and the number of repairs for each type of component. The method is tested in an empirical, comparative study for a service provider in the aviation industry. Our results show that the two-step method is one of the most accurate methods, and that it performs considerably better than Croston’s method. Moreover, contrary to other methods, the two-step method can use information on planned maintenance and repair operations to reduce forecasts errors by up to 20%. We derive further analytical and simulation results that help explain the empirical findings. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "99f1bbd3eeda4aee35a96d684de81511",
"text": "Perimeter protection aims at identifying intrusions across the temporary base established by army in critical regions. Convex-hull algorithm is used to determine the boundary nodes among a set of nodes in the network. To study the effectiveness of such algorithm, we opted three variations, such as distributed approach, centralized, and mobile approach, suitable for wireless sensor networks for boundary detection. The convex-hull approaches are simulated with different node density, and the performance is measured in terms of energy consumption, boundary detection time, and accuracy. Results from the simulations highlight that the convex-hull approach is effective under densely deployed nodes in an environment. The different approaches of convex-hull algorithm are found to be suitable under different sensor network application scenarios.",
"title": ""
},
{
"docid": "c3fcc103374906a1ba21658c5add67fe",
"text": "Behavioural scoring models are generally used to estimate the probability that a customer of a financial institution who owns a credit product will default on this product in a fixed time horizon. However, one single customer usually purchases many credit products from an institution while behavioural scoring models generally treat each of these products independently. In order to make credit risk management easier and more efficient, it is interesting to develop customer default scoring models. These models estimate the probability that a customer of a certain financial institution will have credit issues with at least one product in a fixed time horizon. In this study, three strategies to develop customer default scoring models are described. One of the strategies is regularly utilized by financial institutions and the other two will be proposed herein. The performance of these strategies is compared by means of an actual data bank supplied by a financial institution and a Monte Carlo simulation study. Journal of the Operational Research Society advance online publication, 20 April 2016; doi:10.1057/jors.2016.23",
"title": ""
},
{
"docid": "12ee117f58c5bd5b6794de581bfcacdb",
"text": "The visualization of complex network traffic involving a large number of communication devices is a common yet challenging task. Traditional layout methods create the network graph with overwhelming visual clutter, which hinders the network understanding and traffic analysis tasks. The existing graph simplification algorithms (e.g. community-based clustering) can effectively reduce the visual complexity, but lead to less meaningful traffic representations. In this paper, we introduce a new method to the traffic monitoring and anomaly analysis of large networks, namely Structural Equivalence Grouping (SEG). Based on the intrinsic nature of the computer network traffic, SEG condenses the graph by more than 20 times while preserving the critical connectivity information. Computationally, SEG has a linear time complexity and supports undirected, directed and weighted traffic graphs up to a million nodes. We have built a Network Security and Anomaly Visualization (NSAV) tool based on SEG and conducted case studies in several real-world scenarios to show the effectiveness of our technique.",
"title": ""
},
{
"docid": "89d4143e7845d191433882f3fa5aaa26",
"text": "There is a large variety of objects and appliances in human environments, such as stoves, coffee dispensers, juice extractors, and so on. It is challenging for a roboticist to program a robot for each of these object types and for each of their instantiations. In this work, we present a novel approach to manipulation planning based on the idea that many household objects share similarly-operated object parts. We formulate the manipulation planning as a structured prediction problem and design a deep learning model that can handle large noise in the manipulation demonstrations and learns features from three different modalities: point-clouds, language and trajectory. In order to collect a large number of manipulation demonstrations for different objects, we developed a new crowd-sourcing platform called Robobarista. We test our model on our dataset consisting of 116 objects with 249 parts along with 250 language instructions, for which there are 1225 crowd-sourced manipulation demonstrations. We further show that our robot can even manipulate objects it has never seen before. Keywords— Robotics and Learning, Crowd-sourcing, Manipulation",
"title": ""
},
{
"docid": "c7048e00cdb56e2f1085d23b9317c147",
"text": "`Design-for-Assembly (DFA)\" is an engineering concept concerned with improving product designs for easier and less costly assembly operations. Much of academic or industrial eeorts in this area have been devoted to the development of analysis tools for measuring the \\assemblability\" of a design. On the other hand, little attention has been paid to the actual redesign process. The goal of this paper is to develop a computer-aided tool for assisting designers in redesigning a product for DFA. One method of redesign, known as the \\replay and modify\" paradigm, is to replay a previous design plan, and modify the plan wherever necessary and possible, in accordance to the original design intention, for newly speciied design goals 24]. The \\replay and modify\" paradigm is an eeective redesign method because it ooers a more global solution than simple local patch-ups. For such a paradigm, design information, such as the design plan and design rationale, must be recorded during design. Unfortunately, such design information is not usually available in practice. To handle the potential absence of the required design information and support the \\replay and modify\" paradigm, the redesign process is modeled as a reverse engineering activity. Reverse engineering roughly refers to an activity of inferring the process, e.g. the design plan, used in creating a given design, and using the inferred knowledge for design recreation or redesign. In this paper, the development of an interactive computer-aided redesign tool for Design-for-Assembly, called REVENGE (REVerse ENGineering), is presented. The architecture of REVENGE is composed of mainly four activities: design analysis, knowledge acquisition, design plan reconstruction, and case-based design modiication. First a DFA analysis is performed to uncover any undesirable aspects of the design with respect to its assemblability. REVENGE , then, interactively solicits designers for useful design information that might not be available from standard design documents such as design rationale. Then, a heuristic algorithm reconstructs a default design plan. A default design plan is a sequence of probable design actions that might have led to the original design. DFA problems identiied during the analysis stage are mapped to the portion of the design plan from which they might have originated. Problems that originate from the earlier portion of the design plan are attacked rst. A case-based approach is used to solve each problem by retrieving a similar redesign case and adapting it to the current situation. REVENGE has been implemented, and has been tested …",
"title": ""
},
{
"docid": "3fce18c6e1f909b91f95667a563aa194",
"text": "In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. Within this framework we propose using a hierarchal learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing similar MC clusters to that in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; and 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the retrieved images by the network can have predicting value for the disease condition of the query.",
"title": ""
},
{
"docid": "295762c2d1e407dee0080c325338fc70",
"text": "Automotive radars in the 77-81 GHz band will be widely deployed in the coming years. This paper provides a comparison of the bi-phase modulated continuous wave (PMCW) and linear frequency-modulated continuous wave (FMCW) waveforms for these radars. The comparison covers performance, implementation and other non technical aspects. Multiple Input Multiple Output (MIMO) radars require perfectly orthogonal waveforms on the different transmit antennas, preferably transmitting simultaneously for fast illumination. In this paper, we propose two techniques: Outer code and Range domain, to enable MIMO processing on the PMCW radars. The proposed MIMO techniques are verified with both simulation and lab experiments, on a fully integrated deep-submicron CMOS integrated circuit designed for a 79 GHz PMCW radar. Our analysis shows that, although not widely used in the automotive industry, PMCW radars are advantageous for low cost, high volume single-chip production and excellent performance.",
"title": ""
},
{
"docid": "5fa4e054d2afff8a93a961d54734ac3d",
"text": "Intracellular signaling from chloroplast to nucleus followed by a subsequent response in the chloroplast is called retrograde signaling. It not only coordinates the expression of nuclear and chloroplast genes, which is essential for chloroplast biogenesis, but also maintains chloroplast function at optimal levels in response to fluxes in metabolites and changes in environmental conditions. In recent years several putative retrograde signals have been identified and signaling pathways have been proposed. Here we review retrograde signals derived from tetrapyrroles, carotenoids, nucleotides and isoprene precursors in response to abiotic stresses, including oxidative stress. We discuss the responses that these signals elicit and show that they not only modify chloroplast function but also influence other aspects of plant development and adaptation.",
"title": ""
},
{
"docid": "cb456d94420dcc3811983004a1af7c6b",
"text": "A new method for deriving isolated buck-boost (IBB) converter with single-stage power conversion is proposed in this paper and novel IBB converters based on high-frequency bridgeless-interleaved boost rectifiers are presented. The semiconductors, conduction losses, and switching losses are reduced significantly by integrating the interleaved boost converters into the full-bridge diode-rectifier. Various high-frequency bridgeless boost rectifiers are harvested based on different types of interleaved boost converters, including the conventional boost converter and high step-up boost converters with voltage multiplier and coupled inductor. The full-bridge IBB converter with voltage multiplier is analyzed in detail. The voltage multiplier helps to enhance the voltage gain and reduce the voltage stresses of the semiconductors in the rectification circuit. Hence, a transformer with reduced turns ratio and parasitic parameters, and low-voltage rated MOSFETs and diodes with better switching and conduction performances can be applied to improve the efficiency. Moreover, optimized phase-shift modulation strategy is applied to the full-bridge IBB converter to achieve isolated buck and boost conversion. What's more, soft-switching performance of all of the active switches and diodes within the whole operating range is achieved. A 380-V output prototype is fabricated to verify the effectiveness of the proposed IBB converters and its control strategies.",
"title": ""
},
{
"docid": "7d1bdb84425d344155d30f4c26ce47da",
"text": "In the information age, data is pervasive. In some applications, data explosion is a significant phenomenon. The massive data volume poses challenges to both human users and computers. In this project, we propose a new model for identifying representative set from a large database. A representative set is a special subset of the original dataset, which has three main characteristics: It is significantly smaller in size compared to the original dataset. It captures the most information from the original dataset compared to other subsets of the same size. It has low redundancy among the representatives it contains. We use information-theoretic measures such as mutual information and relative entropy to measure the representativeness of the representative set. We first design a greedy algorithm and then present a heuristic algorithm that delivers much better performance. We run experiments on two real datasets and evaluate the effectiveness of our representative set in terms of coverage and accuracy. The experiments show that our representative set attains expected characteristics and captures information more efficiently.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "6058813ab7c5a2504faea224b9f32bba",
"text": "LinkedIn, with over 1.5 million Groups, has become a popular place for business employees to create private groups to exchange information and communicate. Recent research on social networking sites (SNSs) has widely explored the phenomenon and its positive effects on firms. However, social networking’s negative effects on information security were not adequately addressed. Supported by the credibility, persuasion and motivation theories, we conducted 1) a field experiment, demonstrating how sensitive organizational data can be exploited, followed by 2) a qualitative study of employees engaged in SNSs activities; and 3) interviews with Chief Information Security Officers (CISOs). Our research has resulted in four main findings: 1) employees are easily deceived and susceptible to victimization on SNSs where contextual elements provide psychological triggers to attackers; 2) organizations lack mechanisms to control SNS online security threats, 3) companies need to strengthen their information security policies related to SNSs, where stronger employee identification and authentication is needed, and 4) SNSs have become important security holes where, with the use of social engineering techniques, malicious attacks are easily facilitated.",
"title": ""
},
{
"docid": "c4a895af5fe46e91f599f71403948a2b",
"text": "The rise in popularity of the Android platform has resulted in an explosion of malware threats targeting it. As both Android malware and the operating system itself constantly evolve, it is very challenging to design robust malware mitigation techniques that can operate for long periods of time without the need for modifications or costly re-training. In this paper, we present MAMADROID, an Android malware detection system that relies on app behavior. MAMADROID builds a behavioral model, in the form of a Markov chain, from the sequence of abstracted API calls performed by an app, and uses it to extract features and perform classification. By abstracting calls to their packages or families, MAMADROID maintains resilience to API changes and keeps the feature set size manageable. We evaluate its accuracy on a dataset of 8.5K benign and 35.5K malicious apps collected over a period of six years, showing that it not only effectively detects malware (with up to 99% F-measure), but also that the model built by the system keeps its detection capabilities for long periods of time (on average, 87% and 73% F-measure, respectively, one and two years after training). Finally, we compare against DROIDAPIMINER, a state-of-the-art system that relies on the frequency of API calls performed by apps, showing that MAMADROID significantly outperforms it.",
"title": ""
},
{
"docid": "197f5af02ea53b1dd32167780c4126ed",
"text": "A new technique for summarization is presented here for summarizing articles known as text summarization using neural network and rhetorical structure theory. A neural network is trained to learn the relevant characteristics of sentences by using back propagation technique to train the neural network which will be used in the summary of the article. After training neural network is then modified to feature fusion and pruning the relevant characteristics apparent in summary sentences. Finally, the modified neural network is used to summarize articles and combining it with the rhetorical structure theory to form final summary of an article.",
"title": ""
},
{
"docid": "c225035f5f3ad335e1a7c1e136ccfa2c",
"text": "During each cycle of pre-mRNA splicing, the pre-catalytic spliceosome (B complex) is converted into the activated spliceosome (Bact complex), which has a well-formed active site but cannot proceed to the branching reaction. Here, we present the cryo-EM structure of the human Bact complex in three distinct conformational states. The EM map allows atomic modeling of nearly all protein components of the U2 small nuclear ribonucleoprotein (snRNP), including three of the SF3a complex and seven of the SF3b complex. The structure of the human Bact complex contains 52 proteins, U2, U5, and U6 small nuclear RNA (snRNA), and a pre-mRNA. Three distinct conformations have been captured, representing the early, mature, and late states of the human Bact complex. These complexes differ in the orientation of the Switch loop of Prp8, the splicing factors RNF113A and NY-CO-10, and most components of the NineTeen complex (NTC) and the NTC-related complex. Analysis of these three complexes and comparison with the B and C complexes reveal an ordered flux of components in the B-to-Bact and the Bact-to-B* transitions, which ultimately prime the active site for the branching reaction.",
"title": ""
},
{
"docid": "6784e31e2ec313698a622a7e78288f68",
"text": "Web-based technology is often the technology of choice for distance education given the ease of use of the tools to browse the resources on the Web, the relative affordability of accessing the ubiquitous Web, and the simplicity of deploying and maintaining resources on the WorldWide Web. Many sophisticated web-based learning environments have been developed and are in use around the world. The same technology is being used for electronic commerce and has become extremely popular. However, while there are clever tools developed to understand on-line customer’s behaviours in order to increase sales and profit, there is very little done to automatically discover access patterns to understand learners’ behaviour on web-based distance learning. Educators, using on-line learning environments and tools, have very little support to evaluate learners’ activities and discriminate between different learners’ on-line behaviours. In this paper, we discuss some data mining and machine learning techniques that could be used to enhance web-based learning environments for the educator to better evaluate the leaning process, as well as for the learners to help them in their learning endeavour.",
"title": ""
},
{
"docid": "91eaef6e482601533656ca4786b7a023",
"text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF), with consideration of the entire life cycle of advertising campaigns. Then, we formulated our BOF framework, made some mathematical analysis on some desirable properties, and presented an effective solution algorithm. Moreover, we established a simple but illustrative instantiation of our BOF framework which can help advertisers to allocate and adjust the budget of search advertising campaigns. Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed some experiments to evaluate the effectiveness of our BOF framework and instantiated strategies. Experimental results are quite promising, where our BOF framework and instantiated strategies perform better than two baseline budget strategies commonly used in practical advertising campaigns.",
"title": ""
},
{
"docid": "cbbb2c0a9d2895c47c488bed46d8f468",
"text": "We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.",
"title": ""
},
{
"docid": "c4c95d67756bc85e69e67b4caee25269",
"text": "In this paper, we focus on the synthetic understanding of documents, specifically reading comprehension (RC). A current problem with RC is the need for a method of analyzing the RC system performance to realize further development. We propose a methodology for examining RC systems from multiple viewpoints. Our methodology consists of three steps: define a set of basic skills used for RC, manually annotate questions of an existing RC task, and show the performances for each skill of existing systems that have been proposed for the task. We demonstrated the proposed methodology by annotating MCTest, a freely available dataset for testing RC. The results of the annotation showed that answering RC questions requires combinations of multiple skills. In addition, our defined RC skills were found to be useful and promising for decomposing and analyzing the RC process. Finally, we discuss ways to improve our approach based on the results of two extra annotations.",
"title": ""
}
] |
scidocsrr
|
cbfecf9be1159d05050dcd7d18f1b492
|
A Student Friendly Toolbox for Power System Analysis Using MATLAB
|
[
{
"docid": "16d6862cf891e5219aae10d5fcd6ce92",
"text": "This paper describes the Power System Analysis Toolbox (PSAT), an open source Matlab and GNU/Octave-based software package for analysis and design of small to medium size electric power systems. PSAT includes power flow, continuation power flow, optimal power flow, small-signal stability analysis, and time-domain simulation, as well as several static and dynamic models, including nonconventional loads, synchronous and asynchronous machines, regulators, and FACTS. PSAT is also provided with a complete set of user-friendly graphical interfaces and a Simulink-based editor of one-line network diagrams. Basic features, algorithms, and a variety of case studies are presented in this paper to illustrate the capabilities of the presented tool and its suitability for educational and research purposes.",
"title": ""
}
] |
[
{
"docid": "4b97e5694dc8f1d2e1b5bf8f28bd9b10",
"text": "Poor eating habits are an important public health issue that has large health and economic implications. Many food preferences are established early, but because people make more and more independent eating decisions as they move through adolescence, the transition to independent living during the university days is an important event. To study the phenomenon of food selection, the heath belief model was applied to predict the likelihood of healthy eating among university students. Structural equation modeling was used to investigate the validity of the health belief model (HBM) among 194 students, followed by gender-based analyses. The data strongly supported the HBM. Social change campaign implications are discussed.",
"title": ""
},
{
"docid": "defbecacc15af7684a6f9722349f42e3",
"text": "We present a novel, unsupervised, and distance measure agnostic method for search space reduction in spell correction using neural character embeddings. The embeddings are learned by skip-gram word2vec training on sequences generated from dictionary words in a phonetic informationretentive manner. We report a very high performance in terms of both success rates and reduction of search space on the Birkbeck spelling error corpus. To the best of our knowledge, this is the first application of word2vec to spell correction.",
"title": ""
},
{
"docid": "7d2baafa1e2abb311fe9c68f4f9fe46a",
"text": "In this paper, we present a conversational model that incorporates both context and participant role for two-party conversations. Different architectures are explored for integrating participant role and context information into a Long Short-term Memory (LSTM) language model. The conversational model can function as a language model or a language generation model. Experiments on the Ubuntu Dialog Corpus show that our model can capture multiple turn interaction between participants. The proposed method outperforms a traditional LSTM model as measured by language model perplexity and response ranking. Generated responses show characteristic differences between the two participant roles.",
"title": ""
},
{
"docid": "637a7d7e0c33b6f63f17f9ec77add5a6",
"text": "In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.",
"title": ""
},
{
"docid": "b1fabdbfea2fcffc8071371de8399b69",
"text": "Cities across the United States are implementing information communication technologies in an effort to improve government services. One such innovation in e-government is the creation of 311 systems, offering a centralized platform where citizens can request services, report non-emergency concerns, and obtain information about the city via hotline, mobile, or web-based applications. The NYC 311 service request system represents one of the most significant links between citizens and city government, accounting for more than 8,000,000 requests annually. These systems are generating massive amounts of data that, when properly managed, cleaned, and mined, can yield significant insights into the real-time condition of the city. Increasingly, these data are being used to develop predictive models of citizen concerns and problem conditions within the city. However, predictive models trained on these data can suffer from biases in the propensity to make a request that can vary based on socio-economic and demographic characteristics of an area, cultural differences that can affect citizens’ willingness to interact with their government, and differential access to Internet connectivity. Using more than 20,000,000 311 requests together with building violation data from the NYC Department of Buildings and the NYC Department of Housing Preservation and Development; property data from NYC Department of City Planning; and demographic and socioeconomic data from the U.S. Census American Community Survey we develop a two-step methodology to evaluate the propensity to complain: (1) we predict, using a gradient boosting regression model, the likelihood of heating and hot water violations for a given building, and (2) we then compare the actual complaint volume for buildings with predicted violations to quantify discrepancies across the City. Our model predicting service request volumes over time will contribute to the efficiency of the 311 system by informing shortand long-term resource allocation strategy and improving the agency’s performance in responding to requests. For instance, the outcome of our longitudinal pattern analysis allows the city to predict building safety hazards early and take action, leading to anticipatory safety and inspection actions. Furthermore, findings will provide novel insight into equity and community engagement through 311, and provide the basis for acknowledging and accounting for Bloomberg Data for Good Exchange Conference. 24-Sep-2017, Chicago, IL, USA. bias in machine learning applications trained on 311 data.",
"title": ""
},
{
"docid": "6e9072f3319ba10557d0b635769b83e7",
"text": "“Big Data” is a term encompassing the use of techniques to capture, process, analyse and visualize potentially large datasets in a reasonable timeframe not accessible to standard IT technologies. By extension, the platform, tools and software used for this purpose are collectively called “Big Data technologies”. In this paper, we provide the meaning, characteristics, models, technologies, life cycle and many other aspects of big data. Keywords— Big Data, Hadoop, Map Reduce, HDFS (Hadoop Distributed File System), Cloud Computing.",
"title": ""
},
{
"docid": "9042faed1193b7bc4c31f2bc239c5d89",
"text": "Hand gesture recognition for human computer interaction is an area of active research in computer vision and machine learning. The primary goal of gesture recognition research is to create a system, which can identify specific human gestures and use them to convey information or for device control. This paper presents a comparative study of four classification algorithms for static hand gesture classification using two different hand features data sets. The approach used consists in identifying hand pixels in each frame, extract features and use those features to recognize a specific hand pose. The results obtained proved that the ANN had a very good performance and that the feature selection and data preparation is an important phase in the all process, when using low-resolution images like the ones obtained with the camera in the current work.",
"title": ""
},
{
"docid": "6ae9bfc681e2a9454196f4aa0c49a4da",
"text": "Previous research has indicated that exposure to traditional media (i.e., television, film, and print) predicts the likelihood of internalization of a thin ideal; however, the relationship between exposure to internet-based social media on internalization of this ideal remains less understood. Social media differ from traditional forms of media by allowing users to create and upload their own content that is then subject to feedback from other users. This meta-analysis examined the association linking the use of social networking sites (SNSs) and the internalization of a thin ideal in females. Systematic searches were performed in the databases: PsychINFO, PubMed, Web of Science, Communication and Mass Media Complete, and ProQuest Dissertations and Theses Global. Six studies were included in the meta-analysis that yielded 10 independent effect sizes and a total of 1,829 female participants ranging in age from 10 to 46 years. We found a positive association between extent of use of SNSs and extent of internalization of a thin ideal with a small to moderate effect size (r = 0.18). The positive effect indicated that more use of SNSs was associated with significantly higher internalization of a thin ideal. A comparison was also made between study outcomes measuring broad use of SNSs and outcomes measuring SNS use solely as a function of specific appearance-related features (e.g., posting or viewing photographs). The use of appearance-related features had a stronger relationship with the internalization of a thin ideal than broad use of SNSs. The finding suggests that the ability to interact with appearance-related features online and be an active participant in media creation is associated with body image disturbance. Future research should aim to explore the way SNS users interact with the media posted online and the relationship linking the use of specific appearance features and body image disturbance.",
"title": ""
},
{
"docid": "5896289f0a9b788ef722756953a580ce",
"text": "Biodiesel, defined as the mono-alkyl esters of vegetable oils or animal fats, is an balternativeQ diesel fuel that is becoming accepted in a steadily growing number of countries around the world. Since the source of biodiesel varies with the location and other sources such as recycled oils are continuously gaining interest, it is important to possess data on how the various fatty acid profiles of the different sources can influence biodiesel fuel properties. The properties of the various individual fatty esters that comprise biodiesel determine the overall fuel properties of the biodiesel fuel. In turn, the properties of the various fatty esters are determined by the structural features of the fatty acid and the alcohol moieties that comprise a fatty ester. Structural features that influence the physical and fuel properties of a fatty ester molecule are chain length, degree of unsaturation, and branching of the chain. Important fuel properties of biodiesel that are influenced by the fatty acid profile and, in turn, by the structural features of the various fatty esters are cetane number and ultimately exhaust emissions, heat of combustion, cold flow, oxidative stability, viscosity, and lubricity. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "27775805c45a82cbd31fd9a5e93f3df1",
"text": "In a dynamic world, mechanisms allowing prediction of future situations can provide a selective advantage. We suggest that memory systems differ in the degree of flexibility they offer for anticipatory behavior and put forward a corresponding taxonomy of prospection. The adaptive advantage of any memory system can only lie in what it contributes for future survival. The most flexible is episodic memory, which we suggest is part of a more general faculty of mental time travel that allows us not only to go back in time, but also to foresee, plan, and shape virtually any specific future event. We review comparative studies and find that, in spite of increased research in the area, there is as yet no convincing evidence for mental time travel in nonhuman animals. We submit that mental time travel is not an encapsulated cognitive system, but instead comprises several subsidiary mechanisms. A theater metaphor serves as an analogy for the kind of mechanisms required for effective mental time travel. We propose that future research should consider these mechanisms in addition to direct evidence of future-directed action. We maintain that the emergence of mental time travel in evolution was a crucial step towards our current success.",
"title": ""
},
{
"docid": "a2688a1169babed7e35a52fa875505d4",
"text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.",
"title": ""
},
{
"docid": "b73a9a7770a2bbd5edcc991d7b848371",
"text": "This paper overviews various switched flux permanent magnet machines and their design and performance features, with particular emphasis on machine topologies with reduced magnet usage or without using magnet, as well as with variable flux capability. In addition, this paper also describes their relationships with doubly-salient permanent magnet machines and flux reversal permanent magnet machines.",
"title": ""
},
{
"docid": "590e0965ca61223d5fefb82e89f24fd0",
"text": "Large software projects contain significant code duplication, mainly due to copying and pasting code. Many techniques have been developed to identify duplicated code to enable applications such as refactoring, detecting bugs, and protecting intellectual property. Because source code is often unavailable, especially for third-party software, finding duplicated code in binaries becomes particularly important. However, existing techniques operate primarily on source code, and no effective tool exists for binaries.\n In this paper, we describe the first practical clone detection algorithm for binary executables. Our algorithm extends an existing tree similarity framework based on clustering of characteristic vectors of labeled trees with novel techniques to normalize assembly instructions and to accurately and compactly model their structural information. We have implemented our technique and evaluated it on Windows XP system binaries totaling over 50 million assembly instructions. Results show that it is both scalable and precise: it analyzed Windows XP system binaries in a few hours and produced few false positives. We believe our technique is a practical, enabling technology for many applications dealing with binary code.",
"title": ""
},
{
"docid": "f05225e7e7c35eaafef59487d16a67c9",
"text": "Although constructivism is a concept that has been embraced recently, a great number of sociologists, psychologists, applied linguists, and teachers have provided varied definitions of this concept. Also many philosophers and educationalists such as Piaget, Vygotsky, and Perkins suggest that constructivism and social constructivism try to solve the problems of traditional teaching and learning. This research review represents the meaning and the origin of constructivism, and then discusses the role of leaning, teaching, learner, and teacher in the first part from constructivist perspective. In the second part, the paper discusses the same issues, as presented in the first part, from social constructivist perspective. The purpose of this research review is to make EFL teachers and EFL students more familiar with the importance and guidance of both constructivism and social constructivism perspectives.",
"title": ""
},
{
"docid": "e8f7006c9235e04f16cfeeb9d3c4f264",
"text": "Widespread deployment of biometric systems supporting consumer transactions is starting to occur. Smart consumer devices, such as tablets and phones, have the potential to act as biometric readers authenticating user transactions. However, the use of these devices in uncontrolled environments is highly susceptible to replay attacks, where these biometric data are captured and replayed at a later time. Current approaches to counter replay attacks in this context are inadequate. In order to show this, we demonstrate a simple replay attack that is 100% effective against a recent state-of-the-art face recognition system; this system was specifically designed to robustly distinguish between live people and spoofing attempts, such as photographs. This paper proposes an approach to counter replay attacks for face recognition on smart consumer devices using a noninvasive challenge and response technique. The image on the screen creates the challenge, and the dynamic reflection from the person's face as they look at the screen forms the response. The sequence of screen images and their associated reflections digitally watermarks the video. By extracting the features from the reflection region, it is possible to determine if the reflection matches the sequence of images that were displayed on the screen. Experiments indicate that the face reflection sequences can be classified under ideal conditions with a high degree of confidence. These encouraging results may pave the way for further studies in the use of video analysis for defeating biometric replay attacks on consumer devices.",
"title": ""
},
{
"docid": "3177e9dd683fdc66cbca3bd985f694b1",
"text": "Online communities allow millions of people who would never meet in person to interact. People join web-based discussion boards, email lists, and chat rooms for friendship, social support, entertainment, and information on technical, health, and leisure activities [24]. And they do so in droves. One of the earliest networks of online communities, Usenet, had over nine million unique contributors, 250 million messages, and approximately 200,000 active groups in 2003 [27], while the newer MySpace, founded in 2003, attracts a quarter million new members every day [27].",
"title": ""
},
{
"docid": "ff0c99e547d41fbc71ba1d4ac4a17411",
"text": "Measuring similarities between unlabeled time series trajectories is an important problem in domains as diverse as medicine, astronomy, finance, and computer vision. It is often unclear what is the appropriate metric to use because of the complex nature of noise in the trajectories (e.g. different sampling rates or outliers). Domain experts typically hand-craft or manually select a specific metric, such as dynamic time warping (DTW), to apply on their data. In this paper, we propose Autowarp, an end-to-end algorithm that optimizes and learns a good metric given unlabeled trajectories. We define a flexible and differentiable family of warping metrics, which encompasses common metrics such as DTW, Euclidean, and edit distance. Autowarp then leverages the representation power of sequence autoencoders to optimize for a member of this warping distance family. The output is a metric which is easy to interpret and can be robustly learned from relatively few trajectories. In systematic experiments across different domains, we show that Autowarp often outperforms hand-crafted trajectory similarity metrics.",
"title": ""
},
{
"docid": "a50151963608bccdcb53b3f390db6918",
"text": "In order to obtain more value added products, a product quality control is essentially required Many studies show that quality of agriculture products may be reduced from many causes. One of the most important factors of such quality plant diseases. Consequently, minimizing plant diseases allows substantially improving quality of the product Suitable diagnosis of crop disease in the field is very critical for the increased production. Foliar is the major important fungal disease of cotton and occurs in all growing Indian cotton regions. In this paper I express Technological Strategies uses mobile captured symptoms of Cotton Leaf Spot images and categorize the diseases using support vector machine. The classifier is being trained to achieve intelligent farming, including early detection of disease in the groves, selective fungicide application, etc. This proposed work is based on Segmentation techniques in which, the captured images are processed for enrichment first. Then texture and color Feature extraction techniques are used to extract features such as boundary, shape, color and texture for the disease spots to recognize diseases.",
"title": ""
},
{
"docid": "0ce4a0dfe5ea87fb87f5d39b13196e94",
"text": "Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector QuantisedVariational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of “posterior collapse” -— where the latents are ignored when they are paired with a powerful autoregressive decoder -— typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.",
"title": ""
},
{
"docid": "66fce3b6c516a4fa4281d19d6055b338",
"text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.",
"title": ""
}
] |
scidocsrr
|
0010df88a03beacfb67cf12b021984a7
|
Effects of poverty and maternal depression on early child development.
|
[
{
"docid": "a89c53f4fbe47e7a5e49193f0786cd6d",
"text": "Although hundreds of studies have documented the association between family poverty and children's health, achievement, and behavior, few measure the effects of the timing, depth, and duration of poverty on children, and many fail to adjust for other family characteristics (for example, female headship, mother's age, and schooling) that may account for much of the observed correlation between poverty and child outcomes. This article focuses on a recent set of studies that explore the relationship between poverty and child outcomes in depth. By and large, this research supports the conclusion that family income has selective but, in some instances, quite substantial effects on child and adolescent well-being. Family income appears to be more strongly related to children's ability and achievement than to their emotional outcomes. Children who live in extreme poverty or who live below the poverty line for multiple years appear, all other things being equal, to suffer the worst outcomes. The timing of poverty also seems to be important for certain child outcomes. Children who experience poverty during their preschool and early school years have lower rates of school completion than children and adolescents who experience poverty only in later years. Although more research is needed on the significance of the timing of poverty on child outcomes, findings to date suggest that interventions during early childhood may be most important in reducing poverty's impact on children.",
"title": ""
}
] |
[
{
"docid": "4da3f01ac76da39be45ab39c1e46bcf0",
"text": "Depth cameras are low-cost, plug & play solution to generate point cloud. 3D depth camera yields depth images which do not convey the actual distance. A 3D camera driver does not support raw depth data output, these are usually filtered and calibrated as per the sensor specifications and hence a method is required to map every pixel back to its original point in 3D space. This paper demonstrates the method to triangulate a pixel from the 2D depth image back to its actual position in 3D space. Further this method illustrates the independence of this mapping operation, which facilitates parallel computing. Triangulation method and ratios between the pixel positions and camera parameters are used to estimate the true position in 3D space. The algorithm performance can be increased by 70% by the usage of TPL libraries. This performance differs from processor to processor",
"title": ""
},
{
"docid": "31a9058f91fc8ebae7e278aabd4baa1b",
"text": "Recent deep learning based approaches have achieved great success on handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect for understanding a language, another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework by using the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters. To recognize Chinese characters, previous methods usually adopt the convolutional neural network (CNN) models which require transforming the online handwriting trajectory into image-like representations. Instead, our RNN based approach is an end-to-end system which directly deals with the sequential structure and does not require any domain-specific knowledge. With the RNN system (combining an LSTM and GRU), state-of-the-art performance can be achieved on the ICDAR-2013 competition database. Furthermore, under the RNN framework, a conditional generative model with character embedding is proposed for automatically drawing recognizable Chinese characters. The generated characters (in vector format) are human-readable and also can be recognized by the discriminative RNN model with high accuracy. Experimental results verify the effectiveness of using RNNs as both generative and discriminative models for the tasks of drawing and recognizing Chinese characters.",
"title": ""
},
{
"docid": "da7b39dce3c7c8a08f11db132925fe37",
"text": "In this paper, a new language identification system is presented based on the total variability approach previously developed in the field of speaker identification. Various techniques are employed to extract the most salient features in the lower dimensional i-vector space and the system developed results in excellent performance on the 2009 LRE evaluation set without the need for any post-processing or backend techniques. Additional performance gains are observed when the system is combined with other acoustic systems.",
"title": ""
},
{
"docid": "77da685bf71a77a6ddca71d1cbff6501",
"text": "Interpersonal forgiving was conceptualized in the context of a 2-factor motivational system that governs people's responses to interpersonal offenses. Four studies were conducted to examine the extent to which forgiving could be predicted with relationship-level variables such as satisfaction, commitment, and closeness; offense-level variables such as apology and impact of the offense; and social-cognitive variables such as offender-focused empathy and rumination about the offense. Also described is the development of the transgression-related interpersonal motivations inventory--a self-report measure designed to assess the 2-component motivational system (Avoidance and Revenge) posited to underlie forgiving. The measure demonstrated a variety of desirable psychometric properties, commending its use for future research. As predicted, empathy, apology, rumination, and several indexes of relationship closeness were associated with self-reported forgiving.",
"title": ""
},
{
"docid": "65ed76ddd6f7fd0aea717d2e2643dd16",
"text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "d75958dac28d9d8d8c0e6a6269c204ec",
"text": "To ensure more effectiveness in the learning process in educational institutions, categorization of students is a very interesting method to enhance student's learning capabilities by identifying the factors that affect their performance and use their categories to design targeted inventions for improving their quality. Many research works have been conducted on student performances, to improve their grades and to stop them from dropping out from school by using a data driven approach [1] [2]. In this paper, we have proposed a new model to categorize students into 3 categories to determine their learning capabilities and to help them to improve their studying techniques. We have chosen the state of the art of machine learning approach to classify student's nature of study by selecting prominent features of their activity in their academic field. We have chosen a data driven approach where key factors that determines the base of student and classify them into high, medium and low ranks. This process generates a system where we can clearly identify the crucial factors for which they are categorized. Manual construction of student labels is a difficult approach. Therefore, we have come up with a student categorization model on the basis of selected features which are determined by the preprocessing of Dataset and implementation of Random Forest Importance; Chi2 algorithm; and Artificial Neural Network algorithm. For the research we have used Python's Machine Learning libraries: Scikit-Learn [3]. For Deep Learning paradigm we have used Tensor-Flow, Keras. For data processing Pandas library and Matplotlib and Pyplot has been used for graph visualization purpose.",
"title": ""
},
{
"docid": "917e970b54d5c1e11750ddbbe21eaa77",
"text": "The vision of the smart home is increasingly becoming reality. Devices become smart, interconnected and accessible through the Internet. In the classical building automation domain already a lot of devices are interconnected providing interfaces for control or collecting data. Unfortunately for historical reasons they use specialized protocols for their communication hampering the integration into newly introduced smart home technologies. In order to make use of the valuable information gateways are required. BACnet as a protocol of the building automation domain already can make use of IP and defined a way to represent building data as Web services in general, called BACnet/WS. But using full fledged Web services would require too much resources in the scenario of smart home thus we need a more resource friendly solution. In this work a Devices Profile for Web Services (DPWS) adaptation of the BACnet/WS specification is proposed. DPWS enables Web service conform communication with a focus on a small footprint, which in turn enables interdisciplinary communication of constrained devices.",
"title": ""
},
{
"docid": "4a201e61cbb168df4df48fe331817260",
"text": "The use of qualitative research methodology is well established for data generation within healthcare research generally and clinical pharmacy research specifically. In the past, qualitative research methodology has been criticized for lacking rigour, transparency, justification of data collection and analysis methods being used, and hence the integrity of findings. Demonstrating rigour in qualitative studies is essential so that the research findings have the “integrity” to make an impact on practice, policy or both. Unlike other healthcare disciplines, the issue of “quality” of qualitative research has not been discussed much in the clinical pharmacy discipline. The aim of this paper is to highlight the importance of rigour in qualitative research, present different philosophical standpoints on the issue of quality in qualitative research and to discuss briefly strategies to ensure rigour in qualitative research. Finally, a mini review of recent research is presented to illustrate the strategies reported by clinical pharmacy researchers to ensure rigour in their qualitative research studies.",
"title": ""
},
{
"docid": "467ff4b60acb874c0430ae4c20d62137",
"text": "The purpose of this paper is twofold. First, we give a survey of the known methods of constructing lattices in complex hyperbolic space. Secondly, we discuss some of the lattices constructed by Deligne and Mostow and by Thurston in detail. In particular, we give a unified treatment of the constructions of fundamental domains and we relate this to other properties of these lattices.",
"title": ""
},
{
"docid": "64efa01068b761d29c3b402d35c524db",
"text": "Inferring the interactions between different brain areas is an important step towards understanding brain activity. Most often, signals can only be measured from some specific brain areas (e.g., cortex in the case of scalp electroencephalograms). However, those signals may be affected by brain areas from which no measurements are available (e.g., deeper areas such as hippocampus). In this paper, the latter are described as hidden variables in a graphical model; such model quantifies the statistical structure in the neural recordings, conditioned on hidden variables, which are inferred in an automated fashion from the data. As an illustration, electroencephalograms (EEG) of Alzheimer’s disease patients are considered. It is shown that the number of hidden variables in AD EEG is not significantly different from healthy EEG. However, there are fewer interactions between the brain areas, conditioned on those hidden variables. Explanations for these observations are suggested.",
"title": ""
},
{
"docid": "f1ce3eb2b8735205fedc3b651b185ce3",
"text": "Road detection is an important problem with application to driver assistance systems and autonomous, self-guided vehicles. The focus of this paper is on the problem of feature extraction and classification for front-view road detection. Specifically, we propose using Support Vector Machines (SVM) for road detection and effective approach for self-supervised online learning. The proposed road detection algorithm is capable of automatically updating the training data for online training which reduces the possibility of misclassifying road and non-road classes and improves the adaptability of the road detection algorithm. The algorithm presented here can also be seen as a novel framework for self-supervised online learning in the application of classification-based road detection algorithm on intelligent vehicle.",
"title": ""
},
{
"docid": "e2a1ceadf01443a36af225b225e4d521",
"text": "Event detection remains a challenge because of the difficulty of encoding the word semantics in various contexts. Previous approaches have heavily depended on language-specific knowledge and preexisting natural language processing tools. However, not all languages have such resources and tools available compared with English language. A more promising approach is to automatically learn effective features from data, without relying on language-specific resources. In this study, we develop a language-independent neural network to capture both sequence and chunk information from specific contexts and use them to train an event detector for multiple languages without any manually encoded features. Experiments show that our approach can achieve robust, efficient and accurate results for various languages. In the ACE 2005 English event detection task, our approach achieved a 73.4% F-score with an average of 3.0% absolute improvement compared with state-of-the-art. Additionally, our experimental results are competitive for Chinese and Spanish.",
"title": ""
},
{
"docid": "cb106c78ef374f105a33fbae81f1e97c",
"text": "The High Efficiency Video Coding (HEVC) standard enables meeting new video quality demands such as Ultra High Definition (UHD). Its scalable extension (SHVC) allows encoding simultaneously different representations of a video, organised in layers. Thanks to inter-layer predictions, SHVC provides bit-rate savings compared to the equivalent HEVC simulcast encoding. Therefore, SHVC seems a promising solution for both broadcast and storage applications. This paper proposes a multi-layer architecture of a pipelined software HEVC encoders with two main settings: a live setting for real-time encoding and a file setting for encoding with better fidelity. The proposed architecture provides a good trade-off between coding rate and coding efficiency achieving real-time performance of 1080p60 and 1600p30 sequences in 2× spatial scalability. Moreover, experimental results show more than a 26× and 300× speed-up for the file and live settings, respectively, with respect to the scalable reference software (SHM) in an intra-only configuration.",
"title": ""
},
{
"docid": "5228454ef59c012b079885b2cce0c012",
"text": "As a contribution to the HICSS 50 Anniversary Conference, we proposed a new mini-track on Text Mining in Big Data Analytics. This mini-track builds on the successful HICSS Workshop on Text Mining and recognizes the growing importance of unstructured text as a data source for descriptive and predictive analytics in research on collaboration systems and technologies. In this initial iteration of the mini-track, we have accepted three papers that cover conceptual issues, methodological approaches to social media, and the development of categorization models and dictionaries useful in a corporate context. The minitrack highlights the potential of an interdisciplinary research community within the HICSS collaboration systems and technologies track.",
"title": ""
},
{
"docid": "370813b3114c8f8c2611b72876159efe",
"text": "Sciatic nerve structure and nomenclature: epineurium to paraneurium is this a new paradigm? We read with interest the study by Perlas et al., (1) about the sciatic nerve block at the level of its division in the popliteal fossa. We have been developing this technique in our routine practice during the past 7 years and have no doub about the effi cacy and safety of this approach (2,3). However, we do not agree with the author's defi nition of the structure and limits of the nerve. Given the impact of publications from the principal author's research group on the regional anesthesia community, we are compelled to comment on proposed terminology that we feel may create confusion and contribute to the creation of a new paradigm in peripheral nerve blockade. The peripheral nerve is a well-defi ned anatomical entity with an unequivocal histological structure (Figure 1). The fascicle is the noble and functional unit of the nerves. Fascicles are constituted by a group of axons covered individually by the endoneurium and tightly packed within the perineurium. The epineurium comprises all the tissues that hold and surround the fascicles and defi nes the macroscopic external limit of the nerve. Epineurium includes loose connective and adipose tissue and epineurial vessels. Fascicles can be found as isolated units or in groups of fascicles supported and held together into a mixed collagen and fat tissue in different proportions (within the epineurial cover). The epineurium cover is the thin layer of connective tissue that encloses the whole structure and constitutes the anatomical limit of the nerve. It acts as a mechanical barrier (limiting the spread of injected local anesthetic), but not as a physical barrier (allowing the passive diffusion of local anesthetic along the concentration gradient). The paraneurium is the connective tissue that supports and connects the nerve with the surrounding structures (eg, muscles, bone, joints, tendons, and vessels) and acts as a gliding layer. We agree that the limits of the epineurium of the sciatic nerve, like those of the brachial plexus, are more complex than in single nerves. Therefore, the sciatic nerve block deserves special consideration. If we accept that the sciatic nerve is an anatomical unit, the epineurium should include the groups of fascicles that will constitute the tibial and the common peroneal nerves. Similarly, the epineurium of the common peroneal nerve contains the fascicles that will be part of the lateral cutane-ous, …",
"title": ""
},
{
"docid": "06ba0cd00209a7f4f200395b1662003e",
"text": "Changes in human DNA methylation patterns are an important feature of cancer development and progression and a potential role in other conditions such as atherosclerosis and autoimmune diseases (e.g., multiple sclerosis and lupus) is being recognised. The cancer genome is frequently characterised by hypermethylation of specific genes concurrently with an overall decrease in the level of 5 methyl cytosine. This hypomethylation of the genome largely affects the intergenic and intronic regions of the DNA, particularly repeat sequences and transposable elements, and is believed to result in chromosomal instability and increased mutation events. This review examines our understanding of the patterns of cancer-associated hypomethylation, and how recent advances in understanding of chromatin biology may help elucidate the mechanisms underlying repeat sequence demethylation. It also considers how global demethylation of repeat sequences including transposable elements and the site-specific hypomethylation of certain genes might contribute to the deleterious effects that ultimately result in the initiation and progression of cancer and other diseases. The use of hypomethylation of interspersed repeat sequences and genes as potential biomarkers in the early detection of tumors and their prognostic use in monitoring disease progression are also examined.",
"title": ""
},
{
"docid": "503d57c4a643791cb31817de83ca0b87",
"text": "The proponents of cyberspace promise that online discourse will increase political participation and pave the road for a democratic utopia. This article explores the potential for civil discourse in cyberspace by examining the level of civility in 287 discussion threads in political newsgroups. While scholars often use civility and politeness interchangeably, this study argues that this conflation ignores the democratic merit of robust and heated discussion. Therefore, civility was defined in a broader sense, by identifying as civil behaviors that enhance democratic conversation. In support of this distinction, the study results revealed that most messages posted on political newsgroups were civil, and further suggested that because the absence of face-to-face communication fostered more heated discussion, cyberspace might actually promote Lyotard’s vision of democratic emancipation through disagreement and anarchy (Lyotard, 1984). Thus, this study supported the internet’s potential to revive the public sphere, provided that greater diversity and volume of discussion is present.",
"title": ""
},
{
"docid": "278c9ffd2f608bfb88e5bca105b6f94b",
"text": "Hybrid mobile applications (apps) combine the features of Web applications and \"native\" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources-file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies \"bridges\" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources-the ability to read and write contacts list, local files, etc.-to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model.",
"title": ""
},
{
"docid": "6605f7e07bed0a173dececa1aa94f809",
"text": "Abstractive summarization, the task of rewriting and compressing a document into a short summary, has achieved considerable success with neural sequence-tosequence models. However, these models can still benefit from stronger natural language inference skills, since a correct summary is logically entailed by the input document, i.e., it should not contain any contradictory or unrelated information. We incorporate such knowledge into an abstractive summarization model via multi-task learning, where we share its decoder parameters with those of an entailment generation model. We achieve promising initial improvements based on multiple metrics and datasets (including a test-only setting). The domain mismatch between the entailment (captions) and summarization (news) datasets suggests that the model is learning some domain-agnostic inference skills.ive summarization, the task of rewriting and compressing a document into a short summary, has achieved considerable success with neural sequence-tosequence models. However, these models can still benefit from stronger natural language inference skills, since a correct summary is logically entailed by the input document, i.e., it should not contain any contradictory or unrelated information. We incorporate such knowledge into an abstractive summarization model via multi-task learning, where we share its decoder parameters with those of an entailment generation model. We achieve promising initial improvements based on multiple metrics and datasets (including a test-only setting). The domain mismatch between the entailment (captions) and summarization (news) datasets suggests that the model is learning some domain-agnostic inference skills.",
"title": ""
},
{
"docid": "f86dfe07f73e2dba05796e6847765e7a",
"text": "OBJECTIVE\nThe aim of this study was to extend previous examinations of aviation accidents to include specific aircrew, environmental, supervisory, and organizational factors associated with two types of commercial aviation (air carrier and commuter/ on-demand) accidents using the Human Factors Analysis and Classification System (HFACS).\n\n\nBACKGROUND\nHFACS is a theoretically based tool for investigating and analyzing human error associated with accidents and incidents. Previous research has shown that HFACS can be reliably used to identify human factors trends associated with military and general aviation accidents.\n\n\nMETHOD\nUsing data obtained from both the National Transportation Safety Board and the Federal Aviation Administration, 6 pilot-raters classified aircrew, supervisory, organizational, and environmental causal factors associated with 1020 commercial aviation accidents that occurred over a 13-year period.\n\n\nRESULTS\nThe majority of accident causal factors were attributed to aircrew and the environment, with decidedly fewer associated with supervisory and organizational causes. Comparisons were made between HFACS causal categories and traditional situational variables such as visual conditions, injury severity, and regional differences.\n\n\nCONCLUSION\nThese data will provide support for the continuation, modification, and/or development of interventions aimed at commercial aviation safety.\n\n\nAPPLICATION\nHFACS provides a tool for assessing human factors associated with accidents and incidents.",
"title": ""
}
] |
scidocsrr
|
7d7fc7fd1e480acfcad4abfac8014e86
|
Neural Architecture Search with Bayesian Optimisation and Optimal Transport
|
[
{
"docid": "697ed30a5d663c1dda8be0183fa4a314",
"text": "Due to the Web expansion, the prediction of online news popularity is becoming a trendy research topic. In this paper, we propose a novel and proactive Intelligent Decision Support System (IDSS) that analyzes articles prior to their publication. Using a broad set of extracted features (e.g., keywords, digital media content, earlier popularity of news referenced in the article) the IDSS first predicts if an article will become popular. Then, it optimizes a subset of the articles features that can more easily be changed by authors, searching for an enhancement of the predicted popularity probability. Using a large and recently collected dataset, with 39,000 articles from the Mashable website, we performed a robust rolling windows evaluation of five state of the art models. The best result was provided by a Random Forest with a discrimination power of 73%. Moreover, several stochastic hill climbing local searches were explored. When optimizing 1000 articles, the best optimization method obtained a mean gain improvement of 15 percentage points in terms of the estimated popularity probability. These results attest the proposed IDSS as a valuable tool for online news authors.",
"title": ""
},
{
"docid": "0f654fde20c49d8997813d985da80ae5",
"text": "Bayesian optimization has been successfully used to optimize complex black-box functions whose evaluations are expensive. In many applications, like in deep learning and predictive analytics, the optimization domain is itself complex and structured. In this work, we focus on use cases where this domain exhibits a known dependency structure. The benefit of leveraging this structure is twofold: we explore the search space more efficiently and posterior inference scales more favorably with the number of observations than Gaussian Process-based approaches published in the literature. We introduce a novel surrogate model for Bayesian optimization which combines independent Gaussian Processes with a linear model that encodes a tree-based dependency structure and can transfer information between overlapping decision sequences. We also design a specialized two-step acquisition function that explores the search space more effectively. Our experiments on synthetic tree-structured objectives and on the tuning of feedforward neural networks show that our method compares favorably with competing approaches.",
"title": ""
}
] |
[
{
"docid": "7b7571705c637f325037e9ee8d8fa1c5",
"text": "Breast cancer is one of the most widespread diseases among women in the UAE and worldwide. Correct and early diagnosis is an extremely important step in rehabilitation and treatment. However, it is not an easy one due to several uncertainties in detection using mammograms. Machine Learning (ML) techniques can be used to develop tools for physicians that can be used as an effective mechanism for early detection and diagnosis of breast cancer which will greatly enhance the survival rate of patients. This paper compares three of the most popular ML techniques commonly used for breast cancer detection and diagnosis, namely Support Vector Machine (SVM), Random Forest (RF) and Bayesian Networks (BN). The Wisconsin original breast cancer data set was used as a training set to evaluate and compare the performance of the three ML classifiers in terms of key parameters such as accuracy, recall, precision and area of ROC. The results obtained in this paper provide an overview of the state of art ML techniques for breast cancer detection.",
"title": ""
},
{
"docid": "38e8a11615701cdd440469ea4fadc91a",
"text": "A biometric authentication system operates by acquiring biometric data from a user and comparing it against the template data stored in a database in order to identify a person or to verify a claimed identity. Most systems store multiple templates per user in order to account for variations observed in a person’s biometric data. In this paper we propose two methods to perform automatic template selection where the goal is to select prototype #ngerprint templates for a #nger from a given set of #ngerprint impressions. The #rst method, called DEND, employs a clustering strategy to choose a template set that best represents the intra-class variations, while the second method, called MDIST, selects templates that exhibit maximum similarity with the rest of the impressions. Matching results on a database of 50 di4erent #ngers, with 200 impressions per #nger, indicate that a systematic template selection procedure as presented here results in better performance than random template selection. The proposed methods have also been utilized to perform automatic template update. Experimental results underscore the importance of these techniques. ? 2003 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "eceb513e5d67d66986597555cf16c814",
"text": "This study examines the statistical validation of a recently developed, fourth-generation (4G) risk–need assessment system (Correctional Offender Management Profiling for Alternative Sanctions; COMPAS) that incorporates a range of theoretically relevant criminogenic factors and key factors emerging from meta-analytic studies of recidivism. COMPAS’s automated scoring provides decision support for correctional agencies for placement decisions, offender management, and treatment planning. The article describes the basic features of COMPAS and then examines the predictive validity of the COMPAS risk scales by fitting Cox proportional hazards models to recidivism outcomes in a sample of presentence investigation and probation intake cases (N = 2,328). Results indicate that the predictive validities for the COMPAS recidivism risk model, as assessed by the area under the receiver operating characteristic curve (AUC), equal or exceed similar 4G instruments. The AUCs ranged from .66 to .80 for diverse offender subpopulations across three outcome criteria, with a majority of these exceeding .70.",
"title": ""
},
{
"docid": "deabd38990de9ed15958bb2ad28d225e",
"text": "Recent IoT-based DDoS attacks have exposed how vulnerable the Internet can be to millions of insufficiently secured IoT devices. To understand the risks of these attacks requires learning about these IoT devices---where are they, how many are there, how are they changing? In this paper, we propose a new method to find IoT devices in Internet to begin to assess this threat. Our approach requires observations of flow-level network traffic and knowledge of servers run by the manufacturers of the IoT devices. We have developed our approach with 10 device models by 7 vendors and controlled experiments. We apply our algorithm to observations from 6 days of Internet traffic at a college campus and partial traffic from an IXP to detect IoT devices.",
"title": ""
},
{
"docid": "a2d97c2b71e6424d3f458b7730be0c90",
"text": "Fault detection in solar photovoltaic (PV) arrays is an essential task for increasing reliability and safety in PV systems. Because of PV's nonlinear characteristics, a variety of faults may be difficult to detect by conventional protection devices, leading to safety issues and fire hazards in PV fields. To fill this protection gap, machine learning techniques have been proposed for fault detection based on measurements, such as PV array voltage, current, irradiance, and temperature. However, existing solutions usually use supervised learning models, which are trained by numerous labeled data (known as fault types) and therefore, have drawbacks: 1) the labeled PV data are difficult or expensive to obtain, 2) the trained model is not easy to update, and 3) the model is difficult to visualize. To solve these issues, this paper proposes a graph-based semi-supervised learning model only using a few labeled training data that are normalized for better visualization. The proposed model not only detects the fault, but also further identifies the possible fault type in order to expedite system recovery. Once the model is built, it can learn PV systems autonomously over time as weather changes. Both simulation and experimental results show the effective fault detection and classification of the proposed method.",
"title": ""
},
{
"docid": "0a0e4219aa1e20886e69cb1421719c4e",
"text": "A wearable two-antenna system to be integrated on a life jacket and connected to Personal Locator Beacons (PLBs) of the Cospas-Sarsat system is presented. Each radiating element is a folded meandered dipole resonating at 406 MHz and includes a planar reflector realized by a metallic foil. The folded dipole and the metallic foil are attached on the opposite sides of the floating elements of the life jacket itself, so resulting in a mechanically stable antenna. The metallic foil improves antenna radiation properties even when the latter is close to the sea surface, shields the human body from EM radiation and makes the radiating system less sensitive to the human body movements. Prototypes have been realized and a measurement campaign has been carried out. The antennas show satisfactory performance also when the life jacket is worn by a user. The proposed radiating elements are intended for the use in a two-antenna scheme in which the transmitter can switch between them in order to meet Cospas-Sarsat system specifications. Indeed, the two antennas provide complementary radiation patterns so that Cospas-Sarsat requirements (satellite constellation coverage and EIRP profile) are fully satisfied.",
"title": ""
},
{
"docid": "2238e67d9ef77fb6fd5a1d7b44e888ef",
"text": "Nationality of a human being is a well-known identifying characteristic used for every major authentication purpose in every country. Albeit advances in application of Artificial Intelligence and Computer Vision in different aspects, its’ contribution to this specific security procedure is yet to be cultivated. With a goal to successfully applying computer vision techniques to predict a human’s nationality based on his facial features, we have proposed this novel method and have achieved an average of 93.6% accuracy with very low misclassification rate. KeywordsNationality, Artificial Intelligence, Computer Vision",
"title": ""
},
{
"docid": "a8ff2ea9e15569de375c34ef252d0dad",
"text": "BIM (Building Information Modeling) has been recently implemented by many Architecture, Engineering, and Construction firms due to its productivity gains and long term benefits. This paper presents the development and implementation of a sustainability assessment framework for an architectural design using BIM technology in extracting data from the digital building model needed for determining the level of sustainability. The sustainability assessment is based on the LEED (Leadership in Energy and Environmental Design) Green Building Rating System, a widely accepted national standards for sustainable building design in the United States. The architectural design of a hotel project is used as a case study to verify the applicability of the framework.",
"title": ""
},
{
"docid": "74d6c2fff4b67d05871ca0debbc4ec15",
"text": "There is great interest in developing rechargeable lithium batteries with higher energy capacity and longer cycle life for applications in portable electronic devices, electric vehicles and implantable medical devices. Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.",
"title": ""
},
{
"docid": "45c3d3a765e565ad3b870b95f934592a",
"text": "This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and gradient descent optimization algorithm are employed to generate head motion from speech features; 2) Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features, and 3) nonnegative linear regression is used to model voluntary eye lid motion and log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.",
"title": ""
},
{
"docid": "21884cc698736f13736dcc889b8057a3",
"text": "Although deep convolutional neural networks(CNNs) have achieved remarkable results on object detection and segmentation, preand post-processing steps such as region proposals and non-maximum suppression(NMS), have been required. These steps result in high computational complexity and sensitivity to hyperparameters, e.g. thresholds for NMS. In this work, we propose a novel end-to-end trainable deep neural network architecture, which consists of convolutional and recurrent layers, that generates the correct number of object instances and their bounding boxes (or segmentation masks) given an image, using only a single network evaluation without any preor post-processing steps. We have tested on detecting digits in multi-digit images synthesized using MNIST, automatically segmenting digits in these images, and detecting cars in the KITTI benchmark dataset. The proposed approach outperforms a strong CNN baseline on the synthesized digits datasets and shows promising results on KITTI car detection.",
"title": ""
},
{
"docid": "396f0c39b5afbf6bee2f7168f23ecccb",
"text": "This work describes a method for real-time motion detection using an active camera mounted on a padtilt platform. Image mapping is used to align images of different viewpoints so that static camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pankilt angles between successive frames are as large as 3\".",
"title": ""
},
{
"docid": "c63421313f4ed9c1689da4e937a07962",
"text": "The life-long learning architecture attempts to create an adaptive agent through the incorporation of prior knowledge over the lifetime of a learning agent. Our paper focuses on task transfer in reinforcement learning and specifically in Q-learning. There are three main model free methods for performing task transfer in Qlearning: direct transfer, soft transfer and memoryguided exploration. In direct transfer Q-values from a previous task are used to initialize the Q-values of the next task. Soft transfer initializes the Q-values of the new task with a weighted average of the standard initialization value and the Q-values of the previous task. In memory-guided exploration the Q-values of previous tasks are used as a guide in the initial exploration of the agent. The weight that the agent gives to its past experience decreases over time. We explore stability issues related to the off-policy nature of memory-guided exploration and compare memory-guided exploration to soft transfer and direct transfer in three different envi-",
"title": ""
},
{
"docid": "6929c8fc722f108c99ce8966b3989bd9",
"text": "Cisco’s NetFlow protocol and Internet engineering task force’s Internet protocol flow information export open standard are widely deployed protocols for collecting network flow statistics. Understanding intricate traffic patterns in these network statistics requires sophisticated flow analysis tools that can efficiently mine network flow records. We present a network flow query language (NFQL), which can be used to write expressive queries to process flow records, aggregate them into groups, apply absolute or relative filters, and invoke Allen interval algebra rules to merge group records. We demonstrate nfql, an implementation of the language that has comparable execution times to SiLK and flow-tools with absolute filters. However, it trades performance when grouping and merging flows in favor of more operational capabilities that help increase the expressiveness of NFQL. We present two applications to demonstrate richer capabilities of the language. We show queries to identify flow signatures of popular applications and behavioural signatures to identify SSH compromise detection attacks.",
"title": ""
},
{
"docid": "8b5d7965ac154da1266874027f0b10a0",
"text": "Matching pedestrians across disjoint camera views, known as person re-identification (re-id), is a challenging problem that is of importance to visual recognition and surveillance. Most existing methods exploit local regions within spatial manipulation to perform matching in local correspondence. However, they essentially extract fixed representations from pre-divided regions for each image and perform matching based on the extracted representation subsequently. For models in this pipeline, local finer patterns that are crucial to distinguish positive pairs from negative ones cannot be captured, and thus making them underperformed. In this paper, we propose a novel deep multiplicative integration gating function, which answers the question of what-and-where to match for effective person re-id. To address what to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way. The network comprises two Convolutional Neural Networks (CNNs) to extract convolutional activations, and generates relevant descriptors for pedestrian matching. This thus, leads to flexible representations for pair-wise images. To address where to match, we combat the spatial misalignment by performing spatially recurrent pooling via a four-directional recurrent neural network to impose spatial depenEmail addresses: [email protected] (Lin Wu ), [email protected] (Yang Wang), [email protected] (Xue Li), [email protected] (Junbin Gao) Preprint submitted to Elsevier 25·7·2017 ar X iv :1 70 7. 07 07 4v 1 [ cs .C V ] 2 1 Ju l 2 01 7 dency over all positions with respect to the entire image. The proposed network is designed to be end-to-end trainable to characterize local pairwise feature interactions in a spatially aligned manner. To demonstrate the superiority of our method, extensive experiments are conducted over three benchmark data sets: VIPeR, CUHK03 and Market-1501.",
"title": ""
},
{
"docid": "efd2843175ad0b860ad1607f337addc5",
"text": "We demonstrate the usefulness of the uniform resource locator (URL) alone in performing web page classification. This approach is faster than typical web page classification, as the pages do not have to be fetched and analyzed. Our approach segments the URL into meaningful chunks and adds component, sequential and orthographic features to model salient patterns. The resulting features are used in supervised maximum entropy modeling. We analyze our approach's effectiveness on two standardized domains. Our results show that in certain scenarios, URL-based methods approach the performance of current state-of-the-art full-text and link-based methods.",
"title": ""
},
{
"docid": "f2d8a2b77fd3bc9625ae4f2881bf2729",
"text": "Urothelial carcinoma (UC) is characterized by expression of a plethora of cell surface antigens, thus offering opportunities for specific therapeutic targeting with use of antibody-drug conjugates (ADCs). ADCs are structured from two major constituents, a monoclonal antibody (mAb) against a specific target and a cytotoxic drug connected via a linker molecule. Several ADCs are developed against different UC surface markers, but the ones at most advanced stages of development include sacituzumab govitecan (IMMU-132), enfortumab vedotin (ASG-22CE/ASG-22ME), ASG-15ME for advanced UC, and oportuzumab monatox (VB4-845) for early UC. Several new targets are identified and utilized for novel or existing ADC testing. The most promising ones include human epidermal growth factor receptor 2 (HER2) and members of the fibroblast growth factor receptor axis (FGF/FGFR). Positive preclinical and early clinical results are reported in many cases, thus the next step involves further improving efficacy and reducing toxicity as well as testing combination strategies with approved agents.",
"title": ""
},
{
"docid": "0ab1c92b0559f6e233a8aa90d0c26c39",
"text": "It is an important problem in computational advertising to study the effects of different advertising channels upon user conversions, as advertisers can use the discoveries to plan or optimize advertising campaigns. In this paper, we propose a novel Probabilistic Multi-Touch Attribution (PMTA) model which takes into account not only which ads have been viewed or clicked by the user but also when each such interaction occurred. Borrowing the techniques from survival analysis, we use the Weibull distribution to describe the observed conversion delay and use the hazard rate of conversion to measure the influence of an ad exposure. It has been shown by extensive experiments on a large real-world dataset that our proposed model is superior to state-of-the-art methods in both conversion prediction and attribution analysis. Furthermore, a surprising research finding obtained from this dataset is that search ads are often not the root cause of final conversions but just the consequence of previously viewed ads.",
"title": ""
},
{
"docid": "d8d6bb0f303c4911d9dea9f82d2131a2",
"text": "Text Detection and recognition is a one of the important aspect of image processing. This paper analyzes and compares the methods to handle this task. It summarizes the fundamental problems and enumerates factors that need consideration when addressing these problems. Existing techniques are categorized as either stepwise or integrated and sub-problems are highlighted including digit localization, verification, segmentation and recognition. Special issues associated with the enhancement of degraded text and the processing of video text and multi-oriented text are also addressed. The categories and sub-categories of text are illustrated, benchmark datasets are enumerated, and the performance of the most representative approaches is compared. This review also provides a fundamental comparison and analysis of the remaining problems in the field.",
"title": ""
},
{
"docid": "a8aa7af1b9416d4bd6df9d4e8bcb8a40",
"text": "User-computer dialogues are typically one-sided, with the bandwidth from computer to user far greater than that from user to computer. The movement of a user’s eyes can provide a convenient, natural, and high-bandwidth source of additional user input, to help redress this imbalance. We therefore investigate the introduction of eye movements as a computer input medium. Our emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.",
"title": ""
}
] |
scidocsrr
|
6e771401a7714b74726009de41aad0fa
|
Constructing an Interactive Natural Language Interface for Relational Databases
|
[
{
"docid": "5b89c42eb7681aff070448bc22e501ea",
"text": "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.",
"title": ""
}
] |
[
{
"docid": "a5e23ca50545378ef32ed866b97fd418",
"text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.",
"title": ""
},
{
"docid": "8413c39dbb83063364db834502a02647",
"text": "There are many established plastic surgical techniques to effectively address blepharoptosis. Minimally invasive levator advancement (MILA) causes limited disruption to the anatomy while maintaining good height, contour, lid folds, function, and long-term stability. This procedure has been performed in more than 1000 patients since 1993 by the author with consistent, durable results and is a reliable method to correct blepharoptosis. It is not indicated in cases with absent to very poor levator function, where frontalis suspension is the preferred procedure. The MILA technique will be described and illustrated.",
"title": ""
},
{
"docid": "537cf2257d1ca9ef49f023dbdc109e0b",
"text": "0950-7051/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.knosys.2010.07.006 * Corresponding author. Tel.: +886 3 5712121x573 E-mail addresses: [email protected] (Y.-S (L.-I. Tong). The autoregressive integrated moving average (ARIMA), which is a conventional statistical method, is employed in many fields to construct models for forecasting time series. Although ARIMA can be adopted to obtain a highly accurate linear forecasting model, it cannot accurately forecast nonlinear time series. Artificial neural network (ANN) can be utilized to construct more accurate forecasting model than ARIMA for nonlinear time series, but explaining the meaning of the hidden layers of ANN is difficult and, moreover, it does not yield a mathematical equation. This study proposes a hybrid forecasting model for nonlinear time series by combining ARIMA with genetic programming (GP) to improve upon both the ANN and the ARIMA forecasting models. Finally, some real data sets are adopted to demonstrate the effectiveness of the proposed forecasting model. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c9bec74bcb607b0dc4a8372ba28eb4b0",
"text": "Alarm fatigue, a condition in which clinical staff become desensitized to alarms due to the high frequency of unnecessary alarms, is a major patient safety concern. Alarm fatigue is particularly prevalent in the pediatric setting, due to the high level of variation in vital signs with patient age. Existing studies have shown that the current default pediatric vital sign alarm thresholds are inappropriate, and lead to a larger than necessary alarm load. This study leverages a large database containing over 190 patient-years of heart rate data to accurately identify the 1st and 99th percentiles of an individual's heart rate on their first day of vital sign monitoring. These percentiles are then used as personalized vital sign thresholds, which are evaluated by comparing to non-default alarm thresholds used in practice, and by using the presence of major clinical events to infer alarm labels. Using the proposed personalized thresholds would decrease low and high heart rate alarms by up to 50% and 44% respectively, while maintaining sensitivity of 62% and increasing specificity to 49%. The proposed personalized vital sign alarm thresholds will reduce alarm fatigue, thus contributing to improved patient outcomes, shorter hospital stays, and reduced hospital costs.",
"title": ""
},
{
"docid": "3c18cb48bc25f4b9def94871ba6cbd60",
"text": "Three-dimensional (3D) printing, also referred to as additive manufacturing, is a technology that allows for customized fabrication through computer-aided design. 3D printing has many advantages in the fabrication of tissue engineering scaffolds, including fast fabrication, high precision, and customized production. Suitable scaffolds can be designed and custom-made based on medical images such as those obtained from computed tomography. Many 3D printing methods have been employed for tissue engineering. There are advantages and limitations for each method. Future areas of interest and progress are the development of new 3D printing platforms, scaffold design software, and materials for tissue engineering applications.",
"title": ""
},
{
"docid": "d91cc4e5aedd7f961035923003fe425e",
"text": "Humans like to express their opinions and are eager to know others’ opinions. Automatically mining and organizing opinions from heterogeneous information sources are very useful for individuals, organizations and even governments. Opinion extraction, opinion summarization and opinion tracking are three important techniques for understanding opinions. Opinion extraction mines opinions at word, sentence and document levels from articles. Opinion summarization summarizes opinions of articles by telling sentiment polarities, degree and the correlated events. In this paper, both news and web blog articles are investigated. TREC, NTCIR and articles collected from web blogs serve as the information sources for opinion extraction. Documents related to the issue of animal cloning are selected as the experimental materials. Algorithms for opinion extraction at word, sentence and document level are proposed. The issue of relevant sentence selection is discussed, and then topical and opinionated information are summarized. Opinion summarizations are visualized by representative sentences. Text-based summaries in different languages, and from different sources, are compared. Finally, an opinionated curve showing supportive and nonsupportive degree along the timeline is illustrated by an opinion tracking system.",
"title": ""
},
{
"docid": "5c5fa8db6eea04b2b0fa6db5c0b9f655",
"text": "Network intrusion detection systems identify malicious connections and thus help protect networks from attacks. Various data-driven approaches have been used in the development of network intrusion detection systems, which usually lead to either very complex systems or poor generalization ability due to the complexity of this challenge. This paper proposes a data-driven network intrusion detection system using fuzzy interpolation in an effort to address the aforementioned limitations. In particular, the developed system equipped with a sparse rule base not only guarantees the online performance of intrusion detection, but also allows the generation of security alerts from situations which are not directly covered by the existing knowledge base. The proposed system has been applied to a well-known data set for system validation and evaluation with competitive results generated.",
"title": ""
},
{
"docid": "bd0814c1fb426de140579e739ab3ef07",
"text": "This paper introduces a document grounded dataset for conversations. We define “Document Grounded Conversations” as conversations that are about the contents of a specified document. In this dataset the specified documents were Wikipedia articles about popular movies. The dataset contains 4112 conversations with an average of 21.43 turns per conversation. This positions this dataset to not only provide a relevant chat history while generating responses but also provide a source of information that the models could use. We describe two neural architectures that provide benchmark performance on the task of generating the next response. We also evaluate our models for engagement and fluency, and find that the information from the document helps in generating more engaging and fluent responses.",
"title": ""
},
{
"docid": "ac078f78fcf0f675c21a337f8e3b6f5f",
"text": "bstract. Plenoptic cameras, constructed with internal microlens rrays, capture both spatial and angular information, i.e., the full 4-D adiance, of a scene. The design of traditional plenoptic cameras ssumes that each microlens image is completely defocused with espect to the image created by the main camera lens. As a result, nly a single pixel in the final image is rendered from each microlens mage, resulting in disappointingly low resolution. A recently develped alternative approach based on the focused plenoptic camera ses the microlens array as an imaging system focused on the imge plane of the main camera lens. The flexible spatioangular tradeff that becomes available with this design enables rendering of final mages with significantly higher resolution than those from traditional lenoptic cameras. We analyze the focused plenoptic camera in ptical phase space and present basic, blended, and depth-based endering algorithms for producing high-quality, high-resolution imges. We also present our graphics-processing-unit-based impleentations of these algorithms, which are able to render full screen efocused images in real time. © 2010 SPIE and IS&T. DOI: 10.1117/1.3442712",
"title": ""
},
{
"docid": "93076fee7472e1a89b2b3eb93cff4737",
"text": "This paper presents a fast and robust level set method for image segmentation. To enhance the robustness against noise, we embed a Markov random field (MRF) energy function to the conventional level set energy function. This MRF energy function builds the correlation of a pixel with its neighbors and encourages them to fall into the same region. To obtain a fast implementation of the MRF embedded level set model, we explore algebraic multigrid (AMG) and sparse field method (SFM) to increase the time step and decrease the computation domain, respectively. Both AMG and SFM can be conducted in a parallel fashion, which facilitates the processing of our method for big image databases. By comparing the proposed fast and robust level set method with the standard level set method and its popular variants on noisy synthetic images, synthetic aperture radar (SAR) images, medical images, and natural images, we comprehensively demonstrate the new method is robust against various kinds of noises. In particular, the new level set method can segment an image of size 500 × 500 within 3 s on MATLAB R2010b installed in a computer with 3.30-GHz CPU and 4-GB memory.",
"title": ""
},
{
"docid": "a78782e389313600620bfb68fc57a81f",
"text": "Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), +1 rise in star-rating increases revenue by 5–9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem have formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems.",
"title": ""
},
{
"docid": "5673bc2ca9f08516f14485ef8bbba313",
"text": "Analog-to-digital converters are essential building blocks in modern electronic systems. They form the critical link between front-end analog transducers and back-end digital computers that can efficiently implement a wide variety of signal-processing functions. The wide variety of digitalsignal-processing applications leads to the availability of a wide variety of analog-to-digital (A/D) converters of varying price, performance, and quality. Ideally, an A/D converter encodes a continuous-time analog input voltage, VIN , into a series of discrete N -bit digital words that satisfy the relation",
"title": ""
},
{
"docid": "ac1f3c609950a20e3abdbaf2a38764dc",
"text": "Accurate Human Epithelial-2 (HEp-2) cell image classification plays an important role in the diagnosis of many autoimmune diseases. However, the traditional approach requires experienced experts to artificially identify cell patterns, which extremely increases the workload and suffer from the subjective opinion of physician. To address it, we propose a very deep residual network (ResNet) based framework to automatically recognize HEp-2 cell via cross-modal transfer learning strategy. We adopt a residual network of 50 layers (ResNet-50) that are substantially deep to acquire rich and discriminative feature. Compared with typical convolutional network, the main characteristic of residual network lie in the introduction of residual connection, which can solve the degradation problem effectively. Also, we use a cross-modal transfer learning strategy by pre-training the model from a very similar dataset (from ICPR2012 to ICPR2016-Task1). Our proposed framework achieves an average class accuracy of 95.63% on ICPR2012 HEp-2 dataset and a mean class accuracy of 96.87% on ICPR2016-Task1 HEp-2 dataset, which outperforms the traditional methods.",
"title": ""
},
{
"docid": "cca9c87d1bfb8b024f5841c6c4c7ca8d",
"text": "A high-efficiency two-stage resonant inverter with effective control of both the magnitude and phase angle of the output voltage was proposed in this paper for high-frequency ac (HFAC) power-distribution applications, where a number of resonant inverters need to be paralleled. In order to parallel multiple resonant inverters of the same operation frequency, each inverter module needs independent control of the phase angle and magnitude of the output voltage. It is also desirable that the output voltage has very low total harmonics distortion, as well as high efficiency over wide input and load ranges. The proposed resonant inverter consists of two stages. The first stage is a two-switch dc/dc converter with zero-voltage transition, and the second stage is a half-bridge resonant dc/ac inverter with fixed duty ratio. A series-parallel resonant tank is used to achieve high waveform quality of the output voltage. The magnitude of the output voltage is regulated through the duty-ratio control of the first stage with pulsewidth modulation. The phase angle of the output voltage is regulated through a pulse-phase-modulation control of the second stage. The proposed resonant inverter has the advantages of better waveform quality, wide range of input and load variations for soft-switching, and independent control of the phase angle and magnitude of the output voltage, making it an attractive candidate for applications where a number of resonant inverters need to be placed in parallel to the HFAC bus and a number of distributed loads are connected to the HFAC bus. The performance is verified with both simulation and experiments.",
"title": ""
},
{
"docid": "98177cac0d99885e85d1d45b0f885c04",
"text": "A miniature ultra wideband (UWB) bandpass filter that uses a multilayer interdigital structure is presented in this paper. The quarter-wavelength resonators including microstrip line resonator and coplanar waveguide (CPW) resonator are vertically stacked to achieve a strong coupling for the designed UWB filter. By adopting multilayer configurations, a miniature footprint is achieved. A seven-pole UWB bandpass filter is designed and fabricated using the multilayer liquid crystal polymer (LCP) lamination process, and is measured by using a vector network analyzer. The measured and predicted results are presented with good agreements. The fabricated prototype has a 10 dB return loss bandwidth from 2.8 GHz to 11.0 GHz and a compact size of 9.3 mm by 3.6 mm (0.38 λg by 0.15 λg, where λg is the guided wavelength of 50 Ω microstrip line at centre frequency).",
"title": ""
},
{
"docid": "59733877083c5d22bef27af90ac79907",
"text": "We review the past 25 years of research into time series forecasting. In this silver jubilee issue, we naturally highlight results published in journals managed by the International Institute of Forecasters (Journal of Forecasting 1982–1985 and International Journal of Forecasting 1985–2005). During this period, over one third of all papers published in these journals concerned time series forecasting. We also review highly influential works on time series forecasting that have been published elsewhere during this period. Enormous progress has been made in many areas, but we find that there are a large number of topics in need of further development. We conclude with comments on possible future research directions in this field. D 2006 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "802afb132b080625db1b80ae4077dcec",
"text": "The present study proposed a novel Power Assist Recuperation System consisted of motor, inverter, DC-DC converter and 48V Li ion battery for fuel saving of conventional Internal Combustion Engine and hybrid vehicles. As a total solution of the next-generation 48V PARS, key components such as Interior Permanent Magnet Synchronous Motor, MOSFET power switching inverter and bi-directional DC-DC converter with cost competitiveness from in-house power IC's and power modules have been optimized based on a 48V Li ion battery. A deep flux weakening control has been evaluated for the IPMSM's high speed operation. The specific value of how much the proposed 48V PARS can reduce CO2 emission and fuel consumption was estimated by so-called Autonomie software using both efficiency maps of motoring and generating modes. It is noticed that the present 48V PARS gives a considerably enhanced performance in reduction of fuel consumption by as high as 17% for a commercial 1.8L engine.",
"title": ""
},
{
"docid": "f445c13f85f8198708af0610be79d207",
"text": "Embryonic stem (ES) cells are pluripotent and of therapeutic potential in regenerative medicine. Understanding pluripotency at the molecular level should illuminate fundamental properties of stem cells and the process of cellular reprogramming. Through cell fusion the embryonic cell phenotype can be imposed on somatic cells, a process promoted by the homeodomain protein Nanog, which is central to the maintenance of ES cell pluripotency. Nanog is thought to function in concert with other factors such as Oct4 (ref. 8) and Sox2 (ref. 9) to establish ES cell identity. Here we explore the protein network in which Nanog operates in mouse ES cells. Using affinity purification of Nanog under native conditions followed by mass spectrometry, we have identified physically associated proteins. In an iterative fashion we also identified partners of several Nanog-associated proteins (including Oct4), validated the functional relevance of selected newly identified components and constructed a protein interaction network. The network is highly enriched for nuclear factors that are individually critical for maintenance of the ES cell state and co-regulated on differentiation. The network is linked to multiple co-repressor pathways and is composed of numerous proteins whose encoding genes are putative direct transcriptional targets of its members. This tight protein network seems to function as a cellular module dedicated to pluripotency.",
"title": ""
},
{
"docid": "2d68e0e5dbb8da11c26b429f81705b52",
"text": "This paper presents an approach to cash management for automatic teller machine (ATM) network. This approach is based on an artificial neural network to forecast a daily cash demand for every ATM in the network and on the optimization procedure to estimate the optimal cash load for every ATM. During the optimization procedure, the most important factors for ATMs maintenance were considered: cost of cash, cost of cash uploading and cost of daily services. Simulation studies show, that in case of higher cost of cash (interest rate) and lower cost for money uploading, the optimization procedure allows to decrease the ATMs maintenance costs around 15-20 %. For practical implementation of the proposed ATMs’ cash management procedure, further experimental investigations are necessary.",
"title": ""
},
{
"docid": "ec181b897706d101136dcbcef6e84de9",
"text": "Working with large swarms of robots has challenges in calibration, sensing, tracking, and control due to the associated scalability and time requirements. Kilobots solve this through their ease of maintenance and programming, and are widely used in several research laboratories worldwide where their low cost enables large-scale swarms studies. However, the small, inexpensive nature of the Kilobots limits their range of capabilities as they are only equipped with a single sensor. In some studies, this limitation can be a source of motivation and inspiration, while in others it is an impediment. As such, we designed, implemented, and tested a novel system to communicate personalized location-and-state-based information to each robot, and receive information on each robots’ state. In this way, the Kilobots can sense additional information from a virtual environment in real time; for example, a value on a gradient, a direction toward a reference point or a pheromone trail. The augmented reality for Kilobots ( ARK) system implements this in flexible base control software which allows users to define varying virtual environments within a single experiment using integrated overhead tracking and control. We showcase the different functionalities of the system through three demos involving hundreds of Kilobots. The ARK provides Kilobots with additional and unique capabilities through an open-source tool which can be implemented with inexpensive, off-the-shelf hardware.",
"title": ""
}
] |
scidocsrr
|