| query_id (stringlengths 32–32) | query (stringlengths 6–3.9k) | positive_passages (listlengths 1–21) | negative_passages (listlengths 10–100) | subset (stringclasses, 7 values) |
---|---|---|---|---|
21498e70834a40224f4a104d41a7868e
|
Aggression and Violent Behavior The neurobiology of antisocial personality disorder : The quest for rehabilitation and treatment ☆
|
[
{
"docid": "e9621784df5009b241c563a54583bab9",
"text": "CONTEXT\nPsychopathic antisocial individuals have previously been characterized by abnormal interhemispheric processing and callosal functioning, but there have been no studies on the structural characteristics of the corpus callosum in this group.\n\n\nOBJECTIVES\nTo assess whether (1) psychopathic individuals with antisocial personality disorder show structural and functional impairments in the corpus callosum, (2) group differences are mirrored by correlations between dimensional measures of callosal structure and psychopathy, (3) callosal abnormalities are associated with affective deficits, and (4) callosal abnormalities are independent of psychosocial deficits.\n\n\nDESIGN\nCase-control study.\n\n\nSETTING\nCommunity sample.\n\n\nPARTICIPANTS\nFifteen men with antisocial personality disorder and high psychopathy scores and 25 matched controls, all from a larger sample of 83 community volunteers.\n\n\nMAIN OUTCOME MEASURES\nStructural magnetic resonance imaging measures of the corpus callosum (volume estimate of callosal white matter, thickness, length, and genu and splenium area), functional callosal measures (2 divided visual field tasks), electrodermal and cardiovascular activity during a social stressor, personality measures of affective and interpersonal deficits, and verbal and spatial ability.\n\n\nRESULTS\nPsychopathic antisocial individuals compared with controls showed a 22.6% increase in estimated callosal white matter volume (P<.001), a 6.9% increase in callosal length (P =.002), a 15.3% reduction in callosal thickness (P =.04), and increased functional interhemispheric connectivity (P =.02). Correlational analyses in the larger unselected sample confirmed the association between antisocial personality and callosal structural abnormalities. Larger callosal volumes were associated with affective and interpersonal deficits, low autonomic stress reactivity, and low spatial ability. Callosal abnormalities were independent of psychosocial deficits.\n\n\nCONCLUSIONS\nCorpus callosum abnormalities in psychopathic antisocial individuals may reflect atypical neurodevelopmental processes involving an arrest of early axonal pruning or increased white matter myelination. These findings may help explain affective deficits and previous findings of abnormal interhemispheric transfer in psychopathic individuals.",
"title": ""
}
] |
[
{
"docid": "f3ca98a8e0600f0c80ef539cfc58e77e",
"text": "In this paper, we address a real life waste collection vehicle routing problem with time windows (VRPTW) with consideration of multiple disposal trips and drivers’ lunch breaks. Solomon’s well-known insertion algorithm is extended for the problem. While minimizing the number of vehicles and total traveling time is the major objective of vehicle routing problems in the literature, here we also consider the route compactness and workload balancing of a solution since they are very important aspects in practical applications. In order to improve the route compactness and workload balancing, a capacitated clustering-based waste collection VRPTW algorithm is developed. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems at Waste Management, Inc. A set of waste collection VRPTW benchmark problems is also presented in this paper. Waste collection problems are frequently considered as arc routing problems without time windows. However, that point of view can be applied only to residential waste collection problems. In the waste collection industry, there are three major areas: commercial waste collection, residential waste collection and roll-on-roll-off. In this paper, we mainly focus on the commercial waste collection problem. The problem can be characterized as a variant of VRPTW since commercial waste collection stops may have time windows. The major variation from a standard VRPTW is due to disposal operations and driver’s lunch break. When a vehicle is full, it needs to go to one of the disposal facilities (landfill or transfer station). Each vehicle can, and typically does, make multiple disposal trips per day. The purpose of this paper is to introduce the waste collection VRPTW, benchmark problem sets, and a solution approach for the problem. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems of Waste Management, the leading provider of comprehensive waste management services in North America with nearly 26,000 collection and transfer vehicles. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ee0ba4a70bfa4f53d33a31b2d9063e89",
"text": "Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poisson-based models. However, since that original data set was collected, both link speeds and the number of Internet-connected hosts have increased by more than three orders of magnitude. Thus, we now revisit the Poisson assumption, by studying a combination of historical traces and new measurements obtained from a major backbone link belonging to a Tier 1 ISP. We show that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales. At multisecond scales, we find a distinctive piecewise-linear nonstationarity, together with evidence of long-range dependence. Combining our observations across both time scales leads to a time-dependent Poisson characterization of network traffic that, when viewed across very long time scales, exhibits the observed long-range dependence. This traffic characterization reconciliates the seemingly contradicting observations of Poisson and long-memory traffic characteristics. It also seems to be in general agreement with recent theoretical models for large-scale traffic aggregation",
"title": ""
},
{
"docid": "3ce09ec0f516894d027583d27814294f",
"text": "This paper provides a model of the use of computer algebra experimentation in algebraic graph theory. Starting from the semisymmetric cubic graph L on 112 vertices, we embed it into another semisymmetric graph N of valency 15 on the same vertex set. In order to consider systematically the links between L and N a number of combinatorial structures are involved and related coherent configurations are investigated. In particular, the construction of the incidence double cover of directed graphs is exploited. As a natural by-product of the approach presented here, a number of new interesting (mostly non-Schurian) association schemes on 56, 112 and 120 vertices are introduced and briefly discussed. We use computer algebra system GAP (including GRAPE and nauty), as well as computer package COCO.",
"title": ""
},
{
"docid": "8ffc78f24f56e6c3a46b0149a6842663",
"text": "In this paper, we present a hierarchical spatiotemporal blur-based approach to automatically detect contaminants on the camera lens. Contaminants adhering to camera lens corresponds to blur regions in digital image, as camera is focused on scene. We use kurtosis for a first level analysis to detect blur regions and filter them out. Next level of analysis computes lowpass energy and singular values to further validate blur regions. These analyses detect blur regions in an image efficiently and temporal consistency of blur is additionally incorporated to remove false detections. Once the presence of a contaminant is detected, we use an appearance-based classifier to categorize the type of contaminant on the lens. Our results are promising in terms of performance and latency when compared with state-of-the-art methods under a variety of real-world conditions.",
"title": ""
},
{
"docid": "e51d3dda4b53a01fbf12ce033321421f",
"text": "The tremendous growth in electronic data of universities creates the need to have some meaningful information extracted from these large volumes of data. The advancement in the data mining field makes it possible to mine educational data in order to improve the quality of the educational processes. This study, thus, uses data mining methods to study the performance of undergraduate students. Two aspects of students' performance have been focused upon. First, predicting students' academic achievement at the end of a fouryear study programme. Second, studying typical progressions and combining them with prediction results. Two important groups of students have been identified: the low and high achieving students. The results indicate that by focusing on a small number of courses that are indicators of particularly good or poor performance, it is possible to provide timely warning and support to low achieving students, and advice and opportunities to high performing students. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ab7663ef08505e37be080eab491d2607",
"text": "This paper has studied the fatigue and friction of big end bearing on an engine connecting rod by combining the multi-body dynamics and hydrodynamic lubrication model. First, the basic equations and the application on AVL-Excite software platform of multi-body dynamics have been described in detail. Then, introduce the hydrodynamic lubrication model, which is the extended Reynolds equation derived from the Navier-Stokes equation and the equation of continuity. After that, carry out the static calculation of connecting rod assembly. At the same time, multi-body dynamics analysis has been performed and stress history can be obtained by finite element data recovery. Next, execute the fatigue analysis combining the Static stress and dynamic stress, safety factor distribution of connecting rod will be obtained as result. At last, detailed friction analysis of the big-end bearing has been performed. And got a good agreement when contrast the simulation results to the Bearing wear in the experiment.",
"title": ""
},
{
"docid": "673f1315f3699e0fbc3701743a90eb71",
"text": "The majority of learning algorithms available today focus on approximating the state (V ) or state-action (Q) value function and efficient action selection comes as an afterthought. On the other hand, real-world problems tend to have large action spaces, where evaluating every possible action becomes impractical. This mismatch presents a major obstacle in successfully applying reinforcement learning to real-world problems. In this paper we present an effective approach to learning and acting in domains with multidimensional and/or continuous control variables where efficient action selection is embedded in the learning process. Instead of learning and representing the state or state-action value function of the MDP, we learn a value function over an implied augmented MDP, where states represent collections of actions in the original MDP and transitions represent choices eliminating parts of the action space at each step. Action selection in the original MDP is reduced to a binary search by the agent in the transformed MDP, with computational complexity logarithmic in the number of actions, or equivalently linear in the number of action dimensions. Our method can be combined with any discrete-action reinforcement learning algorithm for learning multidimensional continuous-action policies using a state value approximator in the transformed MDP. Our preliminary results with two well-known reinforcement learning algorithms (Least-Squares Policy Iteration and Fitted Q-Iteration) on two continuous action domains (1-dimensional inverted pendulum regulator, 2-dimensional bicycle balancing) demonstrate the viability and the potential of the proposed approach.",
"title": ""
},
{
"docid": "af3addd0c8e9af91eb10131ba0eba406",
"text": "Answering compositional questions requiring multi-step reasoning is challenging. We introduce an end-to-end differentiable model for interpreting questions about a knowledge graph (KG), which is inspired by formal approaches to semantics. Each span of text is represented by a denotation in a KG and a vector that captures ungrounded aspects of meaning. Learned composition modules recursively combine constituent spans, culminating in a grounding for the complete sentence which answers the question. For example, to interpret “not green”, the model represents “green” as a set of KG entities and “not” as a trainable ungrounded vector—and then uses this vector to parameterize a composition function that performs a complement operation. For each sentence, we build a parse chart subsuming all possible parses, allowing the model to jointly learn both the composition operators and output structure by gradient descent from endtask supervision. The model learns a variety of challenging semantic operators, such as quantifiers, disjunctions and composed relations, and infers latent syntactic structure. It also generalizes well to longer questions than seen in its training data, in contrast to RNN, its treebased variants, and semantic parsing baselines.",
"title": ""
},
{
"docid": "d0c5bb905973b3098b06f55232ed9c8f",
"text": "In recent years, theoretical and computational linguistics has paid much attention to linguistic items that form scales. In NLP, much research has focused on ordering adjectives by intensity (tiny < small). Here, we address the task of automatically ordering English adverbs by their intensifying or diminishing effect on adjectives (e.g. extremely small < very small). We experiment with 4 different methods: 1) using the association strength between adverbs and adjectives; 2) exploiting scalar patterns (such as not only X but Y); 3) using the metadata of product reviews; 4) clustering. The method that performs best is based on the use of metadata and ranks adverbs by their scaling factor relative to unmodified adjectives.",
"title": ""
},
{
"docid": "1994429bea369cf4f4395095789b3ec4",
"text": "Since Software-Defined Networking (SDN) gains popularity, mobile/wireless support is mentioned with importance to be handled as one of the crucial aspects in SDN. SDN introduces a centralized entity called SDN controller with the holistic view of the topology on the separated control/data plane architecture. Leveraging the features provided in the SDN controller, mobility management can be simply designed and lightweight, thus there is no need to define and rely on new mobility entities such as given in the traditional IP mobility management architectures. In this paper, we design and implement lightweight IPv6 mobility management in Open Network Operating System (ONOS) that is an open-source SDN control platform for service providers. For the lightweight mobility management, we implement the Neighbor Discovery Proxy (ND Proxy) function into the OpenFlow-enabled AP and switches, and ONOS controller module to handle the receiving ICMPv6 message and to send the unique home network prefix address to an IPv6 host. Thus this approach enables mobility management without bringing or integrating on traditional IP mobility protocols. The proposed idea was experimentally evaluated in the ONOS controller and Raspberry Pi based testbed, identifying the obtained handoff signaling latency is in the acceptable performance range.",
"title": ""
},
{
"docid": "cf70de0c40646e3564b7d04c9dc050c7",
"text": "After segmenting candidate exudates regions in colour retinal images we present and compare two methods for their classification. The Neural Network based approach performs marginally better than the Support Vector Machine based approach, but we show that the latter are more flexible given criteria such as control of sensitivity and specificity rates. We present classification results for different learning algorithms for the Neural Net and use both hard and soft margins for the Support Vector Machines. We also present ROC curves to examine the trade-off between the sensitivity and specificity of the classifiers.",
"title": ""
},
{
"docid": "955ae6e1dffbe580217b812f943b4339",
"text": "Successful applications of reinforcement learning in realworld problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent’s entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we study reinforcement learning with deep neural networks, including RNN and LSTM, which are equipped with the desired property of being able to capture long-term dependency on history, and thus providing an effective way of learning the representation of hidden states. We further develop a hybrid approach that combines the strength of both supervised learning (for representing hidden states) and reinforcement learning (for optimizing control) with joint training. Extensive experiments based on a KDD Cup 1998 direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best across the board.",
"title": ""
},
{
"docid": "41b8c1b04f11f5ac86d1d6e696007036",
"text": "The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to \"other voice' from a prerecorded tape.",
"title": ""
},
{
"docid": "864ab702d0b45235efe66cd9e3bc5e66",
"text": "In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns.",
"title": ""
},
{
"docid": "9852e00f24fd8f626a018df99bea5f1f",
"text": "Business Process Reengineering is a discipline in which extensive research has been carried out and numerous methodologies churned out. But what seems to be lacking is a structured approach. In this paper we provide a review of BPR and present ‘best of breed ‘ methodologies from contemporary literature and introduce a consolidated, systematic approach to the redesign of a business enterprise. The methodology includes the five activities: Prepare for reengineering, Map and Analyze As-Is process, Design To-be process, Implement reengineered process and Improve continuously.",
"title": ""
},
{
"docid": "b14007d127629d7082d9bb5169140d0e",
"text": "The term \"selection bias\" encompasses various biases in epidemiology. We describe examples of selection bias in case-control studies (eg, inappropriate selection of controls) and cohort studies (eg, informative censoring). We argue that the causal structure underlying the bias in each example is essentially the same: conditioning on a common effect of 2 variables, one of which is either exposure or a cause of exposure and the other is either the outcome or a cause of the outcome. This structure is shared by other biases (eg, adjustment for variables affected by prior exposure). A structural classification of bias distinguishes between biases resulting from conditioning on common effects (\"selection bias\") and those resulting from the existence of common causes of exposure and outcome (\"confounding\"). This classification also leads to a unified approach to adjust for selection bias.",
"title": ""
},
{
"docid": "8bf1793ff3dacec5f88586a980d4f20a",
"text": "A dominant-pole substitution (DPS) technique for low-dropout regulator (LDO) is proposed in this paper. The DPS technique involves signal-current feedforward and amplification such that an ultralow-frequency zero is generated to cancel the dominant pole of LDO, while a higher frequency pole substitutes in and becomes the new dominant pole. With DPS, the loop bandwidth of the proposed LDO can be significantly extended, while a standard value and large output capacitor for transient purpose can still be used. The resultant LDO benefits from both the fast response time due to the wide loop bandwidth and the large charge reservoir from the output capacitor to achieve the significant enhancement in the dynamic performances. Implemented with a commercial 0.18-μm CMOS technology, the proposed LDO with DPS is validated to be capable of delivering 100 mA at 1.0-V output from a 1.2-V supply, with current efficiency of 99.86%. Experimental results also show that the error voltage at the output undergoing 100 mA of load transient in 10-ns edge time is about 25 mV. Line transient responses reveal that no more than 20-mV instantaneous changes at the output when the supply voltage swings between 1.2 and 1.8 V in 100 ns. The power-supply rejection ratio at 3 MHz is -47 dB.",
"title": ""
},
{
"docid": "81cb6b35dcf083fea3973f4ee75a9006",
"text": "We propose frameworks and algorithms for identifying communities in social networks that change over time. Communities are intuitively characterized as \"unusually densely knit\" subsets of a social network. This notion becomes more problematic if the social interactions change over time. Aggregating social networks over time can radically misrepresent the existing and changing community structure. Instead, we propose an optimization-based approach for modeling dynamic community structure. We prove that finding the most explanatory community structure is NP-hard and APX-hard, and propose algorithms based on dynamic programming, exhaustive search, maximum matching, and greedy heuristics. We demonstrate empirically that the heuristics trace developments of community structure accurately for several synthetic and real-world examples.",
"title": ""
},
{
"docid": "881615ecd53c20a93c96defee048f0e1",
"text": "Several research groups have previously constructed short forms of the MacArthur-Bates Communicative Development Inventories (CDI) for different languages. We consider the specific aim of constructing such a short form to be used for language screening in a specific age group. We present a novel strategy for the construction, which is applicable if results from a population-based study using the CDI long form are available for this age group. The basic approach is to select items in a manner implying a left-skewed distribution of the summary score and hence a reliable discrimination among children in the lower end of the distribution despite the measurement error of the instrument. We report on the application of the strategy in constructing a Danish CDI short form and present some results illustrating the validity of the short form. Finally we discuss the choice of the most appropriate age for language screening based on a vocabulary score.",
"title": ""
},
{
"docid": "ebb40b1e228c9f95ce2ea9229a16853c",
"text": "Continuum manipulators attract a lot of interests due to their advantageous properties, such as distal dexterity, design compactness, intrinsic compliance for safe interaction with unstructured environments. However, these manipulators sometimes suffer from the lack of enough stiffness while applied in surgical robotic systems. This paper presents an experimental kinestatic comparison between three continuum manipulators, aiming at revealing how structural variations could alter the manipulators' stiffness properties. These variations not only include modifying the arrangements of elastic components, but also include integrating a passive rigid kinematic chain to form a hybrid continuum-rigid manipulator. Results of this paper could contribute to the development of design guidelines for realizing desired stiffness properties of a continuum or hybrid manipulator.",
"title": ""
}
] |
scidocsrr
|
fa0af2feb6dd57a7698470f706bcbe74
|
Supply Networks as Complex Systems: A Network-Science-Based Characterization
|
[
{
"docid": "bf5f08174c55ed69e454a87ff7fbe6e2",
"text": "In much of the current literature on supply chain management, supply networks are recognized as a system. In this paper, we take this observation to the next level by arguing the need to recognize supply networks as a complex adaptive system (CAS). We propose that many supply networks emerge rather than result from purposeful design by a singular entity. Most supply chain management literature emphasizes negative feedback for purposes of control; however, the emergent patterns in a supply network can much better be managed through positive feedback, which allows for autonomous action. Imposing too much control detracts from innovation and flexibility; conversely, allowing too much emergence can undermine managerial predictability and work routines. Therefore, when managing supply networks, managers must appropriately balance how much to control and how much to let emerge. © 2001 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "236896835b48994d7737b9152c0e435f",
"text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.",
"title": ""
}
] |
[
{
"docid": "bda4bdc27e9ea401abb214c3fb7c9813",
"text": "Lipedema is a common, but often underdiagnosed masquerading disease of obesity, which almost exclusively affects females. There are many debates regarding the diagnosis as well as the treatment strategies of the disease. The clinical diagnosis is relatively simple, however, knowledge regarding the pathomechanism is less than limited and curative therapy does not exist at all demanding an urgent need for extensive research. According to our hypothesis, lipedema is an estrogen-regulated polygenetic disease, which manifests in parallel with feminine hormonal changes and leads to vasculo- and lymphangiopathy. Inflammation of the peripheral nerves and sympathetic innervation abnormalities of the subcutaneous adipose tissue also involving estrogen may be responsible for neuropathy. Adipocyte hyperproliferation is likely to be a secondary phenomenon maintaining a vicious cycle. Herein, the relevant articles are reviewed from 1913 until now and discussed in context of the most likely mechanisms leading to the disease, which could serve as a starting point for further research.",
"title": ""
},
{
"docid": "b68da205eb9bf4a6367250c6f04d2ad4",
"text": "Trends change rapidly in today’s world, prompting this key question: What is the mechanism behind the emergence of new trends? By representing real-world dynamic systems as complex networks, the emergence of new trends can be symbolized by vertices that “shine.” That is, at a specific time interval in a network’s life, certain vertices become increasingly connected to other vertices. This process creates new high-degree vertices, i.e., network stars. Thus, to study trends, we must look at how networks evolve over time and determine how the stars behave. In our research, we constructed the largest publicly available network evolution dataset to date, which contains 38,000 real-world networks and 2.5 million graphs. Then, we performed the first precise wide-scale analysis of the evolution of networks with various scales. Three primary observations resulted: (a) links are most prevalent among vertices that join a network at a similar time; (b) the rate that new vertices join a network is a central factor in molding a network’s topology; and (c) the emergence of network stars (high-degree vertices) is correlated with fast-growing networks. We applied our learnings to develop a flexible network-generation model based on large-scale, real-world data. This model gives a better understanding of how stars rise and fall within networks, and is applicable to dynamic systems both in nature and society. Multimedia Links I Video I Interactive Data Visualization I Data I Code Tutorials",
"title": ""
},
{
"docid": "7a08a183a3acec668d6405c3a9a01765",
"text": "In this work, we will investigate the task of building a Question Answering system using deep neural networks augmented with a memory component. Our goal is to implement the MemNN and its extensions described in [10] and [8] and apply it on the bAbI QA tasks introduced in [9]. Unlike simulated datasets like bAbI, the vanilla MemNN system is not sufficient to achieve satisfactory performance on real-world QA datasets like Wiki QA [6] and MCTest [5]. We will explore extensions to the proposed MemNN systems to make it work on these complex datasets.",
"title": ""
},
{
"docid": "1f5bcb6bc3fde7bc294240ce652ae4ab",
"text": "Rock climbing has increased in popularity as both a recreational physical activity and a competitive sport. Climbing is physiologically unique in requiring sustained and intermittent isometric forearm muscle contractions for upward propulsion. The determinants of climbing performance are not clear but may be attributed to trainable variables rather than specific anthropometric characteristics.",
"title": ""
},
{
"docid": "2c58791fd0f477fadf6d376ac4aaf16e",
"text": "Networked digital media present new challenges for people to locate information that they can trust. At the same time, societal reliance on information that is available solely or primarily via the Internet is increasing. This article discusses how and why digitally networked communication environments alter traditional notions of trust, and presents research that examines how information consumers make judgments about the credibility and accuracy of information they encounter online. Based on this research, the article focuses on the use of cognitive heuristics in credibility evaluation. Findings from recent studies are used to illustrate the types of cognitive heuristics that information consumers employ when determining what sources and information to trust online. The article concludes with an agenda for future research that is needed to better understand the role and influence of cognitive heuristics in credibility evaluation in computer-mediated communication contexts. © 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bd12f418cd731f9103a3d47ebac6951b",
"text": "Smartphones and tablets provide access to the Web anywhere and anytime. Automatic Text Summarization techniques aim to extract the fundamental information in documents. Making automatic summarization work in portable devices is a challenge, in several aspects. This paper presents an automatic summarization application for Android devices. The proposed solution is a multi-feature language independent summarization application targeted at news articles. Several evaluation assessments were conducted and indicate that the proposed solution provides good results.",
"title": ""
},
{
"docid": "1b638147b80419c6a4c472b02cd9916f",
"text": "Herein, we report the development of highly water dispersible nanocomposite of conducting polyaniline and multiwalled carbon nanotubes (PANI-MWCNTs) via novel, `dynamic' or `stirred' liquid-liquid interfacial polymerization method using sulphonic acid as a dopant. MWCNTs were functionalized prior to their use and then dispersed in water. The nanocomposite was further subjected for physico-chemical characterization using spectroscopic (UV-Vis and FT-IR), FE-SEM analysis. The UV-VIS spectrum of the PANI-MWCNTs nanocomposite shows a free carrier tail with increasing absorption at higher wavelength. This confirms the presence of conducting emeraldine salt phase of the polyaniline and is further supported by FT-IR analysis. The FE-SEM images show that the thin layer of polyaniline is coated over the functionalized MWCNTs forming a `core-shell' like structure. The synthesized nanocomposite was found to be highly dispersible in water and shows beautiful colour change from dark green to blue with change in pH of the solution from 1 to 12 (i.e. from acidic to basic pH). The change in colour of the polyaniline-MWCNTs nanocomposite is mainly due to the pH dependent chemical transformation /change of thin layer of polyaniline.",
"title": ""
},
{
"docid": "54d08377abbe59ada133c907f8d49eb6",
"text": "To avoid injury to the perineal branches of the pudendal nerve during urinary incontinence sling procedures, a thorough knowledge of the course of these nerve branches is essential. The dorsal nerve of the clitoris (DNC) may be at risk when performing the retropubic (tension-free vaginal tape) procedure as well as the inside-out and outside-in transobturator tape procedures. The purpose of this study was to identify the anatomical relationships of the DNC to the tapes placed during the procedures mentioned and to determine the influence of body variations. In this cadaveric study, the body mass index (cBMI) of unembalmed cadavers was determined. Suburethral tape procedures were performed by a registered urologist and gynecologist on a sample of 15 female cadavers; six retropubic, seven inside-out and nine outside-in transobturator tapes were inserted. After embalmment, dissections were performed and the distances between the DNC and the tapes measured. In general the trajectory of the outside-in tape was closer to the DNC than that of the other tapes. cBMI was weakly and nonsignificantly correlated with the distance between the trajectory of the tape and the DNC for the inside-out tape and the tension-free vaginal tape, but not for the outside-in tape. The findings suggest that the DNC is less likely to be injured during the inside-out tape procedure than during the outside-in procedure, regardless of BMI. Future studies on larger samples are desirable to confirm these findings.",
"title": ""
},
{
"docid": "63c1080df773ff57e3af8468e8d31d35",
"text": "This report refers to a body of investigations performed in support of experiments aboard the Space Shuttle, and designed to counteract the symptoms of Space Adapatation Syndrome, which resemble those of motion sickness on Earth. For these supporting studies we examined the autonomic manifestations of earth-based motion sickness. Heart rate, respiration rate, finger pulse volume and basal skin resistance were measured on 127 men and women before, during and after exposure to nauseogenic rotating chair tests. Significant changes in all autonomic responses were observed across the tests (p<.05). Significant differences in autonomic responses among groups divided according to motion sickness susceptibility were also observed (p<.05). Results suggest that the examination of autonomic responses as an objective indicator of motion sickness malaise is warranted and may contribute to the overall understanding of the syndrome on Earth and in Space. DESCRIPTORS: heart rate, respiration rate, finger pulse volume, skin resistance, biofeedback, motion sickness.",
"title": ""
},
{
"docid": "34611e88dd890c13a3b46b21be499c7b",
"text": "A low-power clocking solution is presented based on fractional-N highly digital LC-phase-locked loop (PLL) and sub-sampled ring PLL targeting multi-standard SerDes applications. The shared fractional-N digital LC-PLL covers 7–10 GHz frequency range consuming only 8-mW power and occupying 0.15 mm2 of silicon area with integrated jitter of 264 fs. Frequency resolution of the LC-PLL is 2 MHz. Per lane clock is generated using wide bandwidth ring PLL covering 800 MHz to 4 GHz to support the data rates between 1 and 14 Gb/s. The ring PLL supports dither-less fractional resolution of 250 MHz, corrects I/Q error with split tuning, and achieves less than 400-fs integrated jitter. Transmitter works at 14 Gb/s with power efficiency of 0.80 pJ/bit.",
"title": ""
},
{
"docid": "8ce15f6a0d6e5a49dcc2953530bceb19",
"text": "In signal restoration by Bayesian inference, one typically uses a parametric model of the prior distribution of the signal. Here, we consider how the parameters of a prior model should be estimated from observations of uncorrupted signals. A lot of recent work has implicitly assumed that maximum likelihood estimation is the optimal estimation method. Our results imply that this is not the case. We first obtain an objective function that approximates the error occurred in signal restoration due to an imperfect prior model. Next, we show that in an important special case (small gaussian noise), the error is the same as the score-matching objective function, which was previously proposed as an alternative for likelihood based on purely computational considerations. Our analysis thus shows that score matching combines computational simplicity with statistical optimality in signal restoration, providing a viable alternative to maximum likelihood methods. We also show how the method leads to a new intuitive and geometric interpretation of structure inherent in probability distributions.",
"title": ""
},
{
"docid": "dbc64c508b074f435b4175e6c8b967d5",
"text": "Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.",
"title": ""
},
{
"docid": "5c88fae140f343ae3002685ab96fd848",
"text": "Function recovery is a critical step in many binary analysis and instrumentation tasks. Existing approaches rely on commonly used function prologue patterns to recognize function starts, and possibly epilogues for the ends. However, this approach is not robust when dealing with different compilers, compiler versions, and compilation switches. Although machine learning techniques have been proposed, the possibility of errors still limits their adoption. In this work, we present a novel function recovery technique that is based on static analysis. Evaluations have shown that we can produce very accurate results that are applicable to a wider set of applications.",
"title": ""
},
{
"docid": "1e06f7e6b7b0d3f9a21a814e50af6e3c",
"text": "The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not been comprehensively studied to its potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvements. Experimental results show that bidirectional GRU networks trained on word-level features, with Latent Topic Clustering modules, is the most accurate model scoring 0.805 F1.",
"title": ""
},
{
"docid": "88d82f9a96ce40ed2d93d6cb9651f6be",
"text": "The way developers edit day-to-day code tend to be repetitive and often use existing code elements. Many researchers tried to automate this tedious task of code changes by learning from specific change templates and applied to limited scope. The advancement of Neural Machine Translation (NMT) and the availability of the vast open source software evolutionary data open up a new possibility of automatically learning those templates from the wild. However, unlike natural languages, for which NMT techniques were originally designed, source code and the changes have certain properties. For instance, compared to natural language source code vocabulary can be virtually infinite. Further, any good change in code should not break its syntactic structure. Thus, deploying state-of-the-art NMT models without domain adaptation may poorly serve the purpose. To this end, in this work, we propose a novel Tree2Tree Neural Machine Translation system to model source code changes and learn code change patterns from the wild. We realize our model with a change suggestion engine: CODIT. We train the model with more than 30k real-world changes and evaluate it with 6k patches. Our evaluation shows the effectiveness of CODIT in learning and suggesting abstract change templates. CODIT also shows promise in suggesting concrete patches and generating bug fixes.",
"title": ""
},
{
"docid": "f83017ad2454c465d19f70f8ba995e95",
"text": "The origins of life on Earth required the establishment of self-replicating chemical systems capable of maintaining and evolving biological information. In an RNA world, single self-replicating RNAs would have faced the extreme challenge of possessing a mutation rate low enough both to sustain their own information and to compete successfully against molecular parasites with limited evolvability. Thus theoretical analyses suggest that networks of interacting molecules were more likely to develop and sustain life-like behaviour. Here we show that mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. We find that a specific three-membered network has highly cooperative growth dynamics. When such cooperative networks are competed directly against selfish autocatalytic cycles, the former grow faster, indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation. We can observe the evolvability of networks through in vitro selection. Our experiments highlight the advantages of cooperative behaviour even at the molecular stages of nascent life.",
"title": ""
},
{
"docid": "41b712d0d485c65a8dff32725c215f97",
"text": "In this article, we present a novel, multi-user, virtual reality environment for the interactive, collaborative 3D analysis of large 3D scans and the technical advancements that were necessary to build it: a multi-view rendering system for large 3D point clouds, a suitable display infrastructure, and a suite of collaborative 3D interaction techniques. The cultural heritage site of Valcamonica in Italy with its large collection of prehistoric rock-art served as an exemplary use case for evaluation. The results show that our output-sensitive level-of-detail rendering system is capable of visualizing a 3D dataset with an aggregate size of more than 14 billion points at interactive frame rates. The system design in this exemplar application results from close exchange with a small group of potential users: archaeologists with expertise in rockart. The system allows them to explore the prehistoric art and its spatial context with highly realistic appearance. A set of dedicated interaction techniques was developed to facilitate collaborative visual analysis. A multi-display workspace supports the immediate comparison of geographically distributed artifacts. An expert review of the final demonstrator confirmed the potential for added value in rock-art research and the usability of our collaborative interaction techniques.",
"title": ""
},
{
"docid": "da5b920aa576589bc6041fa41250307f",
"text": "We investigate the problem of fine-grained sketch-based image retrieval (SBIR), where free-hand human sketches are used as queries to perform instance-level retrieval of images. This is an extremely challenging task because (i) visual comparisons not only need to be fine-grained but also executed cross-domain, (ii) free-hand (finger) sketches are highly abstract, making fine-grained matching harder, and most importantly (iii) annotated cross-domain sketch-photo datasets required for training are scarce, challenging many state-of-the-art machine learning techniques. In this paper, for the first time, we address all these challenges, providing a step towards the capabilities that would underpin a commercial sketch-based image retrieval application. We introduce a new database of 1,432 sketchphoto pairs from two categories with 32,000 fine-grained triplet ranking annotations. We then develop a deep tripletranking model for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data. Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training deep networks for finegrained cross-domain ranking tasks.",
"title": ""
},
{
"docid": "199541aa317b2ebb4d40906d974ce5f2",
"text": "Experimental evidence has accumulated to suggest that biologically efficacious informational effects can be derived mimicking active compounds solely through electromagnetic distribution upon aqueous systems affecting biological systems. Empirically rigorous demonstrations of antimicrobial agent associated electromagnetic informational inhibition of MRSA, Entamoeba histolytica, Trichomonas vaginalis, Candida albicans and a host of other important and various reported effects have been evidenced, such as the electro-informational transfer of retinoic acid influencing human neuroblastoma cells and stem teratocarcinoma cells. Cell proliferation and differentiation effects from informationally affected fields interactive with aqueous systems are measured via microscopy, statistical analysis, reverse transcription polymerase chain reaction and other techniques. Information associated with chemical compounds affects biological aqueous systems, sans direct systemic exposure to the source molecule. This is a quantum effect, based on the interactivity between electromagnetic fields, and aqueous ordered coherence domains. The encoding of aqueous systems and tissue by photonic transfer and instantiation of information rather than via direct exposure to potentially toxic drugs and physical substances holds clear promise of creating inexpensive non-toxic medical treatments. Corresponding author.",
"title": ""
},
{
"docid": "c5e401fe1b2a65677b93ae3e8aa60e18",
"text": "In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.",
"title": ""
}
] |
scidocsrr
|
043e9a62e6874e6f0e3a92f1b5d5cd25
|
Gamifying education: what is known, what is believed and what remains uncertain: a critical review
|
[
{
"docid": "bda419b065c53853f86f7fdbf0e330f2",
"text": "In current e-learning studies, one of the main challenges is to keep learners motivated in performing desirable learning behaviours and achieving learning goals. Towards tackling this challenge, social e-learning contributes favourably, but it requires solutions that can reduce side effects, such as abusing social interaction tools for ‘chitchat’, and further enhance learner motivation. In this paper, we propose a set of contextual gamification strategies, which apply flow and self-determination theory for increasing intrinsic motivation in social e-learning environments. This paper also presents a social e-learning environment that applies these strategies, followed by a user case study, which indicates increased learners’ perceived intrinsic motivation.",
"title": ""
}
] |
[
{
"docid": "c4f0e371ea3950e601f76f8d34b736e3",
"text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.",
"title": ""
},
{
"docid": "9f6429ac22b736bd988a4d6347d8475f",
"text": "The purpose of this paper is to defend the systematic introduction of formal ontological principles in the current practice of knowledge engineering, to explore the various relationships between ontology and knowledge representation, and to present the recent trends in this promising research area. According to the \"modelling view\" of knowledge acquisition proposed by Clancey, the modeling activity must establish a correspondence between a knowledge base and two separate subsystems: the agent's behavior (i.e. the problem-solving expertize) and its own environment (the problem domain). Current knowledge modelling methodologies tend to focus on the former subsystem only, viewing domain knowledge as strongly dependent on the particular task at hand: in fact, AI researchers seem to have been much more interested in the nature of reasoning rather than in the nature of the real world. Recently, however, the potential value of task-independent knowlege bases (or \"ontologies\") suitable to large scale integration has been underlined in many ways. In this paper, we compare the dichotomy between reasoning and representation to the philosophical distinction between epistemology and ontology. We introduce the notion of the ontological level, intermediate between the epistemological and the conceptual level discussed by Brachman, as a way to characterize a knowledge representation formalism taking into account the intended meaning of its primitives. We then discuss some formal ontological distinctions which may play an important role for such purpose.",
"title": ""
},
{
"docid": "5967c7705173ee346b4d47eb7422df20",
"text": "A novel learnable dictionary encoding layer is proposed in this paper for end-to-end language identification. It is inline with the conventional GMM i-vector approach both theoretically and practically. We imitate the mechanism of traditional GMM training and Supervector encoding procedure on the top of CNN. The proposed layer can accumulate high-order statistics from variable-length input sequence and generate an utterance level fixed-dimensional vector representation. Unlike the conventional methods, our new approach provides an end-to-end learning framework, where the inherent dictionary are learned directly from the loss function. The dictionaries and the encoding representation for the classifier are learned jointly. The representation is orderless and therefore appropriate for language identification. We conducted a preliminary experiment on NIST LRE07 closed-set task, and the results reveal that our proposed dictionary encoding layer achieves significant error reduction comparing with the simple average pooling.",
"title": ""
},
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "5006770c9f7a6fb171a060ad3d444095",
"text": "We developed a 56-GHz-bandwidth 2.0-Vppd linear MZM driver in 65-nm CMOS. It consumes only 180 mW for driving a 50-Ω impedance. We demonstrated the feasibility of drivers with less than 1 W for dual-polarization IQ modulation in 400-Gb/s systems.",
"title": ""
},
{
"docid": "881a495a8329c71a0202c3510e21b15d",
"text": "We apply basic statistical reasoning to signal reconstruction by machine learning – learning to map corrupted observations to clean signals – with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans – all corrupted by different processes – based on noisy data only.",
"title": ""
},
{
"docid": "57ab94ce902f4a8b0082cc4f42cd3b3f",
"text": "In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors’ capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.",
"title": ""
},
{
"docid": "0425ba0d95b98409d684b9b07b59b73a",
"text": "With a shift towards usage-based billing, the questions of how data costs affect mobile Internet use and how users manage mobile data arise. In this paper, we describe a mixed-methods study of mobile phone users' data usage practices in South Africa, a country where usage-based billing is prevalent and where data costs are high, to answer these questions. We do so using a large scale survey, in-depth interviews, and logs of actual data usage over time. Our findings suggest that unlike in more developed settings, when data is limited or expensive, mobile Internet users are extremely cost-conscious, and employ various strategies to optimize mobile data usage such as actively disconnecting from the mobile Internet to save data. Based on these findings, we suggest how the Ubicomp and related research communities can better support users that need to carefully manage their data to optimize costs.",
"title": ""
},
{
"docid": "1ac76924d3fae2bbcb7f7b84f1c2ea5e",
"text": "This chapter studies ontology matching : the problem of finding the semantic mappings between two given ontologies. This problem lies at the heart of numerous information processing applications. Virtually any application that involves multiple ontologies must establish semantic mappings among them, to ensure interoperability. Examples of such applications arise in myriad domains, including e-commerce, knowledge management, e-learning, information extraction, bio-informatics, web services, and tourism (see Part D of this book on ontology applications). Despite its pervasiveness, today ontology matching is still largely conducted by hand, in a labor-intensive and error-prone process. The manual matching has now become a key bottleneck in building large-scale information management systems. The advent of technologies such as the WWW, XML, and the emerging Semantic Web will further fuel information sharing applications and exacerbate the problem. Hence, the development of tools to assist in the ontology matching process has become crucial for the success of a wide variety of information management applications. In response to the above challenge, we have developed GLUE, a system that employs learning techniques to semi-automatically create semantic mappings between ontologies. We shall begin the chapter by describing a motivating example: ontology matching on the Semantic Web. Then we present our GLUE solution. Finally, we describe a set of experiments on several real-world domains, and show that GLUE proposes highly accurate semantic mappings.",
"title": ""
},
{
"docid": "b33c7e26d3a0a8fc7fc0fb73b72840d4",
"text": "As the number of Android malicious applications has explosively increased, effectively vetting Android applications (apps) has become an emerging issue. Traditional static analysis is ineffective for vetting apps whose code have been obfuscated or encrypted. Dynamic analysis is suitable to deal with the obfuscation and encryption of codes. However, existing dynamic analysis methods cannot effectively vet the applications, as a limited number of dynamic features have been explored from apps that have become increasingly sophisticated. In this work, we propose an effective dynamic analysis method called DroidWard in the aim to extract most relevant and effective features to characterize malicious behavior and to improve the detection accuracy of malicious apps. In addition to using the existing 9 features, DroidWard extracts 6 novel types of effective features from apps through dynamic analysis. DroidWard runs apps, extracts features and identifies benign and malicious apps with Support Vector Machine (SVM), Decision Tree (DTree) and Random Forest. 666 Android apps are used in the experiments and the evaluation results show that DroidWard correctly classifies 98.54% of malicious apps with 1.55% of false positives. Compared to existing work, DroidWard improves the TPR with 16.07% and suppresses the FPR with 1.31% with SVM, indicating that it is more effective than existing methods.",
"title": ""
},
{
"docid": "de0482515de1d6134b8ff907be49d4dc",
"text": "In this paper, we describe the Adaptive Place Advi sor, a conversational recommendation system designed to he lp users decide on a destination. We view the selection of destinations a an interactive, conversational process, with the advisory system in quiring about desired item characteristics and the human responding. The user model, which contains preferences regarding items, attributes, values and v lue combinations, is also acquired during the conversation. The system enhanc es the user’s requirements with the user model and retrieves suitable items fr om a case-base. If the number of items found by the system is unsuitable (too hig h, too low) the next attribute to be constrained or relaxed is selected based on t he information gain associated with the attributes. We also describe the current s tatu of the system and future work.",
"title": ""
},
{
"docid": "c629dfdd363f1599d397ccde1f7be360",
"text": "We propose a classification taxonomy over a large crawl of HTML tables on the Web, focusing primarily on those tables that express structured knowledge. The taxonomy separates tables into two top-level classes: a) those used for layout purposes, including navigational and formatting; and b) those containing relational knowledge, including listings, attribute/value, matrix, enumeration, and form. We then propose a classification algorithm for automatically detecting a subset of the classes in our taxonomy, namely layout tables and attribute/value tables. We report on the performance of our system over a large sample of manually annotated HTML tables on the Web.",
"title": ""
},
{
"docid": "95fa1dac07ce26c1ccd64a9c86c96a22",
"text": "Eyelid bags are the result of relaxation of lid structures like the skin, the orbicularis muscle, and mainly the septum, with subsequent protrusion or pseudo herniation of intraorbital fat contents. The logical treatment of baggy upper and lower eyelids should therefore include repositioning the herniated fat into the orbit and strengthening the attenuated septum in the form of a septorhaphy as a hernia repair. The preservation of orbital fat results in a more youthful appearance. The operative technique of the orbital septorhaphy is demonstrated for the upper and lower eyelid. A prospective series of 60 patients (50 upper and 90 lower blepharoplasties) with a maximum follow-up of 17 months were analyzed. Pleasing results were achieved in 56 patients. A partial recurrence was noted in 3 patients and widening of the palpebral fissure in 1 patient. Orbital septorhaphy for baggy eyelids is a rational, reliable procedure to correct the herniation of orbital fat in the upper and lower eyelids. Tightening of the orbicularis muscle and skin may be added as usual. The procedure is technically simple and without trauma to the orbital contents. The morbidity is minimal, the rate of complications is low, and the results are pleasing and reliable.",
"title": ""
},
{
"docid": "a0e9e04a3b04c1974951821d44499fa7",
"text": "PURPOSE\nTo examine factors related to turnover of new graduate nurses in their first job.\n\n\nDESIGN\nData were obtained from a 3-year panel survey (2006-2008) of the Graduates Occupational Mobility Survey that followed-up college graduates in South Korea. The sample consisted of 351 new graduates whose first job was as a full-time registered nurse in a hospital.\n\n\nMETHODS\nSurvival analysis was conducted to estimate survival curves and related factors, including individual and family, nursing education, hospital, and job dissatisfaction (overall and 10 specific job aspects).\n\n\nFINDINGS\nThe estimated probabilities of staying in their first job for 1, 2, and 3 years were 0.823, 0.666, and 0.537, respectively. Nurses reporting overall job dissatisfaction had significantly lower survival probabilities than those who reported themselves to be either neutral or satisfied. Nurses were more likely to leave if they were married or worked in small (vs. large), nonmetropolitan, and nonunionized hospitals. Dissatisfaction with interpersonal relationships, work content, and physical work environment was associated with a significant increase in the hazards of leaving the first job.\n\n\nCONCLUSIONS\nHospital characteristics as well as job satisfaction were significantly associated with new graduates' turnover.\n\n\nCLINICAL RELEVANCE\nThe high turnover of new graduates could be reduced by improving their job satisfaction, especially with interpersonal relationships, work content, and the physical work environment.",
"title": ""
},
{
"docid": "7e251f86e41d01778a143c231304aa92",
"text": "Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.",
"title": ""
},
{
"docid": "b11592d07491ef9e0f67e257bfba6d84",
"text": "Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focused on the convolution unit itself. The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn by itself. In this paper, we provide a detailed analysis of the previously proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we extend an ACU to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution is degraded by increasing the number of groups; however, the proposed unit retains the accuracy even though the number of parameters decreases. Based on this result, we suggest a depthwise ACU, and various experiments have shown that our unit is efficient and can replace the existing convolutions.",
"title": ""
},
{
"docid": "3d5eb503f837adffb4468548b3f76560",
"text": "Purpose This study investigates the impact of such contingency factors as top management support, business vision, and external expertise, on the one hand, and ERP system success, on the other. Design/methodology/approach A conceptual model was developed and relevant hypotheses formulated. Surveys were conducted in two Northern European countries and a structural equation modeling technique was used to analyze the data. Originality/value It is argued that ERP systems are different from other IT implementations; as such, there is a need to provide insights as to how the aforementioned factors play out in the context of ERP system success evaluations for adopting organizations. As was predicted, the results showed that the three contingency factors positively influence ERP system success. More importantly, the relative importance of quality external expertise over the other two factors for ERP initiatives was underscored. The implications of the findings for both practitioners and researchers are discussed.",
"title": ""
},
{
"docid": "a2bd543446fb86da6030ce7f46db9f75",
"text": "This paper presents a risk assessment algorithm for automatic lane change maneuvers on highways. It is capable of reliably assessing a given highway situation in terms of the possibility of collisions and robustly giving a recommendation for lane changes. The algorithm infers potential collision risks of observed vehicles based on Bayesian networks considering uncertainties of its input data. It utilizes two complementary risk metrics (time-to-collision and minimal safety margin) in temporal and spatial aspects to cover all risky situations that can occur for lane changes. In addition, it provides a robust recommendation for lane changes by filtering out uncertain noise data pertaining to vehicle tracking. The validity of the algorithm is tested and evaluated on public highways in real traffic as well as a closed high-speed test track in simulated traffic through in-vehicle testing based on overtaking and overtaken scenarios in order to demonstrate the feasibility of the risk assessment for automatic lane change maneuvers on highways.",
"title": ""
},
{
"docid": "75591d4da0b01f1890022b320cdab705",
"text": "Many lakes in boreal and arctic regions have high concentrations of CDOM (coloured dissolved organic matter). Remote sensing of such lakes is complicated due to very low water leaving signals. There are extreme (black) lakes where the water reflectance values are negligible in almost entire visible part of spectrum (400–700 nm) due to the absorption by CDOM. In these lakes, the only water-leaving signal detectable by remote sensing sensors occurs as two peaks—near 710 nm and 810 nm. The first peak has been widely used in remote sensing of eutrophic waters for more than two decades. We show on the example of field radiometry data collected in Estonian and Swedish lakes that the height of the 810 nm peak can also be used in retrieving water constituents from remote sensing data. This is important especially in black lakes where the height of the 710 nm peak is still affected by CDOM. We have shown that the 810 nm peak can be used also in remote sensing of a wide variety of lakes. The 810 nm peak is caused by combined effect of slight decrease in absorption by water molecules and backscattering from particulate material in the water. Phytoplankton was the dominant particulate material in most of the studied lakes. Therefore, the height of the 810 peak was in good correlation with all proxies of phytoplankton biomass—chlorophyll-a (R2 = 0.77), total suspended matter (R2 = 0.70), and suspended particulate organic matter (R2 = 0.68). There was no correlation between the peak height and the suspended particulate inorganic matter. Satellite sensors with sufficient spatial and radiometric resolution for mapping lake water quality (Landsat 8 OLI and Sentinel-2 MSI) were launched recently. In order to test whether these satellites can capture the 810 nm peak we simulated the spectral performance of these two satellites from field radiometry data. Actual satellite imagery from a black lake was also used to study whether these sensors can detect the peak despite their band configuration. Sentinel 2 MSI has a nearly perfectly positioned band at 705 nm to characterize the 700–720 nm peak. We found that the MSI 783 nm band can be used to detect the 810 nm peak despite the location of this band is not in perfect to capture the peak.",
"title": ""
},
{
"docid": "64fbd2207a383bc4b04c66e8ee867922",
"text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 /spl Omega/ or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. The output of the pulser is coupled into the load using a Blumlein configuration.",
"title": ""
}
] |
scidocsrr
|
e5bbc787e841e3c470de98a90b382bed
|
Video segmentation by tracing discontinuities in a trajectory embedding
|
[
{
"docid": "fea6d5cffd6b2943fac155231e7e9d89",
"text": "We propose a principled account on multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigendecomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported. Spectral graph partitioning methods have been successfully applied to circuit layout [3, 1], load balancing [4] and image segmentation [10, 6]. As a discriminative approach, they do not make assumptions about the global structure of data. Instead, local evidence on how likely two data points belong to the same class is first collected and a global decision is then made to divide all data points into disjunct sets according to some criterion. Often, such a criterion can be interpreted in an embedding framework, where the grouping relationships among data points are preserved as much as possible in a lower-dimensional representation. What makes spectral methods appealing is that their global-optima in the relaxed continuous domain are obtained by eigendecomposition. However, to get a discrete solution from eigenvectors often requires solving another clustering problem, albeit in a lower-dimensional space. That is, eigenvectors are treated as geometrical coordinates of a point set. Various clustering heuristics such as Kmeans [10, 9], transportation [2], dynamic programming [1], greedy pruning or exhaustive search [3, 10] are subsequently employed on the new point set to retrieve partitions. We show that there is a principled way to recover a discrete optimum. This is based on a fact that the continuous optima consist not only of the eigenvectors, but of a whole family spanned by the eigenvectors through orthonormal transforms. The goal is to find the right orthonormal transform that leads to a discretization.",
"title": ""
}
] |
[
{
"docid": "aa32bff910ce6c7b438dc709b28eefe3",
"text": "Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for a e-mail: [email protected] 482 The European Physical Journal Special Topics urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science",
"title": ""
},
{
"docid": "7eebeb133a9881e69bf3c367b9e20751",
"text": "Advanced driver assistance systems or highly automated driving systems for lane change maneuvers are expected to enhance highway traffic safety, transport efficiency, and driver comfort. To extend the capability of current advanced driver assistance systems, and eventually progress to highly automated highway driving, the task of automatically determine if, when, and how to perform a lane change maneuver, is essential. This paper thereby presents a low-complexity lane change maneuver algorithm which determines whether a lane change maneuver is desirable, and if so, selects an appropriate inter-vehicle traffic gap and time instance to perform the maneuver, and calculates the corresponding longitudinal and lateral control trajectory. The ability of the proposed lane change maneuver algorithm to make appropriate maneuver decisions and generate smooth and safe lane change trajectories in various traffic situations is demonstrated by simulation and experimental results.",
"title": ""
},
{
"docid": "e0fb10bf5f0206c8cf3f97f5daa33fc0",
"text": "Existing techniques on adversarial malware generation employ feature mutations based on feature vectors extracted from malware. However, most (if not all) of these techniques suffer from a common limitation: feasibility of these attacks is unknown. The synthesized mutations may break the inherent constraints posed by code structures of the malware, causing either crashes or malfunctioning of malicious payloads. To address the limitation, we present Malware Recomposition Variation (MRV), an approach that conducts semantic analysis of existing malware to systematically construct new malware variants for malware detectors to test and strengthen their detection signatures/models. In particular, we use two variation strategies (i.e., malware evolution attack and malware confusion attack) following structures of existing malware to enhance feasibility of the attacks. Upon the given malware, we conduct semantic-feature mutation analysis and phylogenetic analysis to synthesize mutation strategies. Based on these strategies, we perform program transplantation to automatically mutate malware bytecode to generate new malware variants. We evaluate our MRV approach on actual malware variants, and our empirical evaluation on 1,935 Android benign apps and 1,917 malware shows that MRV produces malware variants that can have high likelihood to evade detection while still retaining their malicious behaviors. We also propose and evaluate three defense mechanisms to counter MRV.",
"title": ""
},
{
"docid": "5e6c24f5f3a2a3c3b0aff67e747757cb",
"text": "Traps have been used extensively to provide early warning of hidden pest infestations. To date, however, there is only one type of trap on the market in the U.K. for storage mites, namely the BT mite trap, or monitor. Laboratory studies have shown that under the test conditions (20 °C, 65% RH) the BT trap is effective at detecting mites for at least 10 days for all three species tested: Lepidoglyphus destructor, Tyrophagus longior and Acarus siro. Further tests showed that all three species reached a trap at a distance of approximately 80 cm in a 24 h period. In experiments using 100 mites of each species, and regardless of either temperature (15 or 20 °C) or relative humidity (65 or 80% RH), the most abundant species in the traps was T. longior, followed by A. siro then L. destructor. Trap catches were highest at 20 °C and 65% RH. Temperature had a greater effect on mite numbers than humidity. Tests using different densities of each mite species showed that the number of L. destructor found in/on the trap was significantly reduced when either of the other two species was dominant. It would appear that there is an interaction between L. destructor and the other two mite species which affects relative numbers found within the trap.",
"title": ""
},
{
"docid": "da4ec6dcf7f47b8ec0261195db7af5ca",
"text": "Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales. To fully pursue this paradigm, flexibility in the production means as well as in their timely organization is of paramount importance. AI is planning a major role in this transition, but the scenarios encountered in practice might be challenging for current tools. Task planning is one example where AI enables more efficient and flexible operation through an online automated adaptation and rescheduling of the activities to cope with new operational constraints and demands. In this paper we present SMarTplan, a task planner specifically conceived to deal with real-world scenarios in the emerging smart factory paradigm. Including both special-purpose and general-purpose algorithms, SMarTplan is based on current automated reasoning technology and it is designed to tackle complex application domains. In particular, we show its effectiveness on a logistic scenario, by comparing its specialized version with the general purpose one, and extending the comparison to other state-of-the-art task planners.",
"title": ""
},
{
"docid": "4193bd310422b555faa5f6de8a1a94cd",
"text": "Although hundreds of chemical compounds have been identified in grapes and wines, only a few compounds actually contribute to sensory perception of wine flavor. This critical review focuses on volatile compounds that contribute to wine aroma and provides an overview of recent developments in analytical techniques for volatiles analysis, including methods used to identify the compounds that make the greatest contributions to the overall aroma. Knowledge of volatile composition alone is not enough to completely understand the overall wine aroma, however, due to complex interactions of odorants with each other and with other nonvolatile matrix components. These interactions and their impact on aroma volatility are the focus of much current research and are also reviewed here. Finally, the sequencing of the grapevine and yeast genomes in the past approximately 10 years provides the opportunity for exciting multidisciplinary studies aimed at understanding the influences of multiple genetic and environmental factors on grape and wine flavor biochemistry and metabolism (147 references).",
"title": ""
},
{
"docid": "f3a89c01dbbd40663811817ef7ba4be3",
"text": "In order to address the mental health disparities that exist for Latino adolescents in the United States, psychologists must understand specific factors that contribute to the high risk of mental health problems in Latino youth. Given the significant percentage of Latino youth who are immigrants or the children of immigrants, acculturation is a key factor in understanding mental health among this population. However, limitations in the conceptualization and measurement of acculturation have led to conflicting findings in the literature. Thus, the goal of the current review is to examine and critique research linking acculturation and mental health outcomes for Latino youth, as well as to integrate individual, environmental, and family influences of this relationship. An integrated theoretical model is presented and implications for clinical practice and future directions are discussed.",
"title": ""
},
{
"docid": "12adb5e324d971d2c752f2193cec3126",
"text": "Despite recent excitement generated by the P2P paradigm and despite surprisingly fast deployment of some P2P applications, there are few quantitative evaluations of P2P systems behavior. Due to its open architecture and achieved scale, Gnutella is an interesting P2P architecture case study. Gnutella, like most other P2P applications, builds at the application level a virtual network with its own routing mechanisms. The topology of this overlay network and the routing mechanisms used have a significant influence on application properties such as performance, reliability, and scalability. We built a ‘crawler’ to extract the topology of Gnutella’s application level network, we analyze the topology graph and evaluate generated network traffic. We find that although Gnutella is not a pure power-law network, its current configuration has the benefits and drawbacks of a power-law structure. These findings lead us to propose changes to Gnutella protocol and implementations that bring significant performance and scalability improvements.",
"title": ""
},
{
"docid": "eeff8964179ebd51745fece9b2fd50f3",
"text": "In this paper, we present a novel structure-preserving image completion approach equipped with dynamic patches. We formulate the image completion problem into an energy minimization framework that accounts for coherence within the hole and global coherence simultaneously. The completion of the hole is achieved through iterative optimizations combined with a multi-scale solution. In order to avoid abnormal structure and disordered texture, we utilize a dynamic patch system to achieve efficient structure restoration. Our dynamic patch system functions in both horizontal and vertical directions of the image pyramid. In the horizontal direction, we conduct a parallel search for multi-size patches in each pyramid level and design a competitive mechanism to select the most suitable patch. In the vertical direction, we use large patches in higher pyramid level to maximize the structure restoration and use small patches in lower pyramid level to reduce computational workload. We test our approach on massive images with complex structure and texture. The results are visually pleasing and preserve nice structure. Apart from effective structure preservation, our approach outperforms previous state-of-the-art methods in time consumption.",
"title": ""
},
{
"docid": "5096194bcbfebd136c74c30b998fb1f3",
"text": "This present study is designed to propose a conceptual framework extended from the previously advanced Theory of Acceptance Model (TAM). The framework makes it possible to examine the effects of social media, and perceived risk as the moderating effects between intention and actual purchase to be able to advance the Theory of Acceptance Model (TAM). 400 samples will be randomly selected among Saudi in Jeddah, Dammam and Riyadh. Data will be collected using questionnaire survey. As the research involves the analysis of numerical data, the assessment is carried out using Structural Equation Model (SEM). The hypothesis will be tested and the result is used to explain the proposed TAM. The findings from the present study will be beneficial for marketers to understand the intrinsic behavioral factors that influence consumers' selection hence avoid trial and errors in their advertising drives.",
"title": ""
},
{
"docid": "c3112126fa386710fb478dcfe978630e",
"text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem. The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.",
"title": ""
},
{
"docid": "69e90a5882bdea0055bb61463687b0c1",
"text": "www.frontiersinecology.org © The Ecological Society of America E generate a range of goods and services important for human well-being, collectively called ecosystem services. Over the past decade, progress has been made in understanding how ecosystems provide services and how service provision translates into economic value (Daily 1997; MA 2005; NRC 2005). Yet, it has proven difficult to move from general pronouncements about the tremendous benefits nature provides to people to credible, quantitative estimates of ecosystem service values. Spatially explicit values of services across landscapes that might inform land-use and management decisions are still lacking (Balmford et al. 2002; MA 2005). Without quantitative assessments, and some incentives for landowners to provide them, these services tend to be ignored by those making land-use and land-management decisions. Currently, there are two paradigms for generating ecosystem service assessments that are meant to influence policy decisions. Under the first paradigm, researchers use broad-scale assessments of multiple services to extrapolate a few estimates of values, based on habitat types, to entire regions or the entire planet (eg Costanza et al. 1997; Troy and Wilson 2006; Turner et al. 2007). Although simple, this “benefits transfer” approach incorrectly assumes that every hectare of a given habitat type is of equal value – regardless of its quality, rarity, spatial configuration, size, proximity to population centers, or the prevailing social practices and values. Furthermore, this approach does not allow for analyses of service provision and changes in value under new conditions. For example, if a wetland is converted to agricultural land, how will this affect the provision of clean drinking water, downstream flooding, climate regulation, and soil fertility? Without information on the impacts of land-use management practices on ecosystem services production, it is impossible to design policies or payment programs that will provide the desired ecosystem services. In contrast, under the second paradigm for generating policy-relevant ecosystem service assessments, researchers carefully model the production of a single service in a small area with an “ecological production function” – how provision of that service depends on local ecological variables (eg Kaiser and Roumasset 2002; Ricketts et al. 2004). Some of these production function approaches also use market prices and non-market valuation methods to estimate the economic value of the service and how that value changes under different ecological conditions. Although these methods are superior to the habitat assessment benefits transfer approach, these studies lack both the scope (number of services) and scale (geographic and temporal) to be relevant for most policy questions. What is needed are approaches that combine the rigor of the small-scale studies with the breadth of broad-scale assessments (see Boody et al. 2005; Jackson et al. 2005; ECOSYSTEM SERVICES ECOSYSTEM SERVICES ECOSYSTEM SERVICES",
"title": ""
},
{
"docid": "a3ae9af5962d5df8a001da8964edfe3b",
"text": "The problem of blind demodulation of multiuser information symbols in a high-rate code-division multiple-access (CDMA) network in the presence of both multiple-access interference (MAI) and intersymbol interference (ISI) is considered. The dispersive CDMA channel is first cast into a multipleinput multiple-output (MIMO) signal model framework. By applying the theory of blind MIMO channel identification and equalization, it is then shown that under certain conditions the multiuser information symbols can be recovered without any prior knowledge of the channel or the users’ signature waveforms (including the desired user’s signature waveform), although the algorithmic complexity of such an approach is prohibitively high. However, in practice, the signature waveform of the user of interest is always available at the receiver. It is shown that by incorporating this knowledge, the impulse response of each user’s dispersive channel can be identified using a subspace method. It is further shown that based on the identified signal subspace parameters and the channel response, two linear detectors that are capable of suppressing both MAI and ISI, i.e., a zeroforcing detector and a minimum-mean-square-errror (MMSE) detector, can be constructed in closed form, at almost no extra computational cost. Data detection can then be furnished by applying these linear detectors (obtained blindly) to the received signal. The major contribution of this paper is the development of these subspace-based blind techniques for joint suppression of MAI and ISI in the dispersive CDMA channels.",
"title": ""
},
{
"docid": "c9a78279a2dfb2b8ed7ab2424aa41c34",
"text": "It is widely recognized that people sometimes use theory-of-mind judgments in moral cognition. A series of recent studies shows that the connection can also work in the opposite direction: moral judgments can sometimes be used in theory-of-mind cognition. Thus, there appear to be cases in which people's moral judgments actually serve as input to the process underlying their application of theory-of-mind concepts.",
"title": ""
},
{
"docid": "9ba1b3b31d077ad9a8b05e3736cb8716",
"text": "This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on handcrafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. Using a frame by frame labeling, we obtain nearly state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We then show that the labeling can be further improved by exploiting the temporal consistency in the video sequence of the scene. To that goal, we present a method producing temporally consistent superpixels from a streaming video. Among the different methods producing superpixel segmentations of an image, the graph-based approach of Felzenszwalb and Huttenlocher is broadly employed. One of its interesting properties is that the regions are computed in a greedy manner in quasi-linear time by using a minimum spanning tree. In a framework exploiting minimum spanning trees all along, we propose an efficient video segmentation approach that computes temporally consistent pixels in a causal manner, filling the need for causal and real-time applications. We illustrate the labeling of indoor scenes in video sequences that could be processed in real-time using appropriate hardware such as an FPGA.",
"title": ""
},
{
"docid": "187fe997bb78bf60c5aaf935719df867",
"text": "Access to clean, affordable and reliable energy has been a cornerstone of the world's increasing prosperity and economic growth since the beginning of the industrial revolution. Our use of energy in the twenty–first century must also be sustainable. Solar and water–based energy generation, and engineering of microbes to produce biofuels are a few examples of the alternatives. This Perspective puts these opportunities into a larger context by relating them to a number of aspects in the transportation and electricity generation sectors. It also provides a snapshot of the current energy landscape and discusses several research and development opportunities and pathways that could lead to a prosperous, sustainable and secure energy future for the world.",
"title": ""
},
{
"docid": "a5391753b4ac2b7cab9f58f28348ab8d",
"text": "We present a temporal map of key processes that occur during decision making, which consists of three stages: 1) formation of preferences among options, 2) selection and execution of an action, and 3) experience or evaluation of an outcome. This framework can be used to integrate findings of traditional choice psychology, neuropsychology, brain lesion studies, and functional neuroimaging. Decision making is distributed across various brain centers, which are differentially active across these stages of decision making. This approach can be used to follow developmental trajectories of the different stages of decision making and to identify unique deficits associated with distinct psychiatric disorders.",
"title": ""
},
{
"docid": "2a44dc875eac50b8fa08ea98ab5ca463",
"text": "Next-generation e-Science features large-scale, compute-intensive workflows of many computing modules that are typically executed in a distributed manner. With the recent emergence of cloud computing and the rapid deployment of cloud infrastructures, an increasing number of scientific workflows have been shifted or are in active transition to cloud environments. As cloud computing makes computing a utility, scientists across different application domains are facing the same challenge of reducing financial cost in addition to meeting the traditional goal of performance optimization. We develop a prototype generic workflow system by leveraging existing technologies for a quick evaluation of scientific workflow optimization strategies. We construct analytical models to quantify the network performance of scientific workflows using cloud-based computing resources, and formulate a task scheduling problem to minimize the workflow end-to-end delay under a user-specified financial constraint. We rigorously prove that the proposed problem is not only NP-complete but also non-approximable. We design a heuristic solution to this problem, and illustrate its performance superiority over existing methods through extensive simulations and real-life workflow experiments based on proof-of-concept implementation and deployment in a local cloud testbed.",
"title": ""
},
{
"docid": "47faebfa7d65ebf277e57436cf7c2ca4",
"text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable",
"title": ""
},
{
"docid": "7677f90e0d949488958b27422bdffeb5",
"text": "This vignette is a slightly modified version of Koenker (2008a). It was written in plain latex not Sweave, but all data and code for the examples described in the text are available from either the JSS website or from my webpages. Quantile regression for censored survival (duration) data offers a more flexible alternative to the Cox proportional hazard model for some applications. We describe three estimation methods for such applications that have been recently incorporated into the R package quantreg: the Powell (1986) estimator for fixed censoring, and two methods for random censoring, one introduced by Portnoy (2003), and the other by Peng and Huang (2008). The Portnoy and Peng-Huang estimators can be viewed, respectively, as generalizations to regression of the Kaplan-Meier and NelsonAalen estimators of univariate quantiles for censored observations. Some asymptotic and simulation comparisons are made to highlight advantages and disadvantages of the three methods.",
"title": ""
}
] |
scidocsrr
|
59e961dd5a4db454129f31cd2e85e782
|
Probabilistic risk analysis and terrorism risk.
|
[
{
"docid": "7adb0a3079fb3b64f7a503bd8eae623e",
"text": "Attack trees have found their way to practice because they have proved to be an intuitive aid in threat analysis. Despite, or perhaps thanks to, their apparent simplicity, they have not yet been provided with an unambiguous semantics. We argue that such a formal interpretation is indispensable to precisely understand how attack trees can be manipulated during construction and analysis. We provide a denotational semantics, based on a mapping to attack suites, which abstracts from the internal structure of an attack tree, we study transformations between attack trees, and we study the attribution and projection of an attack tree.",
"title": ""
}
] |
[
{
"docid": "a2189a6b0cf23e40e2d1948e86330466",
"text": "Evolutionary psychology is an approach to the psychological sciences in which principles and results drawn from evolutionary biology, cognitive science, anthropology, and neuroscience are integrated with the rest of psychology in order to map human nature. By human nature, evolutionary psychologists mean the evolved, reliably developing, species-typical computational and neural architecture of the human mind and brain. According to this view, the functional components that comprise this architecture were designed by natural selection to solve adaptive problems faced by our hunter-gatherer ancestors, and to regulate behavior so that these adaptive problems were successfully addressed (for discussion, see Cosmides & Tooby, 1987, Tooby & Cosmides, 1992). Evolutionary psychology is not a specific subfield of psychology, such as the study of vision, reasoning, or social behavior. It is a way of thinking about psychology that can be applied to any topic within it including the emotions.",
"title": ""
},
{
"docid": "f555a50f629bd9868e1be92ebdcbc154",
"text": "The transformation of traditional energy networks to smart grids revolutionizes the energy industry in terms of reliability, performance, and manageability by providing bi-directional communications to operate, monitor, and control power flow and measurements. However, communication networks in smart grid bring increased connectivity with increased severe security vulnerabilities and challenges. Smart grid can be a prime target for cyber terrorism because of its critical nature. As a result, smart grid security is already getting a lot of attention from governments, energy industries, and consumers. There have been several research efforts for securing smart grid systems in academia, government and industries. This article provides a comprehensive study of challenges in smart grid security, which we concentrate on the problems and proposed solutions. Then, we outline current state of the research and future perspectives.With this article, readers can have a more thorough understanding of smart grid security and the research trends in this topic.",
"title": ""
},
{
"docid": "60fbaecc398f04bdb428ccec061a15a5",
"text": "A decade earlier, work on modeling and analyzing social network, was primarily focused on manually collected datasets where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relation). With the popularity of online social networks, the notion of “friendship” changed dramatically. The data collection, now although automated, contains dense friendship links but the links contain noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how these links (identification) play a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization and so on. The binary metric used so far for modeling links in social network (i.e. friends or not) is of little importance as it groups all our relatives, close friends and acquaintances in the same category. Therefore a popular notion of tie-strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie-strength for each link in network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.",
"title": ""
},
{
"docid": "b8fe5687c8b18a8cfdac14a198b77033",
"text": "1 Sia Siew Kien, Michael Rosemann and Phillip Yetton are the accepting senior editors for this article. 2 This research was partly funded by an Australian Research Council Discovery grant. The authors are grateful to the interviewees, whose willingness to share their valuable insights and experiences made this study possible, and to the senior editors and reviewers for their very helpful feedback and advice throughout the review process. 3 All quotes in this article are from employees of “RetailCo,” the subject of this case study. The names of the organization and its business divisions have been anonymized. 4 A digital business platform is “an integrated set of electronic business processes and the technologies, applications and data supporting those processes” Weill, P. and Ross, J. W. IT Savvy: What Top Executives Must Know to Go from Pain to Gain, Harvard Business School Publishing, 2009, p. 4; for more on digitized platforms, see pp. 67-87 of this publication. How an Australian Retailer Enabled Business Transformation Through Enterprise Architecture",
"title": ""
},
{
"docid": "de17b1fcae6336947e82adab0881b5ba",
"text": "Presence of duplicate documents in the World Wide Web adversely affects crawling, indexing and relevance, which are the core building blocks of web search. In this paper, we present a set of techniques to mine rules from URLs and utilize these learnt rules for de-duplication using just URL strings without fetching the content explicitly. Our technique is composed of mining the crawl logs and utilizing clusters of similar pages to extract specific rules from URLs belonging to each cluster. Preserving each mined rules for de-duplication is not efficient due to the large number of specific rules. We present a machine learning technique to generalize the set of rules, which reduces the resource footprint to be usable at web-scale. The rule extraction techniques are robust against web-site specific URL conventions. We demonstrate the effectiveness of our techniques through experimental evaluation.",
"title": ""
},
{
"docid": "171d9acd0e2cb86a02d5ff56d4515f0d",
"text": "We explore two solutions to the problem of mistranslating rare words in neural machine translation. First, we argue that the standard output layer, which computes the inner product of a vector representing the context with all possible output word embeddings, rewards frequent words disproportionately, and we propose to fix the norms of both vectors to a constant value. Second, we integrate a simple lexical module which is jointly trained with the rest of the model. We evaluate our approaches on eight language pairs with data sizes ranging from 100k to 8M words, and achieve improvements of up to +4.3 BLEU, surpassing phrasebased translation in nearly all settings.1",
"title": ""
},
{
"docid": "2d6523ef6609c11274449d3b9a57c53c",
"text": "Performing information retrieval tasks while preserving data confidentiality is a desirable capability when a database is stored on a server maintained by a third-party service provider. This paper addresses the problem of enabling content-based retrieval over encrypted multimedia databases. Search indexes, along with multimedia documents, are first encrypted by the content owner and then stored onto the server. Through jointly applying cryptographic techniques, such as order preserving encryption and randomized hash functions, with image processing and information retrieval techniques, secure indexing schemes are designed to provide both privacy protection and rank-ordered search capability. Retrieval results on an encrypted color image database and security analysis of the secure indexing schemes under different attack models show that data confidentiality can be preserved while retaining very good retrieval performance. This work has promising applications in secure multimedia management.",
"title": ""
},
{
"docid": "3caa8fc1ea07fcf8442705c3b0f775c5",
"text": "Recent research in the field of computational social science have shown how data resulting from the widespread adoption and use of social media channels such as twitter can be used to predict outcomes such as movie revenues, election winners, localized moods, and epidemic outbreaks. Underlying assumptions for this research stream on predictive analytics are that social media actions such as tweeting, liking, commenting and rating are proxies for user/consumer's attention to a particular object/product and that the shared digital artefact that is persistent can create social influence. In this paper, we demonstrate how social media data from twitter can be used to predict the sales of iPhones. Based on a conceptual model of social data consisting of social graph (actors, actions, activities, and artefacts) and social text (topics, keywords, pronouns, and sentiments), we develop and evaluate a linear regression model that transforms iPhone tweets into a prediction of the quarterly iPhone sales with an average error close to the established prediction models from investment banks. This strong correlation between iPhone tweets and iPhone sales becomes marginally stronger after incorporating sentiments of tweets. We discuss the findings and conclude with implications for predictive analytics with big social data.",
"title": ""
},
{
"docid": "52b1c306355e6bf8ba10ea7e3cf1d05e",
"text": "QUESTION\nIs there a means of assessing research impact beyond citation analysis?\n\n\nSETTING\nThe case study took place at the Washington University School of Medicine Becker Medical Library.\n\n\nMETHOD\nThis case study analyzed the research study process to identify indicators beyond citation count that demonstrate research impact.\n\n\nMAIN RESULTS\nThe authors discovered a number of indicators that can be documented for assessment of research impact, as well as resources to locate evidence of impact. As a result of the project, the authors developed a model for assessment of research impact, the Becker Medical Library Model for Assessment of Research.\n\n\nCONCLUSION\nAssessment of research impact using traditional citation analysis alone is not a sufficient tool for assessing the impact of research findings, and it is not predictive of subsequent clinical applications resulting in meaningful health outcomes. The Becker Model can be used by both researchers and librarians to document research impact to supplement citation analysis.",
"title": ""
},
{
"docid": "5e85b2fedd9fc66b198ccfc5b010da54",
"text": "a r t i c l e i n f o Keywords: Theory of planned behaviour Post-adoption Perceived value Facebook Social networking sites TPB SNS This study examines the continuance participation intentions and behaviour on Facebook, as a representative of Social Networking Sites (SNSs), from a social and behavioural perspective. The study extends the Theory of Planned Behaviour (TPB) through the inclusion of perceived value construct and utilizes the extended theory to explain users' continuance participation intentions and behaviour on Facebook. Despite the recent massive uptake of Facebook, our review of the related-literature revealed that very few studies tackled such technologies from the context of post-adoption as in this research. Using data from surveys of undergraduate and postgraduate students in Jordan (n=403), the extended theory was tested using statistical analysis methods. The results show that attitude, subjective norm, perceived behavioural control, and perceived value have significant effect on the continuance participation intention of post-adopters. Further, the results show that continuance participation intention and perceived value have significant effect on continuance participation behaviour. However, the results show that perceived be-havioural control has no significant effect on continuance participation behaviour of post-adopters. When comparing the extended theory developed in this study with the standard TPB, it was found that the inclusion of the perceived value construct in the extended theory is fruitful; as such an extension explained an additional 11.6% of the variance in continuance participation intention and 4.5% of the variance in continuance participation behaviour over the standard TPB constructs. Consistent with the research on value-driven post-adoption behaviour, these findings suggest that continuance intentions and behaviour of users of Facebook are likely to be greater when they perceive the behaviour to be associated with significant added-value (i.e. benefits outperform sacrifices). Since its introduction, the Internet has enabled entirely new forms of social interaction and activities, thanks to its basic features such as the prevalent usability and access. As the Internet is massively evolving over time, the World Wide Web or otherwise referred to as Web 1.0 has been transformed to the so-called Web 2.0. In fact, Web 2.0 refers to the second generation of the World Wide Web that facilitates information sharing, interoperability, user-centred design and collaboration. The advent of Web 2.0 has led to the development and evolution of Web-based communities, hosted services, and Web applications that work as a mainstream medium for value creation and exchange. Examples of Web …",
"title": ""
},
{
"docid": "11a9d7a218d1293878522252e1f62778",
"text": "This paper presents a wideband circularly polarized millimeter-wave (mmw) antenna design. We introduce a novel 3-D-printed polarizer, which consists of several air and dielectric slabs to transform the polarization of the antenna radiation from linear to circular. The proposed polarizer is placed above a radiating aperture operating at the center frequency of 60 GHz. An electric field, <inline-formula> <tex-math notation=\"LaTeX\">${E}$ </tex-math></inline-formula>, radiated from the aperture generates two components of electric fields, <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula>. After passing through the polarizer, both <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> fields can be degenerated with an orthogonal phase difference which results in having a wide axial ratio bandwidth. The phase difference between <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> is determined by the incident angle <inline-formula> <tex-math notation=\"LaTeX\">$\\phi $ </tex-math></inline-formula>, of the polarization of the electric field to the polarizer as well as the thickness, <inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula>, of the dielectric slabs. With the help of the thickness of the polarizer, the directivity of the radiation pattern is increased so as to devote high-gain and wideband characteristics to the antenna. To verify our concept, an intensive parametric study and an experiment were carried out. Three antenna sources, including dipole, patch, and aperture antennas, were investigated with the proposed 3-D-printed polarizer. All measured results agree with the theoretical analysis. The proposed antenna with the polarizer achieves a wide impedance bandwidth of 50% from 45 to 75 GHz for the reflection coefficient less than or equal −10 dB, and yields an overlapped axial ratio bandwidth of 30% from 49 to 67 GHz for the axial ratio ≤ 3 dB. The maximum gain of the antenna reaches to 15 dBic. The proposed methodology of this design can apply to applications related to mmw wireless communication systems. The ultimate goal of this paper is to develop a wideband, high-gain, and low-cost antenna for the mmw frequency band.",
"title": ""
},
{
"docid": "289b67247b109ee0de851c0cd4e76ec3",
"text": "User engagement is a key concept in designing user-centred web applications. It refers to the quality of the user experience that emphasises the positive aspects of the interaction, and in particular the phenomena associated with being captivated by technology. This definition is motivated by the observation that successful technologies are not just used, but they are engaged with. Numerous methods have been proposed in the literature to measure engagement, however, little has been done to validate and relate these measures and so provide a firm basis for assessing the quality of the user experience. Engagement is heavily influenced, for example, by the user interface and its associated process flow, the user’s context, value system and incentives. In this paper we propose an approach to relating and developing unified measures of user engagement. Our ultimate aim is to define a framework in which user engagement can be studied, measured, and explained, leading to recommendations and guidelines for user interface and interaction design for front-end web technology. Towards this aim, in this paper, we consider how existing user engagement metrics, web analytics, information retrieval metrics, and measures from immersion in gaming can bring new perspective to defining, measuring and explaining user engagement.",
"title": ""
},
{
"docid": "00602badbfba6bc97dffbdd6c5a2ae2d",
"text": "Accurately drawing 3D objects is difficult for untrained individuals, as it requires an understanding of perspective and its effects on geometry and proportions. Step-by-step tutorials break the complex task of sketching an entire object down into easy-to-follow steps that even a novice can follow. However, creating such tutorials requires expert knowledge and is time-consuming. As a result, the availability of tutorials for a given object or viewpoint is limited. How2Sketch (H2S) addresses this problem by automatically generating easy-to-follow tutorials for arbitrary 3D objects. Given a segmented 3D model and a camera viewpoint, H2S computes a sequence of steps for constructing a drawing scaffold comprised of geometric primitives, which helps the user draw the final contours in correct perspective and proportion. To make the drawing scaffold easy to construct, the algorithm solves for an ordering among the scaffolding primitives and explicitly makes small geometric modifications to the size and location of the object parts to simplify relative positioning. Technically, we formulate this scaffold construction as a single selection problem that simultaneously solves for the ordering and geometric changes of the primitives. We generate different tutorials on man-made objects using our method and evaluate how easily the tutorials can be followed with a user study.",
"title": ""
},
{
"docid": "d19e825235b5fbb759ff49a1c8398cea",
"text": "Febrile seizures are common and mostly benign. They are the most common cause of seizures in children less than five years of age. There are two categories of febrile seizures, simple and complex. Both the International League against Epilepsy and the National Institute of Health has published definitions on the classification of febrile seizures. Simple febrile seizures are mostly benign, but a prolonged (complex) febrile seizure can have long term consequences. Most children who have a febrile seizure have normal health and development after the event, but there is recent evidence that suggests a small subset of children that present with seizures and fever may have recurrent seizure or develop epilepsy. This review will give an overview of the definition of febrile seizures, epidemiology, evaluation, treatment, outcomes and recent research.",
"title": ""
},
{
"docid": "bb799a3aac27f4ac764649e1f58ee9fb",
"text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.",
"title": ""
},
{
"docid": "1255c63b8fc0406b1f3a0161f59ebfb1",
"text": "This paper proposes an EMI filter design software which can serve as an aid to the designer to quickly arrive at optimal filter sizes based on off-line measurement data or simulation results. The software covers different operating conditions-such as: different switching devices, different types of switching techniques, different load conditions and layout of the test setup. The proposed software design works for both silicon based and WBG based power converters.",
"title": ""
},
{
"docid": "0c41de0df5dd88c87061c57ae26c5b32",
"text": "Context. The share and importance of software within automotive vehicles is growing steadily. Most functionalities in modern vehicles, especially safety related functions like advanced emergency braking, are controlled by software. A complex and common phenomenon in today’s automotive vehicles is the distribution of such software functions across several Electronic Control Units (ECUs) and consequently across several ECU system software modules. As a result, integration testing of these distributed software functions has been found to be a challenge. The automotive industry neither has infinite resources, nor has the time to carry out exhaustive testing of these functions. On the other hand, the traditional approach of implementing an ad-hoc selection of test scenarios based on the tester’s experience, can lead to test gaps and test redundancies. Hence, there is a pressing need within the automotive industry for a feasible and effective verification strategy for testing distributed software functions. Objectives. Firstly, to identify the current approach used to test the distributed automotive embedded software functions in literature and in a case company. Secondly, propose and validate a feasible and effective verification strategy for testing the distributed software functions that would help improve test coverage while reducing test redundancies and test gaps. Methods. To accomplish the objectives, a case study was conducted at Scania CV AB, Södertälje, Sweden. One of the data collection methods was through conducting interviews of different employees involved in the software testing activities. Based on the research objectives, an interview questionnaire with open-ended and close-ended questions has been used. Apart from interviews, data from relevant artifacts in databases and archived documents has been used to achieve data triangulation. Moreover, to further strengthen the validity of the results obtained, adequate literature support has been presented throughout. Towards the end, a verification strategy has been proposed and validated using existing historical data at Scania. Conclusions. The proposed verification strategy to test distributed automotive embedded software functions has given promising results by providing means to identify test gaps and test redundancies. It helps establish an effective and feasible approach to capture function test coverage information that helps enhance the effectiveness of integration testing of the distributed software functions.",
"title": ""
},
{
"docid": "5bb98a6655f823b38c3866e6d95471e9",
"text": "This article describes the HR Management System in place at Sears. Key emphases of Sears' HR management infrastructure include : (1) formulating and communicating a corporate mission, vision, and goals, (2) employee education and development through the Sears University, (3) performance management and incentive compensation systems linked closely to the firm's strategy, (4) validated employee selection systems, and (5) delivering the \"HR Basics\" very competently. Key challenges for the future include : (1) maintaining momentum in the performance improvement process, (2) identifying barriers to success, and (3) clearly articulating HR's role in the change management process . © 1999 John Wiley & Sons, Inc .",
"title": ""
},
{
"docid": "f14b2dda47ff1eed966a3dad44514334",
"text": "Diced cartilage rolled up in a fascia (DC-F) is a recent technique developed by Rollin K Daniel. It consists to tailor make a composite graft composed by pieces of cartilage cut in small dices wrapped in a layer of deep temporal aponeurosis. This initially malleable graft allows an effective dorsum augmentation (1 to 10 mm), adjustable until the end of the operation and even post operatively. The indications are all the primary and secondary augmentation rhinoplasties. However, the elective indications are the secondary augmentation rhinoplasties with cartilaginous donor site depletion, or when cartilaginous grafts are of poor quality (insufficient length, multifragmented...), or finally when the recipient site is uneven or asymmetrical. We report our experience of 20 patients operated in 2006 and 2007, with one year minimal follow-up. All the cases are relative or absolute saddle noses, idiopathic, post-traumatic or iatrogenic. Moreover, two patients also had a concomitant chin augmentation with DC-F. No case of displacement or resorption was noted. We modified certain technical points in order to make this technique even more powerful and predictable.",
"title": ""
},
{
"docid": "fb9bbd096fa29cbb0abf646b33f7693b",
"text": "This paper presents a new parameter extraction methodology, based on an accurate and continuous MOS model dedicated to low-voltage and low-current analog circuit design and simulation (EKV MOST Model). The extraction procedure provides the key parameters from the pinch-off versus gate voltage characteristic, measured at constant current from a device biased in moderate inversion. Unique parameter sets, suitable for statistical analysis, describe the device behavior in all operating regions and over all device geometries. This efficient and simple method is shown to be accurate for both submicron bulk CMOS and fully depleted SOI technologies. INTRODUCTION The requirements for good MOS analog simulation models such as accuracy and continuity of the largeand small-signal characteristics are well established [1][2]. Continuity of the largeand small-signal characteristics from weak to strong inversion is one of the main features of the Enz-Krummenacher-Vittoz or EKV MOS transistor model [3][4][5]. One of the basic concepts of this model is the pinch-off voltage. A constant current bias is used to measure the pinch-off voltage versus gate voltage characteristic in moderate inversion (MI). This measure allows for an efficient and simple characterization method to be formulated for the most important model parameters as the threshold voltage and the other parameters related to the channel doping, using a single measured characteristic. The same principle is applied for various geometries, including shortand narrow-channel devices, and forms the major part of the complete characterization methodology. The simplicity of the model and the relatively small number of parameters to be extracted eases the parameter extraction. This is of particular importance if large statistical data are to be gathered. This method has been validated on a large number of different CMOS processes. To show its flexibility as well as the abilities of the model, results are presented for submicron bulk and fully depleted SOI technologies. SHORT DESCRIPTION OF THE STATIC MODEL A detailed description of the model formulation can be found in [3]; important concepts are shortly recalled here since they form the basis of the parameter extraction. A set of 13 intrinsic parameters is used for first and second order effects, listed in Table I. Unlike most other MOS simulation models, in the EKV model the gate, source and drain voltages, VG , VS and VD , are all referred to the substrate in order to preserve the intrinsic symmetry of the device. The Pinch-off Voltage The threshold voltage VTO, which is consequently also referred to the bulk, is defined as the gate voltage for which the inversion charge forming the channel is zero at equilibrium. The pinch-off voltage VP corresponds to the value of the channel potential Vch for which the inversion charge becomes zero in a non-equilibrium situation. VP can be directly related to VG :",
"title": ""
}
] |
scidocsrr
|
4e21917e5c72bdf48464ccf984850ab4
|
A Critical Analysis on the Security Concerns of Internet of Things (IoT)
|
[
{
"docid": "fdc903a98097de8b7533b3e2fe209863",
"text": "As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems—the cyberspace’s equivalent to the burglar alarm—join ranks with firewalls as one of the fundamental technologies for network security. However, today’s commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system/network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or ‘‘zero day’’ attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area. 2007 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "a7089d7b076d2fb974e95985b20d5fa5",
"text": "In this paper, we use a simple concept based on k-reverse nearest neighbor digraphs, to develop a framework RECORD for clustering and outlier detection. We developed three algorithms - (i) RECORD algorithm (requires one parameter), (ii) Agglomerative RECORD algorithm (no parameters required) and (iii) Stability-based RECORD algorithm (no parameters required). Our experimental results with published datasets, synthetic and real-life datasets show that RECORD not only handles noisy data, but also identifies the relevant clusters. Our results are as good as (if not better than) the results got from other algorithms.",
"title": ""
},
{
"docid": "bc4d717db3b3470d7127590b8d165a5d",
"text": "In this paper, we develop a general formalism for describing the C++ programming language, and regular enough to cope with proposed extensions (such as concepts) for C++0x that affect its type system. Concepts are a mechanism for checking template arguments currently being developed to help cope with the massive use of templates in modern C++. The main challenges in developing a formalism for C++ are scoping, overriding, overloading, templates, specialization, and the C heritage exposed in the built-in types. Here, we primarily focus on templates and overloading.",
"title": ""
},
{
"docid": "e6dabfc7165883e77c4cf6772ed59ee4",
"text": "Automatic emotion recognition is a challenging task which can make great impact on improving natural human computer interactions. In this paper, we present our effort for the Affect Subtask in the Audio/Visual Emotion Challenge (AVEC) 2017, which requires participants to perform continuous emotion prediction on three affective dimensions: Arousal, Valence and Likability based on the audiovisual signals. We highlight three aspects of our solutions: 1) we explore and fuse different hand-crafted and deep learned features from all available modalities including acoustic, visual, and textual modalities, and we further consider the interlocutor influence for the acoustic features; 2) we compare the effectiveness of non-temporal model SVR and temporal model LSTM-RNN and show that the LSTM-RNN can not only alleviate the feature engineering efforts such as construction of contextual features and feature delay, but also improve the recognition performance significantly; 3) we apply multi-task learning strategy for collaborative prediction of multiple emotion dimensions with shared representations according to the fact that different emotion dimensions are correlated with each other. Our solutions achieve the CCC of 0.675, 0.756 and 0.509 on arousal, valence, and likability respectively on the challenge testing set, which outperforms the baseline system with corresponding CCC of 0.375, 0.466, and 0.246 on arousal, valence, and likability.",
"title": ""
},
{
"docid": "d55664deebc86b841e9d82671c14f120",
"text": "New design of a circular microstrip antenna with three feeds for wideband circular polarization is presented. Detailed theoretical analysis proves that a circular microstrip antenna, which is excited by three central symmetrical feeds with equal amplitudes and relative 120° phase shifts, can obtain good circular polarization characteristic. A broadband 120° phase shifter is developed using the metamaterial transmission line (MM TL). It consists of a Wilkinson power divider and three phase-adjusting TLs, namely a MM TL and two microstrips. The antenna is composed of two circular patches. The primary radiating patch under the parasitic patch is excited by three feeds. The proposed antenna can provide an impedance bandwidth of 54.75% and a 3-dB axial ratio bandwidth of 47.88%. The 3-dB gain bandwidth is as large as 40%, with the peak gain 8.8 dBic. In addition, the measured and simulated symmetrical radiation patterns are in good agreement.",
"title": ""
},
{
"docid": "06b43b63aafbb70de2601b59d7813576",
"text": "Facial expression recognizers based on handcrafted features have achieved satisfactory performance on many databases. Recently, deep neural networks, e. g. deep convolutional neural networks (CNNs) have been shown to boost performance on vision tasks. However, the mechanisms exploited by CNNs are not well established. In this paper, we establish the existence and utility of feature maps selective to action units in a deep CNN trained by transfer learning. We transfer a network pre-trained on the Image-Net dataset to the facial expression recognition task using the Karolinska Directed Emotional Faces (KDEF), Radboud Faces Database(RaFD) and extended Cohn-Kanade (CK+) database. We demonstrate that higher convolutional layers of the deep CNN trained on generic images are selective to facial action units. We also show that feature selection is critical in achieving robustness, with action unit selective feature maps being more critical in the facial expression recognition task. These results support the hypothesis that both human and deeply learned CNNs use similar mechanisms for recognizing facial expressions.",
"title": ""
},
{
"docid": "b30a31d14e226eea0bc00b68c3f38607",
"text": "String matching plays an important role in field of Computer Science and there are many algorithm of String matching, the important aspect is that which algorithm is to be used in which condition. BM(Boyer-Moore) algorithm is standard benchmark of string matching algorithm so here we explain the BM(Boyer-Moore) algorithm and then explain its improvement as BMH (Boyer-Moore-Horspool), BMHS (Boyer-Moore-Horspool-Sundays), BMHS2 (Boyer-MooreHorspool-Sundays 2), improved BMHS( improved BoyerMoore-Horspool-Sundays) ,BMI (Boyer-Moore improvement) and CBM (composite Boyer-Moore).And also analyze and compare them using a example and find which one is better in which conditions. Keywords-String Matching: BM; BMH; BMHS; BMHS2; improved BMHS; BMI; CBM",
"title": ""
},
{
"docid": "d3875bf0d0bf1af7b7b8044b06152c46",
"text": "This two-part article series covers the design, development, and testing of a reprogrammable UAV autopilot system. Here you get a detailed system-level description of the autopilot design, with specific emphasis on its hardware and software. nmanned aerial vehicle (UAV) usage has increased tremendously in recent years. Although this growth has been fueled mainly by demand from government defense agencies, UAVs are now being used for non-military endeavors as well. Today, UAVs are employed for purposes ranging from wildlife tracking to forest fire monitoring. Advances in microelectronics technology have enabled engineers to automate such aircraft and convert them into useful remote-sensing platforms. For instance, due to sensor development in the automotive industry and elsewhere, the cost of the components required to build such systems has fallen greatly. In this two-part article series, we'll present the design, development, and flight test results for a reprogrammable UAV autopi-lot system. The design is primarily focused on supporting guidance, navigation, and control (GNC) research. It facilitates a fric-tionless transition from software simulation to hardware-in-the-loop (HIL) simulation to flight tests, eliminating the need to write low-level source code. We can easily make, simulate, and test changes in the algorithms on the hardware before attempting flight. The hardware is primarily \" programmed \" using MathWorks Simulink, a block-diagram based tool for modeling, simulating, and analyzing dynamical systems.",
"title": ""
},
{
"docid": "0012f70ed83e001aa074a9c4d1a41a61",
"text": "In this paper, instead of multilayered notch antenna, the ridged tapered slot antenna (RTSA) is chosen as an element of wideband phased array antenna (PAA) since it has rigid body and can be easily manufactured by mechanical wire-cutting. In addition, because the RTSA is made of conductor, it doesn't need via-holes which are required to avoid the blind angles out of the operation frequency band. Theses blind angles come from the self resonance of the dielectric material of notch antenna. We developed wide band/wide scan PAA which has a bandwidth of 3:1 and scan volume of plusmn45deg. In order to determine the shape of the RTSA, the active VSWR (AVSWR) of the RTSA was optimized in the numerical waveguide simulator. And then using the E-plane/H-plane simulator, the AVSWR with beam scan angles in E-plane/H-plane are calculated respectively. On the basis of optimized design, numerical analysis of finite arrays was performed by commercial time domain solver. Through the simulation of 10 times 6 quad-element RTSA arrays, the AVSWR at the center element was computed and compared with the measured result. The active element pattern (AEP) of 10 times 6 quad-element RTSA arrays was also computed and had a good agreement with the measured AEP. From the result of the AEP, we can easily predict that 10 times 6 quad-element RTSA arrays have a good beam scanning capabilities",
"title": ""
},
{
"docid": "dc7361721e3a40de15b3d2211998cc2a",
"text": "Despite advances in surgical technique and postoperative care, fibrosis remains the major impediment to a marked reduction of intraocular pressure without the need of additional medication (complete success) following filtering glaucoma surgery. Several aspects specific to filtering surgery may contribute to enhanced fibrosis. Changes in conjunctival tissue structure and composition due to preceding treatments as well as alterations in interstitial fluid flow and content due to aqueous humor efflux may act as important drivers of fibrosis. In light of these pathophysiological considerations, current and possible future strategies to control fibrosis following filtering glaucoma surgery are discussed.",
"title": ""
},
{
"docid": "2de8df231b5af77cfd141e26fb7a3ace",
"text": "A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a “prior” that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.",
"title": ""
},
{
"docid": "91bf2f458111b34eb752c9e3c88eb10a",
"text": "The scope of this paper is to explore, analyze and develop a universal architecture that supports mobile payments and mobile banking, taking into consideration the third and the emerging fourth generation communication technologies. Interaction and cooperation between payment and banking systems, integration of existing technologies and exploitation of intelligent procedures provide the prospect to develop an open financial services architecture (OFSA), which satisfies requirements of all involved entities. A unified scenario is designed and a prototype is implemented to demonstrate the feasibility of the proposed architecture. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e49ea1a6aa8d7ffec9ca16ac18cfc43a",
"text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work marries the two and proposes a method for representing generic objects as quadrics which allows object detections to be seamlessly integrated in a SLAM framework. For scene coverage, additional dominant planar structures are modeled as infinite planes. Experiments show that the proposed points-planes-quadrics representation can easily incorporate Manhattan and object affordance constraints, greatly improving camera localization and leading to semantically meaningful maps. The performance of our SLAM system is demonstrated in https://youtu.be/dR-rB9keF8M.",
"title": ""
},
{
"docid": "54f8df63208cf72cfda9a3a01f87d3dc",
"text": "7124 | P a g e C o u n c i l f o r I n n o v a t i v e R e s e a r c h J u l y , 2 0 1 6 w w w . c i r w o r l d . c o m AN IMPLEMENTATION OF LOAD BALANCING ALGORITHM IN CLOUD ENVIRONMENT Sheenam Kamboj , Mr. Navtej Singh Ghumman (2) (1) Research Scholar, Department of Computer Science & Engineering, SBSSTC, Ferozepur, Punjab. [email protected] (2) Assistant Professor, Department of Computer Science & Engineering, SBSSTC, Ferozepur, Punjab. [email protected] ABSTRACT",
"title": ""
},
{
"docid": "c6aabcb242cecdb2b2d7591dbc99ed08",
"text": "Convolutional neural network (CNN) based methods have recently achieved great success for image super-resolution (SR). However, most deep CNN based SR models attempt to improve distortion measures (e.g. PSNR, SSIM, IFC, VIF) while resulting in poor quantified perceptual quality (e.g. human opinion score, no-reference quality measures such as NIQE). Few works have attempted to improve the perceptual quality at the cost of performance reduction in distortion measures. A very recent study has revealed that distortion and perceptual quality are at odds with each other and there is always a trade-off between the two. Often the restoration algorithms that are superior in terms of perceptual quality, are inferior in terms of distortion measures. Our work attempts to analyze the trade-off between distortion and perceptual quality for the problem of single image SR. To this end, we use the well-known SR architectureenhanced deep super-resolution (EDSR) network and show that it can be adapted to achieve better perceptual quality for a specific range of the distortion measure. While the original network of EDSR was trained to minimize the error defined based on perpixel accuracy alone, we train our network using a generative adversarial network framework with EDSR as the generator module. Our proposed network, called enhanced perceptual super-resolution network (EPSR), is trained with a combination of mean squared error loss, perceptual loss, and adversarial loss. Our experiments reveal that EPSR achieves the state-of-the-art trade-off between distortion and perceptual quality while the existing methods perform well in either of these measures alone.",
"title": ""
},
{
"docid": "47027e5df955bc7c8fa64b0753a01d9f",
"text": "Recent years have witnessed great advancements in the science and technology of autonomy, robotics, and networking. This paper surveys recent concepts and algorithms for dynamic vehicle routing (DVR), that is, for the automatic planning of optimal multivehicle routes to perform tasks that are generated over time by an exogenous process. We consider a rich variety of scenarios relevant for robotic applications. We begin by reviewing the basic DVR problem: demands for service arrive at random locations at random times and a vehicle travels to provide on-site service while minimizing the expected wait time of the demands. Next, we treat different multivehicle scenarios based on different models for demands (e.g., demands with different priority levels and impatient demands), vehicles (e.g., motion constraints, communication, and sensing capabilities), and tasks. The performance criterion used in these scenarios is either the expected wait time of the demands or the fraction of demands serviced successfully. In each specific DVR scenario, we adopt a rigorous technical approach that relies upon methods from queueing theory, combinatorial optimization, and stochastic geometry. First, we establish fundamental limits on the achievable performance, including limits on stability and quality of service. Second, we design algorithms, and provide provable guarantees on their performance with respect to the fundamental limits.",
"title": ""
},
{
"docid": "b5dc5268c2eb3b216aa499a639ddfbf9",
"text": "This paper describes a self-localization for indoor mobile robots based on integrating measurement values from multiple optical mouse sensors and a global camera. This paper consists of two parts. Firstly, we propose a dead-reckoning based on increments of the robot movements read directly from the floor using optical mouse sensors. Since the measurement values from multiple optical mouse sensors are compared to each other and only the reliable values are selected, accurate dead-reckoning can be realized compared with the conventional method based on increments of wheel rotations. Secondly, in order to realize robust localization, we propose a method of estimating position and orientation by integrating measured robot position (orientation information is not included) via global camera and dead-reckoning with the Kalman filter",
"title": ""
},
{
"docid": "d984ad1af6b56e515157375c94f62fe5",
"text": "In this paper, we present a novel packet delivery mechanism called Multi-Path and Multi-SPEED Routing Protocol (MMSPEED) for probabilistic QoS guarantee in wireless sensor networks. The QoS provisioning is performed in two quality domains, namely, timeliness and reliability. Multiple QoS levels are provided in the timeliness domain by guaranteeing multiple packet delivery speed options. In the reliability domain, various reliability requirements are supported by probabilistic multipath forwarding. These mechanisms for QoS provisioning are realized in a localized way without global network information by employing localized geographic packet forwarding augmented with dynamic compensation, which compensates for local decision inaccuracies as a packet travels towards its destination. This way, MMSPEED can guarantee end-to-end requirements in a localized way, which is desirable for scalability and adaptability to large scale dynamic sensor networks. Simulation results show that MMSPEED provides QoS differentiation in both reliability and timeliness domains and, as a result, significantly improves the effective capacity of a sensor network in terms of number of flows that meet both reliability and timeliness requirements up to 50 percent (12 flows versus 18 flows).",
"title": ""
},
{
"docid": "ccfa5c06643cb3913b0813103a85e0b0",
"text": "We consider the problem of zero-shot recognition: learning a visual classifier for a category with zero training examples, just using the word embedding of the category and its relationship to other categories, which visual data are provided. The key to dealing with the unfamiliar or novel category is to transfer knowledge obtained from familiar classes to describe the unfamiliar class. In this paper, we build upon the recently introduced Graph Convolutional Network (GCN) and propose an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers. Given a learned knowledge graph (KG), our approach takes as input semantic embeddings for each node (representing visual category). After a series of graph convolutions, we predict the visual classifier for each category. During training, the visual classifiers for a few categories are given to learn the GCN parameters. At test time, these filters are used to predict the visual classifiers of unseen categories. We show that our approach is robust to noise in the KG. More importantly, our approach provides significant improvement in performance compared to the current state-of-the-art results (from 2 ~ 3% on some metrics to whopping 20% on a few).",
"title": ""
},
{
"docid": "53e7e445f9a662cff689eab432d3e73b",
"text": "Location-based services allow mobile device users to access various services based on the users' current physical location information. Path-critical applications, such as supply chain verification, require a chronological ordering of location proofs. It is a significant challenge in distributed and user-centric architectures for users to prove their presence and the path of travel in a privacy-protected and secure manner. So far, proposed schemes for secure location proofs are mostly subject to tampering, not resistant to collusion attacks, do not offer preservation of the provenance, and are not flexible enough for users to prove their provenance of location proofs. In this paper, we present WORAL, a complete ready-to-deploy framework for generating and validating witness oriented asserted location provenance records. The WORAL framework is based on the asserted location proof protocol and the OTIT model for generating secure location provenance on the mobile devices. WORAL allows user-centric, collusion resistant, tamper-evident, privacy protected, verifiable, and provenance preserving location proofs for mobile devices. This paper presents the schematic development, feasibility of usage, comparative advantage over similar protocols, and implementation of WORAL for android device users including a Google Glass-based client for enhanced usability.",
"title": ""
},
{
"docid": "9bd7df9356b87225948cf42bf3ea4604",
"text": "Machine learning techniques work best when the data used for training resembles the data used for evaluation. This holds true for learned single-image denoising algorithms, which are applied to real raw camera sensor readings but, due to practical constraints, are often trained on synthetic image data. Though it is understood that generalizing from synthetic to real images requires careful consideration of the noise properties of camera sensors, the other aspects of an image processing pipeline (such as gain, color correction, and tone mapping) are often overlooked, despite their significant effect on how raw measurements are transformed into finished images. To address this, we present a technique to “unprocess” images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available Internet photos. We additionally model the relevant components of an image processing pipeline when evaluating our loss function, which allows training to be aware of all relevant photometric processing that will occur after denoising. By unprocessing and processing training data and model outputs in this way, we are able to train a simple convolutional neural network that has 14%-38% lower error rates and is 9×-18× faster than the previous state of the art on the Darmstadt Noise Dataset [30], and generalizes to sensors outside of that dataset as well.",
"title": ""
}
] |
scidocsrr
|
bd9696dbeb9f275fa10f67a6205f3393
|
Managing RFID Data: Challenges, Opportunities and Solutions
|
[
{
"docid": "564f9c0a1e1f395d59837e1a4b7f08ef",
"text": "To compensate for the inherent unreliability of RFID data streams, most RFID middleware systems employ a \"smoothing filter\", a sliding-window aggregate that interpolates for lost readings. In this paper, we propose SMURF, the first declarative, adaptive smoothing filter for RFID data cleaning. SMURF models the unreliability of RFID readings by viewing RFID streams as a statistical sample of tags in the physical world, and exploits techniques grounded in sampling theory to drive its cleaning processes. Through the use of tools such as binomial sampling and π-estimators, SMURF continuously adapts the smoothing window size in a principled manner to provide accurate RFID data to applications.",
"title": ""
},
{
"docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a",
"text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.",
"title": ""
}
] |
[
{
"docid": "36c4b2ab451c24d2d0d6abcbec491116",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "8b1734f040031e22c50b6b2a573ff58a",
"text": "Is it permissible to harm one to save many? Classic moral dilemmas are often defined by the conflict between a putatively rational response to maximize aggregate welfare (i.e., the utilitarian judgment) and an emotional aversion to harm (i.e., the non-utilitarian judgment). Here, we address two questions. First, what specific aspect of emotional responding is relevant for these judgments? Second, is this aspect of emotional responding selectively reduced in utilitarians or enhanced in non-utilitarians? The results reveal a key relationship between moral judgment and empathic concern in particular (i.e., feelings of warmth and compassion in response to someone in distress). Utilitarian participants showed significantly reduced empathic concern on an independent empathy measure. These findings therefore reveal diminished empathic concern in utilitarian moral judges.",
"title": ""
},
{
"docid": "d13ce7762aeded7a40a7fbe89f1beccf",
"text": "[Purpose] This study aims to examined the effect of the self-myofascial release induced with a foam roller on the reduction of stress by measuring the serum concentration of cortisol. [Subjects and Methods] The subjects of this study were healthy females in their 20s. They were divided into the experimental and control groups. Both groups, each consisting of 12 subjects, were directed to walk for 30 minutes on a treadmill. The control group rested for 30 minutes of rest by lying down, whereas the experimental group was performed a 30 minutes of self-myofascial release program. [Results] Statistically significant levels of cortisol concentration reduction were observed in both the experimental group, which used the foam roller, and the control group. There was no statistically significant difference between the two groups. [Conclusion] The Self-myofascial release induced with a foam roller did not affect the reduction of stress.",
"title": ""
},
{
"docid": "94a35547a45c06a90f5f50246968b77e",
"text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method.",
"title": ""
},
{
"docid": "c507ce14998e9ef9e574b1b4cc021dec",
"text": "There are no scientific publications on a electric motor in Tesla cars, so let's try to deduce something. Tesla's induction motor is very enigmatic so the paper tries to introduce a basic model. This secrecy could be interesting for the engineering and physics students. Multidisciplinary problem is considered: kinematics, mechanics, electric motors, numerical methods, control of electric drives. Identification based on three points in the steady-state torque-speed curve of the induction motor is presented. The field weakening mode of operation of the motor is analyzed. The Kloss' formula is obtained. The main aim of the article is determination of a mathematical description of the torque vs. speed curve of induction motor and its application for vehicle motion modeling. Additionally, the moment of inertia of the motor rotor and the electric vehicle mass are considered in one equation as electromechanical system. Presented approach may seem like speculation, but it allows to understand the problem of a vehicle motion. The article composition is different from classical approach - studying should be intriguing.",
"title": ""
},
{
"docid": "25751673cedf36c5e8b7ae310b66a8f2",
"text": "BACKGROUND\nMuscle dysmorphia (MD) describes a condition characterised by a misconstrued body image in which individuals who interpret their body size as both small or weak even though they may look normal or highly muscular.MD has been conceptualized as a type of body dysmorphic disorder, an eating disorder, and obsessive–compulsive disorder symptomatology. METHOD AND AIM: Through a review of the most salient literature on MD, this paper proposes an alternative classification of MD--the ‘Addiction to Body Image’ (ABI) model--using Griffiths (2005)addiction components model as the framework in which to define MD as an addiction.\n\n\nRESULTS\nIt is argued the addictive activity in MD is the maintaining of body image via a number of different activities such as bodybuilding, exercise,eating certain foods, taking specific drugs (e.g., anabolic steroids), shopping for certain foods, food supplements,and the use or purchase of physical exercise accessories). In the ABI model, the perception of the positive effects on the self-body image is accounted for as a critical aspect of the MD condition (rather than addiction to exercise or certain types of eating disorder).\n\n\nCONCLUSIONS\nBased on empirical evidence to date, it is proposed that MD could be re-classified as an addiction due to the individual continuing to engage in maintenance behaviours that may cause long-term harm.",
"title": ""
},
{
"docid": "3e177f8b02a5d67c7f4d93ce601c4539",
"text": "This research proposes an approach for text classification that uses a simple neural network called Dynamic Text Classifier Neural Network (DTCNN). The neural network uses as input vectors of words with variable dimension without information loss called Dynamic Token Vectors (DTV). The proposed neural network is designed for the classification of large and short text into categories. The learning process combines competitive and Hebbian learning. Due to the combination of these learning rules the neural network is able to work in a supervised or semi-supervised mode. In addition, it provides transparency in the classification. The network used in this paper is quite simple, and that is what makes enough for its task. The results of evaluation the proposed method shows an improvement in the text classification problem using the DTCNN compared to baseline approaches.",
"title": ""
},
{
"docid": "fbddd20271cf134e15b33e7d6201c374",
"text": "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to enquire as to whether their book has been received for review may contact the book review editor.",
"title": ""
},
{
"docid": "2d254443a7cbe748250acc0070c4a08b",
"text": "This paper introduces a new supervised Bayesian approach to hyperspectral image segmentation with active learning, which consists of two main steps. First, we use a multinomial logistic regression (MLR) model to learn the class posterior probability distributions. This is done by using a recently introduced logistic regression via splitting and augmented Lagrangian algorithm. Second, we use the information acquired in the previous step to segment the hyperspectral image using a multilevel logistic prior that encodes the spatial information. In order to reduce the cost of acquiring large training sets, active learning is performed based on the MLR posterior probabilities. Another contribution of this paper is the introduction of a new active sampling approach, called modified breaking ties, which is able to provide an unbiased sampling. Furthermore, we have implemented our proposed method in an efficient way. For instance, in order to obtain the time-consuming maximum a posteriori segmentation, we use the α-expansion min-cut-based integer optimization algorithm. The state-of-the-art performance of the proposed approach is illustrated using both simulated and real hyperspectral data sets in a number of experimental comparisons with recently introduced hyperspectral image analysis methods.",
"title": ""
},
{
"docid": "3e3dc575858c21806edbe6149475f5e0",
"text": "This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command’s hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as “Put the tire pallet on the truck.” The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot’s performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system’s performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.",
"title": ""
},
{
"docid": "6cd317113158241a98517ad5a8247174",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "619af7dc39e21690c1d164772711d7ed",
"text": "The prevalence of smart mobile devices has promoted the popularity of mobile applications (a.k.a. apps). Supporting mobility has become a promising trend in software engineering research. This article presents an empirical study of behavioral service profiles collected from millions of users whose devices are deployed with Wandoujia, a leading Android app-store service in China. The dataset of Wandoujia service profiles consists of two kinds of user behavioral data from using 0.28 million free Android apps, including (1) app management activities (i.e., downloading, updating, and uninstalling apps) from over 17 million unique users and (2) app network usage from over 6 million unique users. We explore multiple aspects of such behavioral data and present patterns of app usage. Based on the findings as well as derived knowledge, we also suggest some new open opportunities and challenges that can be explored by the research community, including app development, deployment, delivery, revenue, etc.",
"title": ""
},
{
"docid": "44e374587f199b4161315850b58fe2fa",
"text": "This paper discusses a new kind of distortion mechanism found in transistorized audio power amplifiers. It is shown that this distortion arises from the multistage feedback loop found in most high-quality amplifiers, provided that the open-loop transient response of the power amplifier is slower than the transient response of the preamplifier. The results of the analysis are verified by measurements from a simulated power amplifier, and a number of constructional rules for eliminating this distortion are derived. Manuscript received December 3, 1969; revised January 23, 1970. introduction An ordinary transistorized audio amplifier consists of a preamplifier and a power amplifier. The typical preamplifier incorporates two to eight stages with local feedback. The power amplifier has, however, usually a feedback loop enclosing three to four stages. The power amplifier generally determines the frequency response and the distortion of the whole amplifier, For stationary signals, the harmonic distortion of the power amplifier decreases proportionally with increasing feedback, provided that the transfer function of the amplifier is monotonically continuous and that the gain is always greater than zero. (These assumptions are not valid, of course, in case of overload or crossover distortion.) With the same assumptions, the intermodulation distortion decreases similarly. The frequency response is also enhanced in proportion with the feedback. It would seem, then, that feedback is highly beneficial to the power amplifier. The purpose of this paper is, however, to show that the usable frequency response of the amplifier does not necessarily become better due to feedback, and that, under certain circumstances, the feedback can cause severe transient distortion resembling intermodulation distortion. These facts are well known among amplifier designers and have been discussed on a phenomenological basis (for instance [l]). They have not, however, received a. quantitative ,treatment except in some special cases [2], [3 I. Transient Signals in Amplifiers Sound in general, and especially music, consists largely of sudden variations. The steep rise portion of these transient signals can be approximated with a unit step function, provided that the transfer functions of the microphone and the amplifiers are considered separately. We may, therefore, divide the amplifier as in Fig. 1. A is the preamplifier including the microphone, C is the power amplifier, and B is the feedback loop around it. If resistive feedback is to be applied in the power amplifier, stability criteria necessitate its transfer function to have not more than two poles and a single zero in the usable frequency range. The transfer function without feedback can thus be approximated to be of the form F,(s) = d o SD? 1 (1) (s + wo)(s + 4 where A. is the midband gain without feedback, and w1 and w0 are the upper and lower cutoff angular frequencies, respectively. The transfer function of the signal source and the preamplifier can be arbitrary. Usually, however, it can be considered as having several poles and zeros, often multiple. In the following we will consider two special cases. Case a: The transfer function is flat in the midband and has a 12 dB per octave rolloff in both the high-frequency 234 lEEE TRANSACTIONS ON AUDIO AND ELECTROACOUSTICS VOL. AU-18, NO. 3 SEPTEMBER 1970 V1 A v 2 + v 3 0 C 1 ; O v4 Fig. 1. The analyzed circuit. A is the preamplifier which includes the transfer function of the signal source. 
B is the feedback path around the power amplifier C. Fig. 2. The preamplifier f equency response asymptotes used in the analysis. Asymptote o corresponds to a flot response and asymptote b corresponds to o cose where the high-frequency tone control has been turned to maximum. and the low-frequency ranges. This characteristic is shown in Fig. 2 with asymptote a. Case b. The transfer function in the low-frequency range is similar to Case a. A $6 dB/octave emphasis is applied in the high-frequency range starting at an angular frequency w4 and resulting in asymptote b in Fig. 2 . These two cases are, of course, arbitrary, but are con-. sidered as being representative: the first for the flat response case, and the second, for the worst case where the high-frequency tone control has been turned to maximum. The transfer functions of the preamplifier are then for Case a",
"title": ""
},
{
"docid": "aac360802c767fb9594e033341883578",
"text": "The protection mechanisms of computer systems control the access to objects, especially information objects. The range of responsibilities of these mechanisms includes at one extreme completely isolating executing programs from each other, and at the other extreme permitting complete cooperation and shared access among executing programs. Within this range one can identify at least seven levels at which protection mechanisms can be conceived as being required, each level being more difficult than its predecessor to implement:\n 1. No sharing at all (complete isolation).\n 2. Sharing copies of programs or data files.\n 3. Sharing originals of programs or data files.\n 4. Sharing programming systems or subsystems.\n 5. Permitting the cooperation of mutually suspicious subsystems---e.g., as with debugging or proprietary subsystems.\n 6. Providing \"memoryless\" subsystems---i.e., systems which, having performed their tasks, are guaranteed to have kept no secret record of the task performed (an income-tax computing service, for example, must be allowed to keep billing information on its use by customers but not to store information secretly on customers' incomes).\n 7. Providing \"certified\" subsystems---i.e., those whose correctness has been completely validated and is guaranteed a priori.",
"title": ""
},
{
"docid": "41cc4f54df2533897cc678db9818902b",
"text": "Financial statement fraud has reached the epidemic proportion globally. Recently, financial statement fraud has dominated the corporate news causing debacle at number of companies worldwide. In the wake of failure of many organisations, there is a dire need of prevention and detection of financial statement fraud. Prevention of financial statement fraud is a measure to stop its occurrence initially whereas detection means the identification of such fraud as soon as possible. Fraud detection is required only if prevention has failed. Therefore, a continuous fraud detection mechanism should be in place because management may be unaware about the failure of prevention mechanism. In this paper we propose a data mining framework for prevention and detection of financial statement fraud.",
"title": ""
},
{
"docid": "fec2b6b7cdef1ddf88dffd674fe7111a",
"text": "This paper introduces Dex, a reinforcement learning environment toolkit specialized for training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present the novel continual learning method of incremental learning, where a challenging environment is solved using optimal weight initialization learned from first solving a similar easier environment. We show that incremental learning can produce vastly superior results than standard methods by providing a strong baseline method across ten Dex environments. We finally develop a saliency method for qualitative analysis of reinforcement learning, which shows the impact incremental learning has on network attention.",
"title": ""
},
{
"docid": "10f1e89998a7e463f2996270099bebdc",
"text": "This paper proposes an effective algorithm for recognizing objects and accurately estimating their 6DOF pose in scenes acquired by a RGB-D sensor. The proposed method is based on a combination of different recognition pipelines, each exploiting the data in a diverse manner and generating object hypotheses that are ultimately fused together in an Hypothesis Verification stage that globally enforces geometrical consistency between model hypotheses and the scene. Such a scheme boosts the overall recognition performance as it enhances the strength of the different recognition pipelines while diminishing the impact of their specific weaknesses. The proposed method outperforms the state-of-the-art on two challenging benchmark datasets for object recognition comprising 35 object models and, respectively, 176 and 353 scenes.",
"title": ""
},
{
"docid": "a6b6fd9beb4e8d640e7afdd6086a2552",
"text": "Automatic and accurate estimation of disease severity is essential for food security, disease management, and yield loss prediction. Deep learning, the latest breakthrough in computer vision, is promising for fine-grained disease severity classification, as the method avoids the labor-intensive feature engineering and threshold-based segmentation. Using the apple black rot images in the PlantVillage dataset, which are further annotated by botanists with four severity stages as ground truth, a series of deep convolutional neural networks are trained to diagnose the severity of the disease. The performances of shallow networks trained from scratch and deep models fine-tuned by transfer learning are evaluated systemically in this paper. The best model is the deep VGG16 model trained with transfer learning, which yields an overall accuracy of 90.4% on the hold-out test set. The proposed deep learning model may have great potential in disease control for modern agriculture.",
"title": ""
},
{
"docid": "be1bfd488f90deca658937dd20ee0915",
"text": "This research examined the effects of hands-free cell phone conversations on simulated driving. The authors found that these conversations impaired driver's reactions to vehicles braking in front of them. The authors assessed whether this impairment could be attributed to a withdrawal of attention from the visual scene, yielding a form of inattention blindness. Cell phone conversations impaired explicit recognition memory for roadside billboards. Eye-tracking data indicated that this was due to reduced attention to foveal information. This interpretation was bolstered by data showing that cell phone conversations impaired implicit perceptual memory for items presented at fixation. The data suggest that the impairment of driving performance produced by cell phone conversations is mediated, at least in part, by reduced attention to visual inputs.",
"title": ""
},
{
"docid": "60f9aaa5e3814a9f41218255a17eab1d",
"text": "The constant demand to scale down transistors and improve device performance has led to material as well as process changes in the formation of IC interconnect. Traditionally, aluminum has been used to form the IC interconnects. The process involved subtractive etching of blanket aluminum as defined by the patterned photo resist. However, the scaling and performance demands have led to transition from Aluminum to Copper interconnects. The primary motivation behind the introduction of copper for forming interconnects is the advantages that copper offers over Aluminum. The table 1 below gives a comparison between Aluminum and Copper properties.",
"title": ""
}
] |
scidocsrr
|
99c0d8cba2df38cd4e9d6d5d27499dd5
|
An Analysis of Visual Question Answering Algorithms
|
[
{
"docid": "0a625d5f0164f7ed987a96510c1b6092",
"text": "We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method maps textual queries and visual features from various regions into a shared space where they are compared for relevance with an inner product. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the recently released VQA [1] dataset, which features free-form human-annotated questions and answers.",
"title": ""
},
{
"docid": "6a26a8a73aedda5d733ff90415707d75",
"text": "Visual question answering (VQA) tasks use two types of images: abstract (illustrations) and real. Domain-specific differences exist between the two types of images with respect to “objectness,” “texture,” and “color.” Therefore, achieving similar performance by applying methods developed for real images to abstract images, and vice versa, is difficult. This is a critical problem in VQA, because image features are crucial clues for correctly answering the questions about the images. However, an effective, domain-invariant method can provide insight into the high-level reasoning required for VQA. We thus propose a method called DualNet that demonstrates performance that is invariant to the differences in real and abstract scene domains. Experimental results show that DualNet outperforms state-of-the-art methods, especially for the abstract images category.",
"title": ""
},
{
"docid": "8328b1dd52bcc081548a534dc40167a3",
"text": "This work aims to address the problem of imagebased question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our model performs 1.8 times better than the only published results on an existing image QA dataset. We also present a question generation algorithm that converts image descriptions, which are widely available, into QA form. We used this algorithm to produce an order-of-magnitude larger dataset, with more evenly distributed answers. A suite of baseline results on this new dataset are also presented.",
"title": ""
}
] |
[
{
"docid": "aa0dc468b1b7402e9eb03848af31216e",
"text": "This paper discusses the construction of speech databases for research into speech information processing and describes a problem illustrated by the case of emotional speech synthesis. It introduces a project for the processing of expressive speech, and describes the data collection techniques and the subsequent analysis of supra-linguistic, and emotional features signalled in the speech. It presents annotation guidelines for distinguishing speaking-style differences, and argues that the focus of analysis for expressive speech processing applications should be on the speaker relationships (defined herein), rather than on emotions.",
"title": ""
},
{
"docid": "414160c5d5137def904c38cccc619628",
"text": "Side-channel attacks, particularly differential power analysis (DPA) attacks, are efficient ways to extract secret keys of the attacked devices by leaked physical information. To resist DPA attacks, hiding and masking methods are commonly used, but it usually resulted in high area overhead and performance degradation. In this brief, a DPA countermeasure circuit based on digital controlled ring oscillators is presented to efficiently resist the first-order DPA attack. The implementation of the critical S-box of the advanced encryption standard (AES) algorithm shows that the area overhead of a single S-box is about 19% without any extra delay in the critical path. Moreover, the countermeasure circuit can be mounted onto different S-box implementations based on composite field or look-up table (LUT). Based on our approach, a DPA-resistant AES chip can be proposed to maintain the same throughput with less than 2K extra gates.",
"title": ""
},
{
"docid": "97e33cc9da9cb944c27d93bb4c09ef3d",
"text": "Synchrophasor devices guarantee situation awareness for real-time monitoring and operational visibility of the smart grid. With their widespread implementation, significant challenges have emerged, especially in communication, data quality and cybersecurity. The existing literature treats these challenges as separate problems, when in reality, they have a complex interplay. This paper conducts a comprehensive review of quality and cybersecurity challenges for synchrophasors, and identifies the interdependencies between them. It also summarizes different methods used to evaluate the dependency and surveys how quality checking methods can be used to detect potential cyberattacks. In doing so, this paper serves as a starting point for researchers entering the fields of synchrophasor data analytics and security.",
"title": ""
},
{
"docid": "476f2a1970349b00ee296cf48aaf4983",
"text": "Web personalization systems are used to enhance the user experience by providing tailor-made services based on the user’s interests and preferences which are typically stored in user profiles. For such systems to remain effective, the profiles need to be able to adapt and reflect the users’ changing behaviour. In this paper, we introduce a set of methods designed to capture and track user interests and maintain dynamic user profiles within a personalization system. User interests are represented as ontological concepts which are constructed by mapping web pages visited by a user to a reference ontology and are subsequently used to learn short-term and long-term interests. A multi-agent system facilitates and coordinates the capture, storage, management and adaptation of user interests. We propose a search system that utilizes our dynamic user profile to provide a personalized search experience. We present a series of experiments that show how our system can effectively model a dynamic user profile and is capable of learning and adapting to different user browsing behaviours.",
"title": ""
},
{
"docid": "7d0b37434699aa5c3b36de33549a2b68",
"text": "In Ethiopia, malaria control has been complicated due to resistance of the parasite to the current drugs. Thus, new drugs are required against drug-resistant Plasmodium strains. Historically, many of the present antimalarial drugs were discovered from plants. This study was, therefore, conducted to document antimalarial plants utilized by Sidama people of Boricha District, Sidama Zone, South Region of Ethiopia. An ethnobotanical survey was carried out from September 2011 to February 2012. Data were collected through semistructured interview and field and market observations. Relative frequency of citation (RFC) was calculated and preference ranking exercises were conducted to estimate the importance of the reported medicinal plants in Boricha District. A total of 42 antimalarial plants belonging to 27 families were recorded in the study area. Leaf was the dominant plant part (59.0%) used in the preparation of remedies and oral (97.4%) was the major route of administration. Ajuga integrifolia scored the highest RFC value (0.80). The results of this study revealed the existence of rich knowledge on the use of medicinal plants in the study area to treat malaria. Thus, an attempt should be made to conserve and evaluate the claimed antimalarial medicinal plants with priority given to those that scored the highest RFC values.",
"title": ""
},
{
"docid": "deff50d73af79e57550016e8975de679",
"text": "The phase noise of a phase-locked loop (PLL) has a great impact on the performance of frequency-modulated continuous-wave (FMCW) radar. To examine the effects of the phase noise on FMCW radar performance, a model of an FMCW radar with a noisy PLL is developed. A filter-based technique for modeling the PLL phase noise is described. The radar model shows that PLL in-band phase noise affects the spatial resolution of the FMCW radar, whereas PLL out-of-band phase noise limits the maximum range. Finally, we propose a set of design constraints for PLL based on the model simulation results.",
"title": ""
},
{
"docid": "e913a4d2206be999f0278d48caa4708a",
"text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.",
"title": ""
},
{
"docid": "e8ff6978cae740152a918284ebe49fe3",
"text": "Cross-lingual sentiment classification aims to predict the sentiment orientation of a text in a language (named as the target language) with the help of the resources from another language (named as the source language). However, current cross-lingual performance is normally far away from satisfaction due to the huge difference in linguistic expression and social culture. In this paper, we suggest to perform active learning for cross-lingual sentiment classification, where only a small scale of samples are actively selected and manually annotated to achieve reasonable performance in a short time for the target language. The challenge therein is that there are normally much more labeled samples in the source language than those in the target language. This makes the small amount of labeled samples from the target language flooded in the aboundance of labeled samples from the source language, which largely reduces their impact on cross-lingual sentiment classification. To address this issue, we propose a data quality controlling approach in the source language to select high-quality samples from the source language. Specifically, we propose two kinds of data quality measurements, intraand extra-quality measurements, from the certainty and similarity perspectives. Empirical studies verify the appropriateness of our active learning approach to cross-lingual sentiment classification.",
"title": ""
},
{
"docid": "fe98f8e9f9fd864c9c94b861f2c1db70",
"text": "The importance of intellectual talent to achievement in all professional domains is well established, but less is known about other individual differences that predict success. The authors tested the importance of 1 noncognitive trait: grit. Defined as perseverance and passion for long-term goals, grit accounted for an average of 4% of the variance in success outcomes, including educational attainment among 2 samples of adults (N=1,545 and N=690), grade point average among Ivy League undergraduates (N=138), retention in 2 classes of United States Military Academy, West Point, cadets (N=1,218 and N=1,308), and ranking in the National Spelling Bee (N=175). Grit did not relate positively to IQ but was highly correlated with Big Five Conscientiousness. Grit nonetheless demonstrated incremental predictive validity of success measures over and beyond IQ and conscientiousness. Collectively, these findings suggest that the achievement of difficult goals entails not only talent but also the sustained and focused application of talent over time.",
"title": ""
},
{
"docid": "89d0ffd0b809acafda10a20bd5f35a77",
"text": "Microscopic analysis of erythrocytes in urine is a valuable diagnostic tool for identifying glomerular hematuria. Indicative of glomerular hematuria is the presence of erythrocyte casts and polyand dysmorphic erythrocytes. In contrast, in non-glomerular hematuria, urine sediment erythrocytes are monoand isomorphic, and erythrocyte casts are absent (1, 2) . To date, various variant forms of dysmorphic erythrocyte morphology have been defi ned and classifi ed. They are categorized as: D1, D2, and D3 cells (2) . D1 and D2 cells are also referred to as acanthocytes or G1 cells which are mickey mouse-like cells with membrane protrusions and severe (D1) to mild (D2) loss of cytoplasmic color (2) . D3 cells are doughnut-like or other polyand dysmorphic forms that include discocytes, knizocytes, anulocytes, stomatocytes, codocytes, and schizocytes (2, 3) . The cellular morphology of these cells is observed to have mild cytoplasmic loss, and symmetrical shaped membranes free of protrusions. Echinocytes and pseudo-acanthocytes (bite-cells) are not considered to be dysmorphic erythrocytes. Glomerular hematuria is likely if more than 40 % of erythrocytes are dysmorphic or 5 % are D1-D2 cells and nephrologic work-up should be considered (2) . For over 20 years, manual microscopy has been the prevailing technique for examining dysmorphic erythrocytes in urine sediments when glomerular pathology is suspected (4, 5) . This labor-intensive method requires signifi cant expertise and experience to ensure consistent and accurate analysis. A more immediate and defi nitive automated technique that classifi es dysmorphic erythrocytes at least as good as the manual method would be an invaluable asset in the routine clinical laboratory practice. Therefore, the aim of the study was to investigate the use of the Iris Diagnostics automated iQ200 (Instrumentation Laboratory, Brussels, Belgium) as an automated platform for screening of dysmorphic erythrocytes. The iQ200 has proven to be an effi cient and reliable asset for our urinalysis (5) , but has not been used for the quantifi cation of dysmorphic erythrocytes. In total, 207 urine specimens of patients with suspected glomerular pathology were initially examined using manual phase contrast microscopy by two independent experienced laboratory technicians at a university medical center. The same specimens were re-evaluated using the Iris iQ200 instrument at our facility, which is a teaching hospital. The accuracy of the iQ200 was compared to the results of manual microscopy for detecting dysmorphic erythrocytes. Urine samples were processed within 2 h of voiding. Upon receipt, uncentrifuged urine samples were used for strip analysis using the AutionMax Urine Analyzer (Menarini, Valkenswaard, The Netherlands). For analysis of dysmorphic erythrocytes 20 mL urine was fi xed with CellFIX TM (a formaldehyde containing fi xative solution; BD Biosciences, Breda, The Netherlands) at a dilution of 100:1 (6) . One half of fi xed urine was centrifuged at 500 × g for 10 min and the pellet analyzed by two independent experienced technicians using phase-contrast microscopy. The other half was analyzed by automated urine sediment analyzer using the iQ200. The iQ200 uses a fl ow cell that hydrodynamically orients the particles within the focal plane of a microscopic lens coupled to a 1.3 megapixel CCD digital camera. Each particle image is digitized and sent to the instrument processor. 
For our study, the instrument’s cell-recognition function for classifying erythrocytes was used. Although the iQ200 can easily recognize and classify normal erythrocytes, it cannot automatically classify dysmorphic erythrocytes. Instead, two independent and experienced technicians review the images in the categories ‘normal erythrocytes’ and ‘unclassified’ and reclassify dysmorphic erythrocytes to a separate ‘dysmorphic’ category.",
"title": ""
},
{
"docid": "786ef1b656c182ab71f7a63e7f263b3f",
"text": "The spectrum of a first-order sentence is the set of cardinalities of its finite models. This paper is concerned with spectra of sentences over languages that contain only unary function symbols. In particular, it is shown that a set S of natural numbers is the spectrum of a sentence over the language of one unary function symbol precisely if S is an eventually periodic set.",
"title": ""
},
{
"docid": "04d9f96fcd218e61f41412518c18cf31",
"text": "Squeak is an open, highly-portable Smalltalk implementation whose virtual machine is written entirely in Smalltalk, making it easy to. debug, analyze, and change. To achieve practical performance, a translator produces an equivalent C program whose performance is comparable to commercial Smalltalks.Other noteworthy aspects of Squeak include: a compact object format that typically requires only a single word of overhead per object; a simple yet efficient incremental garbage collector for 32-bit direct pointers; efficient bulk-mutation of objects; extensions of BitBlt to handle color of any depth and anti-aliased image rotation and scaling; and real-time sound and music synthesis written entirely in Smalltalk.",
"title": ""
},
{
"docid": "88ccacd6f14a9c00b54b8f465f3dfba0",
"text": "Autoencoders have been successful in learning meaningful representations from image datasets. However, their performance on text datasets has not been widely studied. Traditional autoencoders tend to learn possibly trivial representations of text documents due to their confoundin properties such as high-dimensionality, sparsity and power-law word distributions. In this paper, we propose a novel k-competitive autoencoder, called KATE, for text documents. Due to the competition between the neurons in the hidden layer, each neuron becomes specialized in recognizing specific data patterns, and overall the model can learn meaningful representations of textual data. A comprehensive set of experiments show that KATE can learn better representations than traditional autoencoders including denoising, contractive, variational, and k-sparse autoencoders. Our model also outperforms deep generative models, probabilistic topic models, and even word representation models (e.g., Word2Vec) in terms of several downstream tasks such as document classification, regression, and retrieval.",
"title": ""
},
{
"docid": "4d93be453dcb767faca082d966af5f3a",
"text": "This paper presents a unified variational formulation for joint object segmentation and stereo matching, which takes both accuracy and efficiency into account. In our approach, depth-map consists of compact objects, each object is represented through three different aspects: the perimeter in image space; the slanted object depth plane; and the planar bias, which is to add an additional level of detail on top of each object plane in order to model depth variations within an object. Compared with traditional high quality solving methods in low level, we use a convex formulation of the multilabel Potts Model with PatchMatch stereo techniques to generate depth-map at each image in object level and show that accurate multiple view reconstruction can be achieved with our formulation by means of induced homography without discretization or staircasing artifacts. Our model is formulated as an energy minimization that is optimized via a fast primal-dual algorithm, which can handle several hundred object depth segments efficiently. Performance evaluations in the Middlebury benchmark data sets show that our method outperforms the traditional integer-valued disparity strategy as well as the original PatchMatch algorithm and its variants in subpixel accurate disparity estimation. The proposed algorithm is also evaluated and shown to produce consistently good results for various real-world data sets (KITTI benchmark data sets and multiview benchmark data sets).",
"title": ""
},
{
"docid": "3f9ebd4116759203856e2387a4f91f4c",
"text": "Many real world stochastic control problems suffer from the “curse of dimensionality”. To overcome this difficulty, we develop a deep learning approach that directly solves high-dimensional stochastic control problems based on Monte-Carlo sampling. We approximate the time-dependent controls as feedforward neural networks and stack these networks together through model dynamics. The objective function for the control problem plays the role of the loss function for the deep neural network. We test this approach using examples from the areas of optimal trading and energy storage. Our results suggest that the algorithm presented here achieves satisfactory accuracy and at the same time, can handle rather high dimensional problems.",
"title": ""
},
{
"docid": "bc6c7fcd98160c48cd3b72abff8fad02",
"text": "A new concept of formality of linguistic expressions is introduced and argued to be the most important dimension of variation between styles or registers. Formality is subdivided into \"deep\" formality and \"surface\" formality. Deep formality is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. This is achieved by explicit and precise description of the elements of the context needed to disambiguate the expression. A formal style is characterized by detachment, accuracy, rigidity and heaviness; an informal style is more flexible, direct, implicit, and involved, but less informative. An empirical measure of formality, the F-score, is proposed, based on the frequencies of different word classes in the corpus. Nouns, adjectives, articles and prepositions are more frequent in formal styles; pronouns, adverbs, verbs and interjections are more frequent in informal styles. It is shown that this measure, though coarse-grained, adequately distinguishes more from less formal genres of language production, for some available corpora in Dutch, French, Italian, and English. A factor similar to the F-score automatically emerges as the most important one from factor analyses applied to extensive data in 7 different languages. Different situational and personality factors are examined which determine the degree of formality in linguistic expression. It is proposed that formality becomes larger when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated. Some empirical evidence and a preliminary theoretical explanation for these propositions is discussed. Short Abstract: The concept of \"deep\" formality is proposed as the most important dimension of variation between language registers or styles. It is defined as avoidance of ambiguity by minimizing the context-dependence and fuzziness of expressions. An empirical measure, the F-score, is proposed, based on the frequencies of different word classes. This measure adequately distinguishes different genres of language production using data for Dutch, French, Italian, and English. Factor analyses applied to data in 7 different languages produce a similar factor as the most important one. Both the data and the theoretical model suggest that formality increases when the distance in space, time or background between the interlocutors increases, and when the speaker is male, introverted or academically educated.",
"title": ""
},
{
"docid": "e81736e35fe06b0e7e15e61329c6f4c9",
"text": "Aphasia is an acquired communication disorder often resulting from stroke that can impact quality of life and may lead to high levels of stress and depression. Depression diagnosis in this population is often completed through subjective caregiver questionnaires. Stress diagnostic tests have not been modified for language difficulties. This work proposes to use speech analysis as an objective measure of stress and depression in patients with aphasia. Preliminary analysis used linear support vector regression models to predict depression scores and stress scores for a total of 19 and 18 participants respectively. Teager Energy Operator-Amplitude Modulation features performed the best in predicting the Perceived Stress Scale score based on various measures. The complications of speech in people with aphasia are examined and indicate the need for future work on this understudied population.",
"title": ""
},
{
"docid": "a0547eae9a2186d4c6f1b8307317f061",
"text": "Leadership scholars have called for additional research on leadership skill requirements and how those requirements vary by organizational level. In this study, leadership skill requirements are conceptualized as being layered (strata) and segmented (plex), and are thus described using a strataplex. Based on previous conceptualizations, this study proposes a model made up of four categories of leadership skill requirements: Cognitive skills, Interpersonal skills, Business skills, and Strategic skills. The model is then tested in a sample of approximately 1000 junior, midlevel, and senior managers, comprising a full career track in the organization. Findings support the “plex” element of the model through the emergence of four leadership skill requirement categories. Findings also support the “strata” portion of the model in that different categories of leadership skill requirements emerge at different organizational levels, and that jobs at higher levels of the organization require higher levels of all leadership skills. In addition, although certain Cognitive skill requirements are important across organizational levels, certain Strategic skill requirements only fully emerge at the highest levels in the organization. Thus a strataplex proved to be a valuable tool for conceptualizing leadership skill requirements across organizational levels. © 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e964a46706179a92b775307166a64c8a",
"text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).",
"title": ""
},
{
"docid": "c2571f794304a6b0efdc4fe22bac89e5",
"text": "PURPOSE\nThe aim of this study was to analyse the psychometric properties of the Portuguese version of the body image scale (BIS; Hopwood, P., Fletcher, I., Lee, A., Al Ghazal, S., 2001. A body image scale for use with cancer patients. European Journal of Cancer, 37, 189-197). This is a brief and psychometric robust measure of body image for use with cancer patients, independently of age, cancer type, treatment or stage of the disease and it was developed in collaboration with the European Organization for Research and Treatment of Cancer (EORTC) Quality of Life Study Group.\n\n\nMETHOD\nThe sample is comprised of 173 Portuguese postoperative breast cancer patients that completed a battery of measures that included the BIS and other scales of body image and quality of life, in order to explore its construct validity.\n\n\nRESULTS\nThe Portuguese version of BIS confirmed the original unidimensional structure and demonstrated adequate internal consistency, both in the global sample (alpha=.93) as in surgical subgroups (mastectomy=.92 and breast-conserving surgery=.93). Evidence for the construct validity was provided through moderate to largely sized correlations between the BIS and other related measures. In further support of its discriminant validity, significant differences in BIS scores were found between women who underwent mastectomy and those who underwent breast-conserving surgery, with the former presenting higher scores. Age and time since diagnosis were not associated with BIS scores.\n\n\nCONCLUSIONS\nThe Portuguese BIS proved to be a reliable and valid measure of body image concerns in a sample of breast cancer patients, allowing a brief and comprehensive assessment, both on clinical and research settings.",
"title": ""
}
] |
scidocsrr
|
1c134e6fa0f2c18e9624284fb32eda81
|
The Fallacy of the Net Promoter Score : Customer Loyalty Predictive Model
|
[
{
"docid": "7401c7f3a396a76e9a806863bef7ff7c",
"text": "Complexity surrounding the holistic nature of customer experience has made measuring customer perceptions of interactive service experiences, challenging. At the same time, advances in technology and changes in methods for collecting explicit customer feedback are generating increasing volumes of unstructured textual data, making it difficult for managers to analyze and interpret this information. Consequently, text mining, a method enabling automatic extraction of information from textual data, is gaining in popularity. However, this method has performed below expectations in terms of depth of analysis of customer experience feedback and accuracy. In this study, we advance linguistics-based text mining modeling to inform the process of developing an improved framework. The proposed framework incorporates important elements of customer experience, service methodologies and theories such as co-creation processes, interactions and context. This more holistic approach for analyzing feedback facilitates a deeper analysis of customer feedback experiences, by encompassing three value creation elements: activities, resources, and context (ARC). Empirical results show that the ARC framework facilitates the development of a text mining model for analysis of customer textual feedback that enables companies to assess the impact of interactive service processes on customer experiences. The proposed text mining model shows high accuracy levels and provides flexibility through training. As such, it can evolve to account for changing contexts over time and be deployed across different (service) business domains; we term it an “open learning” model. The ability to timely assess customer experience feedback represents a pre-requisite for successful co-creation processes in a service environment. Accepted as: Ordenes, F. V., Theodoulidis, B., Burton, J., Gruber, T., & Zaki, M. (2014). Analyzing Customer Experience Feedback Using Text Mining A Linguistics-Based Approach. Journal of Service Research, August, 17(3) 278-295.",
"title": ""
}
] |
[
{
"docid": "32f55ca936d96b92c1bf38d51cd183b3",
"text": "Traditionally, a Certification Authority (CA) is required to sign, manage, verify and revoke public key certificates. Multiple CAs together form the CA-based Public Key Infrastructure (PKI). The use of a PKI forces one to place trust in the CAs, which have proven to be a single point-of-failure on multiple occasions. Blockchain has emerged as a transformational technology that replaces centralized trusted third parties with a decentralized, publicly verifiable, peer-to-peer data store which maintains data integrity among nodes through various consensus protocols. In this paper, we deploy three blockchain-based alternatives to the CA-based PKI for supporting IoT devices, based on Emercoin Name Value Service (NVS), smart contracts by Ethereum blockchain, and Ethereum Light Sync client. We compare these approaches with CA-based PKI and show that they are much more efficient in terms of computational and storage requirements in addition to providing a more robust and scalable PKI.",
"title": ""
},
{
"docid": "48c28572e5eafda1598a422fa1256569",
"text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.",
"title": ""
},
{
"docid": "bc0294e230abff5c47d5db0d81172bbc",
"text": "Pulse radiolysis experiments were used to characterize the intermediates formed from ibuprofen during electron beam irradiation in a solution of 0.1mmoldm(-3). For end product characterization (60)Co γ-irradiation was used and the samples were evaluated either by taking their UV-vis spectra or by HPLC with UV or MS detection. The reactions of OH resulted in hydroxycyclohexadienyl type radical intermediates. The intermediates produced in further reactions hydroxylated the derivatives of ibuprofen as final products. The hydrated electron attacked the carboxyl group. Ibuprofen degradation is more efficient under oxidative conditions than under reductive conditions. The ecotoxicity of the solution was monitored by Daphnia magna standard microbiotest and Vibrio fischeri luminescent bacteria test. The toxic effect of the aerated ibuprofen solution first increased upon irradiation indicating a higher toxicity of the first degradation products, then decreased with increasing absorbed dose.",
"title": ""
},
{
"docid": "92625cb17367de65a912cb59ea767a19",
"text": "With the fast progression of data exchange in electronic way, information security is becoming more important in data storage and transmission. Because of widely using images in industrial process, it is important to protect the confidential image data from unauthorized access. In this paper, we analyzed current image encryption algorithms and compression is added for two of them (Mirror-like image encryption and Visual Cryptography). Implementations of these two algorithms have been realized for experimental purposes. The results of analysis are given in this paper. Keywords—image encryption, image cryptosystem, security, transmission.",
"title": ""
},
{
"docid": "b67fadb3f5dca0e74bebc498260f99a4",
"text": "The interactive computation paradigm is reviewed and a particular example is extended to form the stochastic analog of a computational process via a transcription of a minimal Turing Machine into an equivalent asynchronous Cellular Automaton with an exponential waiting times distribution of effective transitions. Furthermore, a special toolbox for analytic derivation of recursive relations of important statistical and other quantities is introduced in the form of an Inductive Combinatorial Hierarchy.",
"title": ""
},
{
"docid": "ff9ca485a07dca02434396eca0f0c94f",
"text": "Clustering is a NP-hard problem that is used to find the relationship between patterns in a given set of patterns. It is an unsupervised technique that is applied to obtain the optimal cluster centers, especially in partitioned based clustering algorithms. On the other hand, cat swarm optimization (CSO) is a new metaheuristic algorithm that has been applied to solve various optimization problems and it provides better results in comparison to other similar types of algorithms. However, this algorithm suffers from diversity and local optima problems. To overcome these problems, we are proposing an improved version of the CSO algorithm by using opposition-based learning and the Cauchy mutation operator. We applied the opposition-based learning method to enhance the diversity of the CSO algorithm and we used the Cauchy mutation operator to prevent the CSO algorithm from trapping in local optima. The performance of our proposed algorithm was tested with several artificial and real datasets and compared with existing methods like K-means, particle swarm optimization, and CSO. The experimental results show the applicability of our proposed method.",
"title": ""
},
{
"docid": "f60e01205f1760c3aac261a05dfd7695",
"text": "The recommendation system is one of the core technologies for implementing personalization services. Recommendation systems in ubiquitous computing environment should have the capability of context-awareness. In this research, we developed a music recommendation system, which we shall call C_Music, which utilizes not only the user’s demographics and behavioral patterns but also the user’s context. For a specific user in a specific context, the C_Music recommends the music that the similar users listened most in the similar context. In evaluating the performance of C_Music using a real world data, it outperforms the comparative system that utilizes the user’s demographics and behavioral patterns only.",
"title": ""
},
{
"docid": "dcc55431a2da871c60abfd53ce270bad",
"text": "Synchrophasor Standards have evolved since the introduction of the first one, IEEE Standard 1344, in 1995. IEEE Standard C37.118-2005 introduced measurement accuracy under steady state conditions as well as interference rejection. In 2009, the IEEE started a joint project with IEC to harmonize real time communications in IEEE Standard C37.118-2005 with the IEC 61850 communication standard. These efforts led to the need to split the C37.118 into 2 different standards: IEEE Standard C37.118.1-2011 that now includes performance of synchrophasors under dynamic systems conditions; and IEEE Standard C37.118.2-2011 Synchrophasor Data Transfer for Power Systems, the object of this paper.",
"title": ""
},
{
"docid": "3371fe8778b813360debc384040c510e",
"text": "Medication non-adherence is a major concern in the healthcare industry and has led to increases in health risks and medical costs. For many neurological diseases, adherence to medication regimens can be assessed by observing movement patterns. However, physician observations are typically assessed based on visual inspection of movement and are limited to clinical testing procedures. Consequently, medication adherence is difficult to measure when patients are away from the clinical setting. The authors propose a data mining driven methodology that uses low cost, non-wearable multimodal sensors to model and predict patients' adherence to medication protocols, based on variations in their gait. The authors conduct a study involving Parkinson's disease patients that are \"on\" and \"off\" their medication in order to determine the statistical validity of the methodology. The data acquired can then be used to quantify patients' adherence while away from the clinic. Accordingly, this data-driven system may allow for early warnings regarding patient safety. Using whole-body movement data readings from the patients, the authors were able to discriminate between PD patients on and off medication, with accuracies greater than 97% for some patients using an individually customized model and accuracies of 78% for a generalized model containing multiple patient gait data. The proposed methodology and study demonstrate the potential and effectiveness of using low cost, non-wearable hardware and data mining models to monitor medication adherence outside of the traditional healthcare facility. These innovations may allow for cost effective, remote monitoring of treatment of neurological diseases.",
"title": ""
},
{
"docid": "216698730aa68b3044f03c64b77e0e62",
"text": "Portable biomedical instrumentation has become an important part of diagnostic and treatment instrumentation. Low-voltage and low-power tendencies prevail. A two-electrode biopotential amplifier, designed for low-supply voltage (2.7–5.5 V), is presented. This biomedical amplifier design has high differential and sufficiently low common mode input impedances achieved by means of positive feedback, implemented with an original interface stage. The presented circuit makes use of passive components of popular values and tolerances. The amplifier is intended for use in various two-electrode applications, such as Holter monitors, external defibrillators, ECG monitors and other heart beat sensing biomedical devices.",
"title": ""
},
{
"docid": "dce032d1568e8012053de20fa7063c25",
"text": "Radial visualization continues to be a popular design choice in information visualization systems, due perhaps in part to its aesthetic appeal. However, it is an open question whether radial visualizations are truly more effective than their Cartesian counterparts. In this paper, we describe an initial user trial from an ongoing empirical study of the SQiRL (Simple Query interface with a Radial Layout) visualization system, which supports both radial and Cartesian projections of stacked bar charts. Participants were shown 20 diagrams employing a mixture of radial and Cartesian layouts and were asked to perform basic analysis on each. The participants' speed and accuracy for both visualization types were recorded. Our initial findings suggest that, in spite of the widely perceived advantages of Cartesian visualization over radial visualization, both forms of layout are, in fact, equally usable. Moreover, radial visualization may have a slight advantage over Cartesian for certain tasks. In a follow-on study, we plan to test users' ability to create, as well as read and interpret, radial and Cartesian diagrams in SQiRL.",
"title": ""
},
{
"docid": "b151343a4c1e365ede70a71880065aab",
"text": "Cardiovascular disease (CVD) and depression are common. Patients with CVD have more depression than the general population. Persons with depression are more likely to eventually develop CVD and also have a higher mortality rate than the general population. Patients with CVD, who are also depressed, have a worse outcome than those patients who are not depressed. There is a graded relationship: the more severe the depression, the higher the subsequent risk of mortality and other cardiovascular events. It is possible that depression is only a marker for more severe CVD which so far cannot be detected using our currently available investigations. However, given the increased prevalence of depression in patients with CVD, a causal relationship with either CVD causing more depression or depression causing more CVD and a worse prognosis for CVD is probable. There are many possible pathogenetic mechanisms that have been described, which are plausible and that might well be important. However, whether or not there is a causal relationship, depression is the main driver of quality of life and requires prevention, detection, and management in its own right. Depression after an acute cardiac event is commonly an adjustment disorder than can improve spontaneously with comprehensive cardiac management. Additional management strategies for depressed cardiac patients include cardiac rehabilitation and exercise programmes, general support, cognitive behavioural therapy, antidepressant medication, combined approaches, and probably disease management programmes.",
"title": ""
},
{
"docid": "e45c07c42c1a7f235dd5cb511c131d30",
"text": "This paper is about mapping images to continuous output spaces using powerful Bayesian learning techniques. A sparse, semi-supervised Gaussian process regression model (S3GP) is introduced which learns a mapping using only partially labelled training data. We show that sparsity bestows efficiency on the S3GP which requires minimal CPU utilization for real-time operation; the predictions of uncertainty made by the S3GP are more accurate than those of other models leading to considerable performance improvements when combined with a probabilistic filter; and the ability to learn from semi-supervised data simplifies the process of collecting training data. The S3GP uses a mixture of different image features: this is also shown to improve the accuracy and consistency of the mapping. A major application of this work is its use as a gaze tracking system in which images of a human eye are mapped to screen coordinates: in this capacity our approach is efficient, accurate and versatile.",
"title": ""
},
{
"docid": "637ca0ccdc858c9e84ffea1bd3531024",
"text": "We propose a method to facilitate search through the storyline of TV series episodes. To this end, we use human written, crowdsourced descriptions—plot synopses—of the story conveyed in the video. We obtain such synopses from websites such as Wikipedia and propose various methods to align each sentence of the plot to shots in the video. Thus, the semantic story-based video retrieval problem is transformed into a much simpler text-based search. Finally, we return the set of shots aligned to the sentences as the video snippet corresponding to the query. The alignment is performed by first computing a similarity score between every shot and sentence through cues such as character identities and keyword matches between plot synopses and subtitles. We then formulate the alignment as an optimization problem and solve it efficiently using dynamic programming. We evaluate our methods on the fifth season of a TV series Buffy the Vampire Slayer and show encouraging results for both the alignment and the retrieval of story events.",
"title": ""
},
{
"docid": "b7851d3e08d29d613fd908d930afcd6b",
"text": "Word sense embeddings represent a word sense as a low-dimensional numeric vector. While this representation is potentially useful for NLP applications, its interpretability is inherently limited. We propose a simple technique that improves interpretability of sense vectors by mapping them to synsets of a lexical resource. Our experiments with AdaGram sense embeddings and BabelNet synsets show that it is possible to retrieve synsets that correspond to automatically learned sense vectors with Precision of 0.87, Recall of 0.42 and AUC of 0.78.",
"title": ""
},
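The passage above links automatically learned sense embeddings to synsets of a lexical resource. As a rough illustration of that idea (not the paper's exact AdaGram/BabelNet procedure), the Python sketch below scores each sense vector against synset centroids built from pretrained word vectors and keeps the best match above a similarity threshold; all inputs and the threshold value are hypothetical placeholders.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def link_senses_to_synsets(sense_vectors, synset_lemmas, word_vectors, threshold=0.3):
    """Map each learned sense vector to the best-matching synset.

    sense_vectors: dict  sense_id  -> np.ndarray
    synset_lemmas: dict  synset_id -> list of lemma strings
    word_vectors:  dict  word      -> np.ndarray (same dimensionality)
    """
    # Represent every synset by the centroid of its lemmas' word vectors.
    synset_vecs = {}
    for syn_id, lemmas in synset_lemmas.items():
        vecs = [word_vectors[w] for w in lemmas if w in word_vectors]
        if vecs:
            synset_vecs[syn_id] = np.mean(vecs, axis=0)

    mapping = {}
    for sense_id, s_vec in sense_vectors.items():
        scored = [(cosine(s_vec, v), syn_id) for syn_id, v in synset_vecs.items()]
        best_score, best_syn = max(scored)
        # Leave the sense unlinked if nothing is similar enough.
        mapping[sense_id] = best_syn if best_score >= threshold else None
    return mapping
```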
{
"docid": "e9f9a7c506221bacf966808f54c4f056",
"text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.",
"title": ""
},
{
"docid": "282480e24a35a922a6498dbf88f34603",
"text": "BACKGROUND\nThere is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes.\n\n\nMETHODS\nThe DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software.\n\n\nRESULTS\nTo achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items.\n\n\nCONCLUSION\nThe results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.",
"title": ""
},
{
"docid": "7b6c039783091260cee03704ce9748d8",
"text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.",
"title": ""
},
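The passage above outlines an algorithm that compares noisy query answers against a noisy threshold and stops after s above-threshold (⊤) outputs. The sketch below is a minimal sparse-vector-style rendering of that loop with Laplace noise; the particular noise scales and privacy-budget split are a common textbook choice, assumed here rather than taken from the paper.

```python
import numpy as np

def noisy_above_threshold(queries, S, tau, delta, eps, s):
    """Report, per query, whether its noisy value exceeds a noisy threshold.

    queries : list of functions f_k(S), each with sensitivity `delta`
    tau     : public threshold;  s : stop after s 'above' (⊤) answers
    """
    rng = np.random.default_rng()
    answers = []
    # One common split: part of the budget perturbs the threshold, part the answers.
    noisy_tau = tau + rng.laplace(scale=2.0 * delta / eps)
    hits = 0
    for f in queries:
        noisy_val = f(S) + rng.laplace(scale=4.0 * s * delta / eps)
        if noisy_val > noisy_tau:
            answers.append("TOP")          # corresponds to the ⊤ output
            hits += 1
            if hits >= s:                  # terminate after s ⊤ outputs
                break
        else:
            answers.append("BOT")
    return answers
```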
{
"docid": "21e536e7197ad878db7938c636d1640b",
"text": "The Cloud computing has become the fast spread in the field of computing, research and industry in the last few years. As part of the service offered, there are new possibilities to build applications and provide various services to the end user by virtualization through the internet. Task scheduling is the most significant matter in the cloud computing because the user has to pay for resource using on the basis of time, which acts to distribute the load evenly among the system resources by maximizing utilization and reducing task execution Time. Many heuristic algorithms have been existed to resolve the task scheduling problem such as a Particle Swarm Optimization algorithm (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO) and Cuckoo search (CS) algorithms, etc. In this paper, a Dynamic Adaptive Particle Swarm Optimization algorithm (DAPSO) has been implemented to enhance the performance of the basic PSO algorithm to optimize the task runtime by minimizing the makespan of a particular task set, and in the same time, maximizing resource utilization. Also, .a task scheduling algorithm has been proposed to schedule the independent task over the Cloud Computing. The proposed algorithm is considered an amalgamation of the Dynamic PSO (DAPSO) algorithm and the Cuckoo search (CS) algorithm; called MDAPSO. According to the experimental results, it is found that MDAPSO and DAPSO algorithms outperform the original PSO algorithm. Also, a comparative study has been done to evaluate the performance of the proposed MDAPSO with respect to the original PSO.",
"title": ""
},
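The passage above minimizes makespan for independent cloud tasks with PSO variants. Below is a compact, generic PSO sketch for the task-to-VM assignment problem; it is not the proposed DAPSO/MDAPSO algorithm, and the inertia and acceleration coefficients are illustrative defaults.

```python
import numpy as np

def makespan(assignment, task_len, vm_speed):
    # Finish time of the busiest VM under a given task-to-VM assignment.
    load = np.zeros(len(vm_speed))
    for t, vm in enumerate(assignment):
        load[vm] += task_len[t] / vm_speed[vm]
    return load.max()

def pso_schedule(task_len, vm_speed, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n_tasks, n_vms = len(task_len), len(vm_speed)
    # Continuous positions in [0, n_vms); truncating gives a discrete assignment.
    x = rng.uniform(0, n_vms, size=(n_particles, n_tasks))
    v = np.zeros_like(x)
    decode = lambda pos: np.clip(pos.astype(int), 0, n_vms - 1)
    pbest = x.copy()
    pbest_cost = np.array([makespan(decode(p), task_len, vm_speed) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, n_vms - 1e-9)
        cost = np.array([makespan(decode(p), task_len, vm_speed) for p in x])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = x[better], cost[better]
        g = pbest[pbest_cost.argmin()].copy()
    return decode(g), pbest_cost.min()

# Toy usage: 6 task lengths and 3 VM speeds (made-up numbers).
tasks = np.array([10, 25, 5, 40, 15, 30], dtype=float)
vms = np.array([1.0, 2.0, 4.0])
assignment, best = pso_schedule(tasks, vms)
print(assignment, round(best, 2))
```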
{
"docid": "ad854ceb89e437ca59099453db33fa41",
"text": "Semi-supervised learning has recently emerged as a new paradigm in the machine learning community. It aims at exploiting simultaneously labeled and unlabeled data for classification. We introduce here a new semi-supervised algorithm. Its originality is that it relies on a discriminative approach to semisupervised learning rather than a generative approach, as it is usually the case. We present in details this algorithm for a logistic classifier and show that it can be interpreted as an instance of the Classification Expectation Maximization algorithm. We also provide empirical results on two data sets for sentence classification tasks and analyze the behavior of our methods.",
"title": ""
}
] |
scidocsrr
|
6ebfa259ce68060dd4a8057689f40df1
|
Linear Algebraic Structure of Word Senses, with Applications to Polysemy
|
[
{
"docid": "fe99cf42e35cc0b7523247e126f3d8a3",
"text": "Current distributed representations of words show little resemblance to theories of lexical semantics. The former are dense and uninterpretable, the latter largely based on familiar, discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We propose methods that transform word vectors into sparse (and optionally binary) vectors. The resulting representations are more similar to the interpretable features typically used in NLP, though they are discovered automatically from raw corpora. Because the vectors are highly sparse, they are computationally easy to work with. Most importantly, we find that they outperform the original vectors on benchmark tasks.",
"title": ""
}
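The passage above turns dense word vectors into sparse, optionally binary vectors. As a generic stand-in for that transformation (not the authors' exact method or hyperparameters), the sketch below sparse-codes an embedding matrix with scikit-learn dictionary learning and then binarizes the codes.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def sparsify_embeddings(X, n_atoms=100, alpha=1.0, binary=False, seed=0):
    """X: (n_words, dim) dense embeddings -> (n_words, n_atoms) sparse codes."""
    learner = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=20,
                                 transform_algorithm="lasso_lars", random_state=seed)
    codes = learner.fit_transform(X)            # mostly zeros
    if binary:
        codes = (codes != 0).astype(np.int8)    # keep only which atoms fire
    return codes

# Toy usage with random stand-in "embeddings".
X = np.random.randn(300, 50)
S = sparsify_embeddings(X)
print(S.shape, float((S != 0).mean()))          # sparsity of the new representation
```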
] |
[
{
"docid": "b87cf41b31b8d163d6e44c9b1fa68cae",
"text": "This paper gives a security analysis of Microsoft's ASP.NET technology. The main part of the paper is a list of threats which is structured according to an architecture of Web services and attack points. We also give a reverse table of threats against security requirements as well as a summary of security guidelines for IT developers. This paper has been worked out in collaboration with five University teams each of which is focussing on a different security problem area. We use the same architecture for Web services and attack points.",
"title": ""
},
{
"docid": "49fed572de904ac3bb9aab9cdc874cc6",
"text": "Factorized Hidden Layer (FHL) adaptation has been proposed for speaker adaptation of deep neural network (DNN) based acoustic models. In FHL adaptation, a speaker-dependent (SD) transformation matrix and an SD bias are included in addition to the standard affine transformation. The SD transformation is a linear combination of rank-1 matrices whereas the SD bias is a linear combination of vectors. Recently, the Long ShortTerm Memory (LSTM) Recurrent Neural Networks (RNNs) have shown to outperform DNN acoustic models in many Automatic Speech Recognition (ASR) tasks. In this work, we investigate the effectiveness of SD transformations for LSTM-RNN acoustic models. Experimental results show that when combined with scaling of LSTM cell states’ outputs, SD transformations achieve 2.3% and 2.1% absolute improvements over the baseline LSTM systems for the AMI IHM and AMI SDM tasks respectively.",
"title": ""
},
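The passage above forms the speaker-dependent (SD) transformation as a linear combination of rank-1 matrices, plus an SD bias that is a linear combination of vectors. The NumPy lines below show that construction for a single hidden layer; the dimensions and the per-speaker weights are made-up placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, n_bases = 512, 10

# Speaker-independent affine transform of a hidden layer.
W = rng.standard_normal((hidden_dim, hidden_dim)) * 0.01
b = np.zeros(hidden_dim)

# Bases shared across speakers: n_bases rank-1 matrices u_k v_k^T and bias vectors.
U = rng.standard_normal((n_bases, hidden_dim)) * 0.01
V = rng.standard_normal((n_bases, hidden_dim)) * 0.01
bias_bases = rng.standard_normal((n_bases, hidden_dim)) * 0.01

# Per-speaker interpolation weights (these would be learned during adaptation).
d = rng.standard_normal(n_bases)

# SD transformation: sum_k d_k * u_k v_k^T, folded into the SI weight matrix.
M_sd = np.einsum("k,ki,kj->ij", d, U, V)
b_sd = bias_bases.T @ d

def adapted_layer(h):
    # Affine transform with the speaker-dependent correction added in.
    return np.tanh((W + M_sd) @ h + b + b_sd)

print(adapted_layer(rng.standard_normal(hidden_dim)).shape)
```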
{
"docid": "aeda16415cb3414745493f1c356ffd99",
"text": "Recent estimates based on the 1991 census (Schuring 1993) indicate that approximately 45 per cent of the South African population have a speaking knowledge of English (the majority of the population speaking an African language, such as Zulu, Xhosa, Tswana, or Venda, as home language). The number of individuals who cite English as a home language appears to be, however, only about 10 per cent of the population. Of this figure it would seem that at least one in three English-speakers come from ethnic groups other than the white one (in proportionally descending order, from the South African Indian, Coloured, and Black ethnic groups). This figure has shown some increase in recent years.",
"title": ""
},
{
"docid": "6a9e30fd08b568ef6607158cab4f82b2",
"text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.",
"title": ""
},
{
"docid": "a9ac1250c9be5c7f95086f82251d5157",
"text": "In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without using a calibration device, but by enforcing simple constraints on the camera parameters. In the absence of information about the internal camera parameters such as the focal length and the principal point, the knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of the autocalibration of a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that only requires 5 cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. To this purpose, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting in the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.",
"title": ""
},
{
"docid": "bd960da75daf8c268d4def33ada5964d",
"text": "(SCADA), have lately gained the attention of IT security researchers as critical components of modern industrial infrastructure. One main reason for this attention is that ICS have not been built with security in mind and are thus particularly vulnerable when they are connected to computer networks and the Internet. ICS consists of SCADA, Programmable Logic Controller (PLC), Human-Machine Interfaces (HMI), sensors, and actuators such as motors. These components are connected to each other over fieldbus or IP-based protocols. In this thesis, we have developed methods and tools for assessing the security of ICSs. By applying the STRIDE threat modeling methodology, we have conducted a high level threat analysis of ICSs. Based on the threat analysis, we created security analysis guidelines for Industrial Control System devices. These guidelines can be applied to many ICS devices and are mostly vendor independent. Moreover, we have integrated support for Modbus/TCP in the Scapy packet manipulation library, which can be used for robustness testing of ICS software. In a case study, we applied our security-assessment methodology to a detailed security analysis of a demonstration ICS, consisting of current products. As a result of the analysis, we discovered several security weaknesses. Most of the discovered vulnerabilities were common IT security problems, such as web-application and software-update issues, but some are specific to ICS. For example, we show how the data visualized by the Human-Machine Interface can be altered and modified without limit. Furthermore, sensor data, such as temperature values, can be spoofed within the PLC. Moreover, we show that input validation is critical for security also in the ICS world. Thus, we disclose several security vulnerabilities in production devices. However, in the interest of responsible disclosure of security flaws, the most severe security flaws found are not detailed in the thesis. Our analysis guidelines and the case study provide a basis for conducting vulnerability assessment on further ICS devices and entire systems. In addition, we briefly describe existing solutions for securing ICSs. Acknowledgements I would like to thank Nixu Oy and the colleagues (especially Lauri Vuornos, Juhani Mäkelä and Michael Przybilski) for making it possible to conduct my thesis on Industrial Control Systems. The industrial environment enabled us to take advantage of the research and to apply it to practical projects. Moreover, without the help and involvement of Schneider Electric such an applied analysis would not have been possible. Furthermore, I would like to thank Tuomas …",
"title": ""
},
{
"docid": "554fc3e28147738a9faa80f593ffe9df",
"text": "The issue of cyberbullying is a social concern that has arisen due to the prevalent use of computer technology today. In this paper, we present a multi-faceted solution to mitigate the effects of cyberbullying, one that uses computer technology in order to combat the problem. We propose to provide assistance for various groups affected by cyberbullying (the bullied and the bully, both). Our solution was developed through a series of group projects and includes i) technology to detect the occurrence of cyberbullying ii) technology to enable reporting of cyberbullying iii) proposals to integrate third-party assistance when cyberbullying is detected iv) facilities for those with authority to manage online social networks or to take actions against detected bullies. In all, we demonstrate how this important social problem which arises due to computer technology can also leverage computer technology in order to take steps to better cope with the undesirable effects that have arisen.",
"title": ""
},
{
"docid": "6ddf62a60b0d56c76b54ca6cd0b28ab9",
"text": "Improvement of vehicle safety performance is one of the targets of ITS development. A pre-crash safety system has been developed that utilizes ITS technologies. The Pre-crash Safety system reduces collision injury by estimating TTC(time-tocollision) to preemptively activate safety devices, which consist of “Pre-crash Seatbelt” system and “Pre-crash Brake Assist” system. The key technology of these systems is a “Pre-crash Sensor” to detect obstacles and estimate TTC. In this paper, the Pre-crash Sensor is presented. The Pre-crash Sensor uses millimeter-wave radar to detect preceding vehicles, oncoming vehicles, roadside objects, etc. on the road ahead. Furthermore, by using a phased array system as a vehicle radar for the first time, a compact electronically scanned millimeter-wave radar with high recognition performance has been achieved. With respect to the obstacle determination algorithm, a crash determination algorithm has been newly developed, taking into account estimation of the direction of advance of the vehicle, in addition to the distance, relative speed and direction of the object.",
"title": ""
},
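The passage above estimates TTC from radar measurements to pre-arm the seatbelt and brake-assist functions. The snippet below shows the elementary TTC computation and a threshold check; the threshold values are invented for illustration and are not the system's actual calibration.

```python
def time_to_collision(range_m, closing_speed_mps):
    """TTC in seconds; returns None when the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

def precrash_decision(range_m, closing_speed_mps,
                      seatbelt_ttc=1.5, brake_assist_ttc=0.8):
    ttc = time_to_collision(range_m, closing_speed_mps)
    actions = []
    if ttc is not None and ttc < seatbelt_ttc:
        actions.append("pretension seatbelt")
    if ttc is not None and ttc < brake_assist_ttc:
        actions.append("arm brake assist")
    return ttc, actions

print(precrash_decision(range_m=20.0, closing_speed_mps=15.0))  # TTC ~ 1.33 s
```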
{
"docid": "13ee1c00203fd12486ee84aa4681dc60",
"text": "Mobile crowdsensing has emerged as an efficient sensing paradigm which combines the crowd intelligence and the sensing power of mobile devices, e.g., mobile phones and Internet of Things (IoT) gadgets. This article addresses the contradicting incentives of privacy preservation by crowdsensing users and accuracy maximization and collection of true data by service providers. We firstly define the individual contributions of crowdsensing users based on the accuracy in data analytics achieved by the service provider from buying their data. We then propose a truthful mechanism for achieving high service accuracy while protecting the privacy based on the user preferences. The users are incentivized to provide true data by being paid based on their individual contribution to the overall service accuracy. Moreover, we propose a coalition strategy which allows users to cooperate in providing their data under one identity, increasing their anonymity privacy protection, and sharing the resulting payoff. Finally, we outline important open research directions in mobile and people-centric crowdsensing.",
"title": ""
},
{
"docid": "bd7a011f47fd48e19e2bbdb2f426ae1d",
"text": "In social networks, link prediction predicts missing links in current networks and new or dissolution links in future networks, is important for mining and analyzing the evolution of social networks. In the past decade, many works have been done about the link prediction in social networks. The goal of this paper is to comprehensively review, analyze and discuss the state-of-the-art of the link prediction in social networks. A systematical category for link prediction techniques and problems is presented. Then link prediction techniques and problems are analyzed and discussed. Typical applications of link prediction are also addressed. Achievements and roadmaps of some active research groups are introduced. Finally, some future challenges of the link prediction in social networks are discussed. 对社交网络中的链接预测研究现状进行系统回顾、分析和讨论, 并指出未来研究挑战. 在动态社交网络中, 链接预测是挖掘和分析网络演化的一项重要任务, 其目的是预测当前未知的链接以及未来链接的变化. 过去十余年中, 在社交网络链接预测问题上已有大量研究工作. 本文旨在对该问题的研究现状和趋势进行全面回顾、分析和讨论. 提出一种分类法组织链接预测技术和问题. 详细分析和讨论了链接预测的技术、问题和应用. 介绍了该问题的活跃研究组. 分析和讨论了社交网络链接预测研究的未来挑战.",
"title": ""
},
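The survey above covers link prediction techniques for social networks. As a concrete example of the classical similarity-based family it reviews, the snippet below scores currently unlinked node pairs with common-neighbour and Adamic-Adar indices using NetworkX; the example graph is just a built-in toy network.

```python
import networkx as nx

G = nx.karate_club_graph()                 # small social network for illustration

# Candidate pairs: nodes that are not currently linked.
candidates = list(nx.non_edges(G))

cn_scores = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in candidates}
aa_scores = {(u, v): s for u, v, s in nx.adamic_adar_index(G, candidates)}

# Predict the 5 most likely missing/future links under each index.
top_cn = sorted(cn_scores, key=cn_scores.get, reverse=True)[:5]
top_aa = sorted(aa_scores, key=aa_scores.get, reverse=True)[:5]
print("common neighbours:", top_cn)
print("Adamic-Adar      :", top_aa)
```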
{
"docid": "1efdb6ff65c1aa8f8ecb95b4d466335f",
"text": "This paper provides a linguistic and pragmatic analysis of the phenomenon of irony in order to represent how Twitter’s users exploit irony devices within their communication strategies for generating textual contents. We aim to measure the impact of a wide-range of pragmatic phenomena in the interpretation of irony, and to investigate how these phenomena interact with contexts local to the tweet. Informed by linguistic theories, we propose for the first time a multi-layered annotation schema for irony and its application to a corpus of French, English and Italian tweets.We detail each layer, explore their interactions, and discuss our results according to a qualitative and quantitative perspective.",
"title": ""
},
{
"docid": "b495407cb455186ecad9a45aa88ec509",
"text": "This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is also described, along with an extensive list of open research problems. This research is sponsored by by DARPA’s MARS Program (Contract number N66001-01-C-6018) and the National Science Foundation (CAREER grant number IIS-9876136 and regular grant number IIS-9877033), all of which is gratefully acknowledged. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of the United States Government or any of the sponsoring institutions.",
"title": ""
},
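The article above surveys probabilistic techniques for robotic mapping. One of the simplest members of that family is the occupancy grid updated in log-odds form from range measurements; the sketch below shows only that update rule, with invented sensor probabilities.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, shape, p_hit=0.7, p_miss=0.4):
        self.L = np.zeros(shape)                 # log-odds, 0 == unknown (p = 0.5)
        self.l_hit, self.l_miss = logodds(p_hit), logodds(p_miss)

    def update(self, hit_cells, miss_cells):
        # Bayes update in log-odds form: add evidence for occupied / free cells.
        for r, c in hit_cells:
            self.L[r, c] += self.l_hit
        for r, c in miss_cells:
            self.L[r, c] += self.l_miss

    def probabilities(self):
        return 1.0 - 1.0 / (1.0 + np.exp(self.L))

grid = OccupancyGrid((10, 10))
grid.update(hit_cells=[(5, 5)], miss_cells=[(5, 1), (5, 2), (5, 3), (5, 4)])
print(grid.probabilities()[5, :6].round(2))
```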
{
"docid": "194db5da505acab27bbe14232b255d09",
"text": "Latent Dirichlet allocation defines hidden topics to capture latent semantics in text documents. However, it assumes that all the documents are represented by the same topics, resulting in the “forced topic” problem. To solve this problem, we developed a group latent Dirichlet allocation (GLDA). GLDA uses two kinds of topics: local topics and global topics. The highly related local topics are organized into groups to describe the local semantics, whereas the global topics are shared by all the documents to describe the background semantics. GLDA uses variational inference algorithms for both offline and online data. We evaluated the proposed model for topic modeling and document clustering. Our experimental results indicated that GLDA can achieve a competitive performance when compared with state-of-the-art approaches.",
"title": ""
},
{
"docid": "09b273c9e77f6fc1b2de20f50227c44d",
"text": "Age and gender are complementary soft biometric traits for face recognition. Successful estimation of age and gender from facial images taken under real-world conditions can contribute improving the identification results in the wild. In this study, in order to achieve robust age and gender classification in the wild, we have benefited from Deep Convolutional Neural Networks based representation. We have explored transferability of existing deep convolutional neural network (CNN) models for age and gender classification. The generic AlexNet-like architecture and domain specific VGG-Face CNN model are employed and fine-tuned with the Adience dataset prepared for age and gender classification in uncontrolled environments. In addition, task specific GilNet CNN model has also been utilized and used as a baseline method in order to compare with transferred models. Experimental results show that both transferred deep CNN models outperform the GilNet CNN model, which is the state-of-the-art age and gender classification approach on the Adience dataset, by an absolute increase of 7% and 4.5% in accuracy, respectively. This outcome indicates that transferring a deep CNN model can provide better classification performance than a task specific CNN model, which has a limited number of layers and trained from scratch using a limited amount of data as in the case of GilNet. Domain specific VGG-Face CNN model has been found to be more useful and provided better performance for both age and gender classification tasks, when compared with generic AlexNet-like model, which shows that transfering from a closer domain is more useful.",
"title": ""
},
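The study above fine-tunes pretrained CNNs for age and gender classification. The PyTorch sketch below shows the generic transfer-learning recipe it relies on, replacing the classifier head of a pretrained backbone; ResNet-18 and the eight age groups are stand-ins here, not the AlexNet-like or VGG-Face models used in the study.

```python
import torch
import torch.nn as nn
from torchvision import models

n_age_groups = 8                                  # e.g. the Adience age bins

model = models.resnet18(pretrained=True)          # newer torchvision uses weights=...
for p in model.parameters():                      # optionally freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, n_age_groups)   # new task-specific head

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# One dummy step on random data just to show the shapes involved.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, n_age_groups, (4,))))
```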
{
"docid": "7a9572c3c74f9305ac0d817b04e4399a",
"text": "Due to the limited length and freely constructed sentence structures, it is a difficult classification task for short text classification. In this paper, a short text classification framework based on Siamese CNNs and few-shot learning is proposed. The Siamese CNNs will learn the discriminative text encoding so as to help classifiers distinguish those obscure or informal sentence. The different sentence structures and different descriptions of a topic are viewed as ‘prototypes’, which will be learned by few-shot learning strategy to improve the classifier’s generalization. Our experimental results show that the proposed framework leads to better results in accuracies on twitter classifications and outperforms some popular traditional text classification methods and a few deep network approaches.",
"title": ""
},
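The framework above learns discriminative text encodings with Siamese CNNs. The sketch below is a stripped-down Siamese text encoder trained with a contrastive loss in PyTorch, intended only to show the weight-sharing pattern; the architecture sizes and margin are arbitrary, and the few-shot 'prototype' component is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=64, n_filters=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)

    def forward(self, tokens):                    # tokens: (batch, seq_len) ids
        h = self.emb(tokens).transpose(1, 2)      # (batch, emb_dim, seq_len)
        h = F.relu(self.conv(h)).max(dim=2).values
        return F.normalize(h, dim=1)              # unit-length sentence encoding

def contrastive_loss(z1, z2, same, margin=0.5):
    # same = 1 for sentence pairs from the same class, 0 otherwise.
    dist = 1.0 - (z1 * z2).sum(dim=1)             # cosine distance
    return (same * dist.pow(2) +
            (1 - same) * F.relu(margin - dist).pow(2)).mean()

enc = TextEncoder()                               # the two branches share weights
a = torch.randint(0, 10000, (8, 20))
b = torch.randint(0, 10000, (8, 20))
same = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(enc(a), enc(b), same).item())
```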
{
"docid": "9721f7f54bfcfcf8c3efb10257002ad9",
"text": "Audio description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset which contains transcribed ADs, which are temporally aligned to full length movies. In addition we also collected and aligned movie scripts used in prior work and compare the two sources of descriptions. We introduce the Large Scale Movie Description Challenge (LSMDC) which contains a parallel corpus of 128,118 sentences aligned to video clips from 200 movies (around 150 h of video in total). The goal of the challenge is to automatically generate descriptions for the movie clips. First we characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are more visual and describe precisely what is shown rather than what should happen according to the scripts created prior to movie production. Furthermore, we present and compare the results of several teams who participated in the challenges organized in the context of two workshops at ICCV 2015 and ECCV 2016.",
"title": ""
},
{
"docid": "00b2d45d6810b727ab531f215d2fa73e",
"text": "Parental preparation for a child's discharge from the hospital sets the stage for successful transitioning to care and recovery at home. In this study of 135 parents of hospitalized children, the quality of discharge teaching, particularly the nurses' skills in \"delivery\" of parent teaching, was associated with increased parental readiness for discharge, which was associated with less coping difficulty during the first 3 weeks postdischarge. Parental coping difficulty was predictive of greater utilization of posthospitalization health services. These results validate the role of the skilled nurse as a teacher in promoting positive outcomes at discharge and beyond the hospitalization.",
"title": ""
},
{
"docid": "70f35b19ba583de3b9942d88c94b9148",
"text": "ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site GUIDE) is an IST project, funded by the EU, aiming at providing a personalized Virtual Reality guide and tour assistant to archaeological site visitors and a multimedia repository and information system for archaeologists and site curators. The system provides monument reconstructions, ancient life simulation, and database tools for creating and archiving archaeological multimedia material.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
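The passage above describes the aggregation GNN, in which a graph signal is repeatedly diffused through the graph while a designated node records the sequence it observes, so that ordinary 1-D convolution and pooling can then be applied. The NumPy sketch below shows just that diffuse-and-record step followed by a toy 1-D filter; the graph, the designated node and the filter taps are all made up.

```python
import numpy as np

def aggregation_sequence(A, x, node, K):
    """Record the signal seen at `node` after 0..K-1 diffusions with shift operator A."""
    seq = np.empty(K)
    z = x.copy()
    for k in range(K):
        seq[k] = z[node]        # component observed by the designated node
        z = A @ z               # one more diffusion step through the graph
    return seq                  # a time-like sequence with the graph structure baked in

# Toy graph (normalized adjacency as the shift operator) and signal.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
A = A / np.abs(np.linalg.eigvalsh(A)).max()      # keep diffusion stable
x = np.array([1.0, 0.0, 0.0, 2.0])

seq = aggregation_sequence(A, x, node=1, K=6)
h = np.array([0.5, 0.3, 0.2])                    # taps of a small "temporal" filter
features = np.convolve(seq, h, mode="valid")     # regular CNN-style convolution applies
print(seq.round(3), features.round(3))
```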
{
"docid": "7bac448a5754c168c897125a4f080548",
"text": "BACKGROUND\nOne of the main methods for evaluation of fetal well-being is analysis of Doppler flow velocity waveform of fetal vessels. Evaluation of Doppler wave of the middle cerebral artery can predict most of the at-risk fetuses in high-risk pregnancies. In this study, we tried to determine the normal ranges and their trends during pregnancy of Doppler flow velocity indices (resistive index, pulsatility index, systolic-to-diastolic ratio, and peak systolic velocity) of middle cerebral artery in 20 - 40 weeks normal pregnancies in Iranians.\n\n\nMETHODS\nIn this cross-sectional study, 1037 women with normal pregnancy and gestational age of 20 to 40 weeks were investigated for fetal middle cerebral artery Doppler examination.\n\n\nRESULTS\nResistive index, pulsatility index, and systolic-to-diastolic ratio values of middle cerebral artery decreased in a parabolic pattern while the peak systolic velocity value increased linearly with progression of the gestational age. These changes were statistically significant (P<0.001 for all four variables) and were more characteristic during late weeks of pregnancy. The mean fetal heart rate was also significantly (P<0.001) reduced in correlation with the gestational age.\n\n\nCONCLUSION\nDoppler waveform indices of fetal middle cerebral artery are useful means for determining fetal well-being. Herewith, the normal ranges of Doppler waveform indices for an Iranian population are presented.",
"title": ""
}
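The study above reports resistive index, pulsatility index, systolic-to-diastolic ratio and peak systolic velocity for the fetal middle cerebral artery. For reference, the snippet below computes those standard Doppler indices from peak systolic velocity (PSV), end-diastolic velocity (EDV) and time-averaged mean velocity (TAMV); the numeric values are illustrative only.

```python
def doppler_indices(psv, edv, tamv):
    """Standard Doppler waveform indices from velocity measurements (cm/s)."""
    ri = (psv - edv) / psv          # resistive (Pourcelot) index
    pi = (psv - edv) / tamv         # pulsatility index
    sd = psv / edv                  # systolic-to-diastolic ratio
    return {"RI": ri, "PI": pi, "S/D": sd, "PSV": psv}

# Hypothetical mid-gestation MCA measurements.
print(doppler_indices(psv=40.0, edv=8.0, tamv=20.0))
# -> RI = 0.80, PI = 1.60, S/D = 5.00
```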
] |
scidocsrr
|
a7fab56e5dbc06d39ff0ec4046a3cb94
|
Benchmark Machine Learning Approaches with Classical Time Series Approaches on the Blood Glucose Level Prediction Challenge
|
[
{
"docid": "83f970bc22a2ada558aaf8f6a7b5a387",
"text": "The imputeTS package specializes on univariate time series imputation. It offers multiple state-of-the-art imputation algorithm implementations along with plotting functions for time series missing data statistics. While imputation in general is a well-known problem and widely covered by R packages, finding packages able to fill missing values in univariate time series is more complicated. The reason for this lies in the fact, that most imputation algorithms rely on inter-attribute correlations, while univariate time series imputation instead needs to employ time dependencies. This paper provides an introduction to the imputeTS package and its provided algorithms and tools. Furthermore, it gives a short overview about univariate time series imputation in R. Introduction In almost every domain from industry (Billinton et al., 1996) to biology (Bar-Joseph et al., 2003), finance (Taylor, 2007) up to social science (Gottman, 1981) different time series data are measured. While the recorded datasets itself may be different, one common problem are missing values. Many analysis methods require missing values to be replaced with reasonable values up-front. In statistics this process of replacing missing values is called imputation. Time series imputation thereby is a special sub-field in the imputation research area. Most popular techniques like Multiple Imputation (Rubin, 1987), Expectation-Maximization (Dempster et al., 1977), Nearest Neighbor (Vacek and Ashikaga, 1980) and Hot Deck (Ford, 1983) rely on interattribute correlations to estimate values for the missing data. Since univariate time series do not possess more than one attribute, these algorithms cannot be applied directly. Effective univariate time series imputation algorithms instead need to employ the inter-time correlations. On CRAN there are several packages solving the problem of imputation of multivariate data. Most popular and mature (among others) are AMELIA (Honaker et al., 2011), mice (van Buuren and Groothuis-Oudshoorn, 2011), VIM (Kowarik and Templ, 2016) and missMDA (Josse and Husson, 2016). However, since these packages are designed for multivariate data imputation only they do not work for univariate time series. At the moment imputeTS (Moritz, 2016a) is the only package on CRAN that is solely dedicated to univariate time series imputation and includes multiple algorithms. Nevertheless, there are some other packages that include imputation functions as addition to their core package functionality. Most noteworthy being zoo (Zeileis and Grothendieck, 2005) and forecast (Hyndman, 2016). Both packages offer also some advanced time series imputation functions. The packages spacetime (Pebesma, 2012), timeSeries (Rmetrics Core Team et al., 2015) and xts (Ryan and Ulrich, 2014) should also be mentioned, since they contain some very simple but quick time series imputation methods. For a broader overview about available time series imputation packages in R see also (Moritz et al., 2015). In this technical report we evaluate the performance of several univariate imputation functions in R on different time series. This paper is structured as follows: Section Overview imputeTS package gives an overview, about all features and functions included in the imputeTS package. This is followed by Usage examples of the different provided functions. The paper ends with a Conclusions section. 
Overview imputeTS package The imputeTS package can be found on CRAN and is an easy to use package that offers several utilities for ’univariate, equi-spaced, numeric time series’. Univariate means there is just one attribute that is observed over time. Which leads to a sequence of single observations o1, o2, o3, ... on at successive points t1, t2, t3, ... tn in time. Equi-spaced means, that time increments between successive data points are equal |t1 − t2| = |t2 − t3| = ... = |tn−1 − tn|. Numeric means that the observations are measurable quantities that can be described as a number. In the first part of this section, a general overview about all available functions and datasets is given. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 2 This is followed by more detailed overviews about the three areas covered by the package: ’Plots & Statistics’, ’Imputation’ and ’Datasets’. Information about how to apply these functions and tools can be found later in the Usage examples section. General overview As can be seen in Table 1, beyond several imputation algorithm implementations the package also includes plotting functions and datasets. The imputation algorithms can be divided into rather simple but fast approaches like mean imputation and more advanced algorithms that need more computation time like kalman smoothing on a structural model. Simple Imputation Imputation Plots & Statistics Datasets na.locf na.interpolation plotNA.distribution tsAirgap na.mean na.kalman plotNA.distributionBar tsAirgapComplete na.random na.ma plotNA.gapsize tsHeating na.replace na.seadec plotNA.imputations tsHeatingComplete na.remove na.seasplit statsNA tsNH4 tsNH4Complete Table 1: General Overview imputeTS package As a whole, the package aims to support the user in the complete process of replacing missing values in time series. This process starts with analyzing the distribution of the missing values using the statsNA function and the plots of plotNA.distribution, plotNA.distributionBar, plotNA.gapsize. In the next step the actual imputation can take place with one of the several algorithm options. Finally, the imputation results can be visualized with the plotNA.imputations function. Additionally, the package contains three datasets, each in a version with and without missing values, that can be used to test imputation algorithms. Plots & Statistics functions An overview about the available plots and statistics functions can be found in Table 2. To get a good impression what the plots look like section Usage examples is recommended. Function Description plotNA.distribution Visualize Distribution of Missing Values plotNA.distributionBar Visualize Distribution of Missing Values (Barplot) plotNA.gapsize Visualize Distribution of NA gap sizes plotNA.imputations Visualize Imputed Values statsNA Print Statistics about the Missing Data Table 2: Overview Plots & Statistics The statsNA function calculates several missing data statistics of the input data. This includes overall percentage of missing values, absolute amount of missing values, amount of missing value in different sections of the data, longest series of consecutive NAs and occurrence of consecutive NAs. The plotNA.distribution function visualizes the distribution of NAs in a time series. This is done using a standard time series plot, in which areas with missing data are colored red. This enables the user to see at first sight where in the series most of the missing values are located. 
The plotNA.distributionBar function provides the same insights to users, but is designed for very large time series. This is necessary for time series with 1000 and more observations, where it is not possible to plot each observation as a single point. The plotNA.gapsize function provides information about consecutive NAs by showing the most common NA gap sizes in the time series. The plotNA.imputations function is designated for visual inspection of the results after applying an imputation algorithm. Therefore, newly imputed observations are shown in a different color than the rest of the series. The R Journal Vol. XX/YY, AAAA 20ZZ ISSN 2073-4859 Contributed research article 3 Imputation functions An overview about all available imputation algorithms can be found in Table 3. Even if these functions are really easy applicable, some examples can be found later in section Usage examples. More detailed information about the theoretical background of the algorithms can be found in the imputeTS manual (Moritz, 2016b). Function Option Description na.interpolation linear Imputation by Linear Interpolation spline Imputation by Spline Interpolation stine Imputation by Stineman Interpolation na.kalman StructTS Imputation by Structural Model & Kalman Smoothing auto.arima Imputation by ARIMA State Space Representation & Kalman Sm. na.locf locf Imputation by Last Observation Carried Forward nocb Imputation by Next Observation Carried Backward na.ma simple Missing Value Imputation by Simple Moving Average linear Missing Value Imputation by Linear Weighted Moving Average exponential Missing Value Imputation by Exponential Weighted Moving Average na.mean mean MissingValue Imputation by Mean Value median Missing Value Imputation by Median Value mode Missing Value Imputation by Mode Value na.random Missing Value Imputation by Random Sample na.replace Replace Missing Values by a Defined Value na.seadec Seasonally Decomposed Missing Value Imputation na.seasplit Seasonally Splitted Missing Value Imputation na.remove Remove Missing Values Table 3: Overview Imputation Algorithms For convenience similar algorithms are available under one function name as parameter option. For example linear, spline and stineman interpolation are all included in the na.interpolation function. The na.mean, na.locf, na.replace, na.random functions are all simple and fast. In comparison, na.interpolation, na.kalman, na.ma, na.seasplit, na.seadec are more advanced algorithms that need more computation time. The na.remove function is a special case, since it only deletes all missing values. Thus, it is not really an imputation function. It should be handled with care since removing observations may corrupt the time information of the series. The na.seasplit and na.seadec functions are as well exceptions. These perform seasonal split / decomposition operations as a preprocessing step. For the imputation itself, one out of the other imputation algorithms can be used (which one can be set as option). Looking at all available imputation methods, no single overall best method can b",
"title": ""
},
{
"docid": "68295a432f68900911ba29e5a6ca5e42",
"text": "In many forecasting applications, it is valuable to predict not only the value of a signal at a certain time point in the future, but also the values leading up to that point. This is especially true in clinical applications, where the future state of the patient can be less important than the patient's overall trajectory. This requires multi-step forecasting, a forecasting variant where one aims to predict multiple values in the future simultaneously. Standard methods to accomplish this can propagate error from prediction to prediction, reducing quality over the long term. In light of these challenges, we propose multi-output deep architectures for multi-step forecasting in which we explicitly model the distribution of future values of the signal over a prediction horizon. We apply these techniques to the challenging and clinically relevant task of blood glucose forecasting. Through a series of experiments on a real-world dataset consisting of 550K blood glucose measurements, we demonstrate the effectiveness of our proposed approaches in capturing the underlying signal dynamics. Compared to existing shallow and deep methods, we find that our proposed approaches improve performance individually and capture complementary information, leading to a large improvement over the baseline when combined (4.87 vs. 5.31 absolute percentage error (APE)). Overall, the results suggest the efficacy of our proposed approach in predicting blood glucose level and multi-step forecasting more generally.",
"title": ""
}
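The passage above predicts a whole horizon of future blood glucose values jointly instead of recursively. The scikit-learn sketch below shows that multi-output formulation on synthetic data, with lagged past values as inputs and a vector of future values as the target; it is a simple linear baseline, not the deep multi-output architectures proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

def make_windows(series, n_lags=12, horizon=6):
    """Turn one long series into (past-window, future-window) training pairs."""
    X, Y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t])
        Y.append(series[t:t + horizon])        # predict all horizon steps jointly
    return np.array(X), np.array(Y)

rng = np.random.default_rng(0)
glucose = 120 + 30 * np.sin(np.linspace(0, 40, 2000)) + rng.normal(0, 5, 2000)

X, Y = make_windows(glucose)
split = int(0.8 * len(X))
model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X[:split], Y[:split])

pred = model.predict(X[split:])
ape = np.abs(pred - Y[split:]) / np.abs(Y[split:]) * 100
print("mean APE per horizon step:", ape.mean(axis=0).round(2))
```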
] |
[
{
"docid": "fb904fc99acf8228ae7585e29074f96c",
"text": "One of the biggest problems in manufacturing is the failure of machine tools due to loss of surface material in cutting operations like drilling and milling. Carrying on the process with a dull tool may damage the workpiece material fabricated. On the other hand, it is unnecessary to change the cutting tool if it is still able to continue cutting operation. Therefore, an effective diagnosis mechanism is necessary for the automation of machining processes so that production loss and downtime can be avoided. This study concerns with the development of a tool wear condition-monitoring technique based on a two-stage fuzzy logic scheme. For this, signals acquired from various sensors were processed to make a decision about the status of the tool. In the first stage of the proposed scheme, statistical parameters derived from thrust force, machine sound (acquired via a very sensitive microphone) and vibration signals were used as inputs to fuzzy process; and the crisp output values of this process were then taken as the input parameters of the second stage. Conclusively, outputs of this stage were taken into a threshold function, the output of which is used to assess the condition of the tool. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4dcdb2520ec5f9fc9c32f2cbb343808c",
"text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.",
"title": ""
},
{
"docid": "356a72153f61311546f6ff874ee79bb4",
"text": "In this paper, an object cosegmentation method based on shape conformability is proposed. Different from the previous object cosegmentation methods which are based on the region feature similarity of the common objects in image set, our proposed SaCoseg cosegmentation algorithm focuses on the shape consistency of the foreground objects in image set. In the proposed method, given an image set where the implied foreground objects may be varied in appearance but share similar shape structures, the implied common shape pattern in the image set can be automatically mined and regarded as the shape prior of those unsatisfactorily segmented images. The SaCoseg algorithm mainly consists of four steps: 1) the initial Grabcut segmentation; 2) the shape mapping by coherent point drift registration; 3) the common shape pattern discovery by affinity propagation clustering; and 4) the refinement by Grabcut with common shape constraint. To testify our proposed algorithm and establish a benchmark for future work, we built the CoShape data set to evaluate the shape-based cosegmentation. The experiments on CoShape data set and the comparison with some related cosegmentation algorithms demonstrate the good performance of the proposed SaCoseg algorithm.",
"title": ""
},
{
"docid": "528ef696a9932f87763d66264da515af",
"text": "Ethical, philosophical and religious values are central to the continuing controversy over capital punishment. Nevertheless, factual evidence can and should inform policy making. The evidence for capital punishment as an uniquely effective deterrent to murder is especially important, since deterrence is the only major pragmatic argument on the pro-death penalty side.1 The purpose of this paper is to survey and evaluate the evidence for deterrence.",
"title": ""
},
{
"docid": "43ec6774e1352443f41faf8d3780059b",
"text": "Cloud computing is currently one of the most hyped information technology fields and it has become one of the fastest growing segments of IT. Cloud computing allows us to scale our servers in magnitude and availability in order to provide services to a greater number of end users. Moreover, adopters of the cloud service model are charged based on a pay-per-use basis of the cloud's server and network resources, aka utility computing. With this model, a conventional DDoS attack on server and network resources is transformed in a cloud environment to a new breed of attack that targets the cloud adopter's economic resource, namely Economic Denial of Sustainability attack (EDoS). In this paper, we advocate a novel solution, named EDoS-Shield, to mitigate the Economic Denial of Sustainability (EDoS) attack in the cloud computing systems. We design a discrete simulation experiment to evaluate its performance and the results show that it is a promising solution to mitigate the EDoS.",
"title": ""
},
{
"docid": "1dc4a8f02dfe105220db5daae06c2229",
"text": "Photosynthesis begins with light harvesting, where specialized pigment-protein complexes transform sunlight into electronic excitations delivered to reaction centres to initiate charge separation. There is evidence that quantum coherence between electronic excited states plays a role in energy transfer. In this review, we discuss how quantum coherence manifests in photosynthetic light harvesting and its implications. We begin by examining the concept of an exciton, an excited electronic state delocalized over several spatially separated molecules, which is the most widely available signature of quantum coherence in light harvesting. We then discuss recent results concerning the possibility that quantum coherence between electronically excited states of donors and acceptors may give rise to a quantum coherent evolution of excitations, modifying the traditional incoherent picture of energy transfer. Key to this (partially) coherent energy transfer appears to be the structure of the environment, in particular the participation of non-equilibrium vibrational modes. We discuss the open questions and controversies regarding quantum coherent energy transfer and how these can be addressed using new experimental techniques.",
"title": ""
},
{
"docid": "8dee3ada764a40fce6b5676287496ccd",
"text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.",
"title": ""
},
{
"docid": "1fdb9fdea37c042187407451aef02297",
"text": "Websites have gained vital importance for organizations along with the growing competition in the world market. It is known that usability requirements heavily depend on the type, audience and purpose of websites. For the e-commerce environment, usability assessment of a website is required to figure out the impact of website design on customer purchases. Thus, usability assessment and design of online pages have become the subject of many scientific studies. However, in any of these studies, design parameters were not identified in such a detailed way, and they were not classified in line with customer expectations to assess the overall usability of an e-commerce website. This study therefore aims to analyze and classify design parameters according to customer expectations in order to evaluate the usability of e-commerce websites in a more comprehensive manner. Four websites are assessed using the proposed novel approach with respect to the identified design parameters and the usability scores of the websites are examined. It is revealed that the websites with high usability score are more preferred by customers. Therefore, it is indicated that usability of e-commerce websites affects customer purchases.",
"title": ""
},
{
"docid": "1af028a0cf88d0ac5c52e84019554d51",
"text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.",
"title": ""
},
{
"docid": "cc3b5ee3c8c890499f3d52db00520563",
"text": "We report results from an oyster hatchery on the Oregon coast, where intake waters experienced variable carbonate chemistry (aragonite saturation state , 0.8 to . 3.2; pH , 7.6 to . 8.2) in the early summer of 2009. Both larval production and midstage growth (, 120 to , 150 mm) of the oyster Crassostrea gigas were significantly negatively correlated with the aragonite saturation state of waters in which larval oysters were spawned and reared for the first 48 h of life. The effects of the initial spawning conditions did not have a significant effect on early-stage growth (growth from D-hinge stage to , 120 mm), suggesting a delayed effect of water chemistry on larval development. Rising atmospheric carbon dioxide (CO2) driven by anthropogenic emissions has resulted in the addition of over 140 Pg-C (1 Pg 5 1015 g) to the ocean (Sabine et al. 2011). The thermodynamics of the reactions between carbon dioxide and water require this addition to cause a decline of ocean pH and carbonate ion concentrations ([CO3 ]). For the observed change between current-day and preindustrial atmospheric CO2, the surface oceans have lost approximately 16% of their [CO3 ] and decreased in pH by 0.1 unit, although colder surface waters are likely to have experienced a greater effect (Feely et al. 2009). Projections for the open ocean suggest that wide areas, particularly at high latitudes, could reach low enough [CO3 ] levels that dissolution of biogenic carbonate minerals is thermodynamically favored by the end of the century (Feely et al. 2009; Steinacher et al. 2009), with implications for commercially significant higher trophic levels (Aydin et al. 2005). There is considerable spatial and temporal variability in ocean carbonate chemistry, and there is evidence that these natural variations affect marine biota, with ecological assemblages next to cold-seep high-CO2 sources having been shown to be distinct from those nearby but less affected by the elevated CO2 levels (Hall-Spencer et al. 2008). Coastal environments that are subject to upwelling events also experience exposure to elevated CO2 conditions where deep water enriched by additions of respiratory CO2 is brought up from depth to the nearshore surface by physical processes. Feely et al. (2008) showed that upwelling on the Pacific coast of central North America markedly increased corrosiveness for calcium carbonate minerals in surface nearshore waters. A small but significant amount of anthropogenic CO2 present in the upwelled source waters provided enough additional CO2 to cause widespread corrosiveness on the continental shelves (Feely et al. 2008). Because the source water for upwelling on the North American Pacific coast takes on the order of decades to transit from the point of subduction to the upwelling locales (Feely et al. 2008), this anthropogenic CO2 was added to the water under a substantially lowerCO2 atmosphere than today’s, and water already en route to this location is likely carrying an increasing burden of anthropogenic CO2. Understanding the effects of natural variations in CO2 in these waters on the local fauna is critical for anticipating how more persistently corrosive conditions will affect marine ecosystems. The responses of organisms to rising CO2 are potentially numerous and include negative effects on respiration, motility, and fertility (Portner 2008). From a geochemical perspective, however, the easiest process to understand conceptually is that of solid calcium carbonate (CaCO3,s) mineral formation. 
In nearly all ocean surface waters, formation of CaCO3,s is thermodynamically favored by the abundance of the reactants, dissolved calcium ([Ca2+]) and carbonate ([CO3^2-]) ions. While oceanic [Ca2+] is relatively constant at high levels that are well described by conservative relationships with salinity, ocean [CO3^2-] decreases as atmospheric CO2 rises, lowering the energetic favorability of CaCO3,s formation. This energetic favorability is proportional to the saturation state, Ω, defined by",
"title": ""
},
{
"docid": "30bc96451dd979a8c08810415e4a2478",
"text": "An adaptive circulator fabricated on a 130 nm CMOS is presented. Circulator has two adaptive blocks for gain and phase mismatch correction and leakage cancelation. The impedance matching circuit corrects mismatches for antenna, divider, and LNTA. The cancelation block cancels the Tx leakage. Measured isolation between transmitter and receiver for single tone at 2.4 GHz is 90 dB, and for a 40 MHz wide-band signal is 50dB. The circulator Rx gain is 10 dB, with NF = 4.7 dB and 5 dB insertion loss.",
"title": ""
},
{
"docid": "33dedeabc83271223a1b3fb50bfb1824",
"text": "Quantum computers can be used to address electronic-structure problems and problems in materials science and condensed matter physics that can be formulated as interacting fermionic problems, problems which stretch the limits of existing high-performance computers. Finding exact solutions to such problems numerically has a computational cost that scales exponentially with the size of the system, and Monte Carlo methods are unsuitable owing to the fermionic sign problem. These limitations of classical computational methods have made solving even few-atom electronic-structure problems interesting for implementation using medium-sized quantum computers. Yet experimental implementations have so far been restricted to molecules involving only hydrogen and helium. Here we demonstrate the experimental optimization of Hamiltonian problems with up to six qubits and more than one hundred Pauli terms, determining the ground-state energy for molecules of increasing size, up to BeH2. We achieve this result by using a variational quantum eigenvalue solver (eigensolver) with efficiently prepared trial states that are tailored specifically to the interactions that are available in our quantum processor, combined with a compact encoding of fermionic Hamiltonians and a robust stochastic optimization routine. We demonstrate the flexibility of our approach by applying it to a problem of quantum magnetism, an antiferromagnetic Heisenberg model in an external magnetic field. In all cases, we find agreement between our experiments and numerical simulations using a model of the device with noise. Our results help to elucidate the requirements for scaling the method to larger systems and for bridging the gap between key problems in high-performance computing and their implementation on quantum hardware.",
"title": ""
},
{
"docid": "ba7081afe9e734c5895ccbe7307c8707",
"text": "Research effort in ontology visualization has largely focused on developing new visualization techniques. At the same time, researchers have paid less attention to investigating the usability of common visualization techniques that many practitioners regularly use to visualize ontological data. In this paper, we focus on two popular ontology visualization techniques: indented tree and graph. We conduct a controlled usability study with an emphasis on the effectiveness, efficiency, workload and satisfaction of these visualization techniques in the context of assisting users during evaluation of ontology mappings. Findings from this study have revealed both strengths and weaknesses of each visualization technique. In particular, while the indented tree visualization is more organized and familiar to novice users, subjects found the graph visualization to be more controllable and intuitive without visual redundancy, particularly for ontologies with multiple inheritance.",
"title": ""
},
{
"docid": "c05fc37d9f33ec94f4c160b3317dda00",
"text": "We consider the coordination control for multiagent systems in a very general framework where the position and velocity interactions among agents are modeled by independent graphs. Different algorithms are proposed and analyzed for different settings, including the case without leaders and the case with a virtual leader under fixed position and velocity interaction topologies, as well as the case with a group velocity reference signal under switching velocity interaction. It is finally shown that the proposed algorithms are feasible in achieving the desired coordination behavior provided the interaction topologies satisfy the weakest possible connectivity conditions. Such conditions relate only to the structure of the interactions among agents while irrelevant to their magnitudes and thus are easy to verify. Rigorous convergence analysis is preformed based on a combined use of tools from algebraic graph theory, matrix analysis as well as the Lyapunov stability theory.",
"title": ""
},
{
"docid": "464439e2c9e45045aeee5ca0b88b90e1",
"text": "We calculate the average number of critical points of a Gaussian field on a high-dimensional space as a function of their energy and their index. Our results give a complete picture of the organization of critical points and are of relevance to glassy and disordered systems and landscape scenarios coming from the anthropic approach to string theory.",
"title": ""
},
{
"docid": "1d9361cffd8240f3b691c887def8e2f5",
"text": "Twenty seven essential oils, isolated from plants representing 11 families of Portuguese flora, were screened for their nematicidal activity against the pinewood nematode (PWN), Bursaphelenchus xylophilus. The essential oils were isolated by hydrodistillation and the volatiles by distillation-extraction, and both were analysed by GC and GC-MS. High nematicidal activity was achieved with essential oils from Chamaespartium tridentatum, Origanum vulgare, Satureja montana, Thymbra capitata, and Thymus caespititius. All of these essential oils had an estimated minimum inhibitory concentration ranging between 0.097 and 0.374 mg/ml and a lethal concentration necessary to kill 100% of the population (LC(100)) between 0.858 and 1.984 mg/ml. Good nematicidal activity was also obtained with the essential oil from Cymbopogon citratus. The dominant components of the effective oils were 1-octen-3-ol (9%), n-nonanal, and linalool (both 7%) in C. tridentatum, geranial (43%), neral (29%), and β-myrcene (25%) in C. citratus, carvacrol (36% and 39%), γ-terpinene (24% and 40%), and p-cymene (14% and 7%) in O. vulgare and S. montana, respectively, and carvacrol (75% and 65%, respectively) in T. capitata and T. caespititius. The other essential oils obtained from Portuguese flora yielded weak or no activity. Five essential oils with nematicidal activity against PWN are reported for the first time.",
"title": ""
},
{
"docid": "0e644fc1c567356a2e099221a774232c",
"text": "We present a coupled two-way clustering approach to gene microarray data analysis. The main idea is to identify subsets of the genes and samples, such that when one of these is used to cluster the other, stable and significant partitions emerge. The search for such subsets is a computationally complex task. We present an algorithm, based on iterative clustering, that performs such a search. This analysis is especially suitable for gene microarray data, where the contributions of a variety of biological mechanisms to the gene expression levels are entangled in a large body of experimental data. The method was applied to two gene microarray data sets, on colon cancer and leukemia. By identifying relevant subsets of the data and focusing on them we were able to discover partitions and correlations that were masked and hidden when the full dataset was used in the analysis. Some of these partitions have clear biological interpretation; others can serve to identify possible directions for future research.",
"title": ""
},
{
"docid": "3207a4b3d199db8f43d96f1096e8eb81",
"text": "Recently, a branch of machine learning algorithms called deep learning gained huge attention to boost up accuracy of a variety of sensing applications. However, execution of deep learning algorithm such as convolutional neural network on mobile processor is non-trivial due to intensive computational requirements. In this paper, we present our early design of DeepSense - a mobile GPU-based deep convolutional neural network (CNN) framework. For its design, we first explored the differences between server-class and mobile-class GPUs, and studied effectiveness of various optimization strategies such as branch divergence elimination and memory vectorization. Our results show that DeepSense is able to execute a variety of CNN models for image recognition, object detection and face recognition in soft real time with no or marginal accuracy tradeoffs. Experiments also show that our framework is scalable across multiple devices with different GPU architectures (e.g. Adreno and Mali).",
"title": ""
},
{
"docid": "7143c97b6ea484566f521e36a3fa834e",
"text": "To determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients was the objective of the study. For the reliability study a test-retest design and for the validity study a cross-sectional design was used. A general rehabilitation centre and a university rehabilitation centre was the setting for the study. The study population consisted of patients over 18 years of age, suffering from chronic musculoskeletal pain; 52 patients in the reliability study, 344 patients in the validity study. Main outcome measures were as follows. Reliability study: Spearman's correlation coefficients (rho values) of the test and retest data of the VAS for disability; validity study: rho values of the VAS disability scores with the scores on four domains of the Short-Form Health Survey (SF-36) and VAS pain scores, and with Roland-Morris Disability Questionnaire scores in chronic low back pain patients. Results were as follows: in the reliability study rho values varied from 0.60 to 0.77; and in the validity study rho values of VAS disability scores with SF-36 domain scores varied from 0.16 to 0.51, with Roland-Morris Disability Questionnaire scores from 0.38 to 0.43 and with VAS pain scores from 0.76 to 0.84. The conclusion of the study was that the reliability of the VAS for disability is moderate to good. Because of a weak correlation with other disability instruments and a strong correlation with the VAS for pain, however, its validity is questionable.",
"title": ""
},
{
"docid": "d9b8c9c1427fc68f9e40e24ae517c7e8",
"text": "Although studies have shown that Instagram use and young adults' mental health are cross-sectionally associated, longitudinal evidence is lacking. In addition, no study thus far examined this association, or the reverse, among adolescents. To address these gaps, we set up a longitudinal panel study among 12- to 19-year-old Flemish adolescents to investigate the reciprocal relationships between different types of Instagram use and depressed mood. Self-report data from 671 adolescent Instagram users (61% girls; MAge = 14.96; SD = 1.29) were used to examine our research question and test our hypotheses. Structural equation modeling showed that Instagram browsing at Time 1 was related to increases in adolescents' depressed mood at Time 2. In addition, adolescents' depressed mood at Time 1 was related to increases in Instagram posting at Time 2. These relationships were similar among boys and girls. Potential explanations for the study findings and suggestions for future research are discussed.",
"title": ""
}
] |
scidocsrr
|
9c573fb5fef95e93027b5e3f953883d9
|
Rumor source detection with multiple observations: fundamental limits and algorithms
|
[
{
"docid": "1b2cdbc2e87fccef66aff9e67347cc73",
"text": "We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like.",
"title": ""
}
] |
[
{
"docid": "e8216c275a20be6706f5c2792bc6fd92",
"text": "Robust and reliable vehicle detection from images acquired by a moving vehicle is an important problem with numerous applications including driver assistance systems and self-guided vehicles. Our focus in this paper is on improving the performance of on-road vehicle detection by employing a set of Gabor filters specifically optimized for the task of vehicle detection. This is essentially a kind of feature selection, a critical issue when designing any pattern classification system. Specifically, we propose a systematic and general evolutionary Gabor filter optimization (EGFO) approach for optimizing the parameters of a set of Gabor filters in the context of vehicle detection. The objective is to build a set of filters that are capable of responding stronger to features present in vehicles than to nonvehicles, therefore improving class discrimination. The EGFO approach unifies filter design with filter selection by integrating genetic algorithms (GAs) with an incremental clustering approach. Filter design is performed using GAs, a global optimization approach that encodes the Gabor filter parameters in a chromosome and uses genetic operators to optimize them. Filter selection is performed by grouping filters having similar characteristics in the parameter space using an incremental clustering approach. This step eliminates redundant filters, yielding a more compact optimized set of filters. The resulting filters have been evaluated using an application-oriented fitness criterion based on support vector machines. We have tested the proposed framework on real data collected in Dearborn, MI, in summer and fall 2001, using Ford's proprietary low-light camera.",
"title": ""
},
{
"docid": "f0c0bbb0282d76da7146e05f4a371843",
"text": "We have proposed a claw pole type half-wave rectified variable field flux motor (CP-HVFM) with special self-excitation method. The claw pole rotor needs the 3D magnetic path core. This paper reports an analysis method with experimental BH and loss data of the iron powder core for FEM. And it shows a designed analysis model and characteristics such as torque, efficiency and loss calculation results.",
"title": ""
},
{
"docid": "2800046ff82a5bc43b42c1d2e2dc6777",
"text": "We develop a novel, fundamental and surprisingly simple randomized iterative method for solving consistent linear systems. Our method has six different but equivalent interpretations: sketch-and-project, constrain-and-approximate, random intersect, random linear solve, random update and random fixed point. By varying its two parameters—a positive definite matrix (defining geometry), and a random matrix (sampled in an i.i.d. fashion in each iteration)—we recover a comprehensive array of well known algorithms as special cases, including the randomized Kaczmarz method, randomized Newton method, randomized coordinate descent method and random Gaussian pursuit. We naturally also obtain variants of all these methods using blocks and importance sampling. However, our method allows for a much wider selection of these two parameters, which leads to a number of new specific methods. We prove exponential convergence of the expected norm of the error in a single theorem, from which existing complexity results for known variants can be obtained. However, we also give an exact formula for the evolution of the expected iterates, which allows us to give lower bounds on the convergence rate.",
"title": ""
},
{
"docid": "3df9bacf95281fc609ee7fd2d4724e91",
"text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. Some of the ways to mitigate the problem are discussed.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "52c160736ae0c82f3bdd9d4519fe320c",
"text": "OBJECT\nThere continues to be confusion over how best to preserve the branches of the facial nerve to the frontalis muscle when elevating a frontotemporal (pterional) scalp flap. The object of this study was to examine the full course of the branches of the facial nerve that must be preserved to maintain innervation of the frontalis muscle during elevation of a frontotemporal scalp flap.\n\n\nMETHODS\nDissection was performed to follow the temporal branches of facial nerves along their course in 5 adult, cadaveric heads (n = 10 extracranial facial nerves).\n\n\nRESULTS\nPreserving the nerves to the frontalis muscle requires an understanding of the course of the nerves in 3 areas. The first area is on the outer surface of the temporalis muscle lateral to the superior temporal line (STL) where the interfascial or subfascial approaches are applied, the second is in the area medial to the STL where subpericranial dissection is needed, and the third is along the STL. Preserving the nerves crossing the STL requires an understanding of the complex fascial relationships at this line. It is important to preserve the nerves crossing the lateral and medial parts of the exposure, and the continuity of the nerves as they pass across the STL. Prior descriptions have focused largely on the area superficial to the temporalis muscle lateral to the STL.\n\n\nCONCLUSIONS\nUsing the interfascial-subpericranial flap and the subfascial-subpericranial flap avoids opening the layer of loose areolar tissue between the temporal fascia and galea in the area lateral to the STL and between the galea and frontal pericranium in the area medial to the STL. It also preserves the continuity of the nerve crossing the STL. This technique allows for the preservation of the nerves to the frontalis muscle along their entire trajectory, from the uppermost part of the parotid gland to the frontalis muscle.",
"title": ""
},
{
"docid": "5e756f85b15812daf80221c8b9ae6a96",
"text": "PURPOSE\nRural-dwelling cancer survivors (CSs) are at risk for decrements in health and well-being due to decreased access to health care and support resources. This study compares the impact of cancer in rural- and urban-dwelling adult CSs living in 2 regions of the Pacific Northwest.\n\n\nMETHODS\nA convenience sample of posttreatment adult CSs (N = 132) completed the Impact of Cancer version 2 (IOCv2) and the Memorial Symptom Assessment Scale-short form. High and low scorers on the IOCv2 participated in an in-depth interview (n = 19).\n\n\nFINDINGS\nThe sample was predominantly middle-aged (mean age 58) and female (84%). Mean time since treatment completion was 6.7 years. Cancer diagnoses represented included breast (56%), gynecologic (9%), lymphoma (8%), head and neck (6%), and colorectal (5%). Comparisons across geographic regions show statistically significant differences in body concerns, worry, negative impact, and employment concerns. Rural-urban differences from interview data include access to health care, care coordination, connecting/community, thinking about death and dying, public/private journey, and advocacy.\n\n\nCONCLUSION\nThe insights into the differences and similarities between rural and urban CSs challenge the prevalent assumptions about rural-dwelling CSs and their risk for negative outcomes. A common theme across the study findings was community. Access to health care may not be the driver of the survivorship experience. Findings can influence health care providers and survivorship program development, building on the strengths of both rural and urban living and the engagement of the survivorship community.",
"title": ""
},
{
"docid": "5d7dced0ed875fed0f11440dc26fffd1",
"text": "Different from conventional mobile networks designed to optimize the transmission efficiency of one particular service (e.g., streaming voice/ video) primarily, the industry and academia are reaching an agreement that 5G mobile networks are projected to sustain manifold wireless requirements, including higher mobility, higher data rates, and lower latency. For this purpose, 3GPP has launched the standardization activity for the first phase 5G system in Release 15 named New Radio (NR). To fully understand this crucial technology, this article offers a comprehensive overview of the state-of-the-art development of NR, including deployment scenarios, numerologies, frame structure, new waveform, multiple access, initial/random access procedure, and enhanced carrier aggregation (CA) for resource requests and data transmissions. The provided insights thus facilitate knowledge of design and practice for further features of NR.",
"title": ""
},
{
"docid": "cebcd53ef867abb158445842cd0f4daf",
"text": "Let [ be a random variable over a finite set with an arbitrary probability distribution. In this paper we make improvements to a fast method of generating sample values for ( in constant time.",
"title": ""
},
{
"docid": "6393d61b229e7230e256922445534bdb",
"text": "Recently, region based methods for estimating the 3D pose of an object from a 2D image have gained increasing popularity. They do not require prior knowledge of the object’s texture, making them particularity attractive when the object’s texture is unknown a priori. Region based methods estimate the 3D pose of an object by finding the pose which maximizes the image segmentation in to foreground and background regions. Typically the foreground and background regions are described using global appearance models, and an energy function measuring their fit quality is optimized with respect to the pose parameters. Applying a region based approach on standard 2D-3D pose estimation databases shows its performance is strongly dependent on the scene complexity. In simple scenes, where the statistical properties of the foreground and background do not spatially vary, it performs well. However, in more complex scenes, where the statistical properties of the foreground or background vary, the performance strongly degrades. The global appearance models used to segment the image do not sufficiently capture the spatial variation. Inspired by ideas from local active contours, we propose a framework for simultaneous image segmentation and pose estimation using multiple local appearance models. The local appearance models are capable of capturing spatial variation in statistical properties, where global appearance models are limited. We derive an energy function, measuring the image segmentation, using multiple local regions and optimize it with respect to the pose parameters. Our experiments show a substantially higher probability of estimating the correct pose for heterogeneous objects, whereas for homogeneous objects there is minor improvement.",
"title": ""
},
{
"docid": "f267b329f52628d3c52a8f618485ae95",
"text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.",
"title": ""
},
{
"docid": "8814d6589ecea87015017feb3ba18b01",
"text": "Although pneumatic robots are expected to be physically friendly to humans and human-environments, large and heavy air sources and reservoir tanks are a problem to build a self-contained pneumatic robot. This paper proposes a compressor-embedded pneumatic-driven humanoid system consisting of a very small distributed compressors and hollow bones as air reservoir tanks as well as the structural parts. Musculoskeletal systems have possibility of doing dynamic motions using physical elasticity of muscles and tendons, coupled-driven systems of multi-articular muscles, and so on. We suppose a pneumatic driven flexible spine will be contribute to dynamic motions as well as physical adaptivity to environments. This paper presents the concept, design, and implementation of the compressor-embedded pneumatic-driven musculoskeletal humanoid robot named “buEnwa.” We have developed the pneumatic robot which embeds very small compressors and reservoir tanks, and has a multi-joint spine in which physically elastic elements such as rubber bands are attached, and the coupled-driving system of the spine and the shoulder. This paper also shows preliminary experiments of the real robot.",
"title": ""
},
{
"docid": "920748fbdcaf91346a40e3bf5ae53d42",
"text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].",
"title": ""
},
{
"docid": "c27d0db50a555d30f8e994cd72114d33",
"text": "We present a novel approach to generating photo-realistic images of a face with accurate lip sync, given an audio input. By using a recurrent neural network, we achieved mouth landmarks based on audio features. We exploited the power of conditional generative adversarial networks to produce highly-realistic face conditioned on a set of landmarks. These two networks together are capable of producing sequence of natural faces in sync with an input audio track.",
"title": ""
},
{
"docid": "4ceeffb061aed60299d4153bf48e2ad4",
"text": "Enhancing on line analytical processing through efficient cube computation plays a key role in Data Warehouse management. Hashing, grouping and mining techniques are commonly used to improve cube pre-computation. BitCube, a fast cubing method which uses bitmaps as inverted indexes for grouping, is presented. It horizontally partitions data according to the values of one dimension and for each resulting fragment it performs grouping following bottom-up criteria. BitCube allows also partial materialization based on iceberg conditions to treat large datasets for which a full cube pre-computation is too expensive. Space requirement of bitmaps is optimized by applying an adaption of the WAH compression technique. Experimental analysis, on both synthetic and real datasets, shows that BitCube outperforms previous algorithms for full cube computation and results comparable on iceberg cubing.",
"title": ""
},
{
"docid": "1cbaabb7514b7323aac7f0648dff6260",
"text": "While traditional database systems optimize for performance on one-shot query processing, emerging large-scale monitoring applications require continuous tracking of complex data-analysis queries over collections of physically distributed streams. Thus, effective solutions have to be simultaneously space/time efficient (at each remote monitor site), communication efficient (across the underlying communication network), and provide continuous, guaranteed-quality approximate query answers. In this paper, we propose novel algorithmic solutions for the problem of continuously tracking a broad class of complex aggregate queries in such a distributed-streams setting. Our tracking schemes maintain approximate query answers with provable error guarantees, while simultaneously optimizing the storage space and processing time at each remote site, and the communication cost across the network. In a nutshell, our algorithms rely on tracking general-purpose randomized sketch summaries of local streams at remote sites along with concise prediction models of local site behavior in order to produce highly communication- and space/time-efficient solutions. The end result is a powerful approximate query tracking framework that readily incorporates several complex analysis queries (including distributed join and multi-join aggregates, and approximate wavelet representations), thus giving the first known low-overhead tracking solution for such queries in the distributed-streams model. Experiments with real data validate our approach, revealing significant savings over naive solutions as well as our analytical worst-case guarantees.",
"title": ""
},
{
"docid": "27ea4d25d672b04632c53c711afe0ceb",
"text": "Many advancements have been taking place in unmanned aerial vehicle (UAV) technology lately. This is leading towards the design and development of UAVs with various sizes that possess increased on-board processing, memory, storage, and communication capabilities. Consequently, UAVs are increasingly being used in a vast amount of commercial, military, civilian, agricultural, and environmental applications. However, to take full advantages of their services, these UAVs must be able to communicate efficiently with each other using UAV-to-UAV (U2U) communication and with existing networking infrastructures using UAV-to-Infrastructure (U2I) communication. In this paper, we identify the functions, services and requirements of UAV-based communication systems. We also present networking architectures, underlying frameworks, and data traffic requirements in these systems as well as outline the various protocols and technologies that can be used at different UAV communication links and networking layers. In addition, the paper discusses middleware layer services that can be provided in order to provide seamless communication and support heterogeneous network interfaces. Furthermore, we discuss a new important area of research, which involves the use of UAVs in collecting data from wireless sensor networks (WSNs). We discuss and evaluate several approaches that can be used to collect data from different types of WSNs including topologies such as linear sensor networks (LSNs), geometric and clustered WSNs. We outline the benefits of using UAVs for this function, which include significantly decreasing sensor node energy consumption, lower interference, and offers considerably increased flexibility in controlling the density of the deployed nodes since the need for the multihop approach for sensor-tosink communication is either eliminated or significantly reduced. Consequently, UAVs can provide good connectivity to WSN clusters.",
"title": ""
},
{
"docid": "87c3c488f027ef96b1c2a096c122d1b4",
"text": "We study the label complexity of pool-based active learning in the agnostic PAC model. Specifically, we derive general bounds on the number of label requests made by the A2 algorithm proposed by Balcan, Beygelzimer & Langford (Balcan et al., 2006). This represents the first nontrivial general-purpose upper bound on label complexity in the agnostic PAC model.",
"title": ""
},
{
"docid": "1e6ea96d9aafb244955ff38423562a1c",
"text": "Many statistical methods rely on numerical optimization to estimate a model’s parameters. Unfortunately, conventional algorithms sometimes fail. Even when they do converge, there is no assurance that they have found the global, rather than a local, optimum. We test a new optimization algorithm, simulated annealing, on four econometric problems and compare it to three common conventional algorithms. Not only can simulated annealing find the global optimum, it is also less likely to fail on difficult functions because it is a very robust algorithm. The promise of simulated annealing is demonstrated on the four econometric problems.",
"title": ""
},
{
"docid": "717dd8e3c699d6cc22ba483002ab0a6f",
"text": "Our analysis of many real-world event based applications has revealed that existing Complex Event Processing technology (CEP), while effective for efficient pattern matching on event stream, is limited in its capability of reacting in realtime to opportunities and risks detected or environmental changes. We are the first to tackle this problem by providing active rule support embedded directly within the CEP engine, henceforth called Active Complex Event Processing technology, or short, Active CEP. We design the Active CEP model and associated rule language that allows rules to be triggered by CEP system state changes and correctly executed during the continuous query process. Moreover we design an Active CEP infrastructure, that integrates the active rule component into the CEP kernel, allowing finegrained and optimized rule processing. We demonstrate the power of Active CEP by applying it to the development of a collaborative project with UMass Medical School, which detects potential threads of infection and reminds healthcare workers to perform hygiene precautions in real-time. 1. BACKGROUND AND MOTIVATION Complex patterns of events often capture exceptions, threats or opportunities occurring across application space and time. Complex Event Processing (CEP) technology has thus increasingly gained popularity for efficiently detecting such event patterns in real-time. For example CEP has been employed by diverse applications ranging from healthcare systems , financial analysis , real-time business intelligence to RFID based surveillance. However, existing CEP technologies [3, 7, 2, 5], while effective for pattern matching, are limited in their capability of supporting active rules. We motivate the need for such capability based on our experience with the development of a real-world hospital infection control system, called HygieneReminder, or short HyReminder. Application: HyReminder. According to the U.S. Centers for Disease Control and Prevention [8], healthcareassociated infections hit 1.7 million people a year in the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were presented at The 36th International Conference on Very Large Data Bases, September 13-17, 2010, Singapore. Proceedings of the VLDB Endowment, Vol. 3, No. 2 Copyright 2010 VLDB Endowment 2150-8097/10/09... $ 10.00. United States, causing an estimated 99,000 deaths. HyReminder is a collaborated project between WPI and University of Massachusetts Medical School (UMMS) that uses advanced CEP technologies to solve this long-standing public health problem. HyReminder system aims to continuously track healthcare workers (HCW) for hygiene compliance (for example cleansing hands before entering a H1N1 patient’s room), and remind the HCW at the appropriate moments to perform hygiene precautions thus preventing spread of infections. CEP technologies are adopted to efficiently monitor event patterns, such as the sequence that a HCW left a patient room (this behavior is measured by a sensor reading and modeled as “exit” event), did not sanitize his hands (referred as “!sanitize”, where ! 
represents negation), and then entered another patient’s room (referred as “enter”). Such a sequence of behaviors, i.e. SEQ(exit,!sanitize,enter), would be deemed as a violation of hand hygiene regulations. Besides detecting complex events, the HyReminder system requires the ability to specify logic rules reminding HCWs to perform the respective appropriate hygiene upon detection of an imminent hand hygiene violation or an actual observed violation. A condensed version of example logic rules derived from HyReminder and modeled using CEP semantics is depicted in Figure 1. In the figure, the edge marked “Q1.1” expresses the logic that “if query Q1.1 is satisfied for a HCW, then change his hygiene status to warning and change his badge light to yellow”. This logic rule in fact specifies how the system should react to the observed change, here meaning the risk being detected by the continuous pattern matching query Q1.1, during the long running query process. The system’s streaming environment requires that such reactions be executed in a timely fashion. An additional complication arises in that the HCW status changed by this logic rule must be used as a condition by other continuous queries at run time, like Q2.1 and Q2.2. We can see that active rules and continuous queries over streaming data are tightly-coupled: continuous queries are monitoring the world while active rules are changing the world, both in real-time. Yet contrary to traditional databases, data is not persistently stored in a DSMS, but rather streamed through the system in fluctuating arrival rate. Thus processing active rules in CEP systems requires precise synchronization between queries and rules and careful consideration of latency and resource utilization. Limitations of Existing CEP Technology. In summary, the following active functionalities are needed by many event stream applications, but not supported by the existing",
"title": ""
}
] |
scidocsrr
|
67c3b0a730893d241af4c6b7a2db6a7b
|
A digital controlled PV-inverter with grid impedance estimation for ENS detection
|
[
{
"docid": "819f6b62eb3f8f9d60437af28c657935",
"text": "The global electrical energy consumption is rising and there is a steady increase of the demand on the power capacity, efficient production, distribution and utilization of energy. The traditional power systems are changing globally, a large number of dispersed generation (DG) units, including both renewable and nonrenewable energy sources such as wind turbines, photovoltaic (PV) generators, fuel cells, small hydro, wave generators, and gas/steam powered combined heat and power stations, are being integrated into power systems at the distribution level. Power electronics, the technology of efficiently processing electric power, play an essential part in the integration of the dispersed generation units for good efficiency and high performance of the power systems. This paper reviews the applications of power electronics in the integration of DG units, in particular, wind power, fuel cells and PV generators.",
"title": ""
}
] |
[
{
"docid": "4ee078123815eff49cc5d43550021261",
"text": "Generalized anxiety and major depression have become increasingly common in the United States, affecting 18.6 percent of the adult population. Mood disorders can be debilitating, and are often correlated with poor general health, life dissatisfaction, and the need for disability benefits due to inability to work. Recent evidence suggests that some mood disorders have a circadian component, and disruptions in circadian rhythms may even trigger the development of these disorders. However, the molecular mechanisms of this interaction are not well understood. Polymorphisms in a circadian clock-related gene, PER3, are associated with behavioral phenotypes (extreme diurnal preference in arousal and activity) and sleep/mood disorders, including seasonal affective disorder (SAD). Here we show that two PER3 mutations, a variable number tandem repeat (VNTR) allele and a single-nucleotide polymorphism (SNP), are associated with diurnal preference and higher Trait-Anxiety scores, supporting a role for PER3 in mood modulation. In addition, we explore a potential mechanism for how PER3 influences mood by utilizing a comprehensive circadian clock model that accurately predicts the changes in circadian period evident in knock-out phenotypes and individuals with PER3-related clock disorders.",
"title": ""
},
{
"docid": "fd2450f5b02a2599be29b90a599ad31d",
"text": "Male genital injuries, demand prompt management to prevent long-term sexual and psychological damage. Injuries to the scrotum and contents may produce impaired fertility.We report our experience in diagnosing and managing a case of a foreign body in the scrotum following a boat engine blast accident. This case report highlights the need for a good history and thorough general examination to establish the mechanism of injury in order to distinguish between an embedded penetrating projectile injury and an injury with an exit wound. Prompt surgical exploration with hematoma evacuation limits complications.",
"title": ""
},
{
"docid": "bf623afcf45d449bbfaa87c8fd41a7f6",
"text": "A noise power spectral density (PSD) estimation is an indispensable component of speech spectral enhancement systems. In this paper we present a noise PSD tracking algorithm, which employs a noise presence probability estimate delivered by a deep neural network (DNN). The algorithm provides a causal noise PSD estimate and can thus be used in speech enhancement systems for communication purposes. An extensive performance comparison has been carried out with ten causal state-of-the-art noise tracking algorithms taken from the literature and categorized acc. to applied techniques. The experiments showed that the proposed DNN-based noise PSD tracker outperforms all competing methods with respect to all tested performance measures, which include the noise tracking performance and the performance of a speech enhancement system employing the noise tracking component.",
"title": ""
},
{
"docid": "d521b14ee04dbf69656240ef47c3319c",
"text": "This paper presents a computationally efficient approach for temporal action detection in untrimmed videos that outperforms state-of-the-art methods by a large margin. We exploit the temporal structure of actions by modeling an action as a sequence of sub-actions. A novel and fully automatic sub-action discovery algorithm is proposed, where the number of sub-actions for each action as well as their types are automatically determined from the training videos. We find that the discovered sub-actions are semantically meaningful. To localize an action, an objective function combining appearance, duration and temporal structure of sub-actions is optimized as a shortest path problem in a network flow formulation. A significant benefit of the proposed approach is that it enables real-time action localization (40 fps) in untrimmed videos. We demonstrate state-of-the-art results on THUMOS’14 and MEXaction2 datasets.",
"title": ""
},
{
"docid": "ab1b9b18163d3e732a2f8fc8b4e04ab1",
"text": "We measure the knowledge flows between countries by analysing publication and citation data, arguing that not all citations are equally important. Therefore, in contrast to existing techniques that utilize absolute citation counts to quantify knowledge flows between different entities, our model employs a citation context analysis technique, using a machine-learning approach to distinguish between important and non-important citations. We use 14 novel features (including context-based, cue words-based and text-based) to train a Support Vector Machine (SVM) and Random Forest classifier on an annotated dataset of 20,527 publications downloaded from the Association for Computational Linguistics anthology (http://allenai.org/data.html). Our machine-learning models outperform existing state-of-the-art citation context approaches, with the SVM model reaching up to 61% and the Random Forest model up to a very encouraging 90% Precision–Recall Area Under the Curve, with 10-fold cross-validation. Finally, we present a case study to explain our deployed method for datasets of PLoS ONE full-text publications in the field of Computer and Information Sciences. Our results show that a significant volume of knowledge flows from the United States, based on important citations, are consumed by the international scientific community. Of the total knowledge flow from China, we find a relatively smaller proportion (only 4.11%) falling into the category of knowledge flow based on important citations, while The Netherlands and Germany show the highest proportions of knowledge flows based on important citations, at 9.06 and 7.35% respectively. Among the institutions, interestingly, the findings show that at the University of Malaya more than 10% of the knowledge produced falls into the category of important. We believe that such analyses are helpful to understand the dynamics of the relevant knowledge flows across nations and institutions.",
"title": ""
},
{
"docid": "7cc3da275067df8f6c017da37025856c",
"text": "A simple, green method is described for the synthesis of Gold (Au) and Silver (Ag) nanoparticles (NPs) from the stem extract of Breynia rhamnoides. Unlike other biological methods for NP synthesis, the uniqueness of our method lies in its fast synthesis rates (~7 min for AuNPs) and the ability to tune the nanoparticle size (and subsequently their catalytic activity) via the extract concentration used in the experiment. The phenolic glycosides and reducing sugars present in the extract are largely responsible for the rapid reduction rates of Au(3+) ions to AuNPs. Efficient reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of AuNPs (or AgNPs) and NaBH(4) was observed and was found to depend upon the nanoparticle size or the stem extract concentration used for synthesis.",
"title": ""
},
{
"docid": "4ecf150613d45ae0f92485b8faa0deef",
"text": "Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR.",
"title": ""
},
{
"docid": "fe8386f75bb68d7cde398aab59cfb543",
"text": "Nutrition educators research, teach, and conduct outreach within the field of community food security (CFS), yet no clear consensus exists concerning what the field encompasses. Nutrition education needs to be integrated into the CFS movement for the fundamental reason that optimal health, well-being, and sustainability are at the core of both nutrition education and CFS. Establishing commonalities at the intersection of academic research, public policy development, and distinctive nongovernmental organizations expands opportunities for professional participation. Entry points for nutrition educators' participation are provided, including efforts dedicated to education, research, policy, programs and projects, and human rights.",
"title": ""
},
{
"docid": "556c0c1662a64f484aff9d7556b2d0b5",
"text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "5dde43ab080f516c0b485fcd951bf9e1",
"text": "Differential privacy is a framework to quantify to what extent individual privacy in a statistical database is preserved while releasing useful aggregate information about the database. In this paper, within the classes of mechanisms oblivious of the database and the queriesqueries beyond the global sensitivity, we characterize the fundamental tradeoff between privacy and utility in differential privacy, and derive the optimal ϵ-differentially private mechanism for a single realvalued query function under a very general utility-maximization (or cost-minimization) framework. The class of noise probability distributions in the optimal mechanism has staircase-shaped probability density functions which are symmetric (around the origin), monotonically decreasing and geometrically decaying. The staircase mechanism can be viewed as a geometric mixture of uniform probability distributions, providing a simple algorithmic description for the mechanism. Furthermore, the staircase mechanism naturally generalizes to discrete query output settings as well as more abstract settings. We explicitly derive the parameter of the optimal staircase mechanism for ℓ<sup>1</sup> and ℓ<sup>2</sup> cost functions. Comparing the optimal performances with those of the usual Laplacian mechanism, we show that in the high privacy regime (ϵ is small), the Laplacian mechanism is asymptotically optimal as ϵ → 0; in the low privacy regime (ϵ is large), the minimum magnitude and second moment of noise are Θ(Δe<sup>(-ϵ/2)</sup>) and Θ(Δ<sup>2</sup>e<sup>(-2ϵ/3)</sup>) as ϵ → +∞, respectively, while the corresponding figures when using the Laplacian mechanism are Δ/ϵ and 2Δ<sup>2</sup>/ϵ<sup>2</sup>, where Δ is the sensitivity of the query function. We conclude that the gains of the staircase mechanism are more pronounced in the moderate-low privacy regime.",
"title": ""
},
{
"docid": "37e936c375d34f356e195f844125ae84",
"text": "LEARNING OBJECTIVES\nThe reader is presumed to have a basic understanding of facial anatomy and facial rejuvenation procedures. After reading this article, the reader should also be able to: 1. Identify the essential anatomy of the face as it relates to facelift surgery. 2. Describe the common types of facelift procedures, including their strengths and weaknesses. 3. Apply appropriate preoperative and postoperative management for facelift patients. 4. Describe common adjunctive procedures. Physicians may earn 1.0 AMA PRA Category 1 Credit by successfully completing the examination based on material covered in this article. This activity should take one hour to complete. The examination begins on page 464. As a measure of the success of the education we hope you will receive from this article, we encourage you to log on to the Aesthetic Society website and take the preexamination before reading this article. Once you have completed the article, you may then take the examination again for CME credit. The Aesthetic Society will be able to compare your answers and use these data for future reference as we attempt to continually improve the CME articles we offer. ASAPS members can complete this CME examination online by logging on to the ASAPS members-only website (http://www.surgery.org/members) and clicking on \"Clinical Education\" in the menu bar. Modern aesthetic surgery of the face began in the first part of the 20th century in the United States and Europe. Initial limited excisions gradually progressed to skin undermining and eventually to a variety of methods for contouring the subcutaneous facial tissue. This particular review focuses on the cheek and neck. While the lid-cheek junction, eyelids, and brow must also be considered to obtain a harmonious appearance, those elements are outside the scope of this article. Overall patient management, including patient selection, preoperative preparation, postoperative care, and potential complications are discussed.",
"title": ""
},
{
"docid": "47b9da2d6f741419536879da699f7456",
"text": "We consider the problem of scientific literature search, and we suggest that citation relations between publications can be very helpful in the systematic retrieval of scientific literature. We introduce a new software tool called CitNetExplorer that can be used for citation-based scientific literature retrieval. To demonstrate the use of CitNetExplorer, we employ the tool to identify publications dealing with the topic of community detection in networks. Citationbased scientific literature retrieval can be especially helpful in situations in which one needs to obtain a comprehensive overview of the literature on a certain research topic, for instance in the preparation of a review article.",
"title": ""
},
{
"docid": "08f766ca84fc4cb70b0fc288e2f12a5a",
"text": "The authors present a unified account of 2 neural systems concerned with the development and expression of adaptive behaviors: a mesencephalic dopamine system for reinforcement learning and a \"generic\" error-processing system associated with the anterior cingulate cortex. The existence of the error-processing system has been inferred from the error-related negativity (ERN), a component of the event-related brain potential elicited when human participants commit errors in reaction-time tasks. The authors propose that the ERN is generated when a negative reinforcement learning signal is conveyed to the anterior cingulate cortex via the mesencephalic dopamine system and that this signal is used by the anterior cingulate cortex to modify performance on the task at hand. They provide support for this proposal using both computational modeling and psychophysiological experimentation.",
"title": ""
},
{
"docid": "3a6197322da0e5fe2c2d98a8fcba7a42",
"text": "The amygdala and hippocampal complex, two medial temporal lobe structures, are linked to two independent memory systems, each with unique characteristic functions. In emotional situations, these two systems interact in subtle but important ways. Specifically, the amygdala can modulate both the encoding and the storage of hippocampal-dependent memories. The hippocampal complex, by forming episodic representations of the emotional significance and interpretation of events, can influence the amygdala response when emotional stimuli are encountered. Although these are independent memory systems, they act in concert when emotion meets memory.",
"title": ""
},
{
"docid": "7846c66aa411507d44ff935607cdb3ab",
"text": "The orphan, membrane-bound estrogen receptor (GPER) is expressed at high levels in a large fraction of breast cancer patients and its expression is favorable for patients’ survival. We investigated the role of GPER as a potential tumor suppressor in triple-negative breast cancer cells MDA-MB-231 and MDA-MB-468 using cell cycle analysis and apoptosis assay. The constitutive activity of GPER was investigated. GPER-specific activation with G-1 agonist inhibited breast cancer cell growth in concentration-dependent manner via induction of the cell cycle arrest in G2/M phase, enhanced phosphorylation of histone H3 and caspase-3-mediated apoptosis. Analysis of the methylation status of the GPER promoter in the triple-negative breast cancer cells and in tissues derived from breast cancer patients revealed that GPER amount is regulated by epigenetic mechanisms and GPER expression is inactivated by promoter methylation. Furthermore, GPER expression was induced by stress factors, such as radiation, and GPER amount inversely correlated with the p53 expression level. Overall, our results establish the protective role in breast cancer tumorigenesis, and the cell surface expression of GPER makes it an excellent potential therapeutic target for triple-negative breast cancer.",
"title": ""
},
{
"docid": "83e897a37aca4c349b4a910c9c0787f4",
"text": "Computational imaging methods that can exploit multiple modalities have the potential to enhance the capabilities of traditional sensing systems. In this paper, we propose a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities. Our method combines a convolutional group-sparse representation of images with total variation (TV) regularization for high-quality multimodal imaging. We develop an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications. We illustrate the benefit of our approach in the context of joint intensity-depth imaging.",
"title": ""
},
{
"docid": "462a0746875e35116f669b16d851f360",
"text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.",
"title": ""
},
{
"docid": "5431514a65d66d40e55b87a5d326d3b5",
"text": "The authors describe a theoretical framework for understanding when people interacting with a member of a stereotyped group activate that group's stereotype and apply it to that person. It is proposed that both stereotype activation and stereotype application during interaction depend on the strength of comprehension and self-enhancement goals that can be satisfied by stereotyping one's interaction partner and on the strength of one's motivation to avoid prejudice. The authors explain how these goals can promote and inhibit stereotype activation and application, and describe diverse chronic and situational factors that can influence the intensity of these goals during interaction and, thereby, influence stereotype activation and application. This approach permits integration of a broad range of findings on stereotype activation and application.",
"title": ""
},
{
"docid": "49f955fb928955da09a3bfe08efe78bc",
"text": "A novel macro model approach for modeling ESD MOS snapback is introduced. The macro model consists of standard components only. It includes a MOS transistor modeled by BSIM3v3, a bipolar transistor modeled by VBIC, and a resistor for substrate resistance. No external current source, which is essential in most publicly reported macro models, is included since both BSIM3vs and VBIC have formulations built in to model the relevant effects. The simplicity of the presented macro model makes behavior languages, such as Verilog-A, and special ESD equations not necessary in model implementation. This offers advantages of high simulation speed, wider availability, and less convergence issues. Measurement and simulation of the new approach indicates that good silicon correlation can be achieved.",
"title": ""
}
] |
scidocsrr
|
73727bfdc3a4f91da437d5d34374e46b
|
KINEMATIC MODEL AND CONTROL OF MOBILE ROBOT FOR TRAJECTORY TRACKING
|
[
{
"docid": "8f516d9cdfc5a765d3618c18e6f8158f",
"text": "The precise control of mobile robot is an important issue in robotics field. In this paper, the motion model of mobile robot is established by mechanism analysis. Then, a fuzzy PID controller is designed for trajectory tracking of mobile robot. The controller consists of a PID controller and a fuzzy inference unit with two inputs and three outputs to tune the parameters of PID controller according to the error and error rate. Finally, the model of a four-wheel mobile robot, fuzzy PID and traditional PID controller are all simulated in Simulink. Simulation experiments are reached in different conditions. The result shows that the mobile robot with the fuzzy PID controller can track the desired trail by about 3 seconds in advance and the overshoot of the system will decrease by 40 percent, comparing to the mobile robot with the traditional PID controller. The advantages of fuzzy PID controller for trajectory tracking control of mobile robot are major in its rapidity, stability, anti-interference and tracking precision.",
"title": ""
}
] |
[
{
"docid": "e84e27610c27b5880977aca20d04dba3",
"text": "Automatic bug fixing has become a promising direction for reducing manual effort in debugging. However, general approaches to automatic bug fixing may face some fundamental difficulties. In this paper, we argue that automatic fixing of specific types of bugs can be a useful complement.\n This paper reports our first attempt towards automatically fixing memory leaks in C programs. Our approach generates only safe fixes, which are guaranteed not to interrupt normal execution of the program. To design such an approach, we have to deal with several challenging problems such as inter-procedural leaks, global variables, loops, and leaks from multiple allocations. We propose solutions to all the problems and integrate the solutions into a coherent approach.\n We implemented our inter-procedural memory leak fixing into a tool named LeakFix and evaluated LeakFix on 15 programs with 522k lines of code. Our evaluation shows that LeakFix is able to successfully fix a substantial number of memory leaks, and LeakFix is scalable for large applications.",
"title": ""
},
{
"docid": "d6441d868b19d397740ef87ff700b3e9",
"text": "Distant supervised relation extraction is an efficient approach to scale relation extraction to very large corpora, and has been widely used to find novel relational facts from plain text. Recent studies on neural relation extraction have shown great progress on this task via modeling the sentences in low-dimensional spaces, but seldom considered syntax information to model the entities. In this paper, we propose to learn syntax-aware entity embedding for neural relation extraction. First, we encode the context of entities on a dependency tree as sentencelevel entity embedding based on tree-GRU. Then, we utilize both intra-sentence and inter-sentence attentions to obtain sentence set-level entity embedding over all sentences containing the focus entity pair. Finally, we combine both sentence embedding and entity embedding for relation classification. We conduct experiments on a widely used real-world dataset and the experimental results show that our model can make full use of all informative instances and achieve state-of-the-art performance of relation extraction.",
"title": ""
},
{
"docid": "f896ba5c4009f83cccff857af6d9ef0d",
"text": "Based on the frameworks of dual-process theories, we examined the interplay between intuitive and controlled cognitive processes related to moral and social judgments. In a virtual reality (VR) setting we performed an experiment investigating the progression from fast, automatic decisions towards more controlled decisions over multiple trials in the context of a sacrificing scenario. We repeatedly exposed participants to a modified ten-to-one version and to three one-to-one versions of the trolley dilemma in VR and varied avatar properties, such as their gender and ethnicity, and their orientation in space. We also investigated the influence of arousing music on decisions. Our experiment replicated the behavioral pattern observed in studies using text versions of the trolley dilemma, thereby validating the use of virtual environments in research on moral judgments. Additionally, we found a general tendency towards sacrificing male individuals which correlated with socially desirable responding. As indicated by differences in response times, the ten-to-one version of the trolley dilemma seems to be faster to decide than decisions requiring comparisons based on specific avatar properties as a result of differing moral content. Building upon research on music-based emotion induction, we used music to induce emotional arousal on a physiological level as measured by pupil diameter. We found a specific temporal signature displaying a peak in arousal around the moment of decision. This signature occurs independently of the overall arousal level. Furthermore, we found context-dependent gaze durations during sacrificing decisions, leading participants to look prolonged at their victim if they had to choose between avatars differing in gender. Our study confirmed that moral decisions can be explained within the framework of dual-process theories and shows that pupillometric measurements are a promising tool for investigating affective responses in dilemma situations.",
"title": ""
},
{
"docid": "5d7d7a49b254e08c95e40a3bed0aa10e",
"text": "Five mentally handicapped individuals living in a home for disabled persons in Southern Germany were seen in our outpatient department with pruritic, red papules predominantly located in groups on the upper extremities, neck, upper trunk and face. Over several weeks 40 inhabitants and 5 caretakers were affected by the same rash. Inspection of their home and the sheds nearby disclosed infestation with rat populations and mites. Finally the diagnosis of tropical rat mite dermatitis was made by the identification of the arthropod Ornithonyssus bacoti or so-called tropical rat mite. The patients were treated with topical corticosteroids and antihistamines. After elimination of the rats and disinfection of the rooms by a professional exterminator no new cases of rat mite dermatitis occurred. The tropical rat mite is an external parasite occurring on rats, mice, gerbils, hamsters and various other small mammals. When the principal animal host is not available, human beings can become the victim of mite infestation.",
"title": ""
},
{
"docid": "451c3c374412c3e3006aff6d5ec5f4e7",
"text": "Internet users today prefer getting precise answer to their questions rather than sifting through a bunch of relevant documents provided by search engines. This has led to the huge popularity of Community Question Answering (cQA) services like Yahoo! Answers, Baidu Zhidao, Quora, StackOverflow etc., where forum users respond directly to questions with short targeted answers. These forums provide a platform for interaction with experts and serve as popular and effective means of information seeking on the Web. Anyone can obtain answers to their questions by posting them for other participants on these sites. Community can also decide the quality of answers for a question. Over time, such cQA archives have become rich repositories of knowledge encoded in the form of questions and user generated answers. However, not all questions get immediate answers from other users. If a question is not interesting enough for community or if similar question is already answered by some other user, it may suffer from “starvation”. Such questions may take hours and sometimes days to get satisfactory answers. This delay in response can be avoided by searching similar questions in the very large archives of previously asked questions. If a similar question is found, then the corresponding best answer can be provided without any delay. The main challenge while retrieving similar questions is the “lexico-syntactic gap” between the user query and the questions already present in the forum. The aim is to detect question pairs that differ from each other lexically and syntactically but expresses the same meaning. In this thesis, we propose two novel approaches to bridge the lexico-syntactic gap between the question posed by the user and forum questions. In the first approach, we design “Deep Structured Topic Model (DSTM)” which retrieves similar questions that lie in the vicinity of the latent topic vector space of the query and the archived question-answer pairs. The retrieved topically similar questions are reranked using a deep semantic model. In the second approach, we explore the behaviour of deep semantic models with “parameter-sharing” between the parallel networks which help us to design “Siamese Convolutional Neural Network for cQA (SCQA)”. It consists of twin convolutional neural networks with shared parameters and a contrastive loss function joining them. It learns the similarity metric for question-question pairs by leveraging the question-answer pairs available in cQA forum archives. The model projects semantically similar question pairs nearer to each other and dissimilar question pairs farther away from each other in the semantic space. Several models have been built in the past to bridge the lexico-syntactic gap in the cQA content. However, considering the ever growing nature of the data in cQA forums, these models cannot be kept",
"title": ""
},
{
"docid": "93b3c8cd0a1c5f1d0112115e1c556b46",
"text": "Graph processing is important for a growing range of applications. Current performance studies of parallel graph computation employ a large variety of algorithms and graphs. To explore their robustness, we characterize behavior variation across algorithms and graph structures at different scales. Our results show that graph computation behaviors, with up to 1000-fold variation, form a very broad space. Any inefficient exploration of this space may lead to narrow understanding and ad-hoc studies. Hence, we consider constructing an ensemble of graph computations, or graph-algorithm pairs, to most effectively explore this graph computation behavior space. We study different ensembles of parallel graph computations, and define two metrics to quantify how efficiently and completely an ensemble explores the space. Our results show that: (1) experiments limited to a single algorithm or a single graph may unfairly characterize a graph-processing system, (2) benchmarks exploring both algorithm and graph diversity can significantly improve the quality (30% more complete and 200% more efficient), but must be carefully chosen, (3) some algorithms are more useful than others in benchmarking, and (4) we can reduce the complexity (number of algorithms, graphs, runtime) while conserving the benchmarking quality.",
"title": ""
},
{
"docid": "c2fffaf7705ec5d87ca6cfffb24b1371",
"text": "Francisella tularensis is a highly infectious bacterium whose virulence relies on its ability to rapidly reach the macrophage cytosol and extensively replicate in this compartment. We previously identified a novel Francisella virulence factor, DipA (FTT0369c), which is required for intramacrophage proliferation and survival, and virulence in mice. DipA is a 353 amino acid protein with a Sec-dependent signal peptide, four Sel1-like repeats (SLR), and a C-terminal coiled-coil (CC) domain. Here, we determined through biochemical and localization studies that DipA is a membrane-associated protein exposed on the surface of the prototypical F. tularensis subsp. tularensis strain SchuS4 during macrophage infection. Deletion and substitution mutagenesis showed that the CC domain, but not the SLR motifs, of DipA is required for surface exposure on SchuS4. Complementation of the dipA mutant with either DipA CC or SLR domain mutants did not restore intracellular growth of Francisella, indicating that proper localization and the SLR domains are required for DipA function. Co-immunoprecipitation studies revealed interactions with the Francisella outer membrane protein FopA, suggesting that DipA is part of a membrane-associated complex. Altogether, our findings indicate that DipA is positioned at the host-pathogen interface to influence the intracellular fate of this pathogen.",
"title": ""
},
{
"docid": "9787ae39c27f9cfad2dbd29779bb5f36",
"text": "Compressive sensing (CS) techniques offer a framework for the detection and allocation of sparse signals with a reduced number of samples. Today, modern radar systems operate with high bandwidths—demanding high sample rates according to the Shannon–Nyquist theorem—and a huge number of single elements for phased array consumption and costs of radar systems. There is only a small number of publications addressing the application of CS to radar, leaving several open questions. This paper addresses some aspects as a further step to CS-radar by presenting generic system architectures and implementation considerations. It is not the aim of this paper to investigate numerically efficient algorithms but to point to promising applications as well as arising problems. Three possible applications are considered: pulse compression, radar imaging, and air space surveillance with array antennas. Some simulation results are presented and enriched by the evaluation of real data acquired by an experimental radar system of Fraunhofer FHR. & 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bac9584a31e42129fb7a5fe2640f5725",
"text": "During the last few years, continuous progresses in wireless communications have opened new research fields in computer networking, aimed at extending data networks connectivity to environments where wired solutions are impracticable. Among these, vehicular communication is attracting growing attention from both academia and industry, owing to the amount and importance of the related applications, ranging from road safety to traffic control and up to mobile entertainment. Vehicular Ad-hoc Networks (VANETs) are self-organized networks built up from moving vehicles, and are part of the broader class of Mobile Ad-hoc Networks (MANETs). Owing to their peculiar characteristics, VANETs require the definition of specific networking techniques, whose feasibility and performance are usually tested by means of simulation. One of the main challenges posed by VANETs simulations is the faithful characterization of vehicular mobility at both the macroscopic and microscopic levels, leading to realistic non-uniform distributions of cars and velocity, and unique connectivity dynamics. However, freely distributed tools which are commonly used for academic studies only consider limited vehicular mobility issues, while they pay little or no attention to vehicular traffic generation and its interaction with its motion constraints counterpart. Such a simplistic approach can easily raise doubts on the confidence of derived VANETs simulation results. In this paper we present VanetMobiSim, a freely available generator of realistic vehicular movement traces for networks simulators. The traces generated by VanetMobiSim are validated first by illustrating how the interaction between featured motion constraints and traffic generator models is able to reproduce typical phenomena of vehicular traffic. Then, the traces are formally validated against those obtained by TSIS-CORSIM, a benchmark traffic simulator in transportation research. This makes VanetMobiSim one of the few vehicular mobility simulator fully validated and freely available to the vehicular networks research community.",
"title": ""
},
{
"docid": "c624b1ab8127ea8cafd217c9c0387a46",
"text": "A long-standing obstacle to progress in deep learning is the problem of vanishing and exploding gradients. Although, the problem has largely been overcome via carefully constructed initializations and batch normalization, architectures incorporating skip-connections such as highway and resnets perform much better than standard feedforward architectures despite wellchosen initialization and batch normalization. In this paper, we identify the shattered gradients problem. Specifically, we show that the correlation between gradients in standard feedforward networks decays exponentially with depth resulting in gradients that resemble white noise whereas, in contrast, the gradients in architectures with skip-connections are far more resistant to shattering, decaying sublinearly. Detailed empirical evidence is presented in support of the analysis, on both fully-connected networks and convnets. Finally, we present a new “looks linear” (LL) initialization that prevents shattering, with preliminary experiments showing the new initialization allows to train very deep networks without the addition of skip-connections.",
"title": ""
},
{
"docid": "52ea84f94abeb32cfa458ed17fb5b487",
"text": "Today . . . new transportation and communications technologies allow even the smallest firms to build partnerships with foreign producers to tap overseas expertise, cost-savings, and markets . . . The scarce resource in this new environment is the ability to locate foreign partners quickly and to manage complex business relationships across cultural and linguistic boundaries . . . [T]he Chinese and Indian entrepreneurs of Silicon Valley . . . are creating social structures that enable even the smallest producers to locate and maintain mutually beneficial collaborations across long distances. [AnnaLee Saxenian 1999, pp. 54–55]",
"title": ""
},
{
"docid": "01f9b07bc5c6ca47a6181deb908445e8",
"text": "This paper deals with deep neural networks for predicting accurate dense disparity map with Semi-global matching (SGM). SGM is a widely used regularization method for real scenes because of its high accuracy and fast computation speed. Even though SGM can obtain accurate results, tuning of SGMs penalty-parameters, which control a smoothness and discontinuity of a disparity map, is uneasy and empirical methods have been proposed. We propose a learning based penalties estimation method, which we call SGM-Nets that consist of Convolutional Neural Networks. A small image patch and its position are input into SGMNets to predict the penalties for the 3D object structures. In order to train the networks, we introduce a novel loss function which is able to use sparsely annotated disparity maps such as captured by a LiDAR sensor in real environments. Moreover, we propose a novel SGM parameterization, which deploys different penalties depending on either positive or negative disparity changes in order to represent the object structures more discriminatively. Our SGM-Nets outperformed state of the art accuracy on KITTI benchmark datasets.",
"title": ""
},
{
"docid": "5a03ecfcebb6fd339a8288be2adaf19c",
"text": "A resonant piezoelectric scanner is developed for high-resolution laser-scanning displays. A novel actuation scheme combines the principle of mechanical amplification with lead zirconate titanate (PZT) thin-film actuation. Sinusoidal actuation with 24 V at the mechanical resonance frequency of 40 kHz provides an optical scan angle of 38.5° for the 1.4-mm-wide mirror. This scanner is a significant step toward achieving full-high-definition resolution (1920 × 1080 pixels) in mobile laser projectors without the use of vacuum packaging. The reported piezoscanner requires no bulky components and consumes <; 30-mW power at maximum deflection, thus providing significant power and size advantages, compared with reported electromagnetic and electrostatic scanners. Interferometry measurements show that the dynamic deformation is at acceptable levels for a large fraction of the mirror and can be improved further for diffraction-limited performance at full resolution. A design variation with a segmented electrode pair illustrated that reliable angle sensing can be achieved with PZT for closed-loop control of the scanner.",
"title": ""
},
{
"docid": "9d555906ea3ea9fb3a03c735db62e3b2",
"text": "\"Electronic-sport\" (E-Sport) is now established as a new entertainment genre. More and more players enjoy streaming their games, which attract even more viewers. In fact, in a recent social study, casual players were found to prefer watching professional gamers rather than playing the game themselves. Within this context, advertising provides a significant source of revenue to the professional players, the casters (displaying other people's games) and the game streaming platforms. For this paper, we crawled, during more than 100 days, the most popular among such specialized platforms: Twitch.tv. Thanks to these gigabytes of data, we propose a first characterization of a new Web community, and we show, among other results, that the number of viewers of a streaming session evolves in a predictable way, that audience peaks of a game are explainable and that a Condorcet method can be used to sensibly rank the streamers by popularity. Last but not least, we hope that this paper will bring to light the study of E-Sport and its growing community. They indeed deserve the attention of industrial partners (for the large amount of money involved) and researchers (for interesting problems in social network dynamics, personalized recommendation, sentiment analysis, etc.).",
"title": ""
},
{
"docid": "b8700283c7fb65ba2e814adffdbd84f8",
"text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006. Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.",
"title": ""
},
{
"docid": "1c4fc130a8db023e8933608cde19e3ef",
"text": "Battelle has been actively exploring emerging quantum key distribution (QKD) cryptographic technologies for secure communication of information with a goal of expanding the use of this technology by commercial enterprises in the United States. In QKD systems, the principles of quantum physics are applied to generate a secret data encryption key, which is distributed between two users. The security of this key is guaranteed by the laws of quantum physics, and this distributed key can be used to encrypt data to enable secure communication on insecure channels. To date, Battelle has studied commercially available and custom-built QKD systems in controlled laboratory environments and is actively working to establish a QKD Test Bed network to characterize performance in real world metropolitan (10-100 km) and long distance (>; 100 km) environments. All QKD systems that we have tested to date utilize a discrete variable (DV) binary approach. In this approach, discrete information is encoded onto a quantum state of a single photon, and binary data are measured using single photon detectors. Recently, continuous variable (CV) QKD systems have been developed and are expected to be commercially available shortly. In CV-QKD systems, randomly generated continuous variables are encoded on coherent states of weak pulses of light, and continuous data values are measured with homodyne detection methods. In certain applications for cyber security, the CV-QKD systems may offer advantages over traditional DV-QKD systems, such as a higher secret key exchange rate for short distances, lower cost, and compatibility with telecommunication technologies. In this paper, current CV- and DV-QKD approaches are described, and security issues and technical challenges fielding these quantum-based systems are discussed. Experimental and theoretical data that have been published on quantum key exchange rates and distances that are relevant to metropolitan and long distance network applications are presented. From an analysis of these data, the relative performance of the two approaches is compared as a function of distance and environment (free space and optical fiber). Additionally, current research activities are described for both technologies, which include network integration and methods to increase secret key distribution rates and distances.",
"title": ""
},
{
"docid": "0d8b2997f10319da3d59ec35731c8e85",
"text": "In this paper, we study the performance of the IEEE 802.11 MAC protocol under a range of jammers that covers both channel-oblivious and channel-aware jamming. We study two channel-oblivious jammers: a periodic jammer that jams deterministically at a specified rate, and a memoryless jammer whose signals arrive according to a Poisson process. We also develop new models for channel-aware jamming, including a reactive jammer that only jams non-colliding transmissions and an omniscient jammer that optimally adjusts its strategy according to current states of the participating nodes. Our study comprises of a theoretical analysis of the saturation throughput of 802.11 under jamming, an extensive simulation study, and a testbed to conduct real world experimentation of jamming IEEE 802.11 using GNU Radio and USRP platform. In our theoretical analysis, we use a discrete-time Markov chain analysis to derive formulae for the saturation throughput of IEEE 802.11 under memoryless, reactive and omniscient jamming. One of our key results is a characterization of optimal omniscient jamming that establishes a lower bound on the saturation throughput of 802.11 under arbitrary jammer attacks. We validate the theoretical analysis by means of Qualnet simulations. Finally, we measure the real-world performance of periodic and memoryless jammers using our GNU radio jammer prototype.",
"title": ""
},
{
"docid": "887af59f0fab9aac9bb6104b3da9c5b3",
"text": "Over past decades, grounded theory is increasingly popular in a broad range of research primarily in educational research. The current paper aims to provide useful information for the new-comers and fit them well in grounded theory research. This paper starts with definitions, origin and applications of grounded theory, followed by types of grounded theory research designs and the key characteristics of grounded theory. Other aspects covered include data collection and data analysis, general steps, and ethical issues in grounded theory. Discussions on the strengths and limitations of grounded theory, as well as evaluation aspects, are found in the last part of this paper.",
"title": ""
},
{
"docid": "4f2fa6ee3a5e7a4b9a7472993b992439",
"text": "PURPOSE\nThe purpose of this research was to develop and evaluate a severity rating score for fecal incontinence, the Fecal Incontinence Severity Index.\n\n\nMETHODS\nThe Fecal Incontinence Severity Index is based on a type x frequency matrix. The matrix includes four types of leakage commonly found in the fecal incontinent population: gas, mucus, and liquid and solid stool and five frequencies: one to three times per month, once per week, twice per week, once per day, and twice per day. The Fecal Incontinence Severity Index was developed using both colon and rectal surgeons and patient input for the specification of the weighting scores.\n\n\nRESULTS\nSurgeons and patients had very similar weightings for each of the type x frequency combinations; significant differences occurred for only 3 of the 20 different weights. The Fecal Incontinence Severity Index score of a group of patients with fecal incontinence (N = 118) demonstrated significant correlations with three of the four scales found in a fecal incontinence quality-of-life scale.\n\n\nCONCLUSIONS\nEvaluation of the Fecal Incontinence Severity Index indicates that the index is a tool that can be used to assess severity of fecal incontinence. Overall, patient and surgeon ratings of severity are similar, with minor differences associated with the accidental loss of solid stool.",
"title": ""
},
{
"docid": "db9f6e58adc2a3ce423eed3223d88b19",
"text": "The self-organizing map (SOM) is an excellent tool in exploratory phase of data mining. It projects input space on prototypes of a low-dimensional regular grid that can be effectively utilized to visualize and explore properties of the data. When the number of SOM units is large, to facilitate quantitative analysis of the map and the data, similar units need to be grouped, i.e., clustered. In this paper, different approaches to clustering of the SOM are considered. In particular, the use of hierarchical agglomerative clustering and partitive clustering using k-means are investigated. The two-stage procedure--first using SOM to produce the prototypes that are then clustered in the second stage--is found to perform well when compared with direct clustering of the data and to reduce the computation time.",
"title": ""
}
] |
scidocsrr
|
17a6c77c9c98ac4baca278b03b0b58c0
|
URLNet: Learning a URL Representation with Deep Learning for Malicious URL Detection
|
[
{
"docid": "2af711baba40a79b259c8d9c1f14518c",
"text": "Twitter can suffer from malicious tweets containing suspicious URLs for spam, phishing, and malware distribution. Previous Twitter spam detection schemes have used account features such as the ratio of tweets containing URLs and the account creation date, or relation features in the Twitter graph. Malicious users, however, can easily fabricate account features. Moreover, extracting relation features from the Twitter graph is time and resource consuming. Previous suspicious URL detection schemes have classified URLs using several features including lexical features of URLs, URL redirection, HTML content, and dynamic behavior. However, evading techniques exist, such as time-based evasion and crawler evasion. In this paper, we propose WARNINGBIRD, a suspicious URL detection system for Twitter. Instead of focusing on the landing pages of individual URLs in each tweet, we consider correlated redirect chains of URLs in a number of tweets. Because attackers have limited resources and thus have to reuse them, a portion of their redirect chains will be shared. We focus on these shared resources to detect suspicious URLs. We have collected a large number of tweets from the Twitter public timeline and trained a statistical classifier with features derived from correlated URLs and tweet context information. Our classifier has high accuracy and low false-positive and falsenegative rates. We also present WARNINGBIRD as a realtime system for classifying suspicious URLs in the Twitter stream. ∗This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-C1090-1131-0009) and World Class University program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea(R31-10100).",
"title": ""
},
{
"docid": "da7d45d2cbac784d31e4d3957f4799e6",
"text": "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique by querying an extremely small-sized labeled data (about 0.5% out of 1-million instances) can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.",
"title": ""
}
] |
[
{
"docid": "ab5f79671bcd56a733236b089bd5e955",
"text": "Conversational modeling is an important task in natural language processing as well as machine learning. Like most important tasks, it’s not easy. Previously, conversational models have been focused on specific domains, such as booking hotels or recommending restaurants. They were built using hand-crafted rules, like ChatScript [11], a popular rule-based conversational model. In 2014, the sequence to sequence model being used for translation opened the possibility of phrasing dialogues as a translation problem: translating from an utterance to its response. The systems built using this principle, while conversing fairly fluently, aren’t very convincing because of their lack of personality and inconsistent persona [10] [5]. In this paper, we experiment building open-domain response generator with personality and identity. We built chatbots that imitate characters in popular TV shows: Barney from How I Met Your Mother, Sheldon from The Big Bang Theory, Michael from The Office, and Joey from Friends. A successful model of this kind can have a lot of applications, such as allowing people to speak with their favorite celebrities, creating more life-like AI assistants, or creating virtual alter-egos of ourselves. The model was trained end-to-end without any hand-crafted rules. The bots talk reasonably fluently, have distinct personalities, and seem to have learned certain aspects of their identity. The results of standard automated translation model evaluations yielded very low scores. However, we designed an evaluation metric with a human judgment element, for which the chatbots performed well. We are able to show that for a bot’s response, a human is more than 50% likely to believe that the response actually came from the real character. Keywords—Seq2seq, attentional mechanism, chatbot, dialogue system.",
"title": ""
},
{
"docid": "d7310e830f85541aa1d4b94606c1be0c",
"text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.",
"title": ""
},
{
"docid": "7f76401d9d635460bde256bbd6c8e84e",
"text": "This article presents a review of the methods used in recognition and analysis of the human gait from three different approaches: image processing, floor sensors and sensors placed on the body. Progress in new technologies has led the development of a series of devices and techniques which allow for objective evaluation, making measurements more efficient and effective and providing specialists with reliable information. Firstly, an introduction of the key gait parameters and semi-subjective methods is presented. Secondly, technologies and studies on the different objective methods are reviewed. Finally, based on the latest research, the characteristics of each method are discussed. 40% of the reviewed articles published in late 2012 and 2013 were related to non-wearable systems, 37.5% presented inertial sensor-based systems, and the remaining 22.5% corresponded to other wearable systems. An increasing number of research works demonstrate that various parameters such as precision, conformability, usability or transportability have indicated that the portable systems based on body sensors are promising methods for gait analysis.",
"title": ""
},
{
"docid": "3380a9a220e553d9f7358739e3f28264",
"text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "26142d27adc7a682d7e6698532578811",
"text": "X-ray imaging has been developed not only for its use in medical imaging for human beings, but also for materials or objects, where the aim is to analyze (nondestructively) those inner parts that are undetectable to the naked eye. Thus, X-ray testing is used to determine if a test object deviates from a given set of specifications. Typical applications are analysis of food products, screening of baggage, inspection of automotive parts, and quality control of welds. In order to achieve efficient and effective X-ray testing, automated and semi-automated systems are being developed to execute this task. In this paper, we present a general overview of computer vision methodologies that have been used in X-ray testing. In addition, we review some techniques that have been applied in certain relevant applications, and we introduce a public database of X-ray images that can be used for testing and evaluation of image analysis and computer vision algorithms. Finally, we conclude that the following: that there are some areas -like casting inspection- where automated systems are very effective, and other application areas -such as baggage screening- where human inspection is still used, there are certain application areas -like weld and cargo inspections- where the process is semi-automatic, and there is some research in areas -including food analysis- where processes are beginning to be characterized by the use of X-ray imaging.",
"title": ""
},
{
"docid": "69d296d1302d9e0acd7fb576f551118d",
"text": "Event detection is a research area that attracted attention during the last years due to the widespread availability of social media data. The problem of event detection has been examined in multiple social media sources like Twitter, Flickr, YouTube and Facebook. The task comprises many challenges including the processing of large volumes of data and high levels of noise. In this article, we present a wide range of event detection algorithms, architectures and evaluation methodologies. In addition, we extensively discuss on available datasets, potential applications and open research issues. The main objective is to provide a compact representation of the recent developments in the field and aid the reader in understanding the main challenges tackled so far as well as identifying interesting future research directions.",
"title": ""
},
{
"docid": "ae80dd046027bcefc8aaa6d4d3a06f59",
"text": "We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.",
"title": ""
},
{
"docid": "799573bf08fb91b1ac644c979741e7d2",
"text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.",
"title": ""
},
{
"docid": "955ae6e1dffbe580217b812f943b4339",
"text": "Successful applications of reinforcement learning in realworld problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent’s entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we study reinforcement learning with deep neural networks, including RNN and LSTM, which are equipped with the desired property of being able to capture long-term dependency on history, and thus providing an effective way of learning the representation of hidden states. We further develop a hybrid approach that combines the strength of both supervised learning (for representing hidden states) and reinforcement learning (for optimizing control) with joint training. Extensive experiments based on a KDD Cup 1998 direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best across the board.",
"title": ""
},
{
"docid": "0df56ee771c5ddaafd01f63a151b11fe",
"text": "Genes play a central role in all biological processes. DNA microarray technology has made it possible to study the expression behavior of thousands of genes in one go. Often, gene expression data is used to generate features for supervised and unsupervised learning tasks. At the same time, advances in the field of deep learning have made available a plethora of architectures. In this paper, we use deep architectures pre-trained in an unsupervised manner using denoising autoencoders as a preprocessing step for a popular unsupervised learning task. Denoising autoencoders (DA) can be used to learn a compact representation of input, and have been used to generate features for further supervised learning tasks. We propose that our deep architectures can be treated as empirical versions of Deep Belief Networks (DBNs). We use our deep architectures to regenerate gene expression time series data for two different data sets. We test our hypothesis on two popular datasets for the unsupervised learning task of clustering and find promising improvements in performance.",
"title": ""
},
{
"docid": "5a11ab9ece5295d4d1d16401625ab3d4",
"text": "The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to a large area occupation and a high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that exhibit long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNN. The simulation results show the negligible performance loss of the proposed integer stochastic DNN for different network sizes compared to their floating point versions.",
"title": ""
},
{
"docid": "2abd75766d4875921edd4d6d63d5d617",
"text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.",
"title": ""
},
{
"docid": "198ad1ba78ac0aa315dac6f5730b4f88",
"text": "Life history theory posits that behavioral adaptation to various environmental (ecological and/or social) conditions encountered during childhood is regulated by a wide variety of different traits resulting in various behavioral strategies. Unpredictable and harsh conditions tend to produce fast life history strategies, characterized by early maturation, a higher number of sexual partners to whom one is less attached, and less parenting of offspring. Unpredictability and harshness not only affects dispositional social and emotional functioning, but may also promote the development of personality traits linked to higher rates of instability in social relationships or more self-interested behavior. Similarly, detrimental childhood experiences, such as poor parental care or high parent-child conflict, affect personality development and may create a more distrustful, malicious interpersonal style. The aim of this brief review is to survey and summarize findings on the impact of negative early-life experiences on the development of personality and fast life history strategies. By demonstrating that there are parallels in adaptations to adversity in these two domains, we hope to lend weight to current and future attempts to provide a comprehensive insight of personality traits and functions at the ultimate and proximate levels.",
"title": ""
},
{
"docid": "9eccf674ee3b3826b010bc142ed24ef0",
"text": "We present an architecture of a recurrent neural network (RNN) with a fullyconnected deep neural network (DNN) as its feature extractor. The RNN is equipped with both causal temporal prediction and non-causal look-ahead, via auto-regression (AR) and moving-average (MA), respectively. The focus of this paper is a primal-dual training method that formulates the learning of the RNN as a formal optimization problem with an inequality constraint that provides a sufficient condition for the stability of the network dynamics. Experimental results demonstrate the effectiveness of this new method, which achieves 18.86% phone recognition error on the TIMIT benchmark for the core test set. The result approaches the best result of 17.7%, which was obtained by using RNN with long short-term memory (LSTM). The results also show that the proposed primal-dual training method produces lower recognition errors than the popular RNN methods developed earlier based on the carefully tuned threshold parameter that heuristically prevents the gradient from exploding.",
"title": ""
},
{
"docid": "3b2c9aebbf8f08b08b7630661f8ccfe7",
"text": "This study investigated the convergent, discriminant, and incremental validity of one ability test of emotional intelligence (EI)--the Mayer-Salovey-Caruso-Emotional Intelligence Test (MSCEIT)--and two self-report measures of EI--the Emotional Quotient Inventory (EQ-i) and the self-report EI test (SREIT). The MSCEIT showed minimal relations to the EQ-i and SREIT, whereas the latter two measures were moderately interrelated. Among EI measures, the MSCEIT was discriminable from well-studied personality and well-being measures, whereas the EQ-i and SREIT shared considerable variance with these measures. After personality and verbal intelligence were held constant, the MSCEIT was predictive of social deviance, the EQ-i was predictive of alcohol use, and the SREIT was inversely related to academic achievement. In general, results showed that ability EI and self-report EI are weakly related and yield different measurements of the same person.",
"title": ""
},
{
"docid": "f6899520472f9a5513ca5d1e0c16ad7c",
"text": "The high volume of monitoring information generated by large-scale cloud infrastructures poses a challenge to the capacity of cloud providers in detecting anomalies in the infrastructure. Traditional anomaly detection methods are resource-intensive and computationally complex for training and/or detection, what is undesirable in very dynamic and large-scale environment such as clouds. Isolation-based methods have the advantage of low complexity for training and detection and are optimized for detecting failures. In this work, we explore the feasibility of Isolation Forest, an isolation-based anomaly detection method, to detect anomalies in large-scale cloud data centers. We propose a method to code time-series information as extra attributes that enable temporal anomaly detection and establish its feasibility to adapt to seasonality and trends in the time-series and to be applied on-line and in real-time. Copyright c © 2017 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "fb4fcc4d5380c4123b24467c1ca2a8e3",
"text": "Deep neural networks are traditionally trained using humandesigned stochastic optimization algorithms, such as SGD and Adam. Recently, the approach of learning to optimize network parameters has emerged as a promising research topic. However, these learned black-box optimizers sometimes do not fully utilize the experience in human-designed optimizers, therefore have limitation in generalization ability. In this paper, a new optimizer, dubbed as HyperAdam, is proposed that combines the idea of “learning to optimize” and traditional Adam optimizer. Given a network for training, its parameter update in each iteration generated by HyperAdam is an adaptive combination of multiple updates generated by Adam with varying decay rates. The combination weights and decay rates in HyperAdam are adaptively learned depending on the task. HyperAdam is modeled as a recurrent neural network with AdamCell, WeightCell and StateCell. It is justified to be state-of-the-art for various network training, such as multilayer perceptron, CNN and LSTM.",
"title": ""
},
{
"docid": "7515938d82cf5f9e6682cdf4793ac27d",
"text": "Glioblastoma is an immunosuppressive, fatal brain cancer that contains glioblastoma stem-like cells (GSCs). Oncolytic herpes simplex virus (oHSV) selectively replicates in cancer cells while inducing anti-tumor immunity. oHSV G47Δ expressing murine IL-12 (G47Δ-mIL12), antibodies to immune checkpoints (CTLA-4, PD-1, PD-L1), or dual combinations modestly extended survival of a mouse glioma model. However, the triple combination of anti-CTLA-4, anti-PD-1, and G47Δ-mIL12 cured most mice in two glioma models. This treatment was associated with macrophage influx and M1-like polarization, along with increased T effector to T regulatory cell ratios. Immune cell depletion studies demonstrated that CD4+ and CD8+ T cells as well as macrophages are required for synergistic curative activity. This combination should be translatable to the clinic and other immunosuppressive cancers.",
"title": ""
},
{
"docid": "60c03017f7254c28ba61348d301ae612",
"text": "Code flaws or vulnerabilities are prevalent in software systems and can potentially cause a variety of problems including deadlock, information loss, or system failure. A variety of approaches have been developed to try and detect the most likely locations of such code vulnerabilities in large code bases. Most of them rely on manually designing features (e.g. complexity metrics or frequencies of code tokens) that represent the characteristics of the code. However, all suffer from challenges in sufficiently capturing both semantic and syntactic representation of source code, an important capability for building accurate prediction models. In this paper, we describe a new approach, built upon the powerful deep learning Long Short Term Memory model, to automatically learn both semantic and syntactic features in code. Our evaluation on 18 Android applications demonstrates that the prediction power obtained from our learned features is equal or even superior to what is achieved by state of the art vulnerability prediction models: 3%–58% improvement for within-project prediction and 85% for cross-project prediction.",
"title": ""
},
{
"docid": "3bc897662b39bcd59b7c7831fb1df091",
"text": "The proliferation of wearable devices has contributed to the emergence of mobile crowdsensing, which leverages the power of the crowd to collect and report data to a third party for large-scale sensing and collaborative learning. However, since the third party may not be honest, privacy poses a major concern. In this paper, we address this concern with a two-stage privacy-preserving scheme called RG-RP: the first stage is designed to mitigate maximum a posteriori (MAP) estimation attacks by perturbing each participant's data through a nonlinear function called repeated Gompertz (RG); while the second stage aims to maintain accuracy and reduce transmission energy by projecting high-dimensional data to a lower dimension, using a row-orthogonal random projection (RP) matrix. The proposed RG-RP scheme delivers better recovery resistance to MAP estimation attacks than most state-of-the-art techniques on both synthetic and real-world datasets. For collaborative learning, we proposed a novel LSTM-CNN model combining the merits of Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). Our experiments on two representative movement datasets captured by wearable sensors demonstrate that the proposed LSTM-CNN model outperforms standalone LSTM, CNN and Deep Belief Network. Together, RG+RP and LSTM-CNN provide a privacy-preserving collaborative learning framework that is both accurate and privacy-preserving.",
"title": ""
}
] |
scidocsrr
|
e6d29adf4dfd788e3e1d0962f72f3ea2
|
Workload analysis and efficient OpenCL-based implementation of SIFT algorithm on a smartphone
|
[
{
"docid": "c797b2a78ea6eb434159fd948c0a1bf0",
"text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.",
"title": ""
}
] |
[
{
"docid": "6f95e792b828ecef398f40b43cb312a7",
"text": "The database landscape has been significantly diversified during the last decade, resulting in the emergence of a variety of non-relational (also called NoSQL) databases, e.g., XML and JSON-document databases, key-value stores, and graph databases. To facilitate access to such databases and to enable data integration of non-relational data sources, we generalize the well-known ontologybased data access (OBDA) framework so as to allow for querying arbitrary databases through a mediating ontology. We instantiate this framework to MongoDB, a popular JSON-document database, and implement an prototype extension of the virtual OBDA system Ontop for answering SPARQL queries over MongoDB.",
"title": ""
},
{
"docid": "c1a4921eb85dc51e690c10649a582bf1",
"text": "System thinking skills are a prerequisite for acting successfully and responsibly in a complex world. However, traditional education largely fails to enhance system thinking skills whereas learner-centered educational methods seem more promising. Several such educational methods are compared with respect to their suitability for improving system thinking. It is proposed that integrated learning environments consisting of system dynamics models and additional didactical material have positive learning effects.This is exemplified by the illustration and validation of two learning sequences.",
"title": ""
},
{
"docid": "c23bedbcad1433c14b0942d12e12cb60",
"text": "In this study, we examine the role of strategy use in working memory (WM) tasks by providing short-term memory (STM) task strategy training to participants. In Experiment 1, the participants received four sessions of training to use a story-formation (i.e., chaining) strategy. There were substantial improvements from pretest to posttest (after training) in terms of both STM and WM task performance. Experiment 2 demonstrated that WM task improvement did not occur for control participants, who were given the same amount of practice but were not provided with strategy instructions. An assessment of participants' strategy use on the STM task before training indicated that more strategic participants displayed better WM task performance and better verbal skills. These results support our hypothesis that strategy use influences performance on WM tasks.",
"title": ""
},
{
"docid": "8dfc853c0d4256cdec04353982590e58",
"text": "Search result diversification has gained momentum as a way to tackle ambiguous queries. An effective approach to this problem is to explicitly model the possible aspects underlying a query, in order to maximise the estimated relevance of the retrieved documents with respect to the different aspects. However, such aspects themselves may represent information needs with rather distinct intents (e.g., informational or navigational). Hence, a diverse ranking could benefit from applying intent-aware retrieval models when estimating the relevance of documents to different aspects. In this paper, we propose to diversify the results retrieved for a given query, by learning the appropriateness of different retrieval models for each of the aspects underlying this query. Thorough experiments within the evaluation framework provided by the diversity task of the TREC 2009 and 2010 Web tracks show that the proposed approach can significantly improve state-of-the-art diversification approaches.",
"title": ""
},
{
"docid": "c5bb494ae302d7cc1c6c565ea7d4b039",
"text": "To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm by providing computing capabilities in close proximity within a sliced radio access network (RAN), which supports both traditional communication and MEC services. Nevertheless, the design of computation offloading policies for a virtual MEC system remains challenging. Specifically, whether to execute a computation task at the mobile device or to offload it for MEC server execution should adapt to the time-varying network dynamics. This paper considers MEC for a representative mobile user in an ultra-dense sliced RAN, where multiple base stations (BSs) are available to be selected for computation offloading. The problem of solving an optimal computation offloading policy is modelled as a Markov decision process, where our objective is to maximize the long-term utility performance whereby an offloading decision is made based on the task queue state, the energy queue state as well as the channel qualities between MU and BSs. To break the curse of high dimensionality in state space, we first propose a double deep Q-network (DQN) based strategic computation offloading algorithm to learn the optimal policy without knowing a priori knowledge of network dynamics. Then motivated by the additive structure of the utility function, a Q-function decomposition technique is combined with the double DQN, which leads to a novel learning algorithm for the solving of stochastic computation offloading. Numerical experiments show that our proposed learning algorithms achieve a significant improvement in computation offloading performance compared with the baseline policies.",
"title": ""
},
{
"docid": "fb59a43177d5e12ff8c87d04d10fcbbb",
"text": "One of the main concerns of deep reinforcement learning (DRL) is the data inefficiency problem, which stems both from an inability to fully utilize data acquired and from naive exploration strategies. In order to alleviate these problems, we propose a DRL algorithm that aims to improve data efficiency via both the utilization of unrewarded experiences and the exploration strategy by combining ideas from unsupervised auxiliary tasks, intrinsic motivation, and hierarchical reinforcement learning (HRL). Our method is based on a simple HRL architecture with a metacontroller and a subcontroller. The subcontroller is intrinsically motivated by the metacontroller to learn to control aspects of the environment, with the intention of giving the agent: 1) a neural representation that is generically useful for tasks that involve manipulation of the environment and 2) the ability to explore the environment in a temporally extended manner through the control of the metacontroller. In this way, we reinterpret the notion of pixel- and feature-control auxiliary tasks as reusable skills that can be learned via an intrinsic reward. We evaluate our method on a number of Atari 2600 games. We found that it outperforms the baseline in several environments and significantly improves performance in one of the hardest games--Montezuma's revenge--for which the ability to utilize sparse data is key. We found that the inclusion of intrinsic reward is crucial for the improvement in the performance and that most of the benefit seems to be derived from the representations learned during training.",
"title": ""
},
{
"docid": "f992f3af95b2f79d73781ba544dfe213",
"text": ". . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv",
"title": ""
},
{
"docid": "5f13c75aa77217bf2a9ed21f25fd1c49",
"text": "The RoboCup 2D simulation domain has served as a platform for research in AI, machine learning, and multiagent systems for more than two decades. However, for the researcher looking to quickly prototype and evaluate different algorithms, the full RoboCup task presents a cumbersome prospect, as it can take several weeks to set up the desired testing environment. The complexity owes in part to the coordination of several agents, each with a multi-layered control hierarchy, and which must balance offensive and defensive goals. This paper introduces a new open source benchmark, based on the Half Field Offense (HFO) subtask of soccer, as an easy-to-use platform for experimentation. While retaining the inherent challenges of soccer, the HFO environment constrains the agent’s attention to decision-making, providing standardized interfaces for interacting with the environment and with other agents, and standardized tools for evaluating performance. The resulting testbed makes it convenient to test algorithms for single and multiagent learning, ad hoc teamwork, and imitation learning. Along with a detailed description of the HFO environment, we present benchmark results for reinforcement learning agents on a diverse set of HFO tasks. We also highlight several other challenges that the HFO environment opens up for future research.",
"title": ""
},
{
"docid": "bfade0f99303617456429f1073b8be16",
"text": "Pertussis is a respiratory transmitted disease affecting approximately 23% of the worlds’ population. It is causes by Bordetella Pertussis [1-23]. The emergence of Multiple-Drug-Resistant (MDR) Pertussis has focused the attention of the scientific community thought the world on the urgent need for new anti–Pertussis drugs. In pursuit of this goal, our research efforts are directed toward the discovery of new chemical entities that are effective as anti–Pertussis drugs. During recent years, there have been intense investigations of different classes of 1,3,4-thiadiazole-2-sulfonamide compounds and derivatives such as 5-[(Phenylsulfonyl)amino]-1,3,4-thiadiazole-2-sulfonamide many of which are known to possess interesting pharmaceutical, biological, biochemical and biomedical properties suchlike anti–microbial, anti– Pertussis and anti–inflammatory activities. It should be noted that the purity of the synthesized compound was confirmed by High Performance Liquid Chromatography (HPLC) and also Thin–Layer Chromatography (TLC). Furthermore, the molecular and chemical structure of compound was characterized by 1HNMR, 13CNMR, Attenuated Total Reflectance Fourier Transform Infrared (ATR–FTIR), FT–Raman and HR Mass spectra.",
"title": ""
},
{
"docid": "2210afa182488e5ac68cbacfa2f0c797",
"text": "This paper presents a simple two-branch transmit diversity scheme. Using two transmit antennas and one receive antenna the scheme provides the same diversity order as maximal-ratio receiver combining (MRRC) with one transmit antenna, and two receive antennas. It is also shown that the scheme may easily be generalized to two transmit antennas and M receive antennas to provide a diversity order of 2M . The new scheme does not require any bandwidth expansion any feedback from the receiver to the transmitter and its computation complexity is similar to MRRC.",
"title": ""
},
{
"docid": "5752868bb14f434ce281733f2ecf84f8",
"text": "Tessellation in fundus is not only a visible feature for aged-related and myopic maculopathy but also confuse retinal vessel segmentation. The detection of tessellated images is an inevitable processing in retinal image analysis. In this work, we propose a model using convolutional neural network for detecting tessellated images. The input to the model is pre-processed fundus image, and the output indicate whether this photograph has tessellation or not. A database with 12,000 colour retinal images is collected to evaluate the classification performance. The best tessellation classifier achieves accuracy of 97.73% and AUC value of 0.9659 using pretrained GoogLeNet and transfer learning technique.",
"title": ""
},
{
"docid": "ac95eeba1f0f7632485c8138ea98fb6b",
"text": "Spreadsheets are becoming increasingly popular in solving engineering related problems. Among the strong features of spreadsheets are their instinctive cell-based structure and easy to use capabilities. Excel, for example, is a powerful spreadsheet with VBA robust programming capabilities that can be a powerful tool for teaching civil engineering concepts. Spreadsheets can do basic calculations such as cost estimates, schedule and cost control, and markup estimation, as well as structural calculations of reactions, stresses, strains, deflections, and slopes. Spreadsheets can solve complex problems, create charts and graphs, and generate useful reports. This paper highlights the use of Excel spreadsheet and VBA in teaching civil engineering concepts and creating useful applications. The focus is on concepts related to construction management and structural engineering ranging from a simple cost estimating problem to advanced applications like the simulation using PERT and the analysis of structural members. Several spreadsheet were developed for time-cost tradeoff analysis, optimum markup estimation, simulating activities with uncertain durations, scheduling repetitive projects, schedule and cost control, and optimization of construction operations, and structural calculations of reactions, internal forces, stresses, strains, deflections, and slopes. Seven illustrative examples are presented to demonstrate the use of spreadsheets as a powerful tool for teaching civil engineering concepts.",
"title": ""
},
{
"docid": "7527cfe075027c9356645419c4fd1094",
"text": "ive Multi-Document Summarization via Phrase Selection and Merging∗ Lidong Bing§ Piji Li Yi Liao Wai Lam Weiwei Guo† Rebecca J. Passonneau‡ §Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA USA Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong †Yahoo Labs, Sunnyvale, CA, USA ‡Center for Computational Learning Systems, Columbia University, New York, NY, USA §[email protected], {pjli, yliao, wlam}@se.cuhk.edu.hk †[email protected], ‡[email protected]",
"title": ""
},
{
"docid": "e193a91b13f12260dfd9339d181ea900",
"text": "E-marketing strategy is normally based and built up on the traditional 4 P's (Product, Price, Promotion and Place) that forms the classic marketi ng mix; e-marketing’s uniqueness is created using a series of specific and relational functions that are combined with the 4P’s to form the emarketing mix elements, each of which contain assoc iated e-marketing mix tools that are provided on business web sites to facilitate sales transactions. This research analyses the importance of each e-marketing tool related to its supporting e-marketing mix element. Furthermore, the composite score of each e-marketin g mix element is determined. This research concludes with a discussion of the relative weights of e-marketing tools.",
"title": ""
},
{
"docid": "daa6bef4038654f73a6489c03b131740",
"text": "Interpreters have been used in many contexts. They provide portability and ease of development at the expense of performance. The literature of the past decade covers analysis of why interpreters are slow, and many software techniques to improve them. A large proportion of these works focuses on the dispatch loop, and in particular on the implementation of the switch statement: typically an indirect branch instruction. Folklore attributes a significant penalty to this branch, due to its high misprediction rate. We revisit this assumption, considering state-of-the-art branch predictors and the three most recent Intel processor generations on current interpreters. Using both hardware counters on Haswell, the latest Intel processor generation, and simulation of the ITTAGE, we show that the accuracy of indirect branch prediction is no longer critical for interpreters. We further compare the characteristics of these interpreters and analyze why the indirect branch is less important than before.",
"title": ""
},
{
"docid": "187127dd1ab5f97b1158a77a25ddce91",
"text": "We introduce stochastic variational inference for Gaussian process models. This enables the application of Gaussian process (GP) models to data sets containing millions of data points. We show how GPs can be variationally decomposed to depend on a set of globally relevant inducing variables which factorize the model in the necessary manner to perform variational inference. Our approach is readily extended to models with non-Gaussian likelihoods and latent variable models based around Gaussian processes. We demonstrate the approach on a simple toy problem and two real world data sets.",
"title": ""
},
{
"docid": "fb9669d1f3e43d69d5893a9b2d15957f",
"text": "Researchers in the Digital Humanities and journalists need to monitor, collect and analyze fresh online content regarding current events such as the Ebola outbreak or the Ukraine crisis on demand. However, existing focused crawling approaches only consider topical aspects while ignoring temporal aspects and therefore cannot achieve thematically coherent and fresh Web collections. Especially Social Media provide a rich source of fresh content, which is not used by state-of-the-art focused crawlers. In this paper we address the issues of enabling the collection of fresh and relevant Web and Social Web content for a topic of interest through seamless integration of Web and Social Media in a novel integrated focused crawler. The crawler collects Web and Social Media content in a single system and exploits the stream of fresh Social Media content for guiding the crawler.",
"title": ""
},
{
"docid": "e4179fd890a55f829e398a6f80f1d26a",
"text": "This paper presents a soft-start circuit that adopts a pulse-skipping control to prevent inrush current and output voltage overshoot during the start-up period of dc-dc converters. The purpose of the pulse-skipping control is to significantly restrain the increasing rate of the reference voltage of the error amplifier. Thanks to the pulse-skipping mechanism and the duty cycle minimization, the soft-start-up time can be extended and the restriction of the charging current and the capacitance can be relaxed. The proposed soft-start circuit is fully integrated on chip without external components, leading to a reduction in PCB area and cost. A current-mode buck converter is implemented with TSMC 0.35-μm 2P4M CMOS process. Simulation results show the output voltage of the buck converter increases smoothly and inrush current is less than 300 mA.",
"title": ""
},
{
"docid": "937d93600ad3d19afda31ada11ea1460",
"text": "Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin crypto currency which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the \"block withholding attack\". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars worth in months. The equilibrium state is a mixed strategy -- that is -- in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.",
"title": ""
},
{
"docid": "0173b7946f0481a9370048738a172788",
"text": "In this paper, we present an approach for edge detection using adaptive thresholding and Ant Colony Optimization (ACO) algorithm to obtain a well-connected image edge map. Initially, the edge map of the image is obtained using adaptive thresholding. The end points obtained using adaptive threshoding are calculated and the ants are placed at these points. The movement of the ants is guided by the local variation in the pixel intensity values. The probability factor of only undetected neighboring pixels is taken into consideration while moving an ant to the next probable edge pixel. The two stopping rules are implemented to prevent the movement of ants through the pixel already detected using the adoptive thresholding. The results are qualitative analyze using Shanon's Entropy function.",
"title": ""
}
] |
scidocsrr
|
9889875b9d6840a54f0714e27f95c2c2
|
Evidence-based interventions for myofascial trigger points
|
[
{
"docid": "6318c9d0e62f1608c105b114c6395e6f",
"text": "Myofascial pain associated with myofascial trigger points (MTrPs) is a common cause of nonarticular musculoskeletal pain. Although the presence of MTrPs can be determined by soft tissue palpation, little is known about the mechanisms and biochemical milieu associated with persistent muscle pain. A microanalytical system was developed to measure the in vivo biochemical milieu of muscle in near real time at the subnanogram level of concentration. The system includes a microdialysis needle capable of continuously collecting extremely small samples (approximately 0.5 microl) of physiological saline after exposure to the internal tissue milieu across a 105-microm-thick semi-permeable membrane. This membrane is positioned 200 microm from the tip of the needle and permits solutes of <75 kDa to diffuse across it. Three subjects were selected from each of three groups (total 9 subjects): normal (no neck pain, no MTrP); latent (no neck pain, MTrP present); active (neck pain, MTrP present). The microdialysis needle was inserted in a standardized location in the upper trapezius muscle. Due to the extremely small sample size collected by the microdialysis system, an established microanalytical laboratory, employing immunoaffinity capillary electrophoresis and capillary electrochromatography, performed analysis of selected analytes. Concentrations of protons, bradykinin, calcitonin gene-related peptide, substance P, tumor necrosis factor-alpha, interleukin-1beta, serotonin, and norepinephrine were found to be significantly higher in the active group than either of the other two groups (P < 0.01). pH was significantly lower in the active group than the other two groups (P < 0.03). In conclusion, the described microanalytical technique enables continuous sampling of extremely small quantities of substances directly from soft tissue, with minimal system perturbation and without harmful effects on subjects. The measured levels of analytes can be used to distinguish clinically distinct groups.",
"title": ""
}
] |
[
{
"docid": "bcf4f735cd0a3269adb8e65fba4d21b1",
"text": "An optimal &OHgr;(<italic>n</italic><supscrpt>2</supscrpt>) lower bound is shown for the time-space product of any <italic>R</italic>-way branching program that determines those values which occur exactly once in a list of <italic>n</italic> integers in the range [1, <italic>R</italic>] where <italic>R</italic> ≥ <italic>n</italic>. This &OHgr;(<italic>n</italic><supscrpt>2</supscrpt>) tradeoff also applies to the sorting problem and thus improves the previous time-space tradeoffs for sorting. Because the <italic>R</italic>-way branching program is a such a powerful model these time-space product tradeoffs also apply to all models of sequential computation that have a fair measure of space such as off-line multi-tape Turing machines and off-line log-cost RAMs.",
"title": ""
},
{
"docid": "da3876613301b46645408e474c1f5247",
"text": "The Strength Pareto Evolutionary Algorithm (SPEA) (Zitzle r and Thiele 1999) is a relatively recent technique for finding or approximatin g the Pareto-optimal set for multiobjective optimization problems. In different st udies (Zitzler and Thiele 1999; Zitzler, Deb, and Thiele 2000) SPEA has shown very good performance in comparison to other multiobjective evolutionary algorith ms, and therefore it has been a point of reference in various recent investigations, e.g., (Corne, Knowles, and Oates 2000). Furthermore, it has been used in different a pplic tions, e.g., (Lahanas, Milickovic, Baltas, and Zamboglou 2001). In this pap er, an improved version, namely SPEA2, is proposed, which incorporates in cont rast o its predecessor a fine-grained fitness assignment strategy, a density estima tion technique, and an enhanced archive truncation method. The comparison of SPEA 2 with SPEA and two other modern elitist methods, PESA and NSGA-II, on diffe rent test problems yields promising results.",
"title": ""
},
{
"docid": "b8f6411673d866c6464509b6fa7e9498",
"text": "In computer vision there has been increasing interest in learning hashing codes whose Hamming distance approximates the data similarity. The hashing functions play roles in both quantizing the vector space and generating similarity-preserving codes. Most existing hashing methods use hyper-planes (or kernelized hyper-planes) to quantize and encode. In this paper, we present a hashing method adopting the k-means quantization. We propose a novel Affinity-Preserving K-means algorithm which simultaneously performs k-means clustering and learns the binary indices of the quantized cells. The distance between the cells is approximated by the Hamming distance of the cell indices. We further generalize our algorithm to a product space for learning longer codes. Experiments show our method, named as K-means Hashing (KMH), outperforms various state-of-the-art hashing encoding methods.",
"title": ""
},
{
"docid": "609fa8716f97a1d30683997d778e4279",
"text": "The role of behavior for the acquisition of sensory representations has been underestimated in the past. We study this question for the task of learning vergence eye movements allowing proper fixation of objects. We model the development of this skill with an artificial neural network based on reinforcement learning. A biologically plausible reward mechanism that is responsible for driving behavior and learning of the representation of disparity is proposed. The network learns to perform vergence eye movements between natural images of objects by receiving a reward whenever an object is fixated with both eyes. Disparity tuned neurons emerge robustly in the hidden layer during development. The characteristics of the cells' tuning curves depend strongly on the task: if mostly small vergence movements are to be performed, tuning curves become narrower at small disparities, as has been measured experimentally in barn owls. Extensive training to discriminate between small disparities leads to an effective enhancement of sensitivity of the tuning curves.",
"title": ""
},
{
"docid": "825888e4befcbf6b492143a13928a34e",
"text": "Sentiment analysis is one of the prominent fields of data mining that deals with the identification and analysis of sentimental contents generally available at social media. Twitter is one of such social medias used by many users about some topics in the form of tweets. These tweets can be analyzed to find the viewpoints and sentiments of the users by using clustering-based methods. However, due to the subjective nature of the Twitter datasets, metaheuristic-based clustering methods outperforms the traditional methods for sentiment analysis. Therefore, this paper proposes a novel metaheuristic method (CSK) which is based on K-means and cuckoo search. The proposed method has been used to find the optimum cluster-heads from the sentimental contents of Twitter dataset. The efficacy of proposed method has been tested on different Twitter datasets and compared with particle swarm optimization, differential evolution, cuckoo search, improved cuckoo search, gauss-based cuckoo search, and two n-grams methods. Experimental results and statistical analysis validate that the proposed method outperforms the existing methods. The proposed method has theoretical implications for the future research to analyze the data generated through social networks/medias. This method has also very generalized practical implications for designing a system that can provide conclusive reviews on any social issues. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5916147ceb3e0bb236798abb394d1106",
"text": "One of the fundamental questions of enzymology is how catalytic power is derived. This review focuses on recent developments in the structure--function relationships of chorismate-utilizing enzymes involved in siderophore biosynthesis to provide insight into the biocatalysis of pericyclic reactions. Specifically, salicylate synthesis by the two-enzyme pathway in Pseudomonas aeruginosa is examined. The isochorismate-pyruvate lyase is discussed in the context of its homologues, the chorismate mutases, and the isochorismate synthase is compared to its homologues in the MST family (menaquinone, siderophore, or tryptophan biosynthesis) of enzymes. The tentative conclusion is that the activities observed cannot be reconciled by inspection of the active site participants alone. Instead, individual activities must arise from unique dynamic properties of each enzyme that are tuned to promote specific chemistries.",
"title": ""
},
{
"docid": "7604942913928dfb0e0ef486eccbcf8b",
"text": "We connect two scenarios in structured learning: adapting a parser trained on one corpus to another annotation style, and projecting syntactic annotations from one language to another. We propose quasisynchronous grammar (QG) features for these structured learning tasks. That is, we score a aligned pair of source and target trees based on local features of the trees and the alignment. Our quasi-synchronous model assigns positive probability to any alignment of any trees, in contrast to a synchronous grammar, which would insist on some form of structural parallelism. In monolingual dependency parser adaptation, we achieve high accuracy in translating among multiple annotation styles for the same sentence. On the more difficult problem of cross-lingual parser projection, we learn a dependency parser for a target language by using bilingual text, an English parser, and automatic word alignments. Our experiments show that unsupervised QG projection improves on parses trained using only highprecision projected annotations and far outperforms, by more than 35% absolute dependency accuracy, learning an unsupervised parser from raw target-language text alone. When a few target-language parse trees are available, projection gives a boost equivalent to doubling the number of target-language trees. ∗The first author would like to thank the Center for Intelligent Information Retrieval at UMass Amherst. We would also like to thank Noah Smith and Rebecca Hwa for helpful discussions and the anonymous reviewers for their suggestions for improving the paper.",
"title": ""
},
{
"docid": "212619e09ee7dfe0f32d90e2da25c8f0",
"text": "This paper tackles anomaly detection in videos, which is an extremely challenging task because anomaly is unbounded. We approach this task by leveraging a Convolutional Neural Network (CNN or ConvNet) for appearance encoding for each frame, and leveraging a Convolutional Long Short Term Memory (ConvLSTM) for memorizing all past frames which corresponds to the motion information. Then we integrate ConvNet and ConvLSTM with Auto-Encoder, which is referred to as ConvLSTM-AE, to learn the regularity of appearance and motion for the ordinary moments. Compared with 3D Convolutional Auto-Encoder based anomaly detection, our main contribution lies in that we propose a ConvLSTM-AE framework which better encodes the change of appearance and motion for normal events, respectively. To evaluate our method, we first conduct experiments on a synthesized Moving-MNIST dataset under controlled settings, and results show that our method can easily identify the change of appearance and motion. Extensive experiments on real anomaly datasets further validate the effectiveness of our method for anomaly detection.",
"title": ""
},
{
"docid": "339c367d71b4b51ad24aa59799b13416",
"text": "One of the biggest challenges of the current big data landscape is our inability to process vast amounts of information in a reasonable time. In this work, we explore and compare two distributed computing frameworks implemented on commodity cluster architectures: MPI/OpenMP on Beowulf that is high-performance oriented and exploits multi-machine/multicore infrastructures, and Apache Spark on Hadoop which targets iterative algorithms through in-memory computing. We use the Google Cloud Platform service to create virtual machine clusters, run the frameworks, and evaluate two supervised machine learning algorithms: KNN and Pegasos SVM. Results obtained from experiments with a particle physics data set show MPI/OpenMP outperforms Spark by more than one order of magnitude in terms of processing speed and provides more consistent performance. However, Spark shows better data management infrastructure and the possibility of dealing with other aspects such as node failure and data replication.",
"title": ""
},
{
"docid": "643599f9b0dcfd270f9f3c55567ed985",
"text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.",
"title": ""
},
{
"docid": "4746703f20b8fd902c451e658e44f49b",
"text": "This paper describes the development of a Latvian speech-to-text (STT) system at LIMSI within the Quaero project. One of the aims of the speech processing activities in the Quaero project is to cover all official European languages. However, for some of the languages only very limited, if any, training resources are available via corpora agencies such as LDC and ELRA. The aim of this study was to show the way, taking Latvian as example, an STT system can be rapidly developed without any transcribed training data. Following the scheme proposed in this paper, the Latvian STT system was developed in about a month and obtained a word error rate of 20% on broadcast news and conversation data in the Quaero 2012 evaluation campaign.",
"title": ""
},
{
"docid": "73f24b296deb64f2477fe54f9071f14f",
"text": "Intersection-collision warning systems use vehicle-to-infrastructure communication to avoid accidents at urban intersections. However, they are costly because additional roadside infrastructure must be installed, and they suffer from problems related to real-time information delivery. In this paper, an intersection-collision warning system based on vehicle-to-vehicle communication is proposed in order to solve such problems. The distance to the intersection is computed to evaluate the risk that the host vehicle will collide at the intersection, and a time-to-intersection index is computed to establish the risk of a collision. The proposed system was verified through simulations, confirming its potential as a new intersection-collision warning system based on vehicle-to-vehicle communication.",
"title": ""
},
{
"docid": "30d723478faf6ef20776e057c666f3e1",
"text": "India has 790+ million active mobile connections and 80.57 million smartphone users. However, as per Reserve Bank of India, the number of transactions performed using smartphone based mobile banking applicationsis less than 12% of the overall banking transactions. One of the major reasons for such low numbers is the usability of the mobile banking app. In this paper, we focus on usability issues related tomobile banking apps and propose a Mobile App Usability Index (MAUI) for enhancing the usability of a mobile banking app. The proposed Index has been validatedwith mobile banking channel managers, chief information security officers, etc.",
"title": ""
},
{
"docid": "f8fc4910745911ae369fe625997de128",
"text": "A new 17-Watt, 8.4 GHz, solid state power amplifier (SSPA) has been developed for the Jet Propulsion Laboratory's Mars Exploration Rover mission. The SSPA consists of a power amplifier microwave module and a highly efficient DC-DC power converter module integrated into a compact package that can be installed near the spacecraft antenna to minimize downlink transmission loss. The SSPA output power is 17 Watts nominal with an input DC power of 59 Watts and nominal input signal of +1 dBm. The unit is qualified to operate over a temperature range of -40/spl deg/C to +70/spl deg/C in vacuum or Martian atmosphere.",
"title": ""
},
{
"docid": "3c8cc4192ee6ddd126e53c8ab242f396",
"text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.",
"title": ""
},
{
"docid": "7b8dffab502fae2abbea65464e2727aa",
"text": "Bone tissue is continuously remodeled through the concerted actions of bone cells, which include bone resorption by osteoclasts and bone formation by osteoblasts, whereas osteocytes act as mechanosensors and orchestrators of the bone remodeling process. This process is under the control of local (e.g., growth factors and cytokines) and systemic (e.g., calcitonin and estrogens) factors that all together contribute for bone homeostasis. An imbalance between bone resorption and formation can result in bone diseases including osteoporosis. Recently, it has been recognized that, during bone remodeling, there are an intricate communication among bone cells. For instance, the coupling from bone resorption to bone formation is achieved by interaction between osteoclasts and osteoblasts. Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.",
"title": ""
},
{
"docid": "bd7841688d039371f85d34f982130105",
"text": "Behavioral skills or policies for autonomous agents are conventionally learned from reward functions, via reinforcement learning, or from demonstrations, via imitation learning. However, both modes of task specification have their disadvantages: reward functions require manual engineering, while demonstrations require a human expert to be able to actually perform the task in order to generate the demonstration. Instruction following from natural language instructions provides an appealing alternative: in the same way that we can specify goals to other humans simply by speaking or writing, we would like to be able to specify tasks for our machines. However, a single instruction may be insufficient to fully communicate our intent or, even if it is, may be insufficient for an autonomous agent to actually understand how to perform the desired task. In this work, we propose an interactive formulation of the task specification problem, where iterative language corrections are provided to an autonomous agent, guiding it in acquiring the desired skill. Our proposed language-guided policy learning algorithm can integrate an instruction and a sequence of corrections to acquire new skills very quickly. In our experiments, we show that this method can enable a policy to follow instructions and corrections for simulated navigation and manipulation tasks, substantially outperforming direct, non-interactive instruction following.",
"title": ""
},
{
"docid": "4f49a5cc49f1eeb864b4a6f347263710",
"text": "Future wireless applications will take advantage of rapidly deployable, self-configuring multihop ad hoc networks. Because of the difficulty of obtaining IEEE 802.11 feedback about link connectivity in real networks, many multihop ad hoc networks utilize hello messages to determine local connectivity. This paper uses an implementation of the Ad hoc On-demand Distance Vector (AODV) routing protocol to examine the effectiveness of hello messages for monitoring link status. In this study, it is determined that many factors influence the utility of hello messages, including allowed hello message loss settings, discrepancy between data and hello message size and 802.11b packet handling. This paper examines these factors and experimentally evaluates a variety of approaches for improving the accuracy of hello messages as an indicator of local connectivity.",
"title": ""
},
{
"docid": "155411fe242dd4f3ab39649d20f5340f",
"text": "Two studies are presented that investigated 'fear of movement/(re)injury' in chronic musculoskeletal pain and its relation to behavioral performance. The 1st study examines the relation among fear of movement/(re)injury (as measured with the Dutch version of the Tampa Scale for Kinesiophobia (TSK-DV)) (Kori et al. 1990), biographical variables (age, pain duration, gender, use of supportive equipment, compensation status), pain-related variables (pain intensity, pain cognitions, pain coping) and affective distress (fear and depression) in a group of 103 chronic low back pain (CLBP) patients. In the 2nd study, motoric, psychophysiologic and self-report measures of fear are taken from 33 CLBP patients who are exposed to a single and relatively simple movement. Generally, findings demonstrated that the fear of movement/(re)injury is related to gender and compensation status, and more closely to measures of catastrophizing and depression, but in a much lesser degree to pain coping and pain intensity. Furthermore, subjects who report a high degree of fear of movement/(re)injury show more fear and escape/avoidance when exposed to a simple movement. The discussion focuses on the clinical relevance of the construct of fear of movement/(re)injury and research questions that remain to be answered.",
"title": ""
},
{
"docid": "b16f7a4242a9ff353d7726e66669ba97",
"text": "The ARPA MT Evaluation methodology effort is intended to provide a basis for measuring and thereby facilitating the progress of MT systems of the ARPAsponsored research program. The evaluation methodologies have the further goal of being useful for identifying the context of that progress among developed, production MT systems in use today. Since 1991, the evaluations have evolved as we have discovered more about what properties are valuable to measure, what properties are not, and what elements of the tests/evaluations can be adjusted to enhance significance of the results while still remaining relatively portable. This paper describes this evolutionary process, along with measurements of the most recent MT evaluation (January 1994) and the current evaluation process now underway.",
"title": ""
}
] |
scidocsrr
|
c7f4b16c199e00851e8f667598fe4514
|
Force Control of Series Elastic Actuator: Implications for Series Elastic Actuator Design
|
[
{
"docid": "d8ec0c507217500a97c1664c33b2fe72",
"text": "To realize ideal force control of robots that interact with a human, a very precise actuating system with zero impedance is desired. For such applications, a rotary series elastic actuator (RSEA) has been introduced recently. This paper presents the design of RSEA and the associated control algorithms. To generate joint torque as desired, a torsional spring is installed between a motor and a human joint, and the motor is controlled to produce a proper spring deflection for torque generation. When the desired torque is zero, the motor must follow the human joint motion, which requires that the friction and the inertia of the motor be compensated. The human joint and the body part impose the load on the RSEA. They interact with uncertain environments and their physical properties vary with time. In this paper, the disturbance observer (DOB) method is applied to make the RSEA precisely generate the desired torque under such time-varying conditions. Based on the nominal model preserved by the DOB, feedback and feedforward controllers are optimally designed for the desired performance, i.e., the RSEA: (1) exhibits very low impedance and (2) generates the desired torque precisely while interacting with a human. The effectiveness of the proposed design is verified by experiments.",
"title": ""
}
] |
[
{
"docid": "090286ed539394be3ee14300772af98c",
"text": "Cryptography is essential to protect and secure data using a key. Different types of cryptographic techniques are found for data security. Genetic Algorithm is essentially used for obtaining optimal solution. Also, it can be efficiently used for random number generation which are very important in cryptography. This paper discusses the application of genetic algorithms for stream ciphers. Key generation is the most important factor in stream ciphers. In this paper Genetic Algorithm is used in the key generation process where key selection depends upon the fitness function. Here genetic algorithm is repeated for key selection. In each iteration, the key having highest fitness value is selected which further be compared with the threshold value. Selected key was unique and non-repeating. Therefore encryption with selected key are highly encrypted because of more randomness of key. This paper shows that the generated keys using GA are unique and more secure for encryption of data.",
"title": ""
},
{
"docid": "7c4ae542eb8809b2c7566898814fb5a1",
"text": "The accurate localization of facial landmarks is at the core of face analysis tasks, such as face recognition and facial expression analysis, to name a few. In this work we propose a novel localization approach based on a Deep Learning architecture that utilizes dual cascaded CNN subnetworks of the same length, where each subnetwork in a cascade refines the accuracy of its predecessor. The first set of cascaded subnetworks estimates heatmaps that encode the landmarks’ locations, while the second set of cascaded subnetworks refines the heatmaps-based localization using regression, and also receives as input the output of the corresponding heatmap estimation subnetwork. The proposed scheme is experimentally shown to compare favorably with contemporary state-of-the-art schemes.",
"title": ""
},
{
"docid": "f3375c52900c245ede8704a2c1cfbc9b",
"text": "In 2000 Hone and Graham [4] published ‘Towards a tool for the subjective assessment of speech system interfaces (SASSI)’. This position paper argues that the time is right to turn the theoretical foundations established in this earlier paper into a fully validated and score-able real world tool which can be applied to the usability measurement of current speech based systems. We call for a collaborative effort to refine the current question set and then collect and share sufficient data using the revised tool to allow establishment of its psychometric properties as a valid and reliable measure of speech system usability.",
"title": ""
},
{
"docid": "54722f4851707c2bf51d18910728a31c",
"text": "Many modern companies wish to maintain knowledge in the form of a corporate knowledge graph and to use and manage this knowledge via a knowledge graph management system (KGMS). We formulate various requirements for a fully-fledged KGMS. In particular, such a system must be capable of performing complex reasoning tasks but, at the same time, achieve efficient and scalable reasoning over Big Data with an acceptable computational complexity. Moreover, a KGMS needs interfaces to corporate databases, the web, and machine-learning and analytics packages. We present KRR formalisms and a system achieving these goals. To this aim, we use specific suitable fragments from the Datalog± family of languages, and we introduce the vadalog system, which puts these swift logics into action. This system exploits the theoretical underpinning of relevant Datalog± languages and combines it with existing and novel techniques from database and AI practice.",
"title": ""
},
{
"docid": "547423c409d466bcb537a7b0ae0e1758",
"text": "Sequential Bayesian estimation fornonlinear dynamic state-space models involves recursive estimation of filtering and predictive distributions of unobserved time varying signals based on noisy observations. This paper introduces a new filter called the Gaussian particle filter1. It is based on the particle filtering concept, and it approximates the posterior distributions by single Gaussians, similar to Gaussian filters like the extended Kalman filter and its variants. It is shown that under the Gaussianity assumption, the Gaussian particle filter is asymptotically optimal in the number of particles and, hence, has much-improved performance and versatility over other Gaussian filters, especially when nontrivial nonlinearities are present. Simulation results are presented to demonstrate the versatility and improved performance of the Gaussian particle filter over conventional Gaussian filters and the lower complexity than known particle filters. The use of the Gaussian particle filter as a building block of more complex filters is addressed in a companion paper.",
"title": ""
},
{
"docid": "2b8efba9363b5f177089534edeb877a9",
"text": "This article presents a methodology that allows the development of new converter topologies for single-input, multiple-output (SIMO) from different basic configurations of single-input, single-output dc-dc converters. These typologies have in common the use of only one power-switching device, and they are all nonisolated converters. Sixteen different topologies are highlighted, and their main features are explained. The 16 typologies include nine twooutput-type, five three-output-type, one four-output-type, and one six-output-type dc-dc converter configurations. In addition, an experimental prototype of a three-output-type configuration with six different output voltages based on a single-ended primary inductance (SEPIC)-Cuk-boost combination converter was developed, and the proposed design methodology for a basic converter combination was experimentally verified.",
"title": ""
},
{
"docid": "c0010c41640a2ecd1ea85f709a3f14c7",
"text": "Due to global climate change as well as economic concern of network operators, energy consumption of the infrastructure of cellular networks, or “Green Cellular Networking,” has become a popular research topic. While energy saving can be achieved by adopting renewable energy resources or improving design of certain hardware (e.g., power amplifier) to make it more energy-efficient, the cost of purchasing, replacing, and installing new equipment (including manpower, transportation, disruption to normal operation, as well as associated energy and direct cost) is often prohibitive. By comparison, approaches that work on the operating protocols of the system do not require changes to current network architecture, making them far less costly and easier for testing and implementation. In this survey, we first present facts and figures that highlight the importance of green mobile networking and then review existing green cellular networking research with particular focus on techniques that incorporate the concept of the “sleep mode” in base stations. It takes advantage of changing traffic patterns on daily or weekly basis and selectively switches some lightly loaded base stations to low energy consumption modes. As base stations are responsible for the large amount of energy consumed in cellular networks, these approaches have the potential to save a significant amount of energy, as shown in various studies. However, it is noticed that certain simplifying assumptions made in the published papers introduce inaccuracies. This review will discuss these assumptions, particularly, an assumption that ignores the effect of traffic-load-dependent factors on energy consumption. We show here that considering this effect may lead to noticeably lower benefit than in models that ignore this effect. Finally, potential future research directions are discussed.",
"title": ""
},
{
"docid": "f31cbd5b8594e27b9aea23bdb2074a24",
"text": "The hyphenation algorithm of OpenOffice.org 2.0.2 is a generalization of TEX’s hyphenation algorithm that allows automatic non-standard hyphenation by competing standard and non-standard hyphenation patterns. With the suggested integration of linguistic tools for compound decomposition and word sense disambiguation, this algorithm would be able to do also more precise non-standard and standard hyphenation for several languages.",
"title": ""
},
{
"docid": "29816f0358cfff1c1dddce203003ad41",
"text": "Increasing volumes of trajectory data require analysis methods which go beyond the visual. Methods for computing trajectory analysis typically assume linear interpolation between quasi-regular sampling points. This assumption, however, is often not realistic, and can lead to a meaningless analysis for sparsely and/or irregularly sampled data. We propose to use the space-time prism model instead, allowing to represent the influence of speed on possible trajectories within a volume. We give definitions for the similarity of trajectories in this model and describe algorithms for its computation using the Fréchet and the equal time distance.",
"title": ""
},
{
"docid": "00cabf8e41382d8a1b206da952b8633a",
"text": "Autonomous vehicle operations in outdoor environments challenge robotic perception. Construction, mining, agriculture, and planetary exploration environments are examples in which the presence of dust, fog, rain, changing illumination due to low sun angles, and lack of contrast can dramatically degrade conventional stereo and laser sensing. Nonetheless, environment perception can still succeed under compromised visibility through the use of a millimeter-wave radar. Radar also allows for multiple object detection within a single beam, whereas other range sensors are limited to one target return per emission. However, radar has shortcomings as well, such as a large footprint, specularity effects, and limited range resolution, all of which may result in poor environment survey or difficulty in interpretation. This paper presents a novel method for ground segmentation using a millimeter-wave radar mounted on a ground vehicle. Issues relevant to short-range perception in an outdoor environment are described along with field experiments and a quantitative comparison to laser data. The ability to classify the ground is successfully demonstrated in clear and low-visibility conditions, and significant improvement in range accuracy is shown. Finally, conclusions are drawn on the utility of millimeter-wave radar as a robotic sensor for persistent and accurate perception in natural scenarios. C © 2011 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "15cde62b96f8c87bedb6f721befa3ae4",
"text": "To investigate the dispersion mechanism(s) of ternary dry powder inhaler (DPI) formulations by comparison of the interparticulate adhesions and in vitro performance of a number of carrier–drug–fines combinations. The relative levels of adhesion and cohesion between a lactose carrier and a number of drugs and fine excipients were quantified using the cohesion–adhesion balance (CAB) approach to atomic force microscopy. The in vitro performance of formulations produced using these materials was quantified and the particle size distribution of the aerosol clouds produced from these formulations determined by laser diffraction. Comparison between CAB ratios and formulation performance suggested that the improvement in performance brought about by the addition of fines to which the drug was more adhesive than cohesive might have been due to the formation of agglomerates of drug and fines particles. This was supported by aerosol cloud particle size data. The mechanism(s) underlying the improved performance of ternary formulations where the drug was more cohesive than adhesive to the fines was unclear. The performance of ternary DPI formulations might be increased by the preferential formation of drug–fines agglomerates, which might be subject to greater deagglomeration forces during aerosolisation than smaller agglomerates, thus producing better formulation performance.",
"title": ""
},
{
"docid": "e29eb914db494aadd140b7b75298f1ef",
"text": "AbstractThe Ainu, a minority ethnic group from the northernmost island of Japan, was investigated for DNA polymorphisms both from maternal (mitochondrial DNA) and paternal (Y chromosome) lineages extensively. Other Asian populations inhabiting North, East, and Southeast Asia were also examined for detailed phylogeographic analyses at the mtDNA sequence type as well as Y-haplogroup levels. The maternal and paternal gene pools of the Ainu contained 25 mtDNA sequence types and three Y-haplogroups, respectively. Eleven of the 25 mtDNA sequence types were unique to the Ainu and accounted for over 50% of the population, whereas 14 were widely distributed among other Asian populations. Of the 14 shared types, the most frequently shared type was found in common among the Ainu, Nivkhi in northern Sakhalin, and Koryaks in the Kamchatka Peninsula. Moreover, analysis of genetic distances calculated from the mtDNA data revealed that the Ainu seemed to be related to both the Nivkhi and other Japanese populations (such as mainland Japanese and Okinawans) at the population level. On the paternal side, the vast majority (87.5%) of the Ainu exhibited the Asian-specific YAP+ lineages (Y-haplogroups D-M55* and D-M125), which were distributed only in the Japanese Archipelago in this analysis. On the other hand, the Ainu exhibited no other Y-haplogroups (C-M8, O-M175*, and O-M122*) common in mainland Japanese and Okinawans. It is noteworthy that the rest of the Ainu gene pool was occupied by the paternal lineage (Y-haplogroup C-M217*) from North Asia including Sakhalin. Thus, the present findings suggest that the Ainu retain a certain degree of their own genetic uniqueness, while having higher genetic affinities with other regional populations in Japan and the Nivkhi among Asian populations.",
"title": ""
},
{
"docid": "d7528de0c00c3d37fa31b8dcb5123fd3",
"text": "We propose and throughly investigate a temporalized version of the popular Massey’s technique for rating actors in sport competitions. The method can be described as a dynamic temporal process in which team ratings are updated at every match according to their performance during the match and the strength of the opponent team. Using the Italian soccer dataset, we empirically show that the method has a good foresight prediction accuracy.",
"title": ""
},
{
"docid": "8240df0c9498482522ef86b4b1e924ab",
"text": "The advent of the IT-led era and the increased competition have forced companies to react to the new changes in order to remain competitive. Enterprise resource planning (ERP) systems offer distinct advantages in this new business environment as they lower operating costs, reduce cycle times and (arguably) increase customer satisfaction. This study examines, via an exploratory survey of 26 companies, the underlying reasons why companies choose to convert from conventional information systems (IS) to ERP systems and the changes brought in, particularly in the accounting process. The aim is not only to understand the changes and the benefits involved in adopting ERP systems compared with conventional IS, but also to establish the best way forward in future ERP applications. The empirical evidence confirms a number of changes in the accounting process introduced with the adoption of ERP systems.",
"title": ""
},
{
"docid": "e54b9897e79391b86327883164781dff",
"text": "This review paper gives a detailed account of the development of mesh generation techniques on planar regions, over curved surfaces and within volumes for the past years. Emphasis will be on the generation of the unstructured meshes for purpose of complex industrial applications and adaptive refinement finite element analysis. Over planar domains and on curved surfaces, triangular and quadrilateral elements will be used, whereas for three-dimensional structures, tetrahedral and hexahedral elements have to be generated. Recent advances indicate that mesh generation on curved surfaces is quite mature now that elements following closely to surface curvatures could be generated more or less in an automatic manner. As the boundary recovery procedure are getting more and more robust and efficient, discretization of complex solid objects into tetrahedra by means of Delaunay triangulation and other techniques becomes routine work in industrial applications. However, the decomposition of a general object into hexahedral elements in a robust and efficient manner remains as a challenge for researchers in the mesh generation community. Algorithms for the generation of anisotropic meshes on 2D and 3D domains have also been proposed for problems where elongated elements along certain directions are required. A web-site for the latest development in meshing techniques is included for the interested readers.",
"title": ""
},
{
"docid": "f6df414f8f61dbdab32be2f05d921cb8",
"text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.",
"title": ""
},
{
"docid": "4dc50a9c0665b5e2a7dcbc369acefdb0",
"text": "Nature is the principal source for proposing new optimization methods such as genetic algorithms (GA) and simulated annealing (SA) methods. All traditional evolutionary algorithms are heuristic population-based search procedures that incorporate random variation and selection. The main contribution of this study is that it proposes a novel optimization method that relies on one of the theories of the evolution of the universe; namely, the Big Bang and Big Crunch Theory. In the Big Bang phase, energy dissipation produces disorder and randomness is the main feature of this phase; whereas, in the Big Crunch phase, randomly distributed particles are drawn into an order. Inspired by this theory, an optimization algorithm is constructed, which will be called the Big Bang–Big Crunch (BB–BC) method that generates random points in the Big Bang phase and shrinks those points to a single representative point via a center of mass or minimal cost approach in the Big Crunch phase. It is shown that the performance of the new (BB–BC) method demonstrates superiority over an improved and enhanced genetic search algorithm also developed by the authors of this study, and outperforms the classical genetic algorithm (GA) for many benchmark test functions. q 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c6ad8ac0c8e9c0bd86868128eee6a916",
"text": "Online reviews are a cornerstone of consumer decision making. However, their authenticity and quality has proven hard to control, especially as polluters target these reviews toward promoting products or in degrading competitors. In a troubling direction, the widespread growth of crowdsourcing platforms like Mechanical Turk has created a large-scale, potentially difficult-to-detect workforce of malicious review writers. Hence, this paper tackles the challenge of uncovering crowdsourced manipulation of online reviews through a three-part effort: (i) First, we propose a novel sampling method for identifying products that have been targeted for manipulation and a seed set of deceptive reviewers who have been enlisted through crowdsourcing platforms. (ii) Second, we augment this base set of deceptive reviewers through a reviewer-reviewer graph clustering approach based on a Markov Random Field where we define individual potentials (of single reviewers) and pair potentials (between two reviewers). (iii) Finally, we embed the results of this probabilistic model into a classification framework for detecting crowd-manipulated reviews. We find that the proposed approach achieves up to 0.96 AUC, outperforming both traditional detection methods and a SimRank-based alternative clustering approach.",
"title": ""
},
{
"docid": "e0e00fdfecc4a23994315579938f740e",
"text": "Budget allocation in online advertising deals with distributing the campaign (insertion order) level budgets to different sub-campaigns which employ different targeting criteria and may perform differently in terms of return-on-investment (ROI). In this paper, we present the efforts at Turn on how to best allocate campaign budget so that the advertiser or campaign-level ROI is maximized. To do this, it is crucial to be able to correctly determine the performance of sub-campaigns. This determination is highly related to the action-attribution problem, i.e. to be able to find out the set of ads, and hence the sub-campaigns that provided them to a user, that an action should be attributed to. For this purpose, we employ both last-touch (last ad gets all credit) and multi-touch (many ads share the credit) attribution methodologies. We present the algorithms deployed at Turn for the attribution problem, as well as their parallel implementation on the large advertiser performance datasets. We conclude the paper with our empirical comparison of last-touch and multi-touch attribution-based budget allocation in a real online advertising setting.",
"title": ""
},
{
"docid": "91066155de090efcd3756f4f98b11e50",
"text": "Recently, the use of XML continues to grow in popularity, large repositories of XML documents are going to emerge, and users are likely to pose increasingly more complex queries on these data sets. In 2001 XQuery is decided by the World Wide Web Consortium (W3C) as the standard XML query language. In this article, we describe the design and implementation of an efficient and scalable purely relational XQuery processor which translates expressions of the XQuery language into their equivalent SQL evaluation scripts. The experiments of this article demonstrated the efficiency and scalability of our purely relational approach in comparison to the native XML/XQuery functionality supported by conventional RDBMSs and has shown that our purely relational approach for implementing XQuery processor deserves to be pursued further.",
"title": ""
}
] |
scidocsrr
|
bc06b9197d20496c46869cf310c831d8
|
How did the discussion go: Discourse act classification in social media conversations
|
[
{
"docid": "8f9af064f348204a71f0e542b2b98e7b",
"text": "It is often useful to classify email according to the intent of the sender (e.g., \"propose a meeting\", \"deliver information\"). We present experimental results in learning to classify email in this fashion, where each class corresponds to a verbnoun pair taken from a predefined ontology describing typical “email speech acts”. We demonstrate that, although this categorization problem is quite different from “topical” text classification, certain categories of messages can nonetheless be detected with high precision (above 80%) and reasonable recall (above 50%) using existing text-classification learning methods. This result suggests that useful task-tracking tools could be constructed based on automatic classification into this taxonomy.",
"title": ""
},
{
"docid": "59af45fa33fd70d044f9749e59ba3ca7",
"text": "Retweeting is the key mechanism for information diffusion in Twitter. It emerged as a simple yet powerful way of disseminating useful information. Even though a lot of information is shared via its social network structure in Twitter, little is known yet about how and why certain information spreads more widely than others. In this paper, we examine a number of features that might affect retweetability of tweets. We gathered content and contextual features from 74M tweets and used this data set to identify factors that are significantly associated with retweet rate. We also built a predictive retweet model. We found that, amongst content features, URLs and hashtags have strong relationships with retweetability. Amongst contextual features, the number of followers and followees as well as the age of the account seem to affect retweetability, while, interestingly, the number of past tweets does not predict retweetability of a user’s tweet. We believe that this research would inform the design of sensemaking tools for Twitter streams as well as other general social media collections. Keywords-Twitter; retweet; tweet; follower; social network; social media; factor analysis",
"title": ""
}
] |
[
{
"docid": "039044aaa25f047e28daba08237c0de5",
"text": "BI technologies are essential to running today's businesses and this technology is going through sea changes.",
"title": ""
},
{
"docid": "79d5cb45b36a707727ecfcae0a091498",
"text": "We use 810 versions of the Linux kernel, released over a perio d of 14 years, to characterize the system’s evolution, using Lehman’s laws of software evolut i n as a basis. We investigate different possible interpretations of these laws, as reflected by diff erent metrics that can be used to quantify them. For example, system growth has traditionally been qua tified using lines of code or number of functions, but functional growth of an operating system l ike Linux can also be quantified using the number of system calls. In addition we use the availabili ty of the source code to track metrics, such as McCabe’s cyclomatic complexity, that have not been tr acked across so many versions previously. We find that the data supports several of Lehman’ s l ws, mainly those concerned with growth and with the stability of the process. We also make som e novel observations, e.g. that the average complexity of functions is decreasing with time, bu t this is mainly due to the addition of many small functions.",
"title": ""
},
{
"docid": "affa4a43b68f8c158090df3a368fe6b6",
"text": "The purpose of this study is to evaluate the impact of modulated light projections perceived through the eyes on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwaves frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.",
"title": ""
},
{
"docid": "de9767297368dffbdbae4073338bdb15",
"text": "An increasing number of applications rely on 3D geoinformation. In addition to 3D geometry, these applications particularly require complex semantic information. In the context of spatial data infrastructures the needed data are drawn from distributed sources and often are thematically and spatially fragmented. Straight forward joining of 3D objects would inevitably lead to geometrical inconsistencies such as cracks, permeations, or other inconsistencies. Semantic information can help to reduce the ambiguities for geometric integration, if it is coherently structured with respect to geometry. The paper discusses these problems with special focus on virtual 3D city models and the semantic data model CityGML, an emerging standard for the representation and the exchange of 3D city models based on ISO 191xx standards and GML3. Different data qualities are analyzed with respect to their semantic and spatial structure leading to the distinction of six categories regarding the spatio-semantic coherence of 3D city models. Furthermore, it is shown how spatial data with complex object descriptions support the integration process. The derived categories will help in the future development of automatic integration methods for complex 3D geodata.",
"title": ""
},
{
"docid": "7fc3dfcc8fa43c36938f41877a65bed7",
"text": "We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar modelbased approaches and competes with state-of-the art approaches that require real pose-annotated images. 1",
"title": ""
},
{
"docid": "1a5183d8e0a0a7a52935e357e9b525ed",
"text": "Embedded systems, as opposed to traditional computers, bring an incredible diversity. The number of devices manufactured is constantly increasing and each has a dedicated software, commonly known as firmware. Full firmware images are often delivered as multiple releases, correcting bugs and vulnerabilities, or adding new features. Unfortunately, there is no centralized or standardized firmware distribution mechanism. It is therefore difficult to track which vendor or device a firmware package belongs to, or to identify which firmware version is used in deployed embedded devices. At the same time, discovering devices that run vulnerable firmware packages on public and private networks is crucial to the security of those networks. In this paper, we address these problems with two different, yet complementary approaches: firmware classification and embedded web interface fingerprinting. We use supervised Machine Learning on a database subset of real world firmware files. For this, we first tell apart firmware images from other kind of files and then we classify firmware images per vendor or device type. Next, we fingerprint embedded web interfaces of both physical and emulated devices. This allows recognition of web-enabled devices connected to the network. In some cases, this complementary approach allows to logically link web-enabled online devices with the corresponding firmware package that is running on the devices. Finally, we test the firmware classification approach on 215 images with an accuracy of 93.5%, and the device fingerprinting approach on 31 web interfaces with 89.4% accuracy.",
"title": ""
},
{
"docid": "0d60045d58a4fbad2a3a30bd8b9483a8",
"text": "We present R2G, a tool for the automatic migration of databases from a relational to a Graph Database Management System (GDBMS). GDBMSs provide a flexible and efficient solution to the management of graph-based data (e.g., social and semantic Web data) and, in this context, the conversion of the persistent layer of an application from a relational to a graph format can be very beneficial. R2G provides a thorough solution to this problem with a minimal impact to the application layer: it transforms a relational database r into a graph database g and any conjunctive query over r into a graph query over g. Constraints defined over r are suitably used in the translation to minimize the number of data access required by graph queries. The approach refers to an abstract notion of graph database and this allows R2G to map relational database into different GDBMSs. The demonstration of R2G allows the direct comparison of the relational and the graph approaches to data management.",
"title": ""
},
{
"docid": "b150e9aef47001e1b643556f64c5741d",
"text": "BACKGROUND\nMany adolescents have poor mental health literacy, stigmatising attitudes towards people with mental illness, and lack skills in providing optimal Mental Health First Aid to peers. These could be improved with training to facilitate better social support and increase appropriate help-seeking among adolescents with emerging mental health problems. teen Mental Health First Aid (teen MHFA), a new initiative of Mental Health First Aid International, is a 3 × 75 min classroom based training program for students aged 15-18 years.\n\n\nMETHODS\nAn uncontrolled pilot of the teen MHFA course was undertaken to examine the feasibility of providing the program in Australian secondary schools, to test relevant measures of student knowledge, attitudes and behaviours, and to provide initial evidence of program effects.\n\n\nRESULTS\nAcross four schools, 988 students received the teen MHFA program. 520 students with a mean age of 16 years completed the baseline questionnaire, 345 completed the post-test and 241 completed the three-month follow-up. Statistically significant improvements were found in mental health literacy, confidence in providing Mental Health First Aid to a peer, help-seeking intentions and student mental health, while stigmatising attitudes significantly reduced.\n\n\nCONCLUSIONS\nteen MHFA appears to be an effective and feasible program for training high school students in Mental Health First Aid techniques. Further research is required with a randomized controlled design to elucidate the causal role of the program in the changes observed.",
"title": ""
},
{
"docid": "65d84bb6907a34f8bc8c4b3d46706e53",
"text": "This study analyzes the correlation between video game usage and academic performance. Scholastic Aptitude Test (SAT) and grade-point average (GPA) scores were used to gauge academic performance. The amount of time a student spends playing video games has a negative correlation with students' GPA and SAT scores. As video game usage increases, GPA and SAT scores decrease. A chi-squared analysis found a p value for video game usage and GPA was greater than a 95% confidence level (0.005 < p < 0.01). This finding suggests that dependence exists. SAT score and video game usage also returned a p value that was significant (0.01 < p < 0.05). Chi-squared results were not significant when comparing time spent studying and an individual's SAT score. This research suggests that video games may have a detrimental effect on an individual's GPA and possibly on SAT scores. Although these results show statistical dependence, proving cause and effect remains difficult, since SAT scores represent a single test on a given day. The effects of video games maybe be cumulative; however, drawing a conclusion is difficult because SAT scores represent a measure of general knowledge. GPA versus video games is more reliable because both involve a continuous measurement of engaged activity and performance. The connection remains difficult because of the complex nature of student life and academic performance. Also, video game usage may simply be a function of specific personality types and characteristics.",
"title": ""
},
{
"docid": "2d32062668cb4b010f69267911124718",
"text": "Interfascial plane blocks have becomevery popular in recent years. A novel interfascial plane block, erector spinae plane (ESP) block can target the dorsal and ventral rami of the thoracic spinal nerves but its effect in neuropathic pain is unclear [1]. If acute pain management for herpes zoster is not done aggressively, it can turn into chronic pain. However; ESP block is first described as inject local anesthetics around the erector spinae muscle at the level of T5 spinous process for thoracic region, if the block is performed at lower levels it could be effective for abdominal and lumbar region [2]. There have been no reports on the efficacy of ESP block over the herpes zoster pain. Here it we report the successful management of acute herpes zoster pain using low thoracic ESP block. Awritten consent formwasobtained from thepatient for this report. The patient was an 72-year-oldmanwho presentedwith severe painful vesicles (9/10 VAS intensity) over posterior lumbar and lateral abdominal region (Fig. 1A). The patient received amitriptyline 10 mg, non-",
"title": ""
},
{
"docid": "2b8305c10f1105905f2a2f9651cb7c9f",
"text": "Many distributed collective decision-making processes must balance diverse individual preferences with a desire for collective unity. We report here on an extensive session of behavioral experiments on biased voting in networks of individuals. In each of 81 experiments, 36 human subjects arranged in a virtual network were financially motivated to reach global consensus to one of two opposing choices. No payments were made unless the entire population reached a unanimous decision within 1 min, but different subjects were paid more for consensus to one choice or the other, and subjects could view only the current choices of their network neighbors, thus creating tensions between private incentives and preferences, global unity, and network structure. Along with analyses of how collective and individual performance vary with network structure and incentives generally, we find that there are well-studied network topologies in which the minority preference consistently wins globally; that the presence of \"extremist\" individuals, or the awareness of opposing incentives, reliably improve collective performance; and that certain behavioral characteristics of individual subjects, such as \"stubbornness,\" are strongly correlated with earnings.",
"title": ""
},
{
"docid": "4249c95fcd869434312524f05c013c55",
"text": "The demands on visual recognition systems do not end with the complexity offered by current large-scale image datasets, such as ImageNet. In consequence, we need curious and continuously learning algorithms that actively acquire knowledge about semantic concepts which are present in available unlabeled data. As a step towards this goal, we show how to perform continuous active learning and exploration, where an algorithm actively selects relevant batches of unlabeled examples for annotation. These examples could either belong to already known or to yet undiscovered classes. Our algorithm is based on a new generalization of the Expected Model Output Change principle for deep architectures and is especially tailored to deep neural networks. Furthermore, we show easy-to-implement approximations that yield efficient techniques for active selection. Empirical experiments show that our method outperforms currently used heuristics.",
"title": ""
},
{
"docid": "6572c7d33fcb3f1930a41b4b15635ffe",
"text": "Neurons in area MT (V5) are selective for the direction of visual motion. In addition, many are selective for the motion of complex patterns independent of the orientation of their components, a behavior not seen in earlier visual areas. We show that the responses of MT cells can be captured by a linear-nonlinear model that operates not on the visual stimulus, but on the afferent responses of a population of nonlinear V1 cells. We fit this cascade model to responses of individual MT neurons and show that it robustly predicts the separately measured responses to gratings and plaids. The model captures the full range of pattern motion selectivity found in MT. Cells that signal pattern motion are distinguished by having convergent excitatory input from V1 cells with a wide range of preferred directions, strong motion opponent suppression and a tuned normalization that may reflect suppressive input from the surround of V1 cells.",
"title": ""
},
{
"docid": "3c2b68ac95f1a9300585b73ca4b83122",
"text": "The success of various applications including robotics, digital content creation, and visualization demand a structured and abstract representation of the 3D world from limited sensor data. Inspired by the nature of human perception of 3D shapes as a collection of simple parts, we explore such an abstract shape representation based on primitives. Given a single depth image of an object, we present 3DPRNN, a generative recurrent neural network that synthesizes multiple plausible shapes composed of a set of primitives. Our generative model encodes symmetry characteristics of common man-made objects, preserves long-range structural coherence, and describes objects of varying complexity with a compact representation. We also propose a method based on Gaussian Fields to generate a large scale dataset of primitive-based shape representations to train our network. We evaluate our approach on a wide range of examples and show that it outperforms nearest-neighbor based shape retrieval methods and is on-par with voxelbased generative models while using a significantly reduced parameter space.",
"title": ""
},
{
"docid": "fbbd24318caac8a8a2a63670f6a624cd",
"text": "We show that elliptic-curve cryptography implementations on mobile devices are vulnerable to electromagnetic and power side-channel attacks. We demonstrate full extraction of ECDSA secret signing keys from OpenSSL and CoreBitcoin running on iOS devices, and partial key leakage from OpenSSL running on Android and from iOS's CommonCrypto. These non-intrusive attacks use a simple magnetic probe placed in proximity to the device, or a power probe on the phone's USB cable. They use a bandwidth of merely a few hundred kHz, and can be performed cheaply using an audio card and an improvised magnetic probe.",
"title": ""
},
{
"docid": "ef64da59880750872e056822c17ab00e",
"text": "The efficient cooling is very important for a light emitting diode (LED) module because both the energy efficiency and lifespan decrease significantly as the junction temperature increases. The fin heat sink is commonly used for cooling LED modules with natural convection conditions. This work proposed a new design method for high-power LED lamp cooling by combining plate fins with pin fins and oblique fins. Two new types of fin heat sinks called the pin-plate fin heat sink (PPF) and the oblique-plate fin heat sink (OPF) were designed and their heat dissipation performances were compared with three conventional fin heat sinks, the plate fin heat sink, the pin fin heat sink and the oblique fin heat sink. The LED module was assumed to be operated under 1 atmospheric pressure and its heat input is set to 4 watts. The PPF and OPF models show lower junction temperatures by about 6°C ~ 12°C than those of three conventional models. The PPF with 8 plate fins inside (PPF-8) and the OPF with 7 plate fins inside (OPF-7) showed the best thermal performance among all the PPF and OPF designs, respectively. The total thermal resistances of the PPF-8 and OPF-7 models decreased by 9.0% ~ 15.6% compared to those of three conventional models.",
"title": ""
},
{
"docid": "0ee2dff9fb026b5c117d39fa537ab1b3",
"text": "Motor Imagery (MI) is a highly supervised method nowadays for the disabled patients to give them hope. This paper proposes a differentiation method between imagery left and right hands movement using Daubechies wavelet of Discrete Wavelet Transform (DWT) and Levenberg-Marquardt back propagation training algorithm of Neural Network (NN). DWT decomposes the raw EEG data to extract significant features that provide feature vectors precisely. Levenberg-Marquardt Algorithm (LMA) based neural network uses feature vectors as input for classification of the two class data and outcomes overall classification accuracy of 92%. Previously various features and methods used but this recommended method exemplifies that statistical features provide better accuracy for EEG classification. Variation among features indicates differences between neural activities of two brain hemispheres due to two imagery hands movement. Results from the classifier are used to interface human brain with machine for better performance that requires high precision and accuracy scheme.",
"title": ""
},
{
"docid": "a2891655fbb08c584c6efe07ee419fb7",
"text": "Forecasting the flow of crowds is of great importance to traffic management and public safety, and very challenging as it is affected by many complex factors, such as inter-region traffic, events, and weather. We propose a deep-learning-based approach, called ST-ResNet, to collectively forecast the inflow and outflow of crowds in each and every region of a city. We design an end-to-end structure of ST-ResNet based on unique properties of spatio-temporal data. More specifically, we employ the residual neural network framework to model the temporal closeness, period, and trend properties of crowd traffic. For each property, we design a branch of residual convolutional units, each of which models the spatial properties of crowd traffic. ST-ResNet learns to dynamically aggregate the output of the three residual neural networks based on data, assigning different weights to different branches and regions. The aggregation is further combined with external factors, such as weather and day of the week, to predict the final traffic of crowds in each and every region. Experiments on two types of crowd flows in Beijing and New York City (NYC) demonstrate that the proposed ST-ResNet outperforms six well-known methods.",
"title": ""
},
{
"docid": "2f012c2941f8434b9d52ae1942b64aff",
"text": "Classification of plants based on a multi-organ approach is very challenging. Although additional data provide more information that might help to disambiguate between species, the variability in shape and appearance in plant organs also raises the degree of complexity of the problem. Despite promising solutions built using deep learning enable representative features to be learned for plant images, the existing approaches focus mainly on generic features for species classification, disregarding the features representing plant organs. In fact, plants are complex living organisms sustained by a number of organ systems. In our approach, we introduce a hybrid generic-organ convolutional neural network (HGO-CNN), which takes into account both organ and generic information, combining them using a new feature fusion scheme for species classification. Next, instead of using a CNN-based method to operate on one image with a single organ, we extend our approach. We propose a new framework for plant structural learning using the recurrent neural network-based method. This novel approach supports classification based on a varying number of plant views, capturing one or more organs of a plant, by optimizing the contextual dependencies between them. We also present the qualitative results of our proposed models based on feature visualization techniques and show that the outcomes of visualizations depict our hypothesis and expectation. Finally, we show that by leveraging and combining the aforementioned techniques, our best network outperforms the state of the art on the PlantClef2015 benchmark. The source code and models are available at https://github.com/cs-chan/Deep-Plant.",
"title": ""
},
{
"docid": "c03de8afcb5a6fce6c22e9394367f54d",
"text": "Thus the Gestalt domain with its three operations forms a general algebra. J. N. Wilson, Handbook of Computer Vision Algorithms in Image Algebra, 2nd ed. (1072), Computational Techniques and Algorithms for Image Processing (S. (1047), Universal Algebra and Coalgebra (Klaus Denecke, Shelly L. Wismath), World (986), Handbook of Mathematical Models in Computer Vision, (N. Paragios, (985), Numerical Optimization, second edition (Jorge Nocedal, Stephen J.",
"title": ""
}
] |
scidocsrr
|
3dd676472c987fdb0109b42a4eb7473e
|
Recent developments in human motion analysis
|
[
{
"docid": "b01bc5df28e670c82d274892a407b0aa",
"text": "We propose that many human behaviors can be accurately described as a set of dynamic models (e.g., Kalman filters) sequenced together by a Markov chain. We then use these dynamic Markov models to recognize human behaviors from sensory data and to predict human behaviors over a few seconds time. To test the power of this modeling approach, we report an experiment in which we were able to achieve 95 accuracy at predicting automobile drivers' subsequent actions from their initial preparatory movements.",
"title": ""
}
] |
[
{
"docid": "0cfa125deea633dd978478b0dd7d807d",
"text": "The purpose of this paper is to review research pertaining to the limitations and advantages of User-Robot Interaction for Unmanned-Vehicles (UVs) swarming. We identify and discuss results showing technologies that mitigate the observed problems such as specialized level of automation and human factors in controlling a swarm of mobile agents. In the paper, we first present an overview of definitions and important terms of swarm robotics and its application in multiple UVs systems. Then, the discussion of human-swam interactions in controlling of multiple vehicles is provided with consideration of varies limitations and design guidelines. Finally, we discussed challenges and potential research aspects in the area of Human-robot interaction design in large swarm of UVs and robots.",
"title": ""
},
{
"docid": "b1d806b9ef816b6e67d5d5606dfc1dcb",
"text": "Tender and Swollen Joint Assessment, Psoriasis Area and Severity Index (PASI), Nail Psoriasis Severity Index (NAPSI), Modified Nail Psoriasis Severity Index (mNAPSI), Mander/Newcastle Enthesitis Index (MEI), Leeds Enthesitis Index (LEI), Spondyloarthritis Research Consortium of Canada (SPARCC), Maastricht Ankylosing Spondylitis Enthesis Score (MASES), Leeds Dactylitis Index (LDI), Patient Global for Psoriatic Arthritis, Dermatology Life Quality Index (DLQI), Psoriatic Arthritis Quality of Life (PsAQOL), Functional Assessment of Chronic Illness Therapy–Fatigue (FACIT-F), Psoriatic Arthritis Response Criteria (PsARC), Psoriatic Arthritis Joint Activity Index (PsAJAI), Disease Activity in Psoriatic Arthritis (DAPSA), and Composite Psoriatic Disease Activity Index (CPDAI)",
"title": ""
},
{
"docid": "43f341cf9017305d6b94a11b8b52ec28",
"text": "Tagless interpreters for well-typed terms in some object language are a standard example of the power and benefit of precise indexing in types, whether with dependent types, or generalized algebraic datatypes. The key is to reflect object language types as indices (however they may be constituted) for the term datatype in the host language, so that host type coincidence ensures object type coincidence. Whilst this technique is widespread for simply typed object languages, dependent types have proved a tougher nut with nontrivial computation in type equality. In their type-safe representations, Danielsson [2006] and Chapman [2009] succeed in capturing the equality rules, but at the cost of representing equality derivations explicitly within terms. This article constructs a type-safe representation for a dependently typed object language, dubbed KIPLING, whose computational type equality just appropriates that of its host, Agda. The KIPLING interpreter example is not merely de rigeur - it is key to the construction. At the heart of the technique is that key component of generic programming, the universe.",
"title": ""
},
{
"docid": "cf5452e43b6141728da673892c680b6e",
"text": "This paper presents another approach of Thai word segmentation, which is composed of two processes : syllable segmentation and syllable merging. Syllable segmentation is done on the basis of trigram statistics. Syllable merging is done on the basis of collocation between syllables. We argue that many of word segmentation ambiguities can be resolved at the level of syllable segmentation. Since a syllable is a more well-defined unit and more consistent in analysis than a word, this approach is more reliable than other approaches that use a wordsegmented corpus. This approach can perform well at the level of accuracy 81-98% depending on the dictionary used in the segmentation.",
"title": ""
},
{
"docid": "a6d46da1b1d8c432ca7b54635567abe3",
"text": "Building the Internet of Things requires deploying a huge number of objects with full or limited connectivity to the Internet. Given that these objects are exposed to attackers and generally not secured-by-design, it is essential to be able to update them, to patch their vulnerabilities and to prevent hackers from enrolling them into botnets. Ideally, the update infrastructure should implement the CIA triad properties, i.e., confidentiality, integrity and availability. In this work, we investigate how the use of a blockchain infrastructure can meet these requirements, with a focus on availability. In addition, we propose a peer-to-peer mechanism, to spread updates between objects that have limited access to the Internet. Finally, we give an overview of our ongoing prototype implementation.",
"title": ""
},
{
"docid": "4f60b7c7483ec68804caa3ccdd488c50",
"text": "We propose an online, end-to-end, neural generative conversational model for open-domain dialog. It is trained using a unique combination of offline two-phase supervised learning and online human-inthe-loop active learning. While most existing research proposes offline supervision or hand-crafted reward functions for online reinforcement, we devise a novel interactive learning mechanism based on a diversity-promoting heuristic for response generation and one-character userfeedback at each step. Experiments show that our model inherently promotes the generation of meaningful, relevant and interesting responses, and can be used to train agents with customized personas, moods and conversational styles.",
"title": ""
},
{
"docid": "ef5c44f6895178c8727272dbb74b5df2",
"text": "We present a systematic analysis of existing multi-domain learning approaches with respect to two questions. First, many multidomain learning algorithms resemble ensemble learning algorithms. (1) Are multi-domain learning improvements the result of ensemble learning effects? Second, these algorithms are traditionally evaluated in a balanced class label setting, although in practice many multidomain settings have domain-specific class label biases. When multi-domain learning is applied to these settings, (2) are multidomain methods improving because they capture domain-specific class biases? An understanding of these two issues presents a clearer idea about where the field has had success in multi-domain learning, and it suggests some important open questions for improving beyond the current state of the art.",
"title": ""
},
{
"docid": "37edb948f37baa14aff4843d3f83e69b",
"text": "This article concerns the manner in which group interaction during focus groups impacted upon the data generated in a study of adolescent sexual health. Twenty-nine group interviews were conducted with secondary school pupils in Ireland, and data were subjected to a qualitative analysis. In exploring the relationship between method and theory generation, we begin by focusing on the ethnographic potential within group interviews. We propose that at times during the interviews, episodes of acting-out, or presenting a particular image in the presence of others, can be highly revealing in attempting to understand the normative rules embedded in the culture from which participants are drawn. However, we highlight a specific problem with distinguishing which parts of the group interview are a valid representation of group processes and which parts accurately reflect individuals' retrospective experiences of reality. We also note that at various points in the interview, focus groups have the potential to reveal participants' vulnerabilities. In addition, group members themselves can challenge one another on how aspects of their sub-culture are represented within the focus group, in a way that is normally beyond reach within individual interviews. The formation and composition of focus groups, particularly through the clustering of like-minded individuals, can affect the dominant views being expressed within specific groups. While focus groups have been noted to have an educational and transformative potential, we caution that they may also be a source of inaccurate information, placing participants at risk. Finally, the opportunities that focus groups offer in enabling researchers to cross-check the trustworthiness of data using a post-interview questionnaire are considered. We conclude by arguing that although far from flawless, focus groups are a valuable method for gathering data about health issues.",
"title": ""
},
{
"docid": "2c9e17d4c5bfb803ea1ff20ea85fbd10",
"text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.",
"title": ""
},
{
"docid": "dbcdaa0413f31407ffc61708d03a693e",
"text": "There is a fundamental discrepancy between the targeted and actual users of current analytics frameworks. Most systems are designed for the data and infrastructure of the Googles and Facebooks of the world—petabytes of data distributed across large cloud deployments consisting of thousands of cheap commodity machines. Yet, the vast majority of users operate clusters ranging from a few to a few dozen nodes, analyze relatively small datasets of up to several terabytes, and perform primarily computeintensive operations. Targeting these users fundamentally changes the way we should build analytics systems. This paper describes the design of Tupleware, a new system specifically aimed at the challenges faced by the typical user. Tupleware’s architecture brings together ideas from the database, compiler, and programming languages communities to create a powerful end-to-end solution for data analysis. We propose novel techniques that consider the data, computations, and hardware together to achieve maximum performance on a case-by-case basis. Our experimental evaluation quantifies the impact of our novel techniques and shows orders of magnitude performance improvement over alternative systems.",
"title": ""
},
{
"docid": "ca41af467a412d08858b8161a40c0240",
"text": "This master thesis presents overview on advanced persistent threat (APT) definition and explanation of it. One of the most dangerous APT named: ”Snake” will be presented along with other similar APT’s. Various virtual environments like e.g. VirtualBox will be investigated in order to understand how APT malware behaves in these environments. The central focus of this master thesis lies on detection of futuristic APT malware based on cross-referencing communication patterns in order to detect APT malware. A prototype detection tool will be created and tested in order to detect similar APT’s like Snake. Additionally a prototype malware will be supplied as well, which contain similar stealth communication techniques as the Snake APT malware. This prototype malware will be tested with the current state of commercial firewall applications in order to prove its effectiveness. In the end challenges and solutions will be presented for future research work.",
"title": ""
},
{
"docid": "bc106d18d41f89fb3b0065305a9e9bd2",
"text": "endorsing or planning coursework/other institutional needs. You may store and print the file and share it with others helping you with the specified purpose, but under no circumstances may the file be distributed or otherwise made accessible to any other third parties without the express prior permission of Palgrave Macmillan. Please contact [email protected] if you have any queries regarding use of the file.",
"title": ""
},
{
"docid": "a0ebefc5137a1973e1d1da2c478de57c",
"text": "This paper presents BOTTA, the first Arabic dialect chatbot. We explore the challenges of creating a conversational agent that aims to simulate friendly conversations using the Egyptian Arabic dialect. We present a number of solutions and describe the different components of the BOTTA chatbot. The BOTTA database files are publicly available for researchers working on Arabic chatbot technologies. The BOTTA chatbot is also publicly available for any users who want to chat with it online.",
"title": ""
},
{
"docid": "e82013b8c8d2e9e48bfdd106df18c042",
"text": "The fourth Emotion Recognition in the Wild (EmotiW) challenge is a grand challenge in the ACM International Conference on Multimodal Interaction 2016, Tokyo. EmotiW is a series of benchmarking and competition effort for researchers working in the area of automatic emotion recognition in the wild. The fourth EmotiW has two sub-challenges: Video based emotion recognition (VReco) and Group-level emotion recognition (GReco). The VReco sub-challenge is being run for the fourth time and GReco is a new sub-challenge this year.",
"title": ""
},
{
"docid": "8ee3d3200ed95cad5ff4ed77c08bb608",
"text": "We present a rare case of a non-fatal impalement injury of the brain. A 13-year-old boy was found in his classroom unconsciously lying on floor. His classmates reported that they had been playing, and throwing building bricks, when suddenly the boy collapsed. The emergency physician did not find significant injuries. Upon admission to a hospital, CT imaging revealed a \"blood path\" through the brain. After clinical forensic examination, an impalement injury was diagnosed, with the entry wound just below the left eyebrow. Eventually, the police presented a variety of pointers that were suspected to have caused the injury. Forensic trace analysis revealed human blood on one of the pointers, and subsequent STR analysis linked the blood to the injured boy. Confronted with the results of the forensic examination, the classmates admitted that they had been playing \"sword fights\" using the pointers, and that the boy had been hit during the game. The case illustrates the difficulties of diagnosing impalement injuries, and identifying the exact cause of the injury.",
"title": ""
},
{
"docid": "2cd3130e123a440cd91edafc4a6848fa",
"text": "The aim of this research is to provide a design of an integrated intelligent system for management and controlling traffic lights based on distributed long range Photoelectric Sensors in distances prior to and after the traffic lights. The appropriate distances for sensors are chosen by the traffic management department so that they can monitor cars that are moving towards a specific traffic and then transfer this data to the intelligent software that are installed in the traffic control cabinet, which can control the traffic lights according to the measures that the sensors have read, and applying a proposed algorithm based on the total calculated relative weight of each road. Accordingly, the system will open the traffic that are overcrowded and give it a longer time larger than the given time for other traffics that their measures proved that their traffic density is less. This system can be programmed with very important criteria that enable it to take decisions for intelligent automatic control of traffic lights. Also the proposed system is designed to accept information about any emergency case through an active RFID based technology. Emergency cases such as the passing of presidents, ministries and ambulances vehicles that require immediate opening for the traffic automatically. The system has the ability to open a complete path for such emergency cases from the next traffic until reaching the target destination. (end of the path). As a result the system will guarantee the fluency of traffic for such emergency cases or for the main vital streets and paths that require the fluent traffic all the time, without affecting the fluency of traffic generally at normal streets according to the time of the day and the traffic density. Also the proposed system can be tuned to run automatically without any human intervention or can be tuned to allow human intervention at certain circumstances.",
"title": ""
},
{
"docid": "8bd7658e27334e52c74b188570edce46",
"text": "☆ JH was funded by NERC and a University Royal Socie by an AIB grant awarded to DAP. ⁎ Corresponding author. Department of Anthropology, University Park, PA 16802. Tel.: +1 814 867 0453. E-mail address: [email protected] (D.A. Puts). 1090-5138/$ – see front matter © 2013 The Authors. P http://dx.doi.org/10.1016/j.evolhumbehav.2013.05.004 Please cite this article as: Hill, A.K., et al., Qu (2013), http://dx.doi.org/10.1016/j.evolhum Article history: Initial receipt 13 March 2013 Final revision received 30 May 2013 Available online xxxx",
"title": ""
},
{
"docid": "ec323459d1bd85c80bc54dc9114fd8b8",
"text": "The hype around mobile payments has been growing in Sri Lanka with the exponential growth of the mobile adoption and increasing connectivity to the Internet. Mobile payments offer advantages in comparison to other payment modes, benefiting both the consumer and the society at large. Drawing upon the traditional technology adoption theories, this research develops a conceptual framework to uncover the influential factors fundamental to the mobile payment usage. The phenomenon discussed in this research is the factors influencing the use of mobile payments. In relation to the topic, nine independent factors were selected and their influence is to be tested onto behavioral intention to use mobile payments. The questionnaires need to be handed out for data collection for correlation analyses to track the relationship between the nine independent variables and the dependent variable — behavioral intention to use mobile payments. The second correlation analysis between behavioral intention to mobile payments and mobile payment usage is also to be checked together with the two moderating variables — age and level of education.",
"title": ""
},
{
"docid": "1deeae749259ff732ad3206dc4a7e621",
"text": "In traditional active learning, there is only one labeler that always returns the ground truth of queried labels. However, in many applications, multiple labelers are available to offer diverse qualities of labeling with different costs. In this paper, we perform active selection on both instances and labelers, aiming to improve the classification model most with the lowest cost. While the cost of a labeler is proportional to its overall labeling quality, we also observe that different labelers usually have diverse expertise, and thus it is likely that labelers with a low overall quality can provide accurate labels on some specific instances. Based on this fact, we propose a novel active selection criterion to evaluate the cost-effectiveness of instance-labeler pairs, which ensures that the selected instance is helpful for improving the classification model, and meanwhile the selected labeler can provide an accurate label for the instance with a relative low cost. Experiments on both UCI and real crowdsourcing data sets demonstrate the superiority of our proposed approach on selecting cost-effective queries.",
"title": ""
},
{
"docid": "e084557ddfafe910cfce5f823cb446ee",
"text": "Avoiding kernel vulnerabilities is critical to achieving security of many systems, because the kernel is often part of the trusted computing base. This paper evaluates the current state-of-the-art with respect to kernel protection techniques, by presenting two case studies of Linux kernel vulnerabilities. First, this paper presents data on 141 Linux kernel vulnerabilities discovered from January 2010 to March 2011, and second, this paper examines how well state-of-the-art techniques address these vulnerabilities. The main findings are that techniques often protect against certain exploits of a vulnerability but leave other exploits of the same vulnerability open, and that no effective techniques exist to handle semantic vulnerabilities---violations of high-level security invariants.",
"title": ""
}
] |
scidocsrr
|
fae3095469e50fac6324869cb0f85ae0
|
Comparative Studies of Passive Imaging in Terahertz and Mid-Wavelength Infrared Ranges for Object Detection
|
[
{
"docid": "f1b137d4ac36e141415963d6fab14918",
"text": "Passive equipments operating in the 30-300 GHz (millimeter wave) band are compared to those in the 300 GHz-3 THz (submillimeter band). Equipments operating in the submillimeter band can measure distance and also spectral information and have been used to address new opportunities in security. Solid state spectral information is available in the submillimeter region making it possible to identify materials, whereas in millimeter region bulk optical properties determine the image contrast. The optical properties in the region from 30 GHz to 3 THz are discussed for some typical inorganic and organic solids. In the millimeter-wave region of the spectrum, obscurants such as poor weather, dust, and smoke can be penetrated and useful imagery generated for surveillance. In the 30 GHz-3 THz region dielectrics such as plastic and cloth are also transparent and the detection of contraband hidden under clothing is possible. A passive millimeter-wave imaging concept based on a folded Schmidt camera has been developed and applied to poor weather navigation and security. The optical design uses a rotating mirror and is folded using polarization techniques. The design is very well corrected over a wide field of view making it ideal for surveillance and security. This produces a relatively compact imager which minimizes the receiver count.",
"title": ""
},
{
"docid": "22285844f638715765d21bff139d1bb1",
"text": "The field of Terahertz (THz) radiation, electromagnetic energy, between 0.3 to 3 THz, has seen intense interest recently, because it combines some of the best properties of IR along with those of RF. For example, THz radiation can penetrate fabrics with less attenuation than IR, while its short wavelength maintains comparable imaging capabilities. We discuss major challenges in the field: designing systems and applications which fully exploit the unique properties of THz radiation. To illustrate, we present our reflective, radar-inspired THz imaging system and results, centered on biomedical burn imaging and skin hydration, and discuss challenges and ongoing research.",
"title": ""
}
] |
[
{
"docid": "44b71e1429f731cc2d91f919182f95a4",
"text": "Power management of multi-core processors is extremely important because it allows power/energy savings when all cores are not used. OS directed power management according to ACPI (Advanced Power and Configurations Interface) specifications is the common approach that industry has adopted for this purpose. While operating systems are capable of such power management, heuristics for effectively managing the power are still evolving. The granularity at which the cores are slowed down/turned off should be designed considering the phase behavior of the workloads. Using 3-D, video creation, office and e-learning applications from the SYSmark benchmark suite, we study the challenges in power management of a multi-core processor such as the AMD Quad-Core Opteron\" and Phenom\". We unveil effects of the idle core frequency on the performance and power of the active cores. We adjust the idle core frequency to have the least detrimental effect on the active core performance. We present optimized hardware and operating system configurations that reduce average active power by 30% while reducing performance by an average of less than 3%. We also present complete system measurements and power breakdown between the various systems components using the SYSmark and SPEC CPU workloads. It is observed that the processor core and the disk consume the most power, with core having the highest variability.",
"title": ""
},
{
"docid": "56525ce9536c3c8ea03ab6852b854e95",
"text": "The Distributed Denial of Service (DDoS) attacks are a serious threat in today's Internet where packets from large number of compromised hosts block the path to the victim nodes and overload the victim servers. In the newly proposed future Internet Architecture, Named Data Networking (NDN), the architecture itself has prevention measures to reduce the overload to the servers. This on the other hand increases the work and security threats to the intermediate routers. Our project aims at identifying the DDoS attack in NDN which is known as Interest flooding attack, mitigate the consequence of it and provide service to the legitimate users. We have developed a game model for the DDoS attacks and provide possible countermeasures to stop the flooding of interests. Through this game theory model, we either forward or redirect or drop the incoming interest packets thereby reducing the PIT table consumption. This helps in identifying the nodes that send malicious interest packets and eradicate their actions of sending malicious interests further. The main highlight of this work is that we have implemented the Game Theory model in the NDN architecture. It was primarily imposed for the IP internet architecture.",
"title": ""
},
{
"docid": "75961ecd0eadf854ad9f7d0d76f7e9c8",
"text": "This paper presents the design of a microstrip-CPW transition where the CPW line propagates close to slotline mode. This design allows the solution to be determined entirely though analytical techniques. In addition, a planar via-less microwave crossover using this technique is proposed. The experimental results at 5 GHz show that the crossover has a minimum isolation of 32 dB. It also has low in-band insertion loss and return loss of 1.2 dB and 18 dB respectively over more than 44 % of bandwidth.",
"title": ""
},
{
"docid": "c936e76e8db97b640a4123e66169d1b8",
"text": "Varying philosophical and theoretical orientations to qualitative inquiry remind us that issues of quality and credibility intersect with audience and intended research purposes. This overview examines ways of enhancing the quality and credibility of qualitative analysis by dealing with three distinct but related inquiry concerns: rigorous techniques and methods for gathering and analyzing qualitative data, including attention to validity, reliability, and triangulation; the credibility, competence, and perceived trustworthiness of the qualitative researcher; and the philosophical beliefs of evaluation users about such paradigm-based preferences as objectivity versus subjectivity, truth versus perspective, and generalizations versus extrapolations. Although this overview examines some general approaches to issues of credibility and data quality in qualitative analysis, it is important to acknowledge that particular philosophical underpinnings, specific paradigms, and special purposes for qualitative inquiry will typically include additional or substitute criteria for assuring and judging quality, validity, and credibility. Moreover, the context for these considerations has evolved. In early literature on evaluation methods the debate between qualitative and quantitative methodologists was often strident. In recent years the debate has softened. A consensus has gradually emerged that the important challenge is to match appropriately the methods to empirical questions and issues, and not to universally advocate any single methodological approach for all problems.",
"title": ""
},
{
"docid": "3f80322512497ceb4129d1f10a6dbf99",
"text": "Alzheimer's dis ease (AD) is a leading cause of mortality in the developed world with 70% risk attributable to genetics. The remaining 30% of AD risk is hypothesized to include environmental factors and human lifestyle patterns. Environmental factors possibly include inorganic and organic hazards, exposure to toxic metals (aluminium, copper), pesticides (organochlorine and organophosphate insecticides), industrial chemicals (flame retardants) and air pollutants (particulate matter). Long term exposures to these environmental contaminants together with bioaccumulation over an individual's life-time are speculated to induce neuroinflammation and neuropathology paving the way for developing AD. Epidemiologic associations between environmental contaminant exposures and AD are still limited. However, many in vitro and animal studies have identified toxic effects of environmental contaminants at the cellular level, revealing alterations of pathways and metabolisms associated with AD that warrant further investigations. This review provides an overview of in vitro, animal and epidemiological studies on the etiology of AD, highlighting available data supportive of the long hypothesized link between toxic environmental exposures and development of AD pathology.",
"title": ""
},
{
"docid": "575d8fed62c2afa1429d16444b6b173c",
"text": "Research into learning and teaching in higher education over the last 25 years has provided a variety of concepts, methods, and findings that are of both theoretical interest and practical relevance. It has revealed the relationships between students’ approaches to studying, their conceptions of learning, and their perceptions of their academic context. It has revealed the relationships between teachers’ approaches to teaching, their conceptions of teaching, and their perceptions of the teaching environment. And it has provided a range of tools that can be exploited for developing our understanding of learning and teaching in particular contexts and for assessing and enhancing the student experience on specific courses and programs.",
"title": ""
},
{
"docid": "49c7b5cab51301d8b921fa87d6c0b1ff",
"text": "We introduce the input output automa ton a simple but powerful model of computation in asynchronous distributed networks With this model we are able to construct modular hierarchical correct ness proofs for distributed algorithms We de ne this model and give an interesting example of how it can be used to construct such proofs",
"title": ""
},
{
"docid": "0d51dc0edc9c4e1c050b536c7c46d49d",
"text": "MOTIVATION\nThe identification of risk-associated genetic variants in common diseases remains a challenge to the biomedical research community. It has been suggested that common statistical approaches that exclusively measure main effects are often unable to detect interactions between some of these variants. Detecting and interpreting interactions is a challenging open problem from the statistical and computational perspectives. Methods in computing science may improve our understanding on the mechanisms of genetic disease by detecting interactions even in the presence of very low heritabilities.\n\n\nRESULTS\nWe have implemented a method using Genetic Programming that is able to induce a Decision Tree to detect interactions in genetic variants. This method has a cross-validation strategy for estimating classification and prediction errors and tests for consistencies in the results. To have better estimates, a new consistency measure that takes into account interactions and can be used in a genetic programming environment is proposed. This method detected five different interaction models with heritabilities as low as 0.008 and with prediction errors similar to the generated errors.\n\n\nAVAILABILITY\nInformation on the generated data sets and executable code is available upon request.",
"title": ""
},
{
"docid": "0e19123e438f39c4404d4bd486348247",
"text": "Boundary and edge cues are highly beneficial in improving a wide variety of vision tasks such as semantic segmentation, object recognition, stereo, and object proposal generation. Recently, the problem of edge detection has been revisited and significant progress has been made with deep learning. While classical edge detection is a challenging binary problem in itself, the category-aware semantic edge detection by nature is an even more challenging multi-label problem. We model the problem such that each edge pixel can be associated with more than one class as they appear in contours or junctions belonging to two or more semantic classes. To this end, we propose a novel end-to-end deep semantic edge learning architecture based on ResNet and a new skip-layer architecture where category-wise edge activations at the top convolution layer share and are fused with the same set of bottom layer features. We then propose a multi-label loss function to supervise the fused activations. We show that our proposed architecture benefits this problem with better performance, and we outperform the current state-of-the-art semantic edge detection methods by a large margin on standard data sets such as SBD and Cityscapes.",
"title": ""
},
{
"docid": "6379e89db7d9063569a342ef2056307a",
"text": "Grounded Theory is a research method that generates theory from data and is useful for understanding how people resolve problems that are of concern to them. Although the method looks deceptively simple in concept, implementing Grounded Theory research can often be confusing in practice. Furthermore, despite many papers in the social science disciplines and nursing describing the use of Grounded Theory, there are very few examples and relevant guides for the software engineering researcher. This paper describes our experience using classical (i.e., Glaserian) Grounded Theory in a software engineering context and attempts to interpret the canons of classical Grounded Theory in a manner that is relevant to software engineers. We provide model to help the software engineering researchers interpret the often fuzzy definitions found in Grounded Theory texts and share our experience and lessons learned during our research. We summarize these lessons learned in a set of fifteen guidelines.",
"title": ""
},
{
"docid": "68a0e00fccbf8658186f31915479708e",
"text": "Semantic amodal segmentation is a recently proposed extension to instance-aware segmentation that includes the prediction of the invisible region of each object instance. We present the first all-in-one end-to-end trainable model for semantic amodal segmentation that predicts the amodal instance masks as well as their visible and invisible part in a single forward pass. In a detailed analysis, we provide experiments to show which architecture choices are beneficial for an all-in-one amodal segmentation model. On the COCO amodal dataset, our model outperforms the current baseline for amodal segmentation by a large margin. To further evaluate our model, we provide two new datasets with ground truth for semantic amodal segmentation, D2S amodal and COCOA cls. For both datasets, our model provides a strong baseline performance. Using special data augmentation techniques, we show that amodal segmentation on D2S amodal is possible with reasonable performance, even without providing amodal training data.",
"title": ""
},
{
"docid": "a0fcd09ea8f29a0827385ae9f48ddd44",
"text": "Networks play a central role in modern data analysis, enabling us to reason about systems by studying the relationships between their parts. Most often in network analysis, the edges are given. However, in many systems it is difficult or impossible to measure the network directly. Examples of latent networks include economic interactions linking financial instruments and patterns of reciprocity in gang violence. In these cases, we are limited to noisy observations of events associated with each node. To enable analysis of these implicit networks, we develop a probabilistic model that combines mutuallyexciting point processes with random graph models. We show how the Poisson superposition principle enables an elegant auxiliary variable formulation and a fully-Bayesian, parallel inference algorithm. We evaluate this new model empirically on several datasets.",
"title": ""
},
{
"docid": "8d3c1e649e40bf72f847a9f8ac6edf38",
"text": "Many organizations are forming “virtual teams” of geographically distributed knowledge workers to collaborate on a variety of workplace tasks. But how effective are these virtual teams compared to traditional face-to-face groups? Do they create similar teamwork and is information exchanged as effectively? An exploratory study of a World Wide Web-based asynchronous computer conference system known as MeetingWebTM is presented and discussed. It was found that teams using this computer-mediated communication system (CMCS) could not outperform traditional (face-to-face) teams under otherwise comparable circumstances. Further, relational links among team members were found to be a significant contributor to the effectiveness of information exchange. Though virtual and face-to-face teams exhibit similar levels of communication effectiveness, face-to-face team members report higher levels of satisfaction. Therefore, the paper presents steps that can be taken to improve the interaction experience of virtual teams. Finally, guidelines for creating and managing virtual teams are suggested, based on the findings of this research and other authoritative sources. Subject Areas: Collaboration, Computer Conference, Computer-mediated Communication Systems (CMCS), Internet, Virtual Teams, and World Wide Web. *The authors wish to thank the Special Focus Editor and the reviewers for their thoughtful critique of the earlier versions of this paper. We also wish to acknowledge the contributions of the Northeastern University College of Business Administration and its staff, which provided the web server and the MeetingWebTM software used in these experiments.",
"title": ""
},
{
"docid": "8e7b273daa9d91e010a9ea02b4b7658c",
"text": "This collection of invited papers covers a lot of ground in its nearly 800 pages, so any review of reasonable length will necessarily be selective. However, there are a number of features that make the book as a whole a comparatively easy and thoroughly rewarding read. Multiauthor compendia of this kind are often disjointed, with very little uniformity from chapter to chapter in terms of breadth, depth, and format. Such is not the case here. Breadth and depth of treatment are surprisingly consistent, with coherent formats that often include both a little history of the field and some thoughts about the future. The volume has a very logical structure in which the chapters flow and follow on from each other in an orderly fashion. There are also many cross-references between chapters, which allow the authors to build upon the foundation of one another's work and eliminate redundancies. Specifically, the contents consist of 38 survey papers grouped into three parts: Fundamentals; Processes, Methods, and Resources; and Applications. Taken together, they provide both a comprehensive introduction to the field and a useful reference volume. In addition to the usual author and subject matter indices, there is a substantial glossary that students will find invaluable. Each chapter ends with a bibliography, together with tips for further reading and mention of other resources, such as conferences , workshops, and URLs. Part I covers the full spectrum of linguistic levels of analysis from a largely theoretical point of view, including phonology, morphology, lexicography, syntax, semantics, discourse, and dialogue. The result is a layered approach to the subject matter that allows each new level to take the previous level for granted. However, the authors do not typically restrict themselves to linguistic theory. For example, Hanks's chapter on lexicography characterizes the deficiencies of both hand-built and corpus-based dictionaries , as well as discussing other practical problems, such as how to link meaning and use. The phonology and morphology chapters provide fine introductions to these topics, which tend to receive short shrift in many NLP and AI texts. Part I ends with two chapters, one on formal grammars and one on complexity, which round out the computational aspect. This is an excellent pairing, with Martín-Vide's thorough treatment of regular and context-free languages leading into Carpen-ter's masterly survey of problem complexity and practical efficiency. Part II is more task based, with a focus on such activities as text segmentation, …",
"title": ""
},
{
"docid": "4457c0b480ec9f3d503aa89c6bbf03b9",
"text": "An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-¿m CMOS technology (where VTHN ¿ 0.5 V and VTHP ¿ -0.65 V). Experimental results show that the LDO consumes 19 ¿A only. It regulates the output at 0.8 V from a 1-V supply, with dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 ¿s, respectively, whereas they are more than 420 mV and 30 ¿s for the LDO without the proposed detection circuit.",
"title": ""
},
{
"docid": "f5f3e946634af981f9a7e00ad9a0296c",
"text": "We investigate the use of machine learning algorithms to classify the topic of messages published in Online Social Networks using as input solely user interaction data, instead of the actual message content. During a period of six months, we monitored and gathered data from users interacting with news messages on Twitter, creating thousands of information diffusion processes. The data set presented regular patterns on how messages were spread over the network by users, depending on its content, so we could build classifiers to predict the topic of a message using as input only the information of which users shared such message. Thus, we demonstrate the explanatory power of user behavior data on identifying content present in Social Networks, proposing techniques for topic classification that can be used to assist traditional content identification strategies (such as natural language or image processing) in challenging contexts, or be applied in scenarios with limited information access.",
"title": ""
},
{
"docid": "418e29af01be9655c06df63918f41092",
"text": "A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the metalearned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
},
{
"docid": "c581f1797921247e9674c06b49c1b055",
"text": "Service organizations are increasingly utilizing advanced information and communication technologies, such as the Internet, in hopes of improving the efficiency, cost-effectiveness, and/or quality of their customer-facing operations. More of the contact a customer has with the firm is likely to be with the back-office and, therefore, mediated by technology. While previous operations management research has been important for its contributions to our understanding of customer contact in face-to-facesettings, considerably less work has been done to improve our understanding of customer contact in what we refer to as technology-mediated settings (e.g., via telephone, instant messaging (IM), or email). This paper builds upon the service operations management (SOM) literature on customer contact by theoretically defining and empirically developing new multi-item measurement scales specifically designed for assessing tech ology-mediated customer contact. Seminal works on customer contact theory and its empirical measurement are employed to provide a foundation for extending these concepts to technology-mediated contexts. We also draw upon other important frameworks, including the Service Profit Chain, the Theory of Planned Behavior, and the concept of media/information richness, in order to identify and define our constructs. We follow a rigorous empirical scale development process to create parsimonious sets of survey items that exhibit satisfactory levels of reliability and validity to be useful in advancing SOM empirical research in the emerging Internet-enabled back-office. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "35c904cdbaddec5e7cd634978c0b415d",
"text": "Life-long visual localization is one of the most challenging topics in robotics over the last few years. The difficulty of this task is in the strong appearance changes that a place suffers due to dynamic elements, illumination, weather or seasons. In this paper, we propose a novel method (ABLE-M) to cope with the main problems of carrying out a robust visual topological localization along time. The novelty of our approach resides in the description of sequences of monocular images as binary codes, which are extracted from a global LDB descriptor and efficiently matched using FLANN for fast nearest neighbor search. Besides, an illumination invariant technique is applied. The usage of the proposed binary description and matching method provides a reduction of memory and computational costs, which is necessary for long-term performance. Our proposal is evaluated in different life-long navigation scenarios, where ABLE-M outperforms some of the main state-of-the-art algorithms, such as WI-SURF, BRIEF-Gist, FAB-MAP or SeqSLAM. Tests are presented for four public datasets where a same route is traversed at different times of day or night, along the months or across all four seasons.",
"title": ""
}
] |
scidocsrr
|
608edd72fdf66577f39f5cfa6aecd105
|
Excessive users of violent video games do not show emotional desensitization: an fMRI study
|
[
{
"docid": "e896b306c5282da3b0fd58aaf635c027",
"text": "In June 2011 the U.S. Supreme Court ruled that video games enjoy full free speech protections and that the regulation of violent game sales to minors is unconstitutional. The Supreme Court also referred to psychological research on violent video games as \"unpersuasive\" and noted that such research contains many methodological flaws. Recent reviews in many scholarly journals have come to similar conclusions, although much debate continues. Given past statements by the American Psychological Association linking video game and media violence with aggression, the Supreme Court ruling, particularly its critique of the science, is likely to be shocking and disappointing to some psychologists. One possible outcome is that the psychological community may increase the conclusiveness of their statements linking violent games to harm as a form of defensive reaction. However, in this article the author argues that the psychological community would be better served by reflecting on this research and considering whether the scientific process failed by permitting and even encouraging statements about video game violence that exceeded the data or ignored conflicting data. Although it is likely that debates on this issue will continue, a move toward caution and conservatism as well as increased dialogue between scholars on opposing sides of this debate will be necessary to restore scientific credibility. The current article reviews the involvement of the psychological science community in the Brown v. Entertainment Merchants Association case and suggests that it might learn from some of the errors in this case for the future.",
"title": ""
},
{
"docid": "47f1d6df5ec3ff30d747fb1fcbc271a7",
"text": "a r t i c l e i n f o Experimental studies routinely show that participants who play a violent game are more aggressive immediately following game play than participants who play a nonviolent game. The underlying assumption is that nonviolent games have no effect on aggression, whereas violent games increase it. The current studies demonstrate that, although violent game exposure increases aggression, nonviolent video game exposure decreases aggressive thoughts and feelings (Exp 1) and aggressive behavior (Exp 2). When participants assessed after a delay were compared to those measured immediately following game play, violent game players showed decreased aggressive thoughts, feelings and behavior, whereas nonviolent game players showed increases in these outcomes. Experiment 3 extended these findings by showing that exposure to nonviolent puzzle-solving games with no expressly prosocial content increases prosocial thoughts, relative to both violent game exposure and, on some measures, a no-game control condition. Implications of these findings for models of media effects are discussed. A major development in mass media over the last 25 years has been the advent and rapid growth of the video game industry. From the earliest arcade-based console games, video games have been immediately and immensely popular, particularly among young people and their subsequent introduction to the home market only served to further elevate their prevalence (Gentile, 2009). Given their popularity, social scientists have been concerned with the potential effects of video games on those who play them, focusing particularly on games with violent content. While a large percentage of games have always involved the destruction of enemies, recent advances in technology have enabled games to become steadily more realistic. Coupled with an increase in the number of adult players, these advances have enabled the development of games involving more and more graphic violence. Over the past several years, the majority of best-selling games have involved frequent and explicit acts of violence as a central gameplay theme (Smith, Lachlan, & Tamborini, 2003). A video game is essentially a simulated experience. Virtually every major theory of human aggression, including social learning theory, predicts that repeated simulation of antisocial behavior will produce an increase in antisocial behavior (e.g., aggression) and a decrease in prosocial behavior (e.g., helping) outside the simulated environment (i.e., in \" real life \"). In addition, an increase in the perceived realism of the simulation is posited to increase the strength of negative effects (Gentile & Anderson, 2003). Meta-analyses …",
"title": ""
}
] |
[
{
"docid": "f80e710c6256977fdf33427f79061350",
"text": "This paper presents an obstacle avoidance algorithm for low speed autonomous vehicles (AV), with guaranteed safety. A supervisory control algorithm is constructed based on a barrier function method, which works in a plug-and-play fashion with any lower level navigation algorithm. When the risk of collision is low, the barrier function is not active; when the risk is high, based on the distance to an “avoidable set,” the barrier function controller will intervene, using a mixed integer program to ensure safety with minimal control effort. This method is applied to solve the navigation and pedestrian avoidance problem of a low speed AV. Its performance is compared with two benchmark algorithms: a potential field method and the Hamilton–Jacobi method.",
"title": ""
},
{
"docid": "912305c77922b8708c291ccc63dae2cd",
"text": "Customer satisfaction and loyalty is a well known and established concept in several areas like marketing, consumer research, economic psychology, welfare-economics, and economics. And has long been a topic of high interest in both academia and practice. The aim of the study was to investigate whether customer satisfaction is an indicator of customer loyalty. The findings of the study supported the contention that strong relationship exist between customer satisfaction and loyalty. However, customer satisfaction alone cannot achieve the objective of creating a loyal customer base. Some researchers also argued, that customer satisfaction and loyalty are not directly correlated, particularly in competitive business environments because there is a big difference between satisfaction, which is a passive customer condition, and loyalty, which is an active or proactive relationship with the organization.",
"title": ""
},
{
"docid": "89596e6eedbc1f13f63ea144b79fdc64",
"text": "This paper describes our work in integrating three different lexical resources: FrameNet, VerbNet, and WordNet, into a unified, richer knowledge-base, to the end of enabling more robust semantic parsing. The construction of each of these lexical resources has required many years of laborious human effort, and they all have their strengths and shortcomings. By linking them together, we build an improved resource in which (1) the coverage of FrameNet is extended, (2) the VerbNet lexicon is augmented with frame semantics, and (3) selectional restrictions are implemented using WordNet semantic classes. The synergistic exploitation of various lexical resources is crucial for many complex language processing applications, and we prove it once again effective in building a robust semantic parser.",
"title": ""
},
{
"docid": "7394f3000da8af0d4a2b33fed4f05264",
"text": "We often base our decisions on uncertain data - for instance, when consulting the weather forecast before deciding what to wear. Due to their uncertainty, such forecasts can differ by provider. To make an informed decision, many people compare several forecasts, which is a time-consuming and cumbersome task. To facilitate comparison, we identified three aggregation mechanisms for forecasts: manual comparison and two mechanisms of computational aggregation. In a survey, we compared the mechanisms using different representations. We then developed a weather application to evaluate the most promising candidates in a real-world study. Our results show that aggregation increases users' confidence in uncertain data, independent of the type of representation. Further, we find that for daily events, users prefer to use computationally aggregated forecasts. However, for high-stakes events, they prefer manual comparison. We discuss how our findings inform the design of improved interfaces for comparison of uncertain data, including non-weather purposes.",
"title": ""
},
{
"docid": "ab11e7eda0563fd482c408aca673f436",
"text": "We present Gray S-box for advanced encryption standard. Gray S-box is constructed by adding binary Gray code transformation as a preprocessing step to original AES S-box. Gray S-box corresponds to a polynomial with all 255 non-zero terms in comparison with 9-term polynomial of original AES S-box. This increases the security for S-box against algebraic attacks and interpolation attacks. Besides, as Gray S-box reuses AES S-box as a whole, Gray S-box inherits all advantages and efficiency of any existing optimized implementation of AES S-box. Gray S-box also achieves important cryptographic properties of AES S-box, including strict avalanche criterion, nonlinearity, and differential uniformity.",
"title": ""
},
{
"docid": "53007a9a03b7db2d64dd03973717dc0f",
"text": "We present two children with hypoplasia of the left trapezius muscle and a history of ipsilateral transient neonatal brachial plexus palsy without documented trapezius weakness. Magnetic resonance imaging in these patients with unilateral left hypoplasia of the trapezius revealed decreased muscles in the left side of the neck and left supraclavicular region on coronal views, decreased muscle mass between the left splenius capitis muscle and the subcutaneous tissue at the level of the neck on axial views, and decreased size of the left paraspinal region on sagittal views. Three possibilities can explain the association of hypoplasia of the trapezius and obstetric brachial plexus palsy: increased vulnerability of the brachial plexus to stretch injury during delivery because of intrauterine trapezius weakness, a casual association of these two conditions, or an erroneous diagnosis of brachial plexus palsy in patients with trapezial weakness. Careful documentation of neck and shoulder movements can distinguish among shoulder weakness because of trapezius hypoplasia, brachial plexus palsy, or brachial plexus palsy with trapezius hypoplasia. Hence, we recommend precise documentation of neck movements in the initial description of patients with suspected neonatal brachial plexus palsy.",
"title": ""
},
{
"docid": "c258ca8e7c9d351fc8e380b0af0a529e",
"text": "Pervasive technology devices that intend to be worn must not only meet our functional requirements but also our social, emotional, and aesthetic needs. Current pervasive devices such as the PDA or cell phone are more portable than wearable, yet still they elicit strong consumer demand for intuitive interfaces and well-designed forms. Looking to the future of wearable pervasive devices, we can imagine an even greater demand for meaningful forms for objects nestled so close to our bodies. They will need to reflect our tastes and moods, and allow us to express our personalities, cultural beliefs, and values. Digital Jewelry explores a new wearable technology form that is based in jewelry design, not in technology. Through prototypes and meaningful scenarios, digital jewelry offers new ideas to consider in the design of wearable devices.",
"title": ""
},
{
"docid": "d3b2283ce3815576a084f98c34f37358",
"text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.",
"title": ""
},
{
"docid": "046245929e709ef2935c9413619ab3d7",
"text": "In recent years, there has been a growing intensity of competition in virtually all areas of business in both markets upstream for raw materials such as components, supplies, capital and technology and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5ed31e2c0b4f958996df1ac8f5dfd6cc",
"text": "Through-wall tracking has gained a lot of attentions in civilian applications recently. Many applications would benefit from such device-free tracking, e.g. elderly people surveillance, intruder detection, gaming, etc. In this work, we present a system, named Tadar, for tracking moving objects without instrumenting them us- ing COTS RFID readers and tags. It works even through walls and behind closed doors. It aims to enable a see-through-wall technology that is low-cost, compact, and accessible to civilian purpose. In traditional RFID systems, tags modulate their IDs on the backscatter signals, which is vulnerable to the interferences from the ambient reflections. Unlike past work, which considers such vulnerability as detrimental, our design exploits it to detect surrounding objects even through walls. Specifically, we attach a group of RFID tags on the outer wall and logically convert them into an antenna array, receiving the signals reflected off moving objects. This paper introduces two main innovations. First, it shows how to eliminate the flash (e.g. the stronger reflections off walls) and extract the reflections from the backscatter signals. Second, it shows how to track the moving object based on HMM (Hidden Markov Model) and its reflections. To the best of our knowledge, we are the first to implement a through-wall tracking using the COTS RFID systems. Empirical measurements with a prototype show that Tadar can detect objects behind 5\" hollow wall and 8\" concrete wall, and achieve median tracking errors of 7.8cm and 20cm in the X and Y dimensions.",
"title": ""
},
{
"docid": "fc69f1c092bae3328ce9c5975929e92c",
"text": "In allusion to the “on-line beforehand decision-making, real time matching”, this paper proposes the stability control flow based on PMU for interconnected power system, which is a real-time stability control. In this scheme, preventive control, emergency control and corrective control are designed to a closed-loop rolling control process, it will protect the stability of power system. Then it ameliorates the corrective control process, and presents a new control method which is based on PMU and EEAC method. This scheme can ensure the real-time quality and advance the veracity for the corrective control.",
"title": ""
},
{
"docid": "f060713abe9ada73c1c4521c5ca48ea9",
"text": "In this paper, we revisit the classical Bayesian face recognition method by Baback Moghaddam et al. and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this “difference” formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Face in Wild (LFW) dataset. Comparing with current best commercial system, we reduced the error rate by 10%.",
"title": ""
},
{
"docid": "a7760563ce223473a3723e048b85427a",
"text": "The concept of “task” is at the core of artificial intelligence (AI): Tasks are used for training and evaluating AI systems, which are built in order to perform and automatize tasks we deem useful. In other fields of engineering theoretical foundations allow thorough evaluation of designs by methodical manipulation of well understood parameters with a known role and importance; this allows an aeronautics engineer, for instance, to systematically assess the effects of wind speed on an airplane’s performance and stability. No framework exists in AI that allows this kind of methodical manipulation: Performance results on the few tasks in current use (cf. board games, question-answering) cannot be easily compared, however similar or different. The issue is even more acute with respect to artificial general intelligence systems, which must handle unanticipated tasks whose specifics cannot be known beforehand. A task theory would enable addressing tasks at the class level, bypassing their specifics, providing the appropriate formalization and classification of tasks, environments, and their parameters, resulting in more rigorous ways of measuring, comparing, and evaluating intelligent behavior. Even modest improvements in this direction would surpass the current ad-hoc nature of machine learning and AI evaluation. Here we discuss the main elements of the argument for a task theory and present an outline of what it might look like for physical tasks.",
"title": ""
},
{
"docid": "f2f95f70783be5d5ee1260a3c5b9d892",
"text": "Information Extraction is the process of automatically obtaining knowledge from plain text. Because of the ambiguity of written natural language, Information Extraction is a difficult task. Ontology-based Information Extraction (OBIE) reduces this complexity by including contextual information in the form of a domain ontology. The ontology provides guidance to the extraction process by providing concepts and relationships about the domain. However, OBIE systems have not been widely adopted because of the difficulties in deployment and maintenance. The Ontology-based Components for Information Extraction (OBCIE) architecture has been proposed as a form to encourage the adoption of OBIE by promoting reusability through modularity. In this paper, we propose two orthogonal extensions to OBCIE that allow the construction of hybrid OBIE systems with higher extraction accuracy and a new functionality. The first extension utilizes OBCIE modularity to integrate different types of implementation into one extraction system, producing a more accurate extraction. For each concept or relationship in the ontology, we can select the best implementation for extraction, or we can combine both implementations under an ensemble learning schema. The second extension is a novel ontology-based error detection mechanism. Following a heuristic approach, we can identify sentences that are logically inconsistent with the domain ontology. Because the implementation strategy for the extraction of a concept is independent of the functionality of the extraction, we can design a hybrid OBIE system with concepts utilizing different implementation strategies for extracting correct or incorrect sentences. Our evaluation shows that, in the implementation extension, our proposed method is more accurate in terms of correctness and completeness of the extraction. Moreover, our error detection method can identify incorrect statements with a high accuracy.",
"title": ""
},
{
"docid": "f76194dbaf302eccadf84cb8787d7096",
"text": "We compare the restorative effects on cognitive functioning of interactions with natural versus urban environments. Attention restoration theory (ART) provides an analysis of the kinds of environments that lead to improvements in directed-attention abilities. Nature, which is filled with intriguing stimuli, modestly grabs attention in a bottom-up fashion, allowing top-down directed-attention abilities a chance to replenish. Unlike natural environments, urban environments are filled with stimulation that captures attention dramatically and additionally requires directed attention (e.g., to avoid being hit by a car), making them less restorative. We present two experiments that show that walking in nature or viewing pictures of nature can improve directed-attention abilities as measured with a backwards digit-span task and the Attention Network Task, thus validating attention restoration theory.",
"title": ""
},
{
"docid": "748abc573febb27f9b9eae92ec68fff7",
"text": "In this paper we develop a computational model of adaptation and spatial vision for realistic tone reproduction. The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system. We incorporate the model into a tone reproduction operator that maps the vast ranges of radiances found in real and synthetic scenes into the small fixed ranges available on conventional display devices such as CRT’s and printers. The model allows the operator to address the two major problems in realistic tone reproduction: wide absolute range and high dynamic range scenes can be displayed; and the displayed images match our perceptions of the scenes at both threshold and suprathreshold levels to the degree possible given a particular display device. Although in this paper we apply our visual model to the tone reproduction problem, the model is general and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms. CR Categories: I.3.0 [Computer Graphics]: General;",
"title": ""
},
{
"docid": "eb218a1d8b7cbcd895dd0cd8cfcf9d80",
"text": "Caring is considered as the essence of nursing and is the basic factor that distinguishes between nurses and other health professions. The literature is rich of previous studies that focused on perceptions of nurses toward nurse caring behaviors, but less studywas applied in pediatric nurses in different settings. Aim of the study:evaluate the effect of application of Watson caring theory for nurses in pediatric critical care unit. Method(s): A convenience sample of 70 nurses of Pediatric Critical Care Unit in El-Menoufya University Hospital and educational hospital in ShebenElkom.were completed the demographics questionnaire, and the Caring Behavior Assessment (CBA) questionnaire,medical record to collect medical data regarding children characteristics such as age and diagnosis, Interviewing questionnaire for nurses regarding their barrier to less interest of comfort behavior such as doing doctor order, Shortage of nursing staff, Large number of patients, Heavy workloads, Secretarial jobs for nurses and Emotional stress. Results: more thantwothirds of nurses in study group and majority of control group had age less than 30 years, there were highly statistically significant difference related to mean scores for Caring Behavior Assessment (CBA) as rated by nurses in pretest (1.4750 to 2.0750) than in posttest (3.5 to 4.55). Also, near to two-thirds (64.3%) of the nurses stated that doing doctor order act as a barrier to apply this theory. In addition, there were a statistical significance difference between educational qualifications of nurses and a Supportive\\ protective\\corrective environment subscale with mean score for master degree 57.0000, also between years of experiences and human needs assistance. Conclusion: Program instructions for all nurses to apply Watson Caring theory for children in pediatric critical care unit were successful and effective and this study provided evidence for application of this theory for different departments in all settings. Recommendations: It was recommended that In-service training programs for nurses about caring behavior and its different areas, with special emphasis on communication are needed to improve their own behaviors in all aspects of the caring behaviors for all health care settings. Motivating hospital authorities to recruit more nurses, then, the nurses would be able to have more care that is direct. Consequently, the amount and the quality of nurse-child communication and opportunities for patient education would increase, this in turn improve child's outcome.",
"title": ""
},
{
"docid": "2a9e4ed54dd91eb8a6bad757afc9ac75",
"text": "The modern advancements in digital electronics allow waveforms to be easily synthesized and captured using only digital electronics. The synthesis of radar waveforms using only digital electronics, such as Digital-to-Analog Converters (DACs) and Analog-to-Digital Converters (ADCs) allows for a majority of the analog chain to be removed from the system. In order to create a constant amplitude waveform, the amplitude distortions must be compensated for. The method chosen to compensate for the amplitude distortions is to pre-distort the waveform so, when it is influenced by the system, the output waveform has a near constant amplitude modulus. The effects of the predistortion were observed to be successful in both range and range-Doppler radar implementations.",
"title": ""
},
{
"docid": "e8523816ead27edc299397d2cad68bc4",
"text": "This research investigated the link between ethical leadership and performance using data from the People’s Republic of China. Consistent with social exchange, social learning, and social identity theories, we examined leader–member exchange (LMX), self-efficacy, and organizational identification as mediators of the ethical leadership to performance relationship. Results from 72 supervisors and 201 immediate direct reports revealed that ethical leadership was positively and significantly related to employee performance as rated by their immediate supervisors and that this relationship was fully mediated by LMX, self-efficacy, and organizational identification, controlling for procedural fairness. We discuss implications of our findings for theory and practice.",
"title": ""
},
{
"docid": "23a5d1aebe5e2f7dd5ed8dfde17ce374",
"text": "Today's workplace often includes workers from 4 distinct generations, and each generation brings a unique set of core values and characteristics to an organization. These generational differences can produce benefits, such as improved patient care, as well as challenges, such as conflict among employees. This article reviews current research on generational differences in educational settings and the workplace and discusses the implications of these findings for medical imaging and radiation therapy departments.",
"title": ""
}
] |
scidocsrr
|
937bec8416217a0f5577d1223c514146
|
Active Learning of Inverse Models with Intrinsically Motivated Goal Exploration in Robots
|
[
{
"docid": "749e11a625e94ab4e1f03a74aa6b3ab2",
"text": "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. The CBA algorithm consists of two components which take advantage of the complimentary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.",
"title": ""
}
] |
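The Confident Execution component described in the passage above can be illustrated with a small loop: act autonomously when the policy's action-selection confidence clears a threshold, otherwise ask the teacher for a demonstration. This is a hedged sketch only; the confidence measure, threshold, toy policy and toy teacher below are assumptions for illustration, not the CBA implementation.

```python
# Illustrative Confident Execution-style loop (not the authors' code).
import random

def confidence(policy, state):
    """Stand-in confidence: margin between the two highest action scores."""
    scores = sorted(policy(state), reverse=True)
    return scores[0] - scores[1] if len(scores) > 1 else scores[0]

def confident_execution(policy, teacher, states, threshold=0.2):
    demonstrations = []
    for state in states:
        if confidence(policy, state) < threshold:
            # Low confidence: request the correct action from the teacher.
            demonstrations.append((state, teacher(state)))
            # A full system would retrain the policy on `demonstrations` here.
        # Otherwise the agent executes its own highest-scoring action.
    return demonstrations

toy_policy  = lambda s: [random.random() for _ in range(3)]  # per-action scores
toy_teacher = lambda s: 0                                    # always action 0
print(len(confident_execution(toy_policy, toy_teacher, range(20))))
```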
[
{
"docid": "fe3570c283fbf8b1f504e7bf4c2703a8",
"text": "We propose ThalNet, a deep learning model inspired by neocortical communication via the thalamus. Our model consists of recurrent neural modules that send features through a routing center, endowing the modules with the flexibility to share features over multiple time steps. We show that our model learns to route information hierarchically, processing input data by a chain of modules. We observe common architectures, such as feed forward neural networks and skip connections, emerging as special cases of our architecture, while novel connectivity patterns are learned for the text8 compression task. Our model outperforms standard recurrent neural networks on several sequential benchmarks.",
"title": ""
},
{
"docid": "9da1449675af42a2fc75ba8259d22525",
"text": "The purpose of the research reported here was to test empirically a conceptualization of brand associations that consists of three dimensions: brand image, brand attitude and perceived quality. A better understanding of brand associations is needed to facilitate further theoretical development and practical measurement of the construct. Three studies were conducted to: test a protocol for developing product category specific measures of brand image; investigate the dimensionality of the brand associations construct; and explore whether the degree of dimensionality of brand associations varies depending upon a brand's familiarity. Findings confirm the efficacy of the brand image protocol and indicate that brand associations differ across brands and product categories. The latter finding supports the conclusion that brand associations for different products should be measured using different items. As predicted, dimensionality of brand associations was found to be influenced by brand familiarity. Research interest in branding continues to be strong in the marketing literature (e.g. Alden et al., 1999; Kirmani et al., 1999; Erdem, 1998). Likewise, marketing managers continue to realize the power of brands, manifest in the recent efforts of many companies to build strong Internet `̀ brands'' such as amazon.com and msn.com (Narisetti, 1998). The way consumers perceive brands is a key determinant of long-term businessconsumer relationships (Fournier, 1998). Hence, building strong brand perceptions is a top priority for many firms today (Morris, 1996). Despite the importance of brands and consumer perceptions of them, marketing researchers have not used a consistent definition or measurement technique to assess consumer perceptions of brands. To address this, two scholars have recently developed extensive conceptual treatments of branding and related issues. Keller (1993; 1998) refers to consumer perceptions of brands as brand knowledge, consisting of brand awareness (recognition and recall) and brand image. Keller defines brand image as `̀ perceptions about a brand as reflected by the brand associations held in consumer memory''. These associations include perceptions of brand quality and attitudes toward the brand. Similarly, Aaker (1991, 1996a) proposes that brand associations are anything linked in memory to a brand. Keller and Aaker both appear to hypothesize that consumer perceptions of brands are The current issue and full text archive of this journal is available at http://www.emerald-library.com The authors thank Paul Herr, Donnie Lichtenstein, Rex Moody, Dave Cravens and Julie Baker for helpful comments on earlier versions of this manuscript. Funding was provided by the Graduate School of the University of Colorado and the Charles Tandy American Enterprise Center at Texas Christian University. Top priority for many firms today 350 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000, pp. 350-368, # MCB UNIVERSITY PRESS, 1061-0421 An executive summary for managers and executive readers can be found at the end of this article multi-dimensional, yet many of the dimensions they identify appear to be very similar. Furthermore, Aaker's and Keller's conceptualizations of consumers' psychological representation of brands have not been subjected to empirical validation. 
Consequently, it is difficult to determine if the various constructs they discuss, such as brand attitudes and perceived quality, are separate dimensions of brand associations, (multi-dimensional) as they propose, or if they are simply indicators of brand associations (unidimensional). A number of studies have appeared recently which measure some aspect of consumer brand associations, but these studies do not use consistent measurement techniques and hence, their results are not comparable. They also do not discuss the issue of how to conceptualize brand associations, but focus on empirically identifying factors which enhance or diminish one component of consumer perceptions of brands (e.g. Berthon et al., 1997; Keller and Aaker, 1997; Keller et al., 1998; RoedderJohn et al., 1998; Simonin and Ruth, 1998). Hence, the proposed multidimensional conceptualizations of brand perceptions have not been tested empirically, and the empirical work operationalizes these perceptions as uni-dimensional. Our goal is to provide managers of brands a practical measurement protocol based on a parsimonious conceptual model of brand associations. The specific objectives of the research reported here are to: . test a protocol for developing category-specific measures of brand image; . examine the conceptualization of brand associations as a multidimensional construct by testing brand image, brand attitude, and perceived quality in the same model; and . explore whether the degree of dimensionality of brand associations varies depending on a brand's familiarity. In subsequent sections of this paper we explain the theoretical background of our research, describe three studies we conducted to test our conceptual model, and discuss the theoretical and managerial implications of the results. Conceptual background Brand associations According to Aaker (1991), brand associations are the category of a brand's assets and liabilities that include anything `̀ linked'' in memory to a brand (Aaker, 1991). Keller (1998) defines brand associations as informational nodes linked to the brand node in memory that contain the meaning of the brand for consumers. Brand associations are important to marketers and to consumers. Marketers use brand associations to differentiate, position, and extend brands, to create positive attitudes and feelings toward brands, and to suggest attributes or benefits of purchasing or using a specific brand. Consumers use brand associations to help process, organize, and retrieve information in memory and to aid them in making purchase decisions (Aaker, 1991, pp. 109-13). While several research efforts have explored specific elements of brand associations (Gardner and Levy, 1955; Aaker, 1991; 1996a; 1996b; Aaker and Jacobson, 1994; Aaker, 1997; Keller, 1993), no research has been reported that combined these elements in the same study in order to measure how they are interrelated. Practical measurement protocol Importance to marketers and consumers JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 351 Scales to measure partially brand associations have been developed. For example, Park and Srinivasan (1994) developed items to measure one dimension of toothpaste brand associations that included the brand's perceived ability to fight plaque, freshen breath and prevent cavities. This scale is clearly product category specific. Aaker (1997) developed a brand personality scale with five dimensions and 42 items. This scale is not practical to use in some applied studies because of its length. 
Also, the generalizability of the brand personality scale is limited because many brands are not personality brands, and no protocol is given to adapt the scale. As Aaker (1996b, p. 113) notes, `̀ using personality as a general indicator of brand strength will be a distortion for some brands, particularly those that are positioned with respect to functional advantages and value''. Hence, many previously developed scales are too specialized to allow for general use, or are too long to be used in some applied settings. Another important issue that has not been empirically examined in the literature is whether brand associations represent a one-dimensional or multi-dimensional construct. Although this may appear to be an obvious question, we propose later in this section the conditions under which this dimensionality may be more (or less) measurable. As previously noted, Aaker (1991) defines brand associations as anything linked in memory to a brand. Three related constructs that are, by definition, linked in memory to a brand, and which have been researched conceptually and measured empirically, are brand image, brand attitude, and perceived quality. We selected these three constructs as possible dimensions or indicators of brand associations in our conceptual model. Of the many possible components of brand associations we could have chosen, we selected these three constructs because they: (1) are the three most commonly cited consumer brand perceptions in the empirical marketing literature; (2) have established, reliable, published measures in the literature; and (3) are three dimensions discussed frequently in prior conceptual research (Aaker, 1991; 1996; Keller, 1993; 1998). We conceptualize brand image (functional and symbolic perceptions), brand attitude (overall evaluation of a brand), and perceived quality (judgments of overall superiority) as possible dimensions of brand associations (see Figure 1). Brand image, brand attitude, and perceived quality Brand image is defined as the reasoned or emotional perceptions consumers attach to specific brands (Dobni and Zinkhan,1990) and is the first consumer brand perception that was identified in the marketing literature (Gardner and Levy, 1955). Brand image consists of functional and symbolic brand beliefs. A measurement technique using semantic differential items generated for the relevant product category has been suggested for measuring brand image (Dolich, 1969; Fry and Claxton, 1971). Brand image associations are largely product category specific and measures should be customized for the unique characteristics of specific brand categories (Park and Srinivasan, 1994; Bearden and Etzel, 1982). Brand attitude is defined as consumers' overall evaluation of a brand ± whether good or bad (Mitchell and Olson, 1981). Semantic differential scales measuring brand attitude have frequently appeared in the marketing Linked in memory to a brand Reasoned or emotional perceptions 352 JOURNAL OF PRODUCT & BRAND MANAGEMENT, VOL. 9 NO. 6 2000 literature. Bruner and Hensel (1996) reported 66 published studies which measured brand attitud",
"title": ""
},
{
"docid": "7fafda966819bb780b8b2b6ada4cc468",
"text": "Acne inversa (AI) is a chronic and recurrent inflammatory skin disease. It occurs in intertriginous areas of the skin and causes pain, drainage, malodor and scar formation. While supposedly caused by an autoimmune reaction, bacterial superinfection is a secondary event in the disease process. A unique case of a 43-year-old male patient suffering from a recurring AI lesion in the left axilla was retrospectively analysed. A swab revealed Actinomyces neuii as the only agent growing in the lesion. The patient was then treated with Amoxicillin/Clavulanic Acid 3 × 1 g until he was cleared for surgical excision. The intraoperative swab was negative for A. neuii. Antibiotics were prescribed for another 4 weeks and the patient has remained relapse free for more than 12 months now. Primary cutaneous Actinomycosis is a rare entity and the combination of AI and Actinomycosis has never been reported before. Failure to detect superinfections of AI lesions with slow-growing pathogens like Actinomyces spp. might contribute to high recurrence rates after immunosuppressive therapy of AI. The present case underlines the potentially multifactorial pathogenesis of the disease and the importance of considering and treating potential infections before initiating immunosuppressive regimens for AI patients.",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "53307a72e0a50b65da45f83e5a8ff9f0",
"text": "Although few studies dispute that there are gender differences in depression, the etiology is still unknown. In this review, we cover a number of proposed factors and the evidences for and against these factors that may account for gender differences in depression. These include the possible role of estrogens at puberty, differences in exposure to childhood trauma, differences in stress perception between men and women and the biological differences in stress response. None of these factors seem to explain gender differences in depression. Finally, we do know that when depressed, women show greater hypothalamic–pituitary–adrenal (HPA) axis activation than men and that menopause with loss of estrogens show the greatest HPA axis dysregulation. It may be the constantly changing steroid milieu that contributes to these phenomena and vulnerability to depression.",
"title": ""
},
{
"docid": "2cac667e743d0a020ef136215339e1ed",
"text": "We present the design and experimental validation of a scalable dc microgrid for rural electrification in emerging regions. A salient property of the dc microgrid architecture is the distributed control of the grid voltage, which enables both instantaneous power sharing and a metric for determining the available grid power. A droop-voltage power-sharing scheme is implemented wherein the bus voltage droops in response to low supply/high demand. In addition, the architecture of the dc microgrid aims to minimize the losses associated with stored energy by distributing storage to individual households. In this way, the number of conversion steps and line losses are reduced. We calculate that the levelized cost of electricity of the proposed dc microgrid over a 15-year time horizon is $0.35/kWh. We also present the experimental results from a scaled-down experimental prototype that demonstrates the steady-state behavior, the perturbation response, and the overall efficiency of the system. Moreover, we present fault mitigation strategies for various faults that can be expected to occur in a microgrid distribution system. The experimental results demonstrate the suitability of the presented dc microgrid architecture as a technically advantageous and cost-effective method for electrifying emerging regions.",
"title": ""
},
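As a numerical illustration of the distributed droop-voltage scheme described in the passage above: each source lowers its reference voltage linearly with the power it supplies, so a sagging bus voltage signals high demand or low supply to every node. The nominal voltage and droop coefficient below are assumed values, not figures from the paper.

```python
# Droop-control sketch with assumed parameters (illustrative only).
V_NOMINAL = 48.0   # assumed nominal DC bus voltage [V]
DROOP_K   = 0.02   # assumed droop coefficient [V per W supplied]

def droop_voltage(power_supplied_w):
    """Reference voltage a source applies given the power it supplies."""
    return V_NOMINAL - DROOP_K * power_supplied_w

def shared_power(total_load_w, n_sources):
    """With identical droop curves, identical sources share the load equally."""
    return total_load_w / n_sources

per_source = shared_power(300.0, 3)            # 100 W per source
print(per_source, droop_voltage(per_source))   # bus droops from 48 V to 46 V
```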
{
"docid": "e9438241965b4cb6601624456b60f990",
"text": "This paper proposes a model for designing games around Artificial Intelligence (AI). AI-based games put AI in the foreground of the player experience rather than in a supporting role as is often the case in many commercial games. We analyze the use of AI in a number of existing games and identify design patterns for AI in games. We propose a generative ideation technique to combine a design pattern with an AI technique or capacity to make new AI-based games. Finally, we demonstrate this technique through two examples of AI-based game prototypes created using these patterns.",
"title": ""
},
{
"docid": "e567034595d9bb6a236d15b8623efce7",
"text": "In this paper, we use artificial neural networks (ANNs) for voice conversion and exploit the mapping abilities of an ANN model to perform mapping of spectral features of a source speaker to that of a target speaker. A comparative study of voice conversion using an ANN model and the state-of-the-art Gaussian mixture model (GMM) is conducted. The results of voice conversion, evaluated using subjective and objective measures, confirm that an ANN-based VC system performs as good as that of a GMM-based VC system, and the quality of the transformed speech is intelligible and possesses the characteristics of a target speaker. In this paper, we also address the issue of dependency of voice conversion techniques on parallel data between the source and the target speakers. While there have been efforts to use nonparallel data and speaker adaptation techniques, it is important to investigate techniques which capture speaker-specific characteristics of a target speaker, and avoid any need for source speaker's data either for training or for adaptation. In this paper, we propose a voice conversion approach using an ANN model to capture speaker-specific characteristics of a target speaker and demonstrate that such a voice conversion approach can perform monolingual as well as cross-lingual voice conversion of an arbitrary source speaker.",
"title": ""
},
{
"docid": "c27eecae33fe87779d3452002c1bdf8a",
"text": "When intelligent agents learn visuomotor behaviors from human demonstrations, they may benefit from knowing where the human is allocating visual attention, which can be inferred from their gaze. A wealth of information regarding intelligent decision making is conveyed by human gaze allocation; hence, exploiting such information has the potential to improve the agents’ performance. With this motivation, we propose the AGIL (Attention Guided Imitation Learning) framework. We collect high-quality human action and gaze data while playing Atari games in a carefully controlled experimental setting. Using these data, we first train a deep neural network that can predict human gaze positions and visual attention with high accuracy (the gaze network) and then train another network to predict human actions (the policy network). Incorporating the learned attention model from the gaze network into the policy network significantly improves the action prediction accuracy and task performance.",
"title": ""
},
{
"docid": "2b540b2e48d5c381e233cb71c0cf36fe",
"text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.",
"title": ""
},
{
"docid": "d2c13b3daa3712b32172126404b14c20",
"text": "To adequately perform perioral rejuvenation procedures, it is necessary to understand the morphologic changes caused by facial aging. Anthropometric analyses of standardized frontal view and profile photographs could help to investigate such changes. Photographs of 346 male individuals were evaluated using 12 anthropometric indices. Data from two groups of health subjects, the first exhibiting a mean age of nearly 20 and the second of nearly 60 years, were compared. To evaluate the influence of combined nicotine and alcohol abuse, the data of the second group were compared to a third group exhibiting a similar mean age who were known alcohol and nicotine abusers. Comparison of the first to the second group showed significant decrease of the vertical height of upper and lower vermilion and relative enlargement of the cutaneous part of upper and lower lips. This effect was stronger in the upper vermilion and medial upper lips. The sagging of the upper lips led to the appearance of an increased mouth width. In the third group the effect of sagging of the upper lips, and especially its medial portion was significantly higher compared to the second group. The photo-assisted anthropometric measurements investigated gave reproducible results related to perioral aging.",
"title": ""
},
{
"docid": "00e56a93a3b8ee3a3d2cdab2fd27375e",
"text": "Omnidirectional image and video have gained popularity thanks to availability of capture and display devices for this type of content. Recent studies have assessed performance of objective metrics in predicting visual quality of omnidirectional content. These metrics, however, have not been rigorously validated by comparing their prediction results with ground-truth subjective scores. In this paper, we present a set of 360-degree images along with their subjective quality ratings. The set is composed of four contents represented in two geometric projections and compressed with three different codecs at four different bitrates. A range of objective quality metrics for each stimulus is then computed and compared to subjective scores. Statistical analysis is performed in order to assess performance of each objective quality metric in predicting subjective visual quality as perceived by human observers. Results show the estimated performance of the state-of-the-art objective metrics for omnidirectional visual content. Objective metrics specifically designed for 360-degree content do not outperform conventional methods designed for 2D images.",
"title": ""
},
{
"docid": "f395e3d72341bd20e1a16b97259bad7d",
"text": "Malicious software in form of Internet worms, computer viru ses, and Trojan horses poses a major threat to the security of network ed systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware f milies share typical behavioral patternsreflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a m thod for learning and discrimination of malware behavior. Our method proceed s in three stages: (a) behavior of collected malware is monitored in a sandbox envi ro ment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifieris trained using learning techniques and (c) discriminativ e features of the behavior models are ranked for explanation of classifica tion decisions. Experiments with di fferent heterogeneous test data collected over several month s using honeypots demonstrate the e ffectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.",
"title": ""
},
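Stages (b) and (c) of the method described above — training a behavior classifier on labeled reports and ranking discriminative features — can be sketched with an ordinary bag-of-operations model. The reports, labels and operation names below are toy assumptions, and scikit-learn is assumed to be available; this is not the authors' pipeline.

```python
# Illustrative sketch of behavior classification and feature ranking.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

reports = [                                   # toy sandbox behavior reports
    "create_file copy_self set_registry_autorun connect_irc",
    "connect_irc download_file create_process",
    "open_browser read_page close_browser",
    "read_page create_file close_browser",
]
labels = ["worm", "worm", "benign", "benign"]

vectorizer = CountVectorizer(token_pattern=r"\S+")   # one feature per operation
X = vectorizer.fit_transform(reports)

clf = LinearSVC(C=1.0)
clf.fit(X, labels)

# Rank behaviors by how strongly they push the decision toward the "worm" class.
features = vectorizer.get_feature_names_out()
ranked = sorted(zip(clf.coef_[0], features), reverse=True)
print(ranked[:3])
```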
{
"docid": "1e100608fd78b1e20020f892784199ed",
"text": "In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor specific hand tuning of measurement noise models making use of the recently introduced Statistical Inlier Estimation (SIE) method [1]. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal to noise ratio. We show that the system significantly outperform state-of-the-art in on a challenging real-world dataset.",
"title": ""
},
{
"docid": "335220bbad7798a19403d393bcbbf7fb",
"text": "In today’s computerized and information-based society, text data is rich but messy. People are soaked with vast amounts of natural-language text data, ranging from news articles, social media post, advertisements, to a wide range of textual information from various domains (medical records, corporate reports). To turn such massive unstructured text data into actionable knowledge, one of the grand challenges is to gain an understanding of the factual information (e.g., entities, attributes, relations, events) in the text. In this tutorial, we introduce data-driven methods to construct structured information networks (where nodes are different types of entities attached with attributes, and edges are different relations between entities) for text corpora of different kinds (especially for massive, domain-specific text corpora) to represent their factual information. We focus on methods that are minimally-supervised, domain-independent, and languageindependent for fast network construction across various application domains (news, web, biomedical, reviews). We demonstrate on real datasets including news articles, scientific publications, tweets and reviews how these constructed networks aid in text analytics and knowledge discovery at a large scale.",
"title": ""
},
{
"docid": "139d9d5866a1e455af954b2299bdbcf6",
"text": "1 . I n t r o d u c t i o n Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of v i tal importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should * This author's work was supported in part by DARPA contract N00039-82-C-0250. be. For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms. The idea, first formalized by Hintikka [Hi l ] , is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibil i ty relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive corresponds to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the oneknower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, J. Halpern and Y. Moses 481 and thus no harder than the satisf iabi l i ty problem for propos i t iona l logic. Our a im in th is paper is to reexamine the possiblewor lds f ramework for knowledge and belief w i t h four par t icu lar po ints of emphasis: (1) we show how general techniques for f inding decision procedures and complete ax iomat izat ions apply to models for knowledge and belief, (2) we show how sensitive the di f f icul ty of the decision procedure is to such issues as the choice of moda l operators and the ax iom system, (3) we discuss how not ions of common knowledge and impl ic i t knowl edge among a group of agents fit in to the possibleworlds f ramework, and, f inal ly, (4) we consider to what extent the possible-worlds approach is a viable one for model l ing knowledge and belief. 
We begin in Section 2 by reviewing possible-world semantics in deta i l , and prov ing tha t the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizat ions of knowledge. In Section 3 we t u r n to complexity-theoret ic issues. We review some standard not ions f rom complexi ty theory, and then reprove and extend Ladner's results to show tha t the decision procedures for the many-knower versions of T, S4, and S5 are a l l complete in po lynomia l space.* Th is suggests tha t for S5, reasoning about many agents' knowledge is qual i ta t ive ly harder than jus t reasoning about one agent's knowledge of the real wor ld and of his own knowledge. In Section 4 we t u rn our at tent ion to mod i fy ing the model so tha t i t can deal w i t h belief rather than knowledge, where one can believe something tha t is false. Th is turns out to be somewhat more compl i cated t han dropp ing the assumption of ref lexivi ty, but i t can s t i l l be done in the possible-worlds f ramework. Results about decision procedures and complete axiomat i i a t i ons for belief paral le l those for knowledge. In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exact ly when everyone knows tha t everyone knows tha t everyone knows ... tha t p is t rue. (Common knowledge is essentially wha t McCar thy 's \" f oo l \" knows; cf. [MSHI] . ) A group has i m p l ic i t knowledge of p i f, roughly speaking, when the agents poo l the i r knowledge together they can deduce p. (Note our usage of the not ion of \" imp l i c i t knowl edge\" here differs s l ight ly f rom the way it is used in [Lev2] and [FH].) As shown in [ H M l ] , common knowl edge is an essential state for reaching agreements and * A problem is said to be complete w i th respect to a complexity class if, roughly speaking, it is the hardest problem in that class (see Section 3 for more details). coordinating action. For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. 
Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the ful l paper ([HM2]). 482 J. Halpern and Y. Moses 2.2 Possib le-wor lds semant ics: Following Hintikka [H i l ] , Sato [Sa], Moore [Mo], and others, we use a posaible-worlds semantics to model knowledge. This provides us wi th a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge* in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate wi th each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. * We discuss the ramifications of this point in Section 6. ** The name K (m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K. J. Halpern and Y. Moses 483 484 J. Halpern and Y. Moses that can be said is that we are modelling a rather idealised reaaoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold wi th respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here wi th our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K (m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T ( m ) , S4 (m) , and S5(m) respectively. We wi l l try to motivate these conditions, but first we need a few definitions. * Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may auspect that axiom A4 ia redundant in S5. Thia indeed ia the caae. J. Halpern and Y. Moses 485 486 J. Halpern and Y. Moses",
"title": ""
},
{
"docid": "5ca36a618eb3eee79e40228fa71dc029",
"text": "To achieve the long-term goal of machines being able to engage humans in conversation, our models should be engaging. We focus on communication grounded in images, whereby a dialogue is conducted based on a given photo, a setup that is naturally engaging to humans (Hu et al., 2014). We collect a large dataset of grounded human-human conversations, where humans are asked to play the role of a given personality, as the use of personality in conversation has also been shown to be engaging (Shuster et al., 2018). Our dataset, ImageChat, consists of 202k dialogues and 401k utterances over 202k images using 215 possible personality traits. We then design a set of natural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. Automatic metrics and human evaluations show the efficacy of approach, in particular where our best performing model is preferred over human conversationalists 47.7% of the time.",
"title": ""
},
{
"docid": "20c3addef683da760967df0c1e83f8e3",
"text": "An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna.",
"title": ""
},
{
"docid": "cc5126ea8a6f9ebca587970377966067",
"text": "In this paper reliability model of the converter valves in VSC-HVDC system is analyzed. The internal structure and functions of converter valve are presented. Taking the StakPak IGBT from ABB Semiconductors for example, the mathematical reliability model for converter valve and its sub-module is established. By means of calculation and analysis, the reliability indices of converter valve under various voltage classes and redundancy designs are obtained, and then optimal redundant scheme is chosen. KeywordsReliability Analysis; VSC-HVDC; Converter Valve",
"title": ""
},
{
"docid": "1e4f13016c846039f7bbed47810b8b3d",
"text": "This paper characterizes general properties of useful, or Effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.",
"title": ""
}
] |
scidocsrr
|
7d4d1560fd706b595b9a32da96c69a05
|
Wireless Sensor and Networking Technologies for Swarms of Aquatic Surface Drones
|
[
{
"docid": "3cb6ba4a950868c1d912b44b77b264be",
"text": "With the popularity of winter tourism, the winter recreation activities have been increased day by day in alpine environments. However, large numbers of people and rescuers are injured and lost in this environment due to the avalanche accidents every year. Drone-based rescue systems are envisioned as a viable solution for saving lives in this hostile environment. To this aim, a European project named “Smart collaboration between Humans and ground-aErial Robots for imProving rescuing activities in Alpine environments (SHERPA)” has been launched with the objective to develop a mixed ground and aerial drone platform to support search and rescue activities in a real-world hostile scenarios. In this paper, we study the challenges of existing wireless technologies for enabling drone wireless communications in alpine environment. We extensively discuss about the positive and negative aspects of the standards according to the SHERPA network requirements. Then based on that, we choose Worldwide interoperability for Microwave Access network (WiMAX) as a suitable technology for drone communications in this environment. Finally, we present a brief discussion about the use of operating band for WiMAX and the implementation issues of SHERPA network. The outcomes of this research assist to achieve the goal of the SHERPA project.",
"title": ""
}
] |
[
{
"docid": "9973dab94e708f3b87d52c24b8e18672",
"text": "We show that two popular discounted reward natural actor-critics, NAC-LSTD and eNAC, follow biased estimates of the natural policy gradient. We derive the first unbiased discounted reward natural actor-critics using batch and iterative approaches to gradient estimation and prove their convergence to globally optimal policies for discrete problems and locally optimal policies for continuous problems. Finally, we argue that the bias makes the existing algorithms more appropriate for the average reward setting.",
"title": ""
},
{
"docid": "83d330486c50fe2ae1d6960a4933f546",
"text": "In this paper, an upgraded version of vehicle tracking system is developed for inland vessels. In addition to the features available in traditional VTS (Vehicle Tracking System) for automobiles, it has the capability of remote monitoring of the vessel's motion and orientation. Furthermore, this device can detect capsize events and other accidents by motion tracking and instantly notify the authority and/or the owner with current coordinates of the vessel, which is obtained using the Global Positioning System (GPS). This can certainly boost up the rescue process and minimize losses. We have used GSM network for the communication between the device installed in the ship and the ground control. So, this can be implemented only in the inland vessels. But using iridium satellite communication instead of GSM will enable the device to be used in any sea-going ships. At last, a model of an integrated inland waterway control system (IIWCS) based on this device is discussed.",
"title": ""
},
{
"docid": "a9de29e1d8062b4950e5ab3af6bea8df",
"text": "Asserts have long been a strongly recommended (if non-functional) adjunct to programs. They certainly don't add any user-evident feature value; and it can take quite some skill and effort to devise and add useful asserts. However, they are believed to add considerable value to the developer. Certainly, they can help with automated verification; but even in the absence of that, claimed advantages include improved understandability, maintainability, easier fault localization and diagnosis, all eventually leading to better software quality. We focus on this latter claim, and use a large dataset of asserts in C and C++ programs to explore the connection between asserts and defect occurrence. Our data suggests a connection: functions with asserts do have significantly fewer defects. This indicates that asserts do play an important role in software quality; we therefore explored further the factors that play a role in assertion placement: specifically, process factors (such as developer experience and ownership) and product factors, particularly interprocedural factors, exploring how the placement of assertions in functions are influenced by local and global network properties of the callgraph. Finally, we also conduct a differential analysis of assertion use across different application domains.",
"title": ""
},
{
"docid": "bf23473b7fe711e9dce9487c7df5b624",
"text": "A focus on population health management is a necessary ingredient for success under value-based payment models. As part of that effort, nine ways to embrace technology can help healthcare organizations improve population health, enhance the patient experience, and reduce costs: Use predictive analytics for risk stratification. Combine predictive modeling with algorithms for financial risk management. Use population registries to identify care gaps. Use automated messaging for patient outreach. Engage patients with automated alerts and educational campaigns. Automate care management tasks. Build programs and organize clinicians into care teams. Apply new technologies effectively. Use analytics to measure performance of organizations and providers.",
"title": ""
},
{
"docid": "b1d61ca503702f950ef1275b904850e7",
"text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.",
"title": ""
},
{
"docid": "65b843c30f69d33fa0c9aedd742e3434",
"text": "The computational study of complex systems increasingly requires model integration. The drivers include a growing interest in leveraging accepted legacy models, an intensifying pressure to reduce development costs by reusing models, and expanding user requirements that are best met by combining different modeling methods. There have been many published successes including supporting theory, conceptual frameworks, software tools, and case studies. Nonetheless, on an empirical basis, the published work suggests that correctly specifying model integration strategies remains challenging. This naturally raises a question that has not yet been answered in the literature, namely 'what is the computational difficulty of model integration?' This paper's contribution is to address this question with a time and space complexity analysis that concludes that deep model integration with proven correctness is both NP-complete and PSPACE-complete and that reducing this complexity requires sacrificing correctness proofs in favor of guidance from both subject matter experts and modeling specialists.",
"title": ""
},
{
"docid": "08e02afe2ef02fc9c8fff91cf7a70553",
"text": "Matrix factorization is a fundamental technique in machine learning that is applicable to collaborative filtering, information retrieval and many other areas. In collaborative filtering and many other tasks, the objective is to fill in missing elements of a sparse data matrix. One of the biggest challenges in this case is filling in a column or row of the matrix with very few observations. In this paper we introduce a Bayesian matrix factorization model that performs regression against side information known about the data in addition to the observations. The side information helps by adding observed entries to the factored matrices. We also introduce a nonparametric mixture model for the prior of the rows and columns of the factored matrices that gives a different regularization for each latent class. Besides providing a richer prior, the posterior distribution of mixture assignments reveals the latent classes. Using Gibbs sampling for inference, we apply our model to the Netflix Prize problem of predicting movie ratings given an incomplete user-movie ratings matrix. Incorporating rating information with gathered metadata information, our Bayesian approach outperforms other matrix factorization techniques even when using fewer dimensions.",
"title": ""
},
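A heavily simplified, non-Bayesian sketch of the core idea in the passage above: predict each rating as a latent-factor term plus a regression on side information, fitted by stochastic gradient descent on observed entries. The nonparametric mixture priors and Gibbs sampling of the actual model are omitted, and all sizes and hyperparameters below are arbitrary assumptions.

```python
# Simplified matrix factorization with side-information regression (sketch only).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_side, k = 20, 15, 4, 3

side = rng.normal(size=(n_items, n_side))            # toy item metadata
obs  = [(int(rng.integers(n_users)), int(rng.integers(n_items)), float(rng.normal()))
        for _ in range(100)]                          # (user, item, rating) triples

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
w = np.zeros(n_side)                                  # side-information weights

lr, reg = 0.05, 0.02
for _ in range(50):
    for u, i, r in obs:
        err = r - (U[u] @ V[i] + w @ side[i])
        u_old = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_old - reg * V[i])
        w    += lr * (err * side[i] - reg * w)

mse = np.mean([(r - (U[u] @ V[i] + w @ side[i])) ** 2 for u, i, r in obs])
print(float(mse))
```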
{
"docid": "c0315ef3bcc21723131d9b2687a5d5d1",
"text": "Network covert timing channels embed secret messages in legitimate packets by modulating interpacket delays. Unfortunately, such channels are normally implemented in higher network layers (layer 3 or above) and easily detected or prevented. However, access to the physical layer of a network stack allows for timing channels that are virtually invisible: Sub-microsecond modulations that are undetectable by software endhosts. Therefore, covert timing channels implemented in the physical layer can be a serious threat to the security of a system or a network. In fact, we empirically demonstrate an effective covert timing channel over nine routing hops and thousands of miles over the Internet (the National Lambda Rail). Our covert timing channel works with cross traffic, less than 10% bit error rate, which can be masked by forward error correction, and a covert rate of 81 kilobits per second. Key to our approach is access and control over every bit in the physical layer of a 10 Gigabit network stack (a bit is 100 picoseconds wide at 10 gigabit per seconds), which allows us to modulate and interpret interpacket spacings at sub-microsecond scale. We discuss when and how a timing channel in the physical layer works, how hard it is to detect such a channel, and what is required to do so.",
"title": ""
},
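To illustrate the basic mechanism described above — embedding covert bits in interpacket delays — here is a toy software-level encoder/decoder. The physical-layer channel in the paper operates at sub-microsecond scales that code like this cannot reach, and the gap values below are arbitrary assumptions.

```python
# Toy interpacket-delay timing channel (illustrative only).
BASE_GAP = 1.0e-3      # assumed nominal interpacket gap [s]
DELTA    = 0.2e-3      # assumed extra delay signalling a '1' bit [s]

def encode(bits):
    """Map each covert bit to an interpacket gap."""
    return [BASE_GAP + (DELTA if b else 0.0) for b in bits]

def decode(gaps):
    """Recover bits by thresholding observed gaps."""
    threshold = BASE_GAP + DELTA / 2
    return [1 if g > threshold else 0 for g in gaps]

message = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(message)) == message
print(encode(message))
```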
{
"docid": "757cb3e9b279f71cb0a9ff5b80c5f4ba",
"text": "When it comes to workplace preferences, Generation Y workers closely resemble Baby Boomers. Because these two huge cohorts now coexist in the workforce, their shared values will hold sway in the companies that hire them. The authors, from the Center for Work-Life Policy, conducted two large-scale surveys that reveal those values. Gen Ys and Boomers are eager to contribute to positive social change, and they seek out workplaces where they can do that. They expect flexibility and the option to work remotely, but they also want to connect deeply with colleagues. They believe in employer loyalty but desire to embark on learning odysseys. Innovative firms are responding by crafting reward packages that benefit both generations of workers--and their employers.",
"title": ""
},
{
"docid": "21a2347f9bb5b5638d63239b37c9d0e6",
"text": "This paper presents new circuits for realizing both current-mode and voltage-mode proportional-integralderivative (PID), proportional-derivative (PD) and proportional-integral (PI) controllers employing secondgeneration current conveyors (CCIIs) as active elements. All of the proposed PID, PI and PD controllers have grounded passive elements and adjustable parameters. The controllers employ reduced number of active and passive components with respect to the traditional op-amp-based PID, PI and PD controllers. A closed loop control system using the proposed PID controller is designed and simulated with SPICE.",
"title": ""
},
{
"docid": "cbe947b169331c8bb41c7fae2a8d0647",
"text": "In spite of high levels of poverty in low and middle income countries (LMIC), and the high burden posed by common mental disorders (CMD), it is only in the last two decades that research has emerged that empirically addresses the relationship between poverty and CMD in these countries. We conducted a systematic review of the epidemiological literature in LMIC, with the aim of examining this relationship. Of 115 studies that were reviewed, most reported positive associations between a range of poverty indicators and CMD. In community-based studies, 73% and 79% of studies reported positive associations between a variety of poverty measures and CMD, 19% and 15% reported null associations and 8% and 6% reported negative associations, using bivariate and multivariate analyses respectively. However, closer examination of specific poverty dimensions revealed a complex picture, in which there was substantial variation between these dimensions. While variables such as education, food insecurity, housing, social class, socio-economic status and financial stress exhibit a relatively consistent and strong association with CMD, others such as income, employment and particularly consumption are more equivocal. There are several measurement and population factors that may explain variation in the strength of the relationship between poverty and CMD. By presenting a systematic review of the literature, this paper attempts to shift the debate from questions about whether poverty is associated with CMD in LMIC, to questions about which particular dimensions of poverty carry the strongest (or weakest) association. The relatively consistent association between CMD and a variety of poverty dimensions in LMIC serves to strengthen the case for the inclusion of mental health on the agenda of development agencies and in international targets such as the millenium development goals.",
"title": ""
},
{
"docid": "c98e8abd72ba30e0d2cb2b7d146a3d13",
"text": "Process mining techniques help organizations discover and analyze business processes based on raw event data. The recently released \"Process Mining Manifesto\" presents guiding principles and challenges for process mining. Here, the authors summarize the manifesto's main points and argue that analysts should take into account the context in which events occur when analyzing processes.",
"title": ""
},
{
"docid": "1ef2bb601d91d77287d3517c73b453fe",
"text": "Proteins from silver-stained gels can be digested enzymatically and the resulting peptide analyzed and sequenced by mass spectrometry. Standard proteins yield the same peptide maps when extracted from Coomassie- and silver-stained gels, as judged by electrospray and MALDI mass spectrometry. The low nanogram range can be reached by the protocols described here, and the method is robust. A silver-stained one-dimensional gel of a fraction from yeast proteins was analyzed by nano-electrospray tandem mass spectrometry. In the sequencing, more than 1000 amino acids were covered, resulting in no evidence of chemical modifications due to the silver staining procedure. Silver staining allows a substantial shortening of sample preparation time and may, therefore, be preferable over Coomassie staining. This work removes a major obstacle to the low-level sequence analysis of proteins separated on polyacrylamide gels.",
"title": ""
},
{
"docid": "3f679dbd9047040d63da70fc9e977a99",
"text": "In this paper we consider videos (e.g. Hollywood movies) and their accompanying natural language descriptions in the form of narrative sentences (e.g. movie scripts without timestamps). We propose a method for temporally aligning the video frames with the sentences using both visual and textual information, which provides automatic timestamps for each narrative sentence. We compute the similarity between both types of information using vectorial descriptors and propose to cast this alignment task as a matching problem that we solve via dynamic programming. Our approach is simple to implement, highly efficient and does not require the presence of frequent dialogues, subtitles, and character face recognition. Experiments on various movies demonstrate that our method can successfully align the movie script sentences with the video frames of movies.",
"title": ""
},
{
"docid": "2d59fe09633ee41c60e9e951986e56a6",
"text": "Face alignment and 3D face reconstruction are traditionally accomplished as separated tasks. By exploring the strong correlation between 2D landmarks and 3D shapes, in contrast, we propose a joint face alignment and 3D face reconstruction method to simultaneously solve these two problems for 2D face images of arbitrary poses and expressions. This method, based on a summation model of 3D face shapes and cascaded regression in 2D and 3D face shape spaces, iteratively and alternately applies two cascaded regressors, one for updating 2D landmarks and the other for 3D face shape. The 3D face shape and the landmarks are correlated via a 3D-to-2D mapping matrix. Unlike existing methods, the proposed method can fully automatically generate both pose-and-expression-normalized (PEN) and expressive 3D face shapes and localize both visible and invisible 2D landmarks. Based on the PEN 3D face shapes, we devise a method to enhance face recognition accuracy across poses and expressions. Both linear and nonlinear implementations of the proposed method are presented and evaluated in this paper. Extensive experiments show that the proposed method can achieve the state-of-the-art accuracy in both face alignment and 3D face reconstruction, and benefit face recognition owing to its reconstructed PEN 3D face shapes.",
"title": ""
},
{
"docid": "3c514740d7f8ce78f9afbaca92dc3b1c",
"text": "In the Brazil nut problem (BNP), hard spheres with larger diameters rise to the top. There are various explanations (percolation, reorganization, convection), but a broad understanding or control of this effect is by no means achieved. A theory is presented for the crossover from BNP to the reverse Brazil nut problem based on a competition between the percolation effect and the condensation of hard spheres. The crossover condition is determined, and theoretical predictions are compared to molecular dynamics simulations in two and three dimensions.",
"title": ""
},
{
"docid": "16d949f6915cbb958cb68a26c6093b6b",
"text": "Overweight and obesity are a global epidemic, with over one billion overweight adults worldwide (300+ million of whom are obese). Obesity is linked to several serious health problems and medical conditions. Medical experts agree that physical activity is critical to maintaining fitness, reducing weight, and improving health, yet many people have difficulty increasing and maintaining physical activity in everyday life. Clinical studies have shown that health benefits can occur from simply increasing the number of steps one takes each day and that social support can motivate people to stay active. In this paper, we describe Houston, a prototype mobile phone application for encouraging activity by sharing step count with friends. We also present four design requirements for technologies that encourage physical activity that we derived from a three-week long in situ pilot study that was conducted with women who wanted to increase their physical activity.",
"title": ""
},
{
"docid": "179d8daa30a7986c8f345a47eabfb2c8",
"text": "A key advantage of taking a statistical approach to spoken dialogue systems is the ability to formalise dialogue policy design as a stochastic optimization problem. However, since dialogue policies are learnt by interactively exploring alternative dialogue paths, conventional static dialogue corpora cannot be used directly for training and instead, a user simulator is commonly used. This paper describes a novel statistical user model based on a compact stack-like state representation called a user agenda which allows state transitions to be modeled as sequences of push- and pop-operations and elegantly encodes the dialogue history from a user's point of view. An expectation-maximisation based algorithm is presented which models the observable user output in terms of a sequence of hidden states and thereby allows the model to be trained on a corpus of minimally annotated data. Experimental results with a real-world dialogue system demonstrate that the trained user model can be successfully used to optimise a dialogue policy which outperforms a hand-crafted baseline in terms of task completion rates and user satisfaction scores.",
"title": ""
},
{
"docid": "d9fe0834ccf80bddadc5927a8199cd2c",
"text": "Deep Residual Networks (ResNets) have recently achieved state-of-the-art results on many challenging computer vision tasks. In this work we analyze the role of Batch Normalization (BatchNorm) layers on ResNets in the hope of improving the current architecture and better incorporating other normalization techniques, such as Normalization Propagation (NormProp), into ResNets. Firstly, we verify that BatchNorm helps distribute representation learning to residual blocks at all layers, as opposed to a plain ResNet without BatchNorm where learning happens mostly in the latter part of the network. We also observe that BatchNorm well regularizes Concatenated ReLU (CReLU) activation scheme on ResNets, whose magnitude of activation grows by preserving both positive and negative responses when going deeper into the network. Secondly, we investigate the use of NormProp as a replacement for BatchNorm in ResNets. Though NormProp theoretically attains the same effect as BatchNorm on generic convolutional neural networks, the identity mapping of ResNets invalidates its theoretical promise and NormProp exhibits a significant performance drop when naively applied. To bridge the gap between BatchNorm and NormProp in ResNets, we propose a simple modification to NormProp and employ the CReLU activation scheme. We experiment on visual object recognition benchmark datasets such as CIFAR10/100 and ImageNet and demonstrate that 1) the modified NormProp performs better than the original NormProp but is still not comparable to BatchNorm and 2) CReLU improves the performance of ResNets with or without normalizations.",
"title": ""
},
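Illustrative note: the passage above refers to the Concatenated ReLU (CReLU) activation. A minimal NumPy sketch of that activation is below; it is only the activation itself, not the authors' ResNet or normalization code.

```python
import numpy as np

def crelu(x, axis=-1):
    """Concatenated ReLU: keep the positive part and the negated negative part as separate channels."""
    return np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)], axis=axis)

x = np.array([[1.5, -0.5, 0.0]])
print(crelu(x))   # [[1.5 0.  0.  0.  0.5 0. ]] -- output has twice as many channels as the input
```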
{
"docid": "be9b40cc2e2340249584f7324e26c4d3",
"text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.",
"title": ""
}
] |
scidocsrr
|
8cdee7101e0e22ea85dd0ee171513909
|
Phoneme recognition using time-delay neural networks
|
[
{
"docid": "9ff2e30bbd34906f6a57f48b1e63c3f1",
"text": "In this paper, we extend hidden Markov modeling to speaker-independent phone recognition. Using multiple codebooks of various LPC parameters and discrete HMMs, we obtain a speakerindependent phone recognition accuracy of 58.8% to 73.8% on the TIMTT database, depending on the type of acoustic and language models used. In comparison, the performance of expert spectrogram readers is only 69% without use of higher level knowledge. We also introduce the co-occurrence smoothing algorithm which enables accurate recognition even with very limited training data. Since our results were evaluated on a standard database, they can be used as benchmarks to evaluate future systems. This research was partly sponsored by a National Science Foundation Graduate Fellowship, and by Defense Advanced Research Projects Agency Contract N00039-85-C-0163. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Defense Advanced Research Projects Agency, or the US Government.",
"title": ""
}
] |
[
{
"docid": "41eab64d00f1a4aaea5c5899074d91ca",
"text": "Informally described design patterns are useful for communicating proven solutions for recurring design problems to developers, but they cannot be used as compliance points against which solutions that claim to conform to the patterns are checked. Pattern specification languages that utilize mathematical notation provide the needed formality, but often at the expense of usability. We present a rigorous and practical technique for specifying pattern solutions expressed in the unified modeling language (UML). The specification technique paves the way for the development of tools that support rigorous application of design patterns to UML design models. The technique has been used to create specifications of solutions for several popular design patterns. We illustrate the use of the technique by specifying observer and visitor pattern solutions.",
"title": ""
},
{
"docid": "7cc41229d0368f702a4dde3ccf597604",
"text": "State Machines",
"title": ""
},
{
"docid": "7ec9f6b40242a732282520f1a4808d49",
"text": "In this paper, a novel technique to enhance the bandwidth of substrate integrated waveguide cavity backed slot antenna is demonstrated. The feeding technique to the cavity backed antenna has been modified by introducing offset feeding of microstrip line along with microstrip to grounded coplanar waveguide transition which helps to excite TE120 mode in the cavity and also to get improvement in impedance matching to the slot antenna simultaneously. The proposed antenna is designed to resonate in X band (8-12 GHz) and shows a resonance at 10.2 GHz with a bandwidth of 4.2% and a gain of 5.6 dBi, 15.6 dB front to back ratio and -30 dB maximum cross polarization level.",
"title": ""
},
{
"docid": "43baeb87f1798d52399ba8c78ffa7fef",
"text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit.1 However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-",
"title": ""
},
{
"docid": "fcf6136271b04ac78717799d43017d74",
"text": "STUDY DESIGN\nPragmatic, multicentered randomized controlled trial, with 12-month follow-up.\n\n\nOBJECTIVE\nTo evaluate the effect of adding specific spinal stabilization exercises to conventional physiotherapy for patients with recurrent low back pain (LBP) in the United Kingdom.\n\n\nSUMMARY OF BACKGROUND DATA\nSpinal stabilization exercises are a popular form of physiotherapy management for LBP, and previous small-scale studies on specific LBP subgroups have identified improvement in outcomes as a result.\n\n\nMETHODS\nA total of 97 patients (18-60 years old) with recurrent LBP were recruited. Stratified randomization was undertaken into 2 groups: \"conventional,\" physiotherapy consisting of general active exercise and manual therapy; and conventional physiotherapy plus specific spinal stabilization exercises. Stratifying variables used were laterality of symptoms, duration of symptoms, and Roland Morris Disability Questionnaire score at baseline. Both groups received The Back Book, by Roland et al. Back-specific functional disability (Roland Morris Disability Questionnaire) at 12 months was the primary outcome. Pain, quality of life, and psychologic measures were also collected at 6 and 12 months. Analysis was by intention to treat.\n\n\nRESULTS\nA total of 68 patients (70%) provided 12-month follow-up data. Both groups showed improved physical functioning, reduced pain intensity, and an improvement in the physical component of quality of life. Mean change in physical functioning, measured by the Roland Morris Disability Questionnaire, was -5.1 (95% confidence interval -6.3 to -3.9) for the specific spinal stabilization exercises group and -5.4 (95% confidence interval -6.5 to -4.2) for the conventional physiotherapy group. No statistically significant differences between the 2 groups were shown for any of the outcomes measured, at any time.\n\n\nCONCLUSIONS\nPatients with LBP had improvement with both treatment packages to a similar degree. There was no additional benefit of adding specific spinal stabilization exercises to a conventional physiotherapy package for patients with recurrent LBP.",
"title": ""
},
{
"docid": "f70bd0a47eac274a1bb3b964f34e0a63",
"text": "Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term ”set classification” instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST.",
"title": ""
},
{
"docid": "1b0046cbee1afd3e7471f92f115f3d74",
"text": "We present an approach to improve statistical machine translation of image descriptions by multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and use the captions of the most similar images for crosslingual reranking of translation outputs. Our approach does not depend on the availability of large amounts of in-domain parallel data, but only relies on available large datasets of monolingually captioned images, and on state-ofthe-art convolutional neural networks to compute image similarities. Our experimental evaluation shows improvements of 1 BLEU point over strong baselines.",
"title": ""
},
{
"docid": "563045a67d06819b0b79c8232e2e16fa",
"text": "The impacts of climate change are felt by most critical systems, such as infrastructure, ecological systems, and power-plants. However, contemporary Earth System Models (ESM) are run at spatial resolutions too coarse for assessing effects this localized. Local scale projections can be obtained using statistical downscaling, a technique which uses historical climate observations to learn a low-resolution to high-resolution mapping. Depending on statistical modeling choices, downscaled projections have been shown to vary significantly terms of accuracy and reliability. The spatio-temporal nature of the climate system motivates the adaptation of super-resolution image processing techniques to statistical downscaling. In our work, we present DeepSD, a generalized stacked super resolution convolutional neural network (SRCNN) framework for statistical downscaling of climate variables. DeepSD augments SRCNN with multi-scale input channels to maximize predictability in statistical downscaling. We provide a comparison with Bias Correction Spatial Disaggregation as well as three Automated-Statistical Downscaling approaches in downscaling daily precipitation from 1 degree (~100km) to 1/8 degrees (~12.5km) over the Continental United States. Furthermore, a framework using the NASA Earth Exchange (NEX) platform is discussed for downscaling more than 20 ESM models with multiple emission scenarios.",
"title": ""
},
{
"docid": "4cdf61ea145da38c37201b85d38bf8a2",
"text": "Ontologies are powerful to support semantic based applications and intelligent systems. While ontology learning are challenging due to its bottleneck in handcrafting structured knowledge sources and training data. To address this difficulty, many researchers turn to ontology enrichment and population using external knowledge sources such as DBpedia. In this paper, we propose a method using DBpedia in a different manner. We utilize relation instances in DBpedia to supervise the ontology learning procedure from unstructured text, rather than populate the ontology structure as a post-processing step. We construct three language resources in areas of computer science: enriched Wikipedia concept tree, domain ontology, and gold standard from NSFC taxonomy. Experiment shows that the result of ontology learning from corpus of computer science can be improved via the relation instances extracted from DBpedia in the same field. Furthermore, making distinction between the relation instances and applying a proper weighting scheme in the learning procedure lead to even better result.",
"title": ""
},
{
"docid": "13a777b2c5edcf9cb342b1290ec50a3c",
"text": "Call for Book Chapters Introduction The history of robotics and artificial intelligence in many ways is also the history of humanity’s attempts to control such technologies. From the Golem of Prague to the military robots of modernity, the debate continues as to what degree of independence such entities should have and how to make sure that they do not turn on us, its inventors. Numerous recent advancements in all aspects of research, development and deployment of intelligent systems are well publicized but safety and security issues related to AI are rarely addressed. This book is proposed to mitigate this fundamental problem. It will be comprised of chapters from leading AI Safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. The book would be the first textbook to address challenges of constructing safe and secure advanced machine intelligence.",
"title": ""
},
{
"docid": "7d5d2f819a5b2561db31645d534836b8",
"text": "Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to model the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, clarifying what guarantees can and cannot be associated with such a structure.",
"title": ""
},
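Illustrative note: the passage above analyzes learned Bloom filters. For background only, here is a minimal classical Bloom filter sketch; the bit-array size, number of hashes, and the SHA-256-based hashing scheme are illustrative choices, not anything from the paper.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter; k index functions are derived from salted SHA-256 digests."""
    def __init__(self, m_bits=1024, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits)            # one byte per bit, kept simple for clarity

    def _indexes(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.add("alice@example.com")
print("alice@example.com" in bf)   # True (no false negatives)
print("bob@example.com" in bf)     # False with high probability (false positives are possible)
```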
{
"docid": "b38603115c4dbce4ea5f11767a7a49ab",
"text": "Hydroa vacciniforme (HV) is a rare and chronic pediatric disorder that is characterized by photosensitivity and recurrent vesicles that heal with vacciniforme scarring. The pathogenesis of HV is unknown; no chromosome abnormality has been identified. HV patients have no abnormal laboratory results, so the diagnosis of HV is based on identifying the associated histological findings in a biopsy specimen and using repetitive ultraviolet phototesting to reproduce the characteristic vesicles on a patient's skin. Herein, we present a case of HV in a 7-year-old female who was diagnosed with HV according to histopathology and ultraviolet phototesting.",
"title": ""
},
{
"docid": "7292ceb6718d0892a154d294f6434415",
"text": "This article illustrates the application of a nonlinear system identification technique to the problem of STLF. Five NARX models are estimated using fixed-size LS-SVM, and two of the models are later modified into AR-NARX structures following the exploration of the residuals. The forecasting performance, assessed for different load series, is satisfactory. The MSE levels on the test data are below 3% in most cases. The models estimated with fixed-size LS-SVM give better results than a linear model estimated with the same variables and also better than a standard LS-SVM in dual space estimated using only the last 1000 data points. Furthermore, the good performance of the fixed-size LS-SVM is obtained based on a subset of M = 1000 initial support vectors, representing a small fraction of the available sample. Further research on a more dedicated definition of the initial input variables (for example, incorporation of external variables to reflect industrial activity, use of explicit seasonal information) might lead to further improvements and the extension toward other types of load series.",
"title": ""
},
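Illustrative note: the passage above estimates NARX models with fixed-size LS-SVM for short-term load forecasting. The sketch below only shows the autoregressive feature construction (lagged inputs) with an ordinary linear regressor on synthetic data; it is not LS-SVM and none of the numbers come from the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def make_lagged(series, n_lags):
    """Build (X, y) pairs where each row of X holds the previous n_lags values."""
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
    return X, series[n_lags:]

rng = np.random.default_rng(0)
t = np.arange(24 * 60)                                   # 60 synthetic "days" of hourly load
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)

X, y = make_lagged(load, n_lags=24)
model = LinearRegression().fit(X[:-48], y[:-48])         # hold out the last 48 hours
mse = np.mean((model.predict(X[-48:]) - y[-48:]) ** 2)
print("held-out MSE:", round(float(mse), 2))
```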
{
"docid": "f3cb6de57ba293be0b0833a04086b2ce",
"text": "Due to increasing globalization, urban societies are becoming more multicultural. The availability of large-scale digital mobility traces e.g. from tweets or checkins provides an opportunity to explore multiculturalism that until recently could only be addressed using survey-based methods. In this paper we examine a basic facet of multiculturalism through the lens of language use across multiple cities in Switzerland. Using data obtained from Foursquare over 330 days, we present a descriptive analysis of linguistic differences and similarities across five urban agglomerations in a multicultural, western European country.",
"title": ""
},
{
"docid": "a1e6a95d2eb2f5f36caf43b5133bd384",
"text": "The RealSense F200 represents a new generation of economically viable 4-dimensional imaging (4D) systems for home use. However, its 3D geometric (depth) accuracy has not been clinically tested. Therefore, this study determined the depth accuracy of the RealSense, in a cohort of patients with a unilateral facial palsy (n = 34), by using the clinically validated 3dMD system as a gold standard. The patients were simultaneously recorded with both systems, capturing six Sunnybrook poses. This study has shown that the RealSense depth accuracy was not affected by a facial palsy (1.48 ± 0.28 mm), compared to a healthy face (1.46 ± 0.26 mm). Furthermore, the Sunnybrook poses did not influence the RealSense depth accuracy (p = 0.76). However, the distance of the patients to the RealSense was shown to affect the accuracy of the system, where the highest depth accuracy of 1.07 mm was measured at a distance of 35 cm. Overall, this study has shown that the RealSense can provide reliable and accurate depth data when recording a range of facial movements. Therefore, when the portability, low-costs, and availability of the RealSense are taken into consideration, the camera is a viable option for 4D close range imaging in telehealth.",
"title": ""
},
{
"docid": "b123916f2795ab6810a773ac69bdf00b",
"text": "The acceptance of open data practices by individuals and organizations lead to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various raisons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.",
"title": ""
},
{
"docid": "b011b5e9ed5c96a59399603f4200b158",
"text": "The word list memory test from the Consortium to establish a registry for Alzheimer's disease (CERAD) neuropsychological battery (Morris et al. 1989) was administered to 230 psychiatric outpatients. Performance of a selected, age-matched psychiatric group and normal controls was compared using an ANCOVA design with education as a covariate. Results indicated that controls performed better than psychiatric patients on most learning and recall indices. The exception to this was the savings index that has been found to be sensitive to the effects of progressive dementias. The current data are compared and integrated with published CERAD data for Alzheimer's disease patients. The CERAD list memory test is recommended as a brief, efficient, and sensitive memory measure that can be used with a range of difficult patients.",
"title": ""
},
{
"docid": "e41079edd8ad3d39b22397d669f7af61",
"text": "Using the masked priming paradigm, we examined which phonological unit is used when naming Kanji compounds. Although the phonological unit in the Japanese language has been suggested to be the mora, Experiment 1 found no priming for mora-related Kanji prime-target pairs. In Experiment 2, significant priming was only found when Kanji pairs shared the whole sound of their initial Kanji characters. Nevertheless, when the same Kanji pairs used in Experiment 2 were transcribed into Kana, significant mora priming was observed in Experiment 3. In Experiment 4, matching the syllable structure and pitch-accent of the initial Kanji characters did not lead to mora priming, ruling out potential alternative explanations for the earlier absence of the effect. A significant mora priming effect was observed, however, when the shared initial mora constituted the whole sound of their initial Kanji characters in Experiments 5. Lastly, these results were replicated in Experiment 6. Overall, these results indicate that the phonological unit involved when naming Kanji compounds is not the mora but the whole sound of each Kanji character. We discuss how different phonological units may be involved when processing Kanji and Kana words as well as the implications for theories dealing with language production processes. (PsycINFO Database Record",
"title": ""
},
{
"docid": "1164e5b54ce970b55cf65cca0a1fbcb1",
"text": "We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.",
"title": ""
},
{
"docid": "b039138e9c0ef8456084891c45d7b36d",
"text": "Over the last few years or so, the use of artificial neural networks (ANNs) has increased in many areas of engineering. In particular, ANNs have been applied to many geotechnical engineering problems and have demonstrated some degree of success. A review of the literature reveals that ANNs have been used successfully in pile capacity prediction, modelling soil behaviour, site characterisation, earth retaining structures, settlement of structures, slope stability, design of tunnels and underground openings, liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and classification of soils. The objective of this paper is to provide a general view of some ANN applications for solving some types of geotechnical engineering problems. It is not intended to describe the ANNs modelling issues in geotechnical engineering. The paper also does not intend to cover every single application or scientific paper that found in the literature. For brevity, some works are selected to be described in some detail, while others are acknowledged for reference purposes. The paper then discusses the strengths and limitations of ANNs compared with the other modelling approaches.",
"title": ""
}
] |
scidocsrr
|
47547553b4abbac9675503e48ae8c0bd
|
Understanding Plagiarism Linguistic Patterns, Textual Features, and Detection Methods
|
[
{
"docid": "fe6fa144846269c7b2c9230ca9d8217b",
"text": "The paper is dedicated to plagiarism problem. The ways how to reduce plagiarism: both: plagiarism prevention and plagiarism detection are discussed. Widely used plagiarism detection methods are described. The most known plagiarism detection tools are analysed.",
"title": ""
}
] |
[
{
"docid": "c63d32013627d0bcea22e1ad62419e62",
"text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.",
"title": ""
},
{
"docid": "091c57447d5a3c97d3ff1afb57ebb4e3",
"text": "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.",
"title": ""
},
{
"docid": "e0c71e449f4c155a993ae04ece4bc822",
"text": "This paper shows how one can directly apply natural language processing (NLP) methods to classification problems in cheminformatics. Connection between these seemingly separate fields is shown by considering standard textual representation of compound, SMILES. The problem of activity prediction against a target protein is considered, which is a crucial part of computer aided drug design process. Conducted experiments show that this way one can not only outrank state of the art results of hand crafted representations but also gets direct structural insights into the way decisions are made.",
"title": ""
},
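Illustrative note: the passage above applies NLP methods to SMILES strings. As a hedged sketch of the general idea (not the paper's exact models), the snippet below treats SMILES as plain text, extracts character n-grams, and fits a linear classifier; the molecules and activity labels are toy placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy SMILES strings with made-up activity labels (1 = "active", 0 = "inactive").
smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC", "C1CCCCC1", "O=C(O)c1ccccc1"]
labels = [0, 0, 1, 0, 1, 1]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),   # character n-grams of the SMILES text
    LogisticRegression(max_iter=1000),
)
clf.fit(smiles, labels)
print(clf.predict(["c1ccc(O)cc1"]))   # prediction for an unseen aromatic molecule
```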
{
"docid": "345bd0959cf210e4afd47e9bf6fad76d",
"text": "Smartphone applications are getting more multi-farious and demanding of increased energy and computing resources. Mobile Cloud Computing (MCC) made a novel platform which allows personal Smartphones to execute heavy computing tasks with the assistance of powerful cloudlet servers attached to numerous wireless access points (APs). Furthermore, due to users' mobility in anywhere, ensuring the continuous connectivity of mobile devices in given wireless network access point is quite difficult because the signal strength becomes sporadic at that time. In this paper, we develop a QoS and mobility aware optimal resource allocation architecture, namely Q-MAC, for remote code execution in MCC that offers higher efficiency in timeliness and reliability domains. By carrying continuous track of user, our proposed architecture performs the offloading process. Our test-bed implementation results show that the Q-MAC outperforms the state-of-the-art methods in terms of success percentage, execution time and workload distribution.",
"title": ""
},
{
"docid": "e8e796774aa6e16ff022ab155237f402",
"text": "Mobile payment is the killer application in mobile commerce. We classify the payment methods according to several standards, analyze and point out the merits and drawbacks of each method. To enable future applications and technologies handle mobile payment, we provide a general layered framework and a new process for mobile payment. The framework is composed of load-bearing layer, network interface and core application platform layer, business layer, and decision-making layer. And it can be extended and improved by the developers. Then a pre-pay and account-based payment process is described. Our method has the advantages of low cost and technical requirement, high scalability and security.",
"title": ""
},
{
"docid": "73872cb92a522a222a3e8ee28a21e263",
"text": "All the power of computational techniques for data processing and analysis is worthless without human analysts choosing appropriate methods depending on data characteristics, setting parameters and controlling the work of the methods, interpreting results obtained, understanding what to do next, reasoning, and drawing conclusions. To enable effective work of human analysts, relevant information must be presented to them in an adequate way. Since visual representation of information greatly promotes man’s perception and cognition, visual displays of data and results of computational processing play a very important role in analysis. However, a simple combination of visualization with computational analysis is not sufficient. The challenge is to build analytical tools and environments where the power of computational methods is synergistically combined with man’s background knowledge, flexible thinking, imagination, and capacity for insight. This is the main goal of the emerging multidisciplinary research field of Visual Analytics (Thomas and Cook [45]), which is defined as the science of analytical reasoning facilitated by interactive visual interfaces. Analysis of movement data is an appropriate target for a synergy of diverse technologies, including visualization, computations, database queries, data transformations, and other computer-based operations. In this chapter, we try to define what combination of visual and computational techniques can support the analysis of massive movement data and how these techniques should interact. Before that, we shall briefly overview the existing computer-based tools and techniques for visual analysis of movement data.",
"title": ""
},
{
"docid": "d352913b60263d12072a9b79bfe36d18",
"text": "Jauhar et al. (2015) recently proposed to learn sense-specific word representations by “retrofitting” standard distributional word representations to an existing ontology. We observe that this approach does not require an ontology, and can be generalized to any graph defining word senses and relations between them. We create such a graph using translations learned from parallel corpora. On a set of lexical semantic tasks, representations learned using parallel text perform roughly as well as those derived from WordNet, and combining the two representation types significantly improves performance.",
"title": ""
},
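Illustrative note: the passage above generalizes vector "retrofitting" to an arbitrary graph of senses and relations. The sketch below implements only the standard retrofitting-style update (pull each vector toward its graph neighbours while staying close to its original value) on toy vectors; the graph, weights and vectors are invented for illustration and do not reproduce the paper's method.

```python
import numpy as np

def retrofit(vectors, edges, alpha=1.0, beta=1.0, n_iters=10):
    """Iteratively average each vector with its neighbours, anchored to the original vector."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(n_iters):
        for w, neighbours in edges.items():
            if not neighbours:
                continue
            neigh_sum = sum(new[n] for n in neighbours)
            new[w] = (alpha * vectors[w] + beta * neigh_sum) / (alpha + beta * len(neighbours))
    return new

vecs = {"car": np.array([1.0, 0.0]), "auto": np.array([0.0, 1.0]), "dog": np.array([-1.0, 0.0])}
graph = {"car": ["auto"], "auto": ["car"], "dog": []}
print(retrofit(vecs, graph)["car"].round(3))   # pulled toward "auto", still close to its original
```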
{
"docid": "543a4aacf3d0f3c33071b0543b699d3c",
"text": "This paper describes a buffer sharing technique that strikes a balance between the use of disk bandwidth and memory in order to maximize the performance of a video-on-demand server. We make the key observation that the configuration parameters of the system should be independent of the physical characteristics of the data (e.g., popularity of a clip). Instead, the configuration parameters are fixed and our strategy adjusts itself dynamically at run-time to support a pattern of access to the video clips.",
"title": ""
},
{
"docid": "72e9f82070605ca5f0467f29ad9ca780",
"text": "Social media are pervaded by unsubstantiated or untruthful rumors, that contribute to the alarming phenomenon of misinformation. The widespread presence of a heterogeneous mass of information sources may affect the mechanisms behind the formation of public opinion. Such a scenario is a florid environment for digital wildfires when combined with functional illiteracy, information overload, and confirmation bias. In this essay, we focus on a collection of works aiming at providing quantitative evidence about the cognitive determinants behind misinformation and rumor spreading. We account for users’ behavior with respect to two distinct narratives: a) conspiracy and b) scientific information sources. In particular, we analyze Facebook data on a time span of five years in both the Italian and the US context, and measure users’ response to i) information consistent with one’s narrative, ii) troll contents, and iii) dissenting information e.g., debunking attempts. Our findings suggest that users tend to a) join polarized communities sharing a common narrative (echo chambers), b) acquire information confirming their beliefs (confirmation bias) even if containing false claims, and c) ignore dissenting information.",
"title": ""
},
{
"docid": "7c2960e9fd059e57b5a0172e1d458250",
"text": "The main goal of this research is to discover the structure of home appliances usage patterns, hence providing more intelligence in smart metering systems by taking into account the usage of selected home appliances and the time of their usage. In particular, we present and apply a set of unsupervised machine learning techniques to reveal specific usage patterns observed at an individual household. The work delivers the solutions applicable in smart metering systems that might: (1) contribute to higher energy awareness; (2) support accurate usage forecasting; and (3) provide the input for demand response systems in homes with timely energy saving recommendations for users. The results provided in this paper show that determining household characteristics from smart meter data is feasible and allows for quickly grasping general trends in data.",
"title": ""
},
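Illustrative note: the passage above discovers appliance-usage patterns from smart meter data with unsupervised learning. A hedged sketch of one such step, clustering synthetic daily usage profiles with k-means, is below; the data, the cluster count and all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic daily profiles (24 hourly readings each): morning-peak vs. evening-peak households.
morning = rng.normal(0, 0.1, (50, 24)); morning[:, 6:9] += 1.0
evening = rng.normal(0, 0.1, (50, 24)); evening[:, 18:22] += 1.0
profiles = np.vstack([morning, evening])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
print(np.bincount(kmeans.labels_))              # sizes of the two discovered usage clusters
print(kmeans.cluster_centers_[0].round(1))      # a representative profile for one cluster
```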
{
"docid": "6b125ab0691988a5836855346f277970",
"text": "Cardol (C₁₅:₃), isolated from cashew (Anacardium occidentale L.) nut shell liquid, has been shown to exhibit bactericidal activity against various strains of Staphylococcus aureus, including methicillin-resistant strains. The maximum level of reactive oxygen species generation was detected at around the minimum bactericidal concentration of cardol, while reactive oxygen species production drastically decreased at doses above the minimum bactericidal concentration. The primary response for bactericidal activity around the bactericidal concentration was noted to primarily originate from oxidative stress such as intracellular reactive oxygen species generation. High doses of cardol (C₁₅:₃) were shown to induce leakage of K⁺ from S. aureus cells, which may be related to the decrease in reactive oxygen species. Antioxidants such as α-tocopherol and ascorbic acid restricted reactive oxygen species generation and restored cellular damage induced by the lipid. Cardol (C₁₅:₃) overdose probably disrupts the native membrane-associated function as it acts as a surfactant. The maximum antibacterial activity of cardols against S. aureus depends on their log P values (partition coefficient in octanol/water) and is related to their similarity to those of anacardic acids isolated from the same source.",
"title": ""
},
{
"docid": "b83e784d3ec4afcf8f6ed49dbe90e157",
"text": "In this paper, the impact of an increased number of layers on the performance of axial flux permanent magnet synchronous machines (AFPMSMs) is studied. The studied parameters are the inductance, terminal voltages, PM losses, iron losses, the mean value of torque, and the ripple torque. It is shown that increasing the number of layers reduces the fundamental winding factor. In consequence, the rated torque for the same current reduces. However, the reduction of harmonics associated with a higher number of layers reduces the ripple torque, PM losses, and iron losses. Besides studying the performance of the AFPMSMs for the rated conditions, the study is broadened for the field weakening (FW) region. During the FW region, the flux of the PMs is weakened by an injection of a reversible d-axis current. This keeps the terminal voltage of the machine fixed at the rated value. The inductance plays an important role in the FW study. A complete study for the FW shows that the two layer winding has the optimum performance compared to machines with an other number of winding layers.",
"title": ""
},
{
"docid": "2b8d90c11568bb8b172eca20a48fd712",
"text": "INTRODUCTION\nCancer incidence and mortality estimates for 25 cancers are presented for the 40 countries in the four United Nations-defined areas of Europe and for the European Union (EU-27) for 2012.\n\n\nMETHODS\nWe used statistical models to estimate national incidence and mortality rates in 2012 from recently-published data, predicting incidence and mortality rates for the year 2012 from recent trends, wherever possible. The estimated rates in 2012 were applied to the corresponding population estimates to obtain the estimated numbers of new cancer cases and deaths in Europe in 2012.\n\n\nRESULTS\nThere were an estimated 3.45 million new cases of cancer (excluding non-melanoma skin cancer) and 1.75 million deaths from cancer in Europe in 2012. The most common cancer sites were cancers of the female breast (464,000 cases), followed by colorectal (447,000), prostate (417,000) and lung (410,000). These four cancers represent half of the overall burden of cancer in Europe. The most common causes of death from cancer were cancers of the lung (353,000 deaths), colorectal (215,000), breast (131,000) and stomach (107,000). In the European Union, the estimated numbers of new cases of cancer were approximately 1.4 million in males and 1.2 million in females, and around 707,000 men and 555,000 women died from cancer in the same year.\n\n\nCONCLUSION\nThese up-to-date estimates of the cancer burden in Europe alongside the description of the varying distribution of common cancers at both the regional and country level provide a basis for establishing priorities to cancer control actions in Europe. The important role of cancer registries in disease surveillance and in planning and evaluating national cancer plans is becoming increasingly recognised, but needs to be further advocated. The estimates and software tools for further analysis (EUCAN 2012) are available online as part of the European Cancer Observatory (ECO) (http://eco.iarc.fr).",
"title": ""
},
{
"docid": "ab572c22a75656c19e50b311eb4985ec",
"text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.",
"title": ""
},
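Illustrative note: the passage above classifies radar signals using entropy features. The sketch below computes two such features in NumPy, Shannon entropy of a normalized distribution and a singular-spectrum entropy from an embedding (trajectory) matrix; the embedding window and the toy linear-FM signal are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Shannon entropy (bits) of a non-negative vector, normalized to sum to one."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p + eps)))

def singular_spectrum_entropy(signal, window=32):
    """Entropy of the normalized singular values of the signal's trajectory matrix."""
    traj = np.array([signal[i:i + window] for i in range(len(signal) - window + 1)])
    s = np.linalg.svd(traj, compute_uv=False)
    return shannon_entropy(s)

t = np.linspace(0, 1, 512)
lfm = np.cos(2 * np.pi * (50 * t + 100 * t ** 2))                 # toy linear-FM pulse
noisy = lfm + np.random.default_rng(0).normal(0, 0.5, t.size)     # the same pulse at low SNR
print(round(singular_spectrum_entropy(lfm), 3), round(singular_spectrum_entropy(noisy), 3))
```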
{
"docid": "5a4c9b6626d2d740246433972ad60f16",
"text": "We propose a new approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. Our findings can be summarized as follows:",
"title": ""
},
{
"docid": "36d6d14ab816a2fea62df31e370d7b1a",
"text": "Modern applications provide interfaces for scripting, but many users do not know how to write script commands. However, many users are familiar with the idea of entering keywords into a web search engine. Hence, if a user is familiar with the vocabulary of an application domain, we anticipate that they could write a set of keywords expressing a command in that domain. For instance, in the web browsing domain, a user might enter <B>click search button</B>. We call expressions of this form keyword commands, and we present a novel approach for translating keyword commands directly into executable code. Our prototype of this system in the web browsing domain translates <B>click search button</B> into the Chickenfoot code <B>click(findButton(\"search\"))</B>. This code is then executed in the context of a web browser to carry out the effect. We also present an implementation of this system in the domain of Microsoft Word. A user study revealed that subjects could use keyword commands to successfully complete 90% of the web browsing tasks in our study without instructions or training. Conversely, we would expect users to complete close to 0% of the tasks if they had to guess the underlying JavaScript commands with no instructions or training.",
"title": ""
},
{
"docid": "3cc07ea28720245f9c4983b0a4b1a66d",
"text": "A first line of attack in exploratory data analysis is data visualization, i.e., generating a 2-dimensional representation of data that makes clusters of similar points visually identifiable. Standard JohnsonLindenstrauss dimensionality reduction does not produce data visualizations. The t-SNE heuristic of van der Maaten and Hinton, which is based on non-convex optimization, has become the de facto standard for visualization in a wide range of applications. This work gives a formal framework for the problem of data visualization – finding a 2-dimensional embedding of clusterable data that correctly separates individual clusters to make them visually identifiable. We then give a rigorous analysis of the performance of t-SNE under a natural, deterministic condition on the “ground-truth” clusters (similar to conditions assumed in earlier analyses of clustering) in the underlying data. These are the first provable guarantees on t-SNE for constructing good data visualizations. We show that our deterministic condition is satisfied by considerably general probabilistic generative models for clusterable data such as mixtures of well-separated log-concave distributions. Finally, we give theoretical evidence that t-SNE provably succeeds in partially recovering cluster structure even when the above deterministic condition is not met.",
"title": ""
},
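Illustrative note: the passage above analyzes t-SNE on clusterable data. For orientation only, here is a standard scikit-learn usage sketch on synthetic well-separated clusters; the perplexity and other settings are illustrative, not the paper's analysis.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
centers = rng.normal(0, 10, (3, 50))                       # three cluster centres in 50-D
X = np.vstack([c + rng.normal(0, 1, (100, 50)) for c in centers])

embedding = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(X)
print(embedding.shape)   # (300, 2): one 2-D point per input, ready for a scatter plot
```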
{
"docid": "11851c0615ad483b6c4f9d0e4ccc30b2",
"text": "In the era of information technology, human tend to develop better and more convenient lifestyle. Nowadays, almost all the electronic devices are equipped with wireless technology. A wireless communication network has numerous advantages and becomes an important application. The enhancements provide by the wireless technology gives the ease of control to the users and not least the mobility of the devices within the network. It is use the Zigbee as the wireless modules. The Smart Ordering System introduced current and fast way to order food at a restaurant. The system uses a small keypad to place orders and the order made by inserting the code on the keypad menu. This code comes along with the menu. The signal will be delivered to the order by the Zigbee technology, and it will automatically be displayed on the screen in the kitchen. Keywords— smart, ordering, S.O.S, Zigbee.",
"title": ""
},
{
"docid": "2399755bed6b1fc5fac495d54886acc0",
"text": "Lately fire outbreak is common issue happening in Malays and the damage caused by these type of incidents is tremendous toward nature and human interest. Due to this the need for application for fire detection has increases in recent years. In this paper we proposed a fire detection algorithm based on image processing techniques which is compatible in surveillance devices like CCTV, wireless camera to UAVs. The algorithm uses RGB colour model to detect the colour of the fire which is mainly comprehended by the intensity of the component R which is red colour. The growth of fire is detected using sobel edge detection. Finally a colour based segmentation technique was applied based on the results from the first technique and second technique to identify the region of interest (ROI) of the fire. After analysing 50 different fire scenarios images, the final accuracy obtained from testing the algorithm was 93.61% and the efficiency was 80.64%.",
"title": ""
},
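Illustrative note: the passage above combines an RGB colour rule with Sobel edge detection. The sketch below shows that general pipeline on a synthetic image using OpenCV; the colour thresholds and the red-dominance rule are illustrative guesses, not the paper's calibrated values.

```python
import numpy as np
import cv2

# Synthetic RGB image: a bright reddish-orange patch standing in for a flame.
img = np.zeros((120, 120, 3), dtype=np.uint8)
img[40:80, 40:80] = (220, 120, 30)                     # (R, G, B) of the patch

r, g, b = (img[..., i].astype(int) for i in range(3))
# Simple fire-colour rule: strong red channel that also dominates green, which dominates blue.
fire_mask = ((r > 180) & (r > g) & (g > b)).astype(np.uint8) * 255

# Sobel gradients on the mask outline the boundary of the detected region.
gx = cv2.Sobel(fire_mask, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(fire_mask, cv2.CV_64F, 0, 1, ksize=3)
edges = np.hypot(gx, gy) > 0

print("fire pixels:", int(fire_mask.sum() // 255), "edge pixels:", int(edges.sum()))
```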
{
"docid": "45ec93ccf4b2f6a6b579a4537ca73e9c",
"text": "Concurrent collections provide thread-safe, highly-scalable operations, and are widely used in practice. However, programmers can misuse these concurrent collections when composing two operations where a check on the collection (such as non-emptiness) precedes an action (such as removing an entry). Unless the whole composition is atomic, the program contains an atomicity violation bug. In this paper we present the first empirical study of CHECK-THEN-ACT idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused CHECK-THEN-ACT idioms and show the correct usage. We quantitatively and qualitatively analyze 28 widely-used open source Java projects that use Java concurrency collections - comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are the most error-prone, and the evolution of the programs with respect to misused idioms. We implemented a tool, CTADetector, to detect and correct misused CHECK-THEN-ACT idioms. Using CTADetector we found 282 buggy instances. We reported 155 to the developers, who examined 90 of them. The developers confirmed 60 as new bugs and accepted our patch. This shows that CHECK-THEN-ACT idioms are commonly misused in practice, and correcting them is important.",
"title": ""
}
] |
scidocsrr
|
dff9d0d7f03f37aa0d5db61a741a0580
|
Survey on Intrusion Detection System using Machine Learning Techniques
|
[
{
"docid": "b50efa7b82d929c1b8767e23e8359a06",
"text": "Intrusion detection (ID) is an important component of infrastructure protection mechanisms. Intrusion detection systems (IDSs) need to be accurate, adaptive, and extensible. Given these requirements and the complexities of today's network environments, we need a more systematic and automated IDS development process rather that the pure knowledge encoding and engineering approaches. This article describes a novel framework, MADAM ID, for Mining Audit Data for Automated Models for Instrusion Detection. This framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns. It then applies machine learning algorithms to the audit records taht are processed according to the feature definitions to generate intrusion detection rules. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best performing of all the participating systems. We also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing IDSs.",
"title": ""
},
{
"docid": "0f853c6ccf6ce4cf025050135662f725",
"text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.",
"title": ""
}
] |
[
{
"docid": "26140dbe32672dc138c46e7fd6f39b1a",
"text": "The state of the art in probabilistic demand forecasting [40] minimizes Quantile Loss to predict the future demand quantiles for different horizons. However, since quantiles aren’t additive, in order to predict the total demand for any wider future interval all required intervals are usually appended to the target vector during model training. The separate optimization of these overlapping intervals can lead to inconsistent forecasts, i.e. forecasts which imply an invalid joint distribution between different horizons. As a result, inter-temporal decision making algorithms that depend on the joint or step-wise conditional distribution of future demand cannot utilize these forecasts. In this work, we address the problem by using sample paths to predict future demand quantiles in a consistent manner and propose several novel methodologies to solve this problem. Our work covers the use of covariance shrinkage methods, autoregressive models, generative adversarial networks and also touches on the use of variational autoencoders and Bayesian Dropout.",
"title": ""
},
{
"docid": "26d0e97bbb14bc52b8dbb3c03522ac38",
"text": "Intraocular injections of rhodamine and horseradish peroxidase in chameleon, labelled retrogradely neurons in the ventromedial tegmental region of the mesencephalon and the ventrolateral thalamus of the diencephalon. In both areas, staining was observed contralaterally to the injected eye. Labelling was occasionally observed in some rhombencephalic motor nuclei. These results indicate that chameleons, unlike other reptilian species, have two retinopetal nuclei.",
"title": ""
},
{
"docid": "15e2fc773fb558e55d617f4f9ac22f69",
"text": "Recent advances in ASR and spoken language processing have led to improved systems for automated assessment for spoken language. However, it is still challenging for automated scoring systems to achieve high performance in terms of the agreement with human experts when applied to non-native children’s spontaneous speech. The subpar performance is mainly caused by the relatively low recognition rate on non-native children’s speech. In this paper, we investigate different neural network architectures for improving non-native children’s speech recognition and the impact of the features extracted from the corresponding ASR output on the automated assessment of speaking proficiency. Experimental results show that bidirectional LSTM-RNN can outperform feed-forward DNN in ASR, with an overall relative WER reduction of 13.4%. The improved speech recognition can then boost the language proficiency assessment performance. Correlations between the rounded automated scores and expert scores range from 0.66 to 0.70 for the three speaking tasks studied, similar to the humanhuman agreement levels for these tasks.",
"title": ""
},
{
"docid": "aa223de93696eec79feb627f899f8e8d",
"text": "The standard life events methodology for the prediction of psychological symptoms was compared with one focusing on relatively minor events, namely, the hassles and uplifts of everyday life. Hassles and Uplifts Scales were constructed and administered once a month for 10 consecutive months to a community sample of middle-aged adults. It was found that the Hassles Scale was a better predictor of concurrent and subsequent psychological symptoms than were the life events scores, and that the scale shared most of the variance in symptoms accounted for by life events. When the effects of life events scores were removed, hassles and symptoms remained significantly correlated. Uplifts were positively related to symptoms for women but not for men. Hassles and uplifts were also shown to be related, although only modestly so, to positive and negative affect, thus providing discriminate validation for hassles and uplifts in comparison to measures of emotion. It was concluded that the assessment of daily hassles and uplifts may be a better approach to the prediction of adaptational outcomes than the usual life events approach.",
"title": ""
},
{
"docid": "d704917077795fbe16e52ea2385e19ef",
"text": "The objectives of this review were to summarize the evidence from randomized controlled trials (RCTs) on the effects of animal-assisted therapy (AAT). Studies were eligible if they were RCTs. Studies included one treatment group in which AAT was applied. We searched the following databases from 1990 up to October 31, 2012: MEDLINE via PubMed, CINAHL, Web of Science, Ichushi Web, GHL, WPRIM, and PsycINFO. We also searched all Cochrane Database up to October 31, 2012. Eleven RCTs were identified, and seven studies were about \"Mental and behavioral disorders\". Types of animal intervention were dog, cat, dolphin, bird, cow, rabbit, ferret, and guinea pig. The RCTs conducted have been of relatively low quality. We could not perform meta-analysis because of heterogeneity. In a study environment limited to the people who like animals, AAT may be an effective treatment for mental and behavioral disorders such as depression, schizophrenia, and alcohol/drug addictions, and is based on a holistic approach through interaction with animals in nature. To most effectively assess the potential benefits for AAT, it will be important for further research to utilize and describe (1) RCT methodology when appropriate, (2) reasons for non-participation, (3) intervention dose, (4) adverse effects and withdrawals, and (5) cost.",
"title": ""
},
{
"docid": "37f4da100d31ad1da1ba21168c95d7e9",
"text": "An AC chopper controller with symmetrical Pulse-Width Modulation (PWM) is proposed to achieve better performance for a single-phase induction motor compared to phase-angle control line-commutated voltage controllers and integral-cycle control of thyristors. Forced commutated device IGBT controlled by a microcontroller was used in the AC chopper which has the advantages of simplicity, ability to control large amounts of power and low waveform distortion. In this paper the simulation and hardware models of a simple single phase IGBT An AC controller has been developed which showed good results.",
"title": ""
},
{
"docid": "554a3f5f19503a333d3788cf46ffcef2",
"text": "Hospital overcrowding has been a problem in Thai public healthcare system. The main cause of this problem is the limited available resources, including a limited number of doctors, nurses, and limited capacity and availability of medical devices. There have been attempts to alleviate the problem through various strategies. In this paper, a low-cost system was developed and tested in a public hospital with limited budget. The system utilized QR code and smartphone application to capture as-is hospital processes and the time spent on individual activities. With the available activity data, two algorithms were developed to identify two quantities that are valuable to conduct process improvement: the most congested time and bottleneck activities. The system was implemented in a public hospital and results were presented.",
"title": ""
},
{
"docid": "9eae7dded031b37956ceea6e68f1076c",
"text": "One of the core principles of the SAP HANA database system is the comprehensive support of distributed query facility. Supporting scale-out scenarios was one of the major design principles of the system from the very beginning. Within this paper, we first give an overview of the overall functionality with respect to data allocation, metadata caching and query routing. We then dive into some level of detail for specific topics and explain features and methods not common in traditional disk-based database systems. In summary, the paper provides a comprehensive overview of distributed query processing in SAP HANA database to achieve scalability to handle large databases and heterogeneous types of workloads.",
"title": ""
},
{
"docid": "54ed287c473d796c291afda23848338e",
"text": "Shared memory and message passing are two opposing communication models for parallel multicomputer architectures. Comparing such architectures has been difficult, because applications must be hand-crafted for each architecture, often resulting in radically different sources for comparison. While it is clear that shared memory machines are currently easier to program, in the future, programs will be written in high-level languages and compiled to the specific parallel target, thus eliminating this difference.In this paper, we evaluate several parallel architecture alternatives --- message passing, NUMA, and cachecoherent shared memory --- for a collection of scientific benchmarks written in C*, a data-parallel language. Using a single suite of C* source programs, we compile each benchmark and simulate the interconnect for the alternative models. Our objective is to examine underlying, technology-independent costs inherent in each alternative. Our results show the relative work required to execute these data parallel programs on the different architectures, and point out where some models have inherent advantages for particular data-parallel program styles.",
"title": ""
},
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "2aade03834c6db2ecc2912996fd97501",
"text": "User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers.",
"title": ""
},
{
"docid": "2aefddf5e19601c8338f852811cebdee",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
},
{
"docid": "c3f2726c10ebad60d715609f15b67b43",
"text": "Sleep-waking cycles are fundamental in human circadian rhythms and their disruption can have consequences for behaviour and performance. Such disturbances occur due to domestic or occupational schedules that do not permit normal sleep quotas, rapid travel across multiple meridians and extreme athletic and recreational endeavours where sleep is restricted or totally deprived. There are methodological issues in quantifying the physiological and performance consequences of alterations in the sleep-wake cycle if the effects on circadian rhythms are to be separated from the fatigue process. Individual requirements for sleep show large variations but chronic reduction in sleep can lead to immuno-suppression. There are still unanswered questions about the sleep needs of athletes, the role of 'power naps' and the potential for exercise in improving the quality of sleep.",
"title": ""
},
{
"docid": "7ab5f56b615848ba5d8dc2f149fd8bf2",
"text": "At present, most outdoor video-surveillance, driver-assistance and optical remote sensing systems have been designed to work under good visibility and weather conditions. Poor visibility often occurs in foggy or hazy weather conditions and can strongly influence the accuracy or even the general functionality of such vision systems. Consequently, it is important to import actual weather-condition data to the appropriate processing mode. Recently, significant progress has been made in haze removal from a single image [1,2]. Based on the hazy weather classification, specialized approaches, such as a dehazing process, can be employed to improve recognition. Figure 1 shows a sample processing flow of our dehazing program.",
"title": ""
},
{
"docid": "7a356a485b46c6fc712a0174947e142e",
"text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related forearm, wrist, and hand injuries and illnesses was conducted as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review provides a comprehensive overview and analysis of 36 studies that addressed many of the interventions commonly used in hand rehabilitation. Findings reveal that the use of occupation-based activities has reasonable yet limited evidence to support its effectiveness. This review supports the premise that many client factors can be positively affected through the use of several commonly used occupational therapy-related modalities and methods. The implications for occupational therapy practice, research, and education and limitations of reviewed studies are also discussed.",
"title": ""
},
{
"docid": "7ac42bef7a9e0c8bd33f359a157f24e0",
"text": "Monte Carlo tree search (MCTS) is a heuristic search method that is used to efficiently search decision trees. The method is particularly efficient in searching trees with a high branching factor. MCTS has a number of advantages over traditional tree search algorithms like simplicity, adaptability etc. This paper is a study of existing literature on different types of MCTS, specifically on using Genetic Algorithms with MCTS. It studies the advantages and disadvantages of this approach, and applies an enhanced variant to Gomoku, a board game with a high branching factor.",
"title": ""
},
{
"docid": "fead6ca9612b29697f73cb5e57c0a1cc",
"text": "This research examines the effect of online social capital and Internet use on the normally negative effects of technology addiction, especially for individuals prone to self-concealment. Self-concealment is a personality trait that describes individuals who are more likely to withhold personal and private information, inhibiting catharsis and wellbeing. Addiction, in any context, is also typically associated with negative outcomes. However, we investigate the hypothesis that communication technology addiction may positively affect wellbeing for self-concealing individuals when online interaction is positive, builds relationships, or fosters a sense of community. Within these parameters, increased communication through mediated channels (and even addiction) may reverse the otherwise negative effects of self-concealment on wellbeing. Overall, the proposed model offers qualified support for the continued analysis of mediated communication as a potential source for improving the wellbeing for particular individuals. This study is important because we know that healthy communication in relationships, including disclosure, is important to wellbeing. This study recognizes that not all people are comfortable communicating in face-to-face settings. Our findings offer evidence that the presence of computers in human behaviors (e.g., mediated channels of communication and NCTs) enables some individuals to communicate and fos ter beneficial interpersonal relationships, and improve their wellbeing.",
"title": ""
},
{
"docid": "4c61d388acfde29dbf049842ef54a800",
"text": "Image matting plays an important role in image and video editing. However, the formulation of image matting is inherently ill-posed. Traditional methods usually employ interaction to deal with the image matting problem with trimaps and strokes, and cannot run on the mobile phone in real-time. In this paper, we propose a real-time automatic deep matting approach for mobile devices. By leveraging the densely connected blocks and the dilated convolution, a light full convolutional network is designed to predict a coarse binary mask for portrait image. And a feathering block, which is edge-preserving and matting adaptive, is further developed to learn the guided filter and transform the binary mask into alpha matte. Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices, which does not need any interaction and can realize real-time matting with 15 fps. The experiments show that the proposed approach achieves comparable results with the state-of-the-art matting solvers.",
"title": ""
},
{
"docid": "fde2aefec80624ff4bc21d055ffbe27b",
"text": "Object detector with region proposal networks such as Fast/Faster R-CNN [1, 2] have shown the state-of-the art performance on several benchmarks. However, they have limited success for detecting small objects. We argue the limitation is related to insufficient performance of Fast R-CNN block in Faster R-CNN. In this paper, we propose a refining block for Fast R-CNN. We further merge the block and Faster R-CNN into a single network (RF-RCNN). The RF-RCNN was applied on plate and human detection in RoadView image that consists of high resolution street images (over 30M pixels). As a result, the RF-RCNN showed great improvement over the Faster-RCNN.",
"title": ""
},
{
"docid": "a9fc5418c0b5789b02dd6638a1b61b5d",
"text": "As the homeostatis characteristics of nerve systems show, artificial neural networks are considered to be robust to variation of circuit components and interconnection faults. However, the tolerance of neural networks depends on many factors, such as the fault model, the network size, and the training method. In this study, we analyze the fault tolerance of fixed-point feed-forward deep neural networks for the implementation in CMOS digital VLSI. The circuit errors caused by the interconnection as well as the processing units are considered. In addition to the conventional and dropout training methods, we develop a new technique that randomly disconnects weights during the training to increase the error resiliency. Feed-forward deep neural networks for phoneme recognition are employed for the experiments.",
"title": ""
}
] |
scidocsrr
|
2c1506c5719c699dfb2d6720e7f6fae3
|
Multimodal emotion recognition from expressive faces, body gestures and speech
|
[
{
"docid": "113cf957b47a8b8e3bbd031aa9a28ff2",
"text": "We present an approach for the recognition of acted emotional states based on the analysis of body movement and gesture expressivity. According to research showing that distinct emotions are often associated with different qualities of body movement, we use nonpropositional movement qualities (e.g. amplitude, speed and fluidity of movement) to infer emotions, rather than trying to recognise different gesture shapes expressing specific emotions. We propose a method for the analysis of emotional behaviour based on both direct classification of time series and a model that provides indicators describing the dynamics of expressive motion cues. Finally we show and interpret the recognition rates for both proposals using different classification algorithms.",
"title": ""
},
{
"docid": "dadcecd178721cf1ea2b6bf51bc9d246",
"text": "8 Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect 9 of substantial applications, notably in human–computer interaction. Progress in the area relies heavily on the devel10 opment of appropriate databases. This paper addresses four main issues that need to be considered in developing 11 databases of emotional speech: scope, naturalness, context and descriptors. The state of the art is reviewed. A good deal 12 has been done to address the key issues, but there is still a long way to go. The paper shows how the challenge of 13 developing appropriate databases is being addressed in three major recent projects––the Reading–Leeds project, the 14 Belfast project and the CREST–ESP project. From these and other studies the paper draws together the tools and 15 methods that have been developed, addresses the problems that arise and indicates the future directions for the de16 velopment of emotional speech databases. 2002 Published by Elsevier Science B.V.",
"title": ""
}
] |
[
{
"docid": "26d8f073cfe1e907183022564e6bde80",
"text": "With advances in computer hardware, 3D game worlds are becoming larger and more complex. Consequently the development of game worlds becomes increasingly time and resource intensive. This paper presents a framework for generation of entire virtual worlds using procedural generation. The approach is demonstrated with the example of a virtual city.",
"title": ""
},
{
"docid": "04cf981a76c74b198ebe4703d0039e36",
"text": "The acquisition of high-fidelity, long-term neural recordings in vivo is critically important to advance neuroscience and brain⁻machine interfaces. For decades, rigid materials such as metal microwires and micromachined silicon shanks were used as invasive electrophysiological interfaces to neurons, providing either single or multiple electrode recording sites. Extensive research has revealed that such rigid interfaces suffer from gradual recording quality degradation, in part stemming from tissue damage and the ensuing immune response arising from mechanical mismatch between the probe and brain. The development of \"soft\" neural probes constructed from polymer shanks has been enabled by advancements in microfabrication; this alternative has the potential to mitigate mismatch-related side effects and thus improve the quality of recordings. This review examines soft neural probe materials and their associated microfabrication techniques, the resulting soft neural probes, and their implementation including custom implantation and electrical packaging strategies. The use of soft materials necessitates careful consideration of surgical placement, often requiring the use of additional surgical shuttles or biodegradable coatings that impart temporary stiffness. Investigation of surgical implantation mechanics and histological evidence to support the use of soft probes will be presented. The review concludes with a critical discussion of the remaining technical challenges and future outlook.",
"title": ""
},
{
"docid": "0ce46853852a20e5e0ab9aacd3ec20c1",
"text": "In immunocompromised subjects, Epstein-Barr virus (EBV) infection of terminally differentiated oral keratinocytes may result in subclinical productive infection of the virus in the stratum spinosum and in the stratum granulosum with shedding of infectious virions into the oral fluid in the desquamating cells. In a minority of cases this productive infection with dysregulation of the cell cycle of terminally differentiated epithelial cells may manifest as oral hairy leukoplakia. This is a white, hyperkeratotic, benign lesion of low morbidity, affecting primarily the lateral border of the tongue. Factors that determine whether productive EBV replication within the oral epithelium will cause oral hairy leukoplakia include the fitness of local immune responses, the profile of EBV gene expression, and local environmental factors.",
"title": ""
},
{
"docid": "51c4dd282e85db5741b65ae4386f6c48",
"text": "In this paper, we present an end-to-end approach to simultaneously learn spatio-temporal features and corresponding similarity metric for video-based person re-identification. Given the video sequence of a person, features from each frame that are extracted from all levels of a deep convolutional network can preserve a higher spatial resolution from which we can model finer motion patterns. These lowlevel visual percepts are leveraged into a variant of recurrent model to characterize the temporal variation between time-steps. Features from all time-steps are then summarized using temporal pooling to produce an overall feature representation for the complete sequence. The deep convolutional network, recurrent layer, and the temporal pooling are jointly trained to extract comparable hidden-unit representations from input pair of time series to compute their corresponding similarity value. The proposed framework combines time series modeling and metric learning to jointly learn relevant features and a good similarity measure between time sequences of person. Experiments demonstrate that our approach achieves the state-of-the-art performance for video-based person re-identification on iLIDS-VID and PRID 2011, the two primary public datasets for this purpose.",
"title": ""
},
{
"docid": "c2f338aef785f0d6fee503bf0501a558",
"text": "Recognizing 3-D objects in cluttered scenes is a challenging task. Common approaches find potential feature correspondences between a scene and candidate models by matching sampled local shape descriptors and select a few correspondences with the highest descriptor similarity to identify models that appear in the scene. However, real scans contain various nuisances, such as noise, occlusion, and featureless object regions. This makes selected correspondences have a certain portion of false positives, requiring adopting the time-consuming model verification many times to ensure accurate recognition. This paper proposes a 3-D object recognition approach with three key components. First, we construct a Signature of Geometric Centroids descriptor that is descriptive and robust, and apply it to find high-quality potential feature correspondences. Second, we measure geometric compatibility between a pair of potential correspondences based on isometry and three angle-preserving components. Third, we perform effective correspondence selection by using both descriptor similarity and compatibility with an auxiliary set of “less” potential correspondences. Experiments on publicly available data sets demonstrate the robustness and/or efficiency of the descriptor, selection approach, and recognition framework. Comparisons with the state-of-the-arts validate the superiority of our recognition approach, especially under challenging scenarios.",
"title": ""
},
{
"docid": "3e9f98a1aa56e626e47a93b7973f999a",
"text": "This paper presents a sociocultural knowledge ontology (OntoSOC) modeling approach. OntoSOC modeling approach is based on Engeström‟s Human Activity Theory (HAT). That Theory allowed us to identify fundamental concepts and relationships between them. The top-down precess has been used to define differents sub-concepts. The modeled vocabulary permits us to organise data, to facilitate information retrieval by introducing a semantic layer in social web platform architecture, we project to implement. This platform can be considered as a « collective memory » and Participative and Distributed Information System (PDIS) which will allow Cameroonian communities to share an co-construct knowledge on permanent organized activities.",
"title": ""
},
{
"docid": "77d0845463db0f4e61864b37ec1259b7",
"text": "A new form of the variational autoencoder (VAE) is proposed, based on the symmetric KullbackLeibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarially learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validate the utility of the approach.",
"title": ""
},
{
"docid": "d1f8ee3d6dbc7ddc76b84ad2b0bfdd16",
"text": "Cognitive radio technology addresses the limited availability of wireless spectrum and inefficiency of spectrum usage. Cognitive Radio (CR) devices sense their environment, detect spatially unused spectrum and opportunistically access available spectrum without creating harmful interference to the incumbents. In cellular systems with licensed spectrum, the efficient utilization of the spectrum as well as the protection of primary users is equally important, which imposes opportunities and challenges for the application of CR. This paper introduces an experimental framework for 5G cognitive radio access in current 4G LTE cellular systems. It can be used to study CR concepts in different scenarios, such as 4G to 5G system migrations, machine-type communications, device-to-device communications, and load balancing. Using our framework, selected measurement results are presented that compare Long Term Evolution (LTE) Orthogonal Frequency Division Multiplex (OFDM) with a candidate 5G waveform called Generalized Frequency Division Multiplexing (GFDM) and quantify the benefits of GFDM in CR scenarios.",
"title": ""
},
{
"docid": "1d935fd69bcc3aca58f03e5d34892076",
"text": "• Healthy behaviour interventions should be initiated in people newly diagnosed with type 2 diabetes. • In people with type 2 diabetes with A1C <1.5% above the person’s individualized target, antihyperglycemic pharmacotherapy should be added if glycemic targets are not achieved within 3 months of initiating healthy behaviour interventions. • In people with type 2 diabetes with A1C ≥1.5% above target, antihyperglycemic agents should be initiated concomitantly with healthy behaviour interventions, and consideration could be given to initiating combination therapy with 2 agents. • Insulin should be initiated immediately in individuals with metabolic decompensation and/or symptomatic hyperglycemia. • In the absence of metabolic decompensation, metformin should be the initial agent of choice in people with newly diagnosed type 2 diabetes, unless contraindicated. • Dose adjustments and/or additional agents should be instituted to achieve target A1C within 3 to 6 months. Choice of second-line antihyperglycemic agents should be made based on individual patient characteristics, patient preferences, any contraindications to the drug, glucose-lowering efficacy, risk of hypoglycemia, affordability/access, effect on body weight and other factors. • In people with clinical cardiovascular (CV) disease in whom A1C targets are not achieved with existing pharmacotherapy, an antihyperglycemic agent with demonstrated CV outcome benefit should be added to antihyperglycemic therapy to reduce CV risk. • In people without clinical CV disease in whom A1C target is not achieved with current therapy, if affordability and access are not barriers, people with type 2 diabetes and their providers who are concerned about hypoglycemia and weight gain may prefer an incretin agent (DPP-4 inhibitor or GLP-1 receptor agonist) and/or an SGLT2 inhibitor to other agents as they improve glycemic control with a low risk of hypoglycemia and weight gain. • In people receiving an antihyperglycemic regimen containing insulin, in whom glycemic targets are not achieved, the addition of a GLP-1 receptor agonist, DPP-4 inhibitor or SGLT2 inhibitor may be considered before adding or intensifying prandial insulin therapy to improve glycemic control with less weight gain and comparable or lower hypoglycemia risk.",
"title": ""
},
{
"docid": "409f3b2768a8adf488eaa6486d1025a2",
"text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.",
"title": ""
},
{
"docid": "fc2a7c789f742dfed24599997845b604",
"text": "An axially symmetric power combiner, which utilizes a tapered conical impedance matching network to transform ten 50-Omega inputs to a central coaxial line over the X-band, is presented. The use of a conical line allows standard transverse electromagnetic design theory to be used, including tapered impedance matching networks. This, in turn, alleviates the problem of very low impedance levels at the common port of conical line combiners, which normally requires very high-precision manufacturing and assembly. The tapered conical line is joined to a tapered coaxial line for a completely smooth transmission line structure. Very few full-wave analyses are needed in the design process since circuit models are optimized to achieve a wide operating bandwidth. A ten-way prototype was developed at X-band with a 47% bandwidth, very low losses, and excellent agreement between simulated and measured results.",
"title": ""
},
{
"docid": "6006d2a032b60c93e525a8a28828cc7e",
"text": "Recent advances in genome engineering indicate that innovative crops developed by targeted genome modification (TGM) using site-specific nucleases (SSNs) have the potential to avoid the regulatory issues raised by genetically modified organisms. These powerful SSNs tools, comprising zinc-finger nucleases, transcription activator-like effector nucleases, and clustered regulatory interspaced short palindromic repeats/CRISPR-associated systems, enable precise genome engineering by introducing DNA double-strand breaks that subsequently trigger DNA repair pathways involving either non-homologous end-joining or homologous recombination. Here, we review developments in genome-editing tools, summarize their applications in crop organisms, and discuss future prospects. We also highlight the ability of these tools to create non-transgenic TGM plants for next-generation crop breeding.",
"title": ""
},
{
"docid": "98269ed4d72abecb6112c35e831fc727",
"text": "The goal of this article is to place the role that social media plays in collective action within a more general theoretical structure, using the events of the Arab Spring as a case study. The article presents two broad theoretical principles. The first is that one cannot understand the role of social media in collective action without first taking into account the political environment in which they operate. The second principle states that a significant increase in the use of the new media is much more likely to follow a significant amount of protest activity than to precede it. The study examines these two principles using political, media, and protest data from twenty Arab countries and the Palestinian Authority. The findings provide strong support for the validity of the claims.",
"title": ""
},
{
"docid": "2348652010d1dec37a563e3eed15c090",
"text": "This study firstly examines the current literature concerning ERP implementation problems during implementation phases and causes of ERP implementation failure. A multiple case study research methodology was adopted to understand “why” and “how” these ERP systems could not be implemented successfully. Different stakeholders (including top management, project manager, project team members and ERP consultants) from these case studies were interviewed, and ERP implementation documents were reviewed for triangulation. An ERP life cycle framework was applied to study the ERP implementation process and the associated problems in each phase of ERP implementation. Fourteen critical failure factors were identified and analyzed, and three common critical failure factors (poor consultant effectiveness, project management effectiveness and poo555îr quality of business process re-engineering) were examined and discussed. Future research on ERP implementation and critical failure factors is discussed. It is hoped that this research will help to bridge the current literature gap and provide practical advice for both academics and practitioners.",
"title": ""
},
{
"docid": "1ef814163a5c91155a2d7e1b4b19f4d7",
"text": "In this article, a frequency reconfigurable fractal patch antenna using pin diodes is proposed and studied. The antenna structure has been designed on FR-4 low-cost substrate material of relative permittivity εr = 4.4, with a compact volume of 30×30×0.8 mm3. The bandwidth and resonance frequency of the antenna design will be increased when we exploit the fractal iteration on the patch antenna. This antenna covers some service bands such as: WiMAX, m-WiMAX, WLAN, C-band and X band applications. The simulation of the proposed antenna is carried out using CST microwave studio. The radiation pattern and S parameter are further presented and discussed.",
"title": ""
},
{
"docid": "2c79e4e8563b3724014a645340b869ce",
"text": "Development of linguistic technologies and penetration of social media provide powerful possibilities to investigate users' moods and psychological states of people. In this paper we discussed possibility to improve accuracy of stock market indicators predictions by using data about psychological states of Twitter users. For analysis of psychological states we used lexicon-based approach, which allow us to evaluate presence of eight basic emotions in more than 755 million tweets. The application of Support Vectors Machine and Neural Networks algorithms to predict DJIA and S&P500 indicators are discussed.",
"title": ""
},
{
"docid": "fabcb243bff004279cfb5d522a7bed4b",
"text": "Vein pattern is the network of blood vessels beneath person’s skin. Vein patterns are sufficiently different across individuals, and they are stable unaffected by ageing and no significant changed in adults by observing. It is believed that the patterns of blood vein are unique to every individual, even among twins. Finger vein authentication technology has several important features that set it apart from other forms of biometrics as a highly secure and convenient means of personal authentication. This paper presents a finger-vein image matching method based on minutiae extraction and curve analysis. This proposed system is implemented in MATLAB. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.",
"title": ""
},
{
"docid": "6deab7156f09594f497806d6f6ad2a27",
"text": "The development of the Multidimensional Health Locus of Control scales is described. Scales have been developed to tap beliefs that the source of reinforcements for health-related behaviors is primarily internal, a matter of chance, or under the control of powerful others. These scales are based on earlier work with a general Health Locus of Control Scale, which, in turn, was developed from Rotter's social learning theory. Equivalent forms of the scales are presented along with initial internal consistency and validity data. Possible means of utilizing these scales are provided.",
"title": ""
},
{
"docid": "027e10898845955beb5c81518f243555",
"text": "As the field of Natural Language Processing has developed, research has progressed on ambitious semantic tasks like Recognizing Textual Entailment (RTE). Systems that approach these tasks may perform sophisticated inference between sentences, but often depend heavily on lexical resources like WordNet to provide critical information about relationships and entailments between lexical items. However, lexical resources are expensive to create and maintain, and are never fully comprehensive. Distributional Semantics has long provided a method to automatically induce meaning representations for lexical items from large corpora with little or no annotation efforts. The resulting representations are excellent as proxies of semantic similarity: words will have similar representations if their semantic meanings are similar. Yet, knowing two words are similar does not tell us their relationship or whether one entails the other. We present several models for identifying specific relationships and entailments from distributional representations of lexical semantics. Broadly, this work falls into two distinct but related areas: the first predicts specific ontology relations and entailment decisions between lexical items devoid of context; and the second predicts specific lexical paraphrases in complete sentences. We provide insight and analysis of how and why our models are able to generalize to novel lexical items and improve upon prior work. We propose several shortand long-term extensions to our work. In the short term, we propose applying one of our hypernymy-detection models to other relationships and evaluating our more recent work in an end-to-end RTE system. In the long-term, we propose adding consistency constraints to our lexical relationship prediction, better integration of context into our lexical paraphrase model, and new distributional models for improving word representations.",
"title": ""
},
{
"docid": "bffbc725b52468b41c53b156f6eadedb",
"text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.",
"title": ""
}
] |
scidocsrr
|
100cb9db89c6d73c190af415c731c5ef
|
Stratification, Imaging, and Management of Acute Massive and Submassive Pulmonary Embolism.
|
[
{
"docid": "b32286014bb7105e62fba85a9aab9019",
"text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.",
"title": ""
}
] |
[
{
"docid": "8093101949a96d27082712ce086bf11f",
"text": "Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7–7.6% relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.",
"title": ""
},
{
"docid": "443df7fa37723021c2079fd524f199ab",
"text": "OBJECTIVE\nCircumcision, performed for religious or medical reasons is the procedure of surgical excision of the skin covering the glans penis, preputium in a certain shape and dimension so as to expose the tip of the glans penis. Short- and long- term complication rates of up to 50% have been reported, varying due to the recording system of different countries in which the procedure has been accepted as a widely performed simple surgical procedure. In this study, treatment procedures in patients presented to our clinic with complications after circumcision are described and methods to decrease the rate of the complications are reviewed.\n\n\nMATERIAL AND METODS\nCases that presented to our clinic between 2010 and 2013 with early complications of circumcision were retrospectively reviewed. Cases with acceptedly major complications as excess skin excision, skin necrosis and total amputation of the glans were included in the study, while cases with minor complications such as bleeding, hematoma and infection were excluded from the study.\n\n\nRESULTS\nRepair with full- thickness skin grafts was performed in patients with excess skin excision. In cases with skin necrosis, following the debridement of the necrotic skin, primary repair or repair with full- thickness graft was performed in cases where full- thickness skin defects developed and other cases with partial skin loss were left to secondary healing. Repair with an inguinal flap was performed in the case with glans amputation.\n\n\nCONCLUSION\nCircumcisions performed by untrained individuals are to be blamed for the complications of circumcision reported in this country. The rate of complications increases during the \"circumcision feasts\" where multiple circumcisions were performed. This also predisposes to transmission of various diseases, primarily hepatitis B/C and AIDS. Circumcision is a surgical procedure that should be performed by specialists under appropriate sterile circumstances in which the rate of complications would be decreased. The child may be exposed to recurrent psychosocial and surgical trauma when it is performed by incompetent individuals.",
"title": ""
},
{
"docid": "88163c30fdafafcec1b69eaa995e3a99",
"text": "Managing privacy in the IoT presents a significant challenge. We make the case that information obtained by auditing the flows of data can assist in demonstrating that the systems handling personal data satisfy regulatory and user requirements. Thus, components handling personal data should be audited to demonstrate that their actions comply with all such policies and requirements. A valuable side-effect of this approach is that such an auditing process will highlight areas where technical enforcement has been incompletely or incorrectly specified. There is a clear role for technical assistance in aligning privacy policy enforcement mechanisms with data protection regulations. The first step necessary in producing technology to accomplish this alignment is to gather evidence of data flows. We describe our work producing, representing and querying audit data and discuss outstanding challenges.",
"title": ""
},
{
"docid": "eced9f448727b7461e253f48d9cf8505",
"text": "Near-range videos contain objects that are close to the camera. These videos often contain discontinuous depth variation (DDV), which is the main challenge to the existing video stabilization methods. Traditionally, 2D methods are robust to various camera motions (e.g., quick rotation and zooming) under scenes with continuous depth variation (CDV). However, in the presence of DDV, they often generate wobbled results due to the limited ability of their 2D motion models. Alternatively, 3D methods are more robust in handling near-range videos. We show that, by compensating rotational motions and ignoring translational motions, near-range videos can be successfully stabilized by 3D methods without sacrificing the stability too much. However, it is time-consuming to reconstruct the 3D structures for the entire video and sometimes even impossible due to rapid camera motions. In this paper, we combine the advantages of 2D and 3D methods, yielding a hybrid approach that is robust to various camera motions and can handle the near-range scenarios well. To this end, we automatically partition the input video into CDV and DDV segments. Then, the 2D and 3D approaches are adopted for CDV and DDV clips, respectively. Finally, these segments are stitched seamlessly via a constrained optimization. We validate our method on a large variety of consumer videos.",
"title": ""
},
{
"docid": "902f4f012c6e0f86228bea2f35cc691c",
"text": "Research on personality’s role in coping is inconclusive. Proactive coping ability is one’s tendency to expect and prepare for life’s challenges (Schwarzer & Taubert, 2002). This type of coping provides a refreshing conceptualization of coping that allows an examination of personality’s role in coping that transcends the current situational versus dispositional coping conundrum. Participants (N = 49) took the Proactive Coping Inventory (Greenglass, Schwarzer, & Taubert, 1999) and their results were correlated with all domains and facets of the Five-Factor Model (FFM; Costa & McCrae, 1995). Results showed strong correlations between a total score (which encompassed 6 proactive coping scales), and Extraversion, Agreeableness, Conscientiousness, and Neuroticism, as well as between several underlying domain facets. Results also showed strong correlations between specific proactive coping subscales and several domains and facets of the FFM. Implications for the influence of innate personality factors in one’s ability to cope are discussed. An individual’s methods of coping with adversity are important aspects of their overall adaptation. Although characteristic ways of coping likely reflect learned experiences and situational factors to some degree, it is also likely that innate dispositions contribute to specific coping styles and overall ability to cope. Thus, there may be systematic relationships between enduring personality traits and coping ability. To show the theoretical importance of such a relationship, an account of empirical data that highlights the fundamental role of personality will develop a rationale for the hypothesized influence of personality on overall adaptation, and reasons why personality is likely to affect coping ability. Personality Until recently, the field has lacked consensus regarding an overall, comprehensive theory of personality. The emergence of the Five-Factor Model (FFM) over the past 10 to 15 years has provided a valuable paradigm from which to gain deeper understanding of important adaptational characteristics. Though there is still some disparity with regard to the comprehensiveness and conversely the succinctness of the model, there is no other model as well supported by research than the FFM (McCrae & John, 1992). The FiveFactor Model (FFM) consists of five broad domains and 30 lower-order facets that surfaced over decades of research and factor analysis (see Cattell, 1943, for an in-depth review). Though debate ensues concerning the exact name of each domain (Loehlin, Hambrick... / Individual Differences Research, 2010, Vol. 8, No. 2, pp. 67-77 68 1992), it is generally agreed that five is the true number of mutually exclusive domains. The five domain names used by Costa and McCrae (1995) will be described for our purposes: Neuroticism, Extraversion, Openness to Experience, Agreeableness, and Conscientiousness. Neuroticism is best understood as “individual differences in the tendency to experience distress” (McCrae & John, 1992, p. 195). Further, Neuroticism is ways in which a person thinks about and deals with problems and experiences that arise due to their susceptibility to unpleasant experiences. The definition of Extraversion is historically not as parsimonious as that of Neuroticism, because Extraversion encompasses a broader theme. The tendency toward social interaction and positive affect (Watson & Clark, 1997) is usually evident in a person who is highly extraverted. 
The next domain, Openness to Experience, encompasses intellectual curiosity as well as other affinities that are not related to intellect; for example, this domain has been shown to describe a person who appreciates aesthetic value and who has a creative lifestyle (McCrae & John, 1992). Agreeableness is a domain that has often been associated with morality and the ability to get along with others (McCrae & John, 1992). An agreeable person would tend to work well in a group setting, because agreeableness is often expressed as a person’s tendency toward pro-social behavior (Graziano & Eisenberg, 1997). The final domain is Conscientiousness. Conscientious persons are “governed by conscience” and “diligent and thorough” (McCrae & John, 1992, p. 197). Further, Conscientiousness is often used to describe one’s ability to be in command of their behavior; i.e., driven and goal oriented (Hogan & Ones, 1997). The FFM is robust in several respects. First, the model suggests that personality is related to temperament, and is not influenced by environmental factors (McCrae et al., 2000). Instead, the ways traits are expressed are affected by culture, developmental influences, and situational factors. For example, a person’s personality can produce several different response patterns depending on the environment. Therefore, personality can be considered an enduring and relatively stable trait. Second, research on the FFM shows that the five factors are legitimate in a cross-cultural context (McCrae & Costa, 1987). McCrae and Costa showed that six different translations of their FFM-based personality test, the NEO-PI-R, supported the validity of the previously described five factors. Moreover, the same five factors were evident and dominant in many different cultures that utilize extremely diverse linguistic patterns (1987). In a more recent study (McCrae et al., 2000) that investigated “intrinsic maturation”, pan-cultural age-related changes in personality profiles were evidenced. The implication is that as people in diverse cultures age, uniform changes in their personality profiles are observed. The emergent pattern showed that levels of Neuroticism, Extraversion, and Openness to Experience decrease with age, and that levels of Agreeableness and Conscientiousness increase with age in many cultures (McCrae et al., 2000). Gender differences in personality also seem to be cross-cultural. Williams, Satterwhite, and Best (1999) used data from 25 countries that had previously been used in the identification of gender stereotypes. A re-analysis of these data in the context of the FFM showed that the cross-cultural gender stereotype for females was higher on Agreeableness than it was for males, and the cross-cultural gender stereotype for males was higher than that for females on the other four domains. Though these data do not represent actual male and female responses on a personality inventory, it is remarkable that gender stereotypes alone would relate so distinctly to the FFM. The FFM has amassed plenty of evidence that personality is pervasive, enduring, and basic. Though individuals experience circumstances that cultivate certain abstract characteristics and promote particular outcomes, these tendencies and outcomes are derivatives of a diathesis that is created by personality traits (Costa & McCrae, 1992). Thus, it is practical to use personality to predict adaptational characteristics, such as coping ability.
Coping. Folkman and Lazarus (1980) defined coping as “the cognitive and behavioral efforts made to master, tolerate, or reduce external and internal demands and conflicts among them” (p. 223). The cognitive aspect of coping ability pertains to how threatening or important to the well-being of a person a stressful event is considered to be. The behavioral aspect of coping ability refers to the actual strategies and techniques a person employs to either change the situation (problem-focused coping) or to deal with the distressful emotions that arose due to the situation (emotion-focused coping). Clearly, the concept of coping is multi-faceted. The ways in which people appraise situations vary, the ways in which situations influence the options a person has to contend with situations vary, and the person-centered characteristics that predispose a person to certain appraisals and responses at each stage of the coping situation vary. Accordingly, Lazarus and Folkman (1987) formulated a transactional theory of coping that considers both a person’s coping response and their cognitive appraisal of the situation. This theory suggests that the person-environment interaction is dynamic and largely unpredictable. Despite evidence for coping as a process and the impact of situational factors on coping, it is important to realize that the exact strategies employed are highly variable from person to person (Folkman & Lazarus, 1985). In addition, Lazarus and Folkman (1987) suggest that person-centered characteristics are influential to coping at the most basic level. For example, they recognize that emotion-focused coping tends to be related to person-centered characteristics; for example, some people are not able to cognitively reduce their stress or anxiety, while others are. In addition, the concept of cognitive appraisal creates the possibility that some people will appraise events to be more threatening or more amenable than others. Moreover, different people employ diverse behavioral styles to cope with the same situation (Folkman & Lazarus, 1985). Since the emergence and prominence of the FFM, the focus in coping research has moved increasingly toward an attempt to understand the dispositional basis of coping. Studies that employ dispositional coping measures (see Carver, Scheier, & Weintraub, 1989, for one such scale) have examined the relationship of self-reported coping tendencies to the FFM. One study (Watson & Hubbard, 1996) found that Neuroticism relates to maladaptive coping styles, Conscientiousness relates to problem-focused, action-oriented coping styles, Extraversion relates to social-support seeking, and Agreeableness shows only a modest correlation to coping style. O’Brien and DeLongis (1996) observed similar results, but continued to assert that the best understanding of the role of personality in the coping process is one that takes situational and dispositional ",
"title": ""
},
{
"docid": "b39afe542e7c1a05f18de205d9588e0c",
"text": "Transmission of Web3D media over the Internet can be slow, especially when downloading huge 3D models through relatively limited bandwidth. Currently, 3D compression and progressive meshes are used to alleviate the problem, but these schemes do not consider similarity among the 3D components, leaving rooms for improvement in terms of efficiency. This paper proposes a similarity-aware 3D model reduction method, called Lightweight Progressive Meshes (LPM). The key idea of LPM is to search similar components in a 3D model, and reuse them through the construction of a Lightweight Scene Graph (LSG). The proposed LPM offers three significant benefits. First, the size of 3D models can be reduced for transmission without almost any precision loss of the original models. Second, when rendering, decompression is not needed to restore the original model, and instanced rendering can be fully exploited. Third, it is extremely efficient under very limited bandwidth, especially when transmitting large 3D scenes. Performance on real data justifies the effectiveness of our LPM, which improves the state-of-the-art in Web3D media transmission.",
"title": ""
},
{
"docid": "644729aad373c249100181fa0b0775e8",
"text": "Cloud broker is an entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers. In real life scenarios, automated cloud service brokering is often challenging because the service descriptions may involve complex constraints and require flexible semantic matching. Furthermore, cloud providers often use non-standard formats leading to semantic interoperability issues. In this paper, we formulate cloud service brokering under a service oriented framework, and propose a novel OWL-S based semantic cloud service discovery and selection system. The proposed system supports dynamic semantic matching of cloud services described with complex constraints. We consider a practical cloud service brokering scenario, and show with detailed illustration that our system is promising for real-life applications.",
"title": ""
},
{
"docid": "1b0abb269fcfddc9dd00b3f8a682e873",
"text": "Fully convolutional neural networks (F-CNNs) have set the state-of-the-art in image segmentation for a plethora of applications. Architectural innovations within F-CNNs have mainly focused on improving spatial encoding or network connectivity to aid gradient flow. In this paper, we explore an alternate direction of recalibrating the feature maps adaptively, to boost meaningful features, while suppressing weak ones. We draw inspiration from the recently proposed squeeze & excitation (SE) module for channel recalibration of feature maps for image classification. Towards this end, we introduce three variants of SE modules for image segmentation, (i) squeezing spatially and exciting channel-wise (cSE), (ii) squeezing channel-wise and exciting spatially (sSE) and (iii) concurrent spatial and channel squeeze & excitation (scSE). We effectively incorporate these SE modules within three different state-of-theart F-CNNs (DenseNet, SD-Net, U-Net) and observe consistent improvement of performance across all architectures, while minimally effecting model complexity. Evaluations are performed on two challenging applications: whole brain segmentation on MRI scans and organ segmentation on whole body contrast enhanced CT scans.",
"title": ""
},
{
"docid": "8ba94bf9142c924aaf131c5571a5a661",
"text": "Worldwide, 30% – 40% of women and 13% of men suffer from osteoporotic fractures of the bone, particularly the older people. Doctors in the hospitals need to manually inspect a large number of x-ray images to identify the fracture cases. Automated detection of fractures in x-ray images can help to lower the workload of doctors by screening out the easy cases, leaving a small number of difficult cases and the second confirmation to the doctors to examine more closely. To our best knowledge, such a system does not exist as yet. This paper describes a method of measuring the neck-shaft angle of the femur, which is one of the main diagnostic rules that doctors use to determine whether a fracture is present at the femur. Experimental tests performed on test images confirm that the method is accurate in measuring neck-shaft angle and detecting certain types of femur fractures.",
"title": ""
},
{
"docid": "903a5b7fb82d3d46b02e720b2db9c982",
"text": "A heuristic recursive algorithm for the two-dimensional rectangular strip packing problem is presented. It is based on a recursive structure combined with branch-and-bound techniques. Several lengths are tried to determine the minimal plate length to hold all the items. Initially the plate is taken as a block. For the current block considered, the algorithm selects an item, puts it at the bottom-left corner of the block, and divides the unoccupied region into two smaller blocks with an orthogonal cut. The dividing cut is vertical if the block width is equal to the plate width; otherwise it is horizontal. Both lower and upper bounds are used to prune unpromising branches. The computational results on a class of benchmark problems indicate that the algorithm performs better than several recently published algorithms. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6e47d81ddb9a1632d0ef162c92b0a454",
"text": "Neural machine translation (NMT) systems have recently achieved results comparable to the state of the art on a few translation tasks, including English→French and English→German. The main purpose of the Montreal Institute for Learning Algorithms (MILA) submission to WMT’15 is to evaluate this new approach on a greater variety of language pairs. Furthermore, the human evaluation campaign may help us and the research community to better understand the behaviour of our systems. We use the RNNsearch architecture, which adds an attention mechanism to the encoderdecoder. We also leverage some of the recent developments in NMT, including the use of large vocabularies, unknown word replacement and, to a limited degree, the inclusion of monolingual language models.",
"title": ""
},
{
"docid": "d5238992b0433383023df48fd99fd656",
"text": "We compute upper and lower bounds on the VC dimension and pseudodimension of feedforward neural networks composed of piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudo-dimension grow as W log W, where W is the number of parameters in the network. This result stands in opposition to the case where the number of layers is unbounded, in which case the VC dimension and pseudo-dimension grow as W2. We combine our results with recently established approximation error rates and determine error bounds for the problem of regression estimation by piecewise polynomial networks with unbounded weights.",
"title": ""
},
{
"docid": "7749b46bc899b3d876d63d8f3d0981ea",
"text": "This paper details the control and guidance architecture for the T-wing tail-sitter unmanned air vehicle, (UAV). The T-wing is a vertical take off and landing (VTOL) UAV that is capable of both wing-born horizontal flight and propeller born vertical mode flight including hover and descent. During low-speed vertical flight the T-wing uses propeller wash over its aerodynamic surfaces to effect control. At the lowest level, the vehicle uses a mixture of classical and LQR controllers for angular rate and translational velocity control. These low-level controllers are directed by a series of proportional guidance controllers for the vertical, horizontal and transition flight modes that allow the vehicle to achieve autonomous waypoint navigation. The control design for the T-wing is complicated by the large differences in vehicle dynamics between vertical and horizontal flight; the difficulty of accurately predicting the low-speed vehicle aerodynamics; and the basic instability of the vertical flight mode. This paper considers the control design problem for the T-wing in light of these factors. In particular it focuses on the integration of all the different types and levels of controllers into a full flight-vehicle control system.",
"title": ""
},
{
"docid": "a0fcd09ea8f29a0827385ae9f48ddd44",
"text": "Networks play a central role in modern data analysis, enabling us to reason about systems by studying the relationships between their parts. Most often in network analysis, the edges are given. However, in many systems it is difficult or impossible to measure the network directly. Examples of latent networks include economic interactions linking financial instruments and patterns of reciprocity in gang violence. In these cases, we are limited to noisy observations of events associated with each node. To enable analysis of these implicit networks, we develop a probabilistic model that combines mutuallyexciting point processes with random graph models. We show how the Poisson superposition principle enables an elegant auxiliary variable formulation and a fully-Bayesian, parallel inference algorithm. We evaluate this new model empirically on several datasets.",
"title": ""
},
{
"docid": "ec44e814277dd0d45a314c42ef417cbe",
"text": "INTRODUCTION Oxygen support therapy should be given to the patients with acute hypoxic respiratory insufficiency in order to provide oxygenation of the tissues until the underlying pathology improves. The inspiratory flow rate requirement of patients with respiratory insufficiency varies between 30 and 120 L/min. Low flow and high flow conventional oxygen support systems produce a maximum flow rate of 15 L/min, and FiO2 changes depending on the patient’s peak inspiratory flow rate, respiratory pattern, the mask that is used, or the characteristics of the cannula. The inability to provide adequate airflow leads to discomfort in tachypneic patients. With high-flow nasal oxygen (HFNO) cannulas, warmed and humidified air matching the body temperature can be regulated at flow rates of 5–60 L/min, and oxygen delivery varies between 21% and 100%. When HFNO, first used in infants, was reported to increase the risk of infection, its long-term use was stopped. This problem was later eliminated with the use of sterile water, and its use has become a current issue in critical adult patients as well. Studies show that HFNO treatment improves physiological parameters when compared to conventional oxygen systems. Although there are studies indicating successful applications in different patient groups, there are also studies indicating that it does not create any difference in clinical parameters, but patient comfort is better in HFNO when compared with standard oxygen therapy and noninvasive mechanical ventilation (NIMV) (1-6). In this compilation, the physiological effect mechanisms of HFNO treatment and its use in various clinical situations are discussed in the light of current studies.",
"title": ""
},
{
"docid": "e4c27a97a355543cf113a16bcd28ca50",
"text": "A metamaterial-based broadband low-profile grid-slotted patch antenna is presented. By slotting the radiating patch, a periodic array of series capacitor loaded metamaterial patch cells is formed, and excited through the coupling aperture in a ground plane right underneath and parallel to the slot at the center of the patch. By exciting two adjacent resonant modes simultaneously, broadband impedance matching and consistent radiation are achieved. The dispersion relation of the capacitor-loaded patch cell is applied in the mode analysis. The proposed grid-slotted patch antenna with a low profile of 0.06 λ0 (λ0 is the center operating wavelength in free space) achieves a measured bandwidth of 28% for the |S11| less than -10 dB and maximum gain of 9.8 dBi.",
"title": ""
},
{
"docid": "708d024f7fccc00dd3961ecc9aca1893",
"text": "Transportation networks play a crucial role in human mobility, the exchange of goods and the spread of invasive species. With 90 per cent of world trade carried by sea, the global network of merchant ships provides one of the most important modes of transportation. Here, we use information about the itineraries of 16 363 cargo ships during the year 2007 to construct a network of links between ports. We show that the network has several features that set it apart from other transportation networks. In particular, most ships can be classified into three categories: bulk dry carriers, container ships and oil tankers. These three categories do not only differ in the ships' physical characteristics, but also in their mobility patterns and networks. Container ships follow regularly repeating paths whereas bulk dry carriers and oil tankers move less predictably between ports. The network of all ship movements possesses a heavy-tailed distribution for the connectivity of ports and for the loads transported on the links with systematic differences between ship types. The data analysed in this paper improve current assumptions based on gravity models of ship movements, an important step towards understanding patterns of global trade and bioinvasion.",
"title": ""
},
{
"docid": "77bdd6c3f5065ef4abfaa70d34bc020a",
"text": "The discovery of disease-causing mutations typically requires confirmation of the variant or gene in multiple unrelated individuals, and a large number of rare genetic diseases remain unsolved due to difficulty identifying second families. To enable the secure sharing of case records by clinicians and rare disease scientists, we have developed the PhenomeCentral portal (https://phenomecentral.org). Each record includes a phenotypic description and relevant genetic information (exome or candidate genes). PhenomeCentral identifies similar patients in the database based on semantic similarity between clinical features, automatically prioritized genes from whole-exome data, and candidate genes entered by the users, enabling both hypothesis-free and hypothesis-driven matchmaking. Users can then contact other submitters to follow up on promising matches. PhenomeCentral incorporates data for over 1,000 patients with rare genetic diseases, contributed by the FORGE and Care4Rare Canada projects, the US NIH Undiagnosed Diseases Program, the EU Neuromics and ANDDIrare projects, as well as numerous independent clinicians and scientists. Though the majority of these records have associated exome data, most lack a molecular diagnosis. PhenomeCentral has already been used to identify causative mutations for several patients, and its ability to find matching patients and diagnose these diseases will grow with each additional patient that is entered.",
"title": ""
},
{
"docid": "9efa07624d538272a5da844c74b2f56d",
"text": "Electronic health records (EHRs), digitization of patients’ health record, offer many advantages over traditional ways of keeping patients’ records, such as easing data management and facilitating quick access and real-time treatment. EHRs are a rich source of information for research (e.g. in data analytics), but there is a risk that the published data (or its leakage) can compromise patient privacy. The k-anonymity model is a widely used privacy model to study privacy breaches, but this model only studies privacy against identity disclosure. Other extensions to mitigate existing limitations in k-anonymity model include p-sensitive k-anonymity model, p+-sensitive k-anonymity model, and (p, α)-sensitive k-anonymity model. In this paper, we point out that these existing models are inadequate in preserving the privacy of end users. Specifically, we identify situations where p+sensitive k-anonymity model is unable to preserve the privacy of individuals when an adversary can identify similarities among the categories of sensitive values. We term such attack as Categorical Similarity Attack (CSA). Thus, we propose a balanced p+-sensitive k-anonymity model, as an extension of the p+-sensitive k-anonymity model. We then formally analyze the proposed model using High-Level Petri Nets (HLPN) and verify its properties using SMT-lib and Z3 solver.We then evaluate the utility of release data using standard metrics and show that our model outperforms its counterparts in terms of privacy vs. utility tradeoff. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
ca8916e9093b82a22f0eb62bf055f942
|
Understanding and Designing Complex Systems: Response to "A framework for optimal high-level descriptions in science and engineering - preliminary report"
|
[
{
"docid": "0f9ef379901c686df08dd0d1bb187e22",
"text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by <i>R</i>(<i>d</i>) + √<i>V</i>(<i>d</i>)/(<i>n</i>) <i>Q</i><sup>-1</sup>(ϵ), where <i>R</i>(<i>d</i>) is the rate-distortion function, <i>V</i>(<i>d</i>) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and <i>Q</i><sup>-1</sup>(·) is the inverse of the standard Gaussian complementary cumulative distribution function.",
"title": ""
}
] |
[
{
"docid": "58c488555240ded980033111a9657be4",
"text": "BACKGROUND\nThe management of opioid-induced constipation (OIC) is often complicated by the fact that clinical measures of constipation do not always correlate with patient perception. As the discomfort associated with OIC can lead to poor compliance with the opioid treatment, a shift in focus towards patient assessment is often advocated.\n\n\nSCOPE\nThe Bowel Function Index * (BFI) is a new patient-assessment scale that has been developed and validated specifically for OIC. It is a physician-administered, easy-to-use scale made up of three items (ease of defecation, feeling of incomplete bowel evacuation, and personal judgement of constipation). An extensive analysis has been performed in order to validate the BFI as reliable, stable, clinically valid, and responsive to change in patients with OIC, with a 12-point change in score constituting a clinically relevant change in constipation.\n\n\nFINDINGS\nThe results of the validation analysis were based on major clinical trials and have been further supported by data from a large open-label study and a pharmaco-epidemiological study, in which the BFI was used effectively to assess OIC in a large population of patients treated with opioids. Although other patient self-report scales exist, the BFI offers several unique advantages. First, by being physician-administered, the BFI minimizes reading and comprehension difficulties; second, by offering general and open-ended questions which capture patient perspective, the BFI is likely to detect most patients suffering from OIC; third, by being short and easy-to-use, it places little burden on the patient, thereby increasing the likelihood of gathering accurate information.\n\n\nCONCLUSION\nAltogether, the available data suggest that the BFI will be useful in clinical trials and in daily practice.",
"title": ""
},
{
"docid": "e6ca00d92f6e54ec66943499fba77005",
"text": "This paper covers aspects of governing information data on enterprise level using IBM solutions. In particular it focus on one of the key elements of governance — data lineage for EU GDPR regulations.",
"title": ""
},
{
"docid": "e2ba4f88f4b1a8afcf51882bc7cfa634",
"text": "The embodied and situated approach to artificial intelligence (AI) has matured and become a viable alternative to traditional computationalist approaches with respect to the practical goal of building artificial agents, which can behave in a robust and flexible manner under changing real-world conditions. Nevertheless, some concerns have recently been raised with regard to the sufficiency of current embodied AI for advancing our scientific understanding of intentional agency. While from an engineering or computer science perspective this limitation might not be relevant, it is of course highly relevant for AI researchers striving to build accurate models of natural cognition. We argue that the biological foundations of enactive cognitive science can provide the conceptual tools that are needed to diagnose more clearly the shortcomings of current embodied AI. In particular, taking an enactive perspective points to the need for AI to take seriously the organismic roots of autonomous agency and sense-making. We identify two necessary systemic requirements, namely constitutive autonomy and adaptivity, which lead us to introduce two design principles of enactive AI. It is argued that the development of such enactive AI poses a significant challenge to current methodologies. However, it also provides a promising way of eventually overcoming the current limitations of embodied AI, especially in terms of providing fuller models of natural embodied cognition. Finally, some practical implications and examples of the two design principles of enactive AI are also discussed.",
"title": ""
},
{
"docid": "565a8ea886a586dc8894f314fa21484a",
"text": "BACKGROUND\nThe Entity Linking (EL) task links entity mentions from an unstructured document to entities in a knowledge base. Although this problem is well-studied in news and social media, this problem has not received much attention in the life science domain. One outcome of tackling the EL problem in the life sciences domain is to enable scientists to build computational models of biological processes with more efficiency. However, simply applying a news-trained entity linker produces inadequate results.\n\n\nMETHODS\nSince existing supervised approaches require a large amount of manually-labeled training data, which is currently unavailable for the life science domain, we propose a novel unsupervised collective inference approach to link entities from unstructured full texts of biomedical literature to 300 ontologies. The approach leverages the rich semantic information and structures in ontologies for similarity computation and entity ranking.\n\n\nRESULTS\nWithout using any manual annotation, our approach significantly outperforms state-of-the-art supervised EL method (9% absolute gain in linking accuracy). Furthermore, the state-of-the-art supervised EL method requires 15,000 manually annotated entity mentions for training. These promising results establish a benchmark for the EL task in the life science domain. We also provide in depth analysis and discussion on both challenges and opportunities on automatic knowledge enrichment for scientific literature.\n\n\nCONCLUSIONS\nIn this paper, we propose a novel unsupervised collective inference approach to address the EL problem in a new domain. We show that our unsupervised approach is able to outperform a current state-of-the-art supervised approach that has been trained with a large amount of manually labeled data. Life science presents an underrepresented domain for applying EL techniques. By providing a small benchmark data set and identifying opportunities, we hope to stimulate discussions across natural language processing and bioinformatics and motivate others to develop techniques for this largely untapped domain.",
"title": ""
},
{
"docid": "e82631018c9bc25098882cc8464a8d7b",
"text": "This paper describes several existing data link layer protocols that provide real-time capabilities on wired networks, focusing on token-ring and Carrier Sense Multiple Access based networks. Existing modifications to provide better real-time capabilities and performance are also described. Finally the pros and cons regarding the At-Home Anywhere project are discussed.",
"title": ""
},
{
"docid": "2fde207669557def4e22612d51f31afe",
"text": "Using neural networks for learning motion controllers from motion capture data is becoming popular due to the natural and smooth motions they can produce, the wide range of movements they can learn and their compactness once they are trained. Despite these advantages, these systems require large amounts of motion capture data for each new character or style of motion to be generated, and systems have to undergo lengthy retraining, and often reengineering, to get acceptable results. This can make the use of these systems impractical for animators and designers and solving this issue is an open and rather unexplored problem in computer graphics. In this paper we propose a transfer learning approach for adapting a learned neural network to characters that move in different styles from those on which the original neural network is trained. Given a pretrained character controller in the form of a Phase-Functioned Neural Network for locomotion, our system can quickly adapt the locomotion to novel styles using only a short motion clip as an example. We introduce a canonical polyadic tensor decomposition to reduce the amount of parameters required for learning from each new style, which both reduces the memory burden at runtime and facilitates learning from smaller quantities of data. We show that our system is suitable for learning stylized motions with few clips of motion data and synthesizing smooth motions in real-time. CCS Concepts •Computing methodologies → Animation; Neural networks; Motion capture;",
"title": ""
},
{
"docid": "05874da7b27475377dcd8f7afdd1bc5a",
"text": "The main aim of this paper is to provide automatic irrigation to the plants which helps in saving money and water. The entire system is controlled using 8051 micro controller which is programmed as giving the interrupt signal to the sprinkler.Temperature sensor and humidity sensor are connected to internal ports of micro controller via comparator,When ever there is a change in temperature and humidity of the surroundings these sensors senses the change in temperature and humidity and gives an interrupt signal to the micro-controller and thus the sprinkler is activated.",
"title": ""
},
{
"docid": "b8dcf30712528af93cb43c5960435464",
"text": "The first clinical description of Parkinson's disease (PD) will embrace its two century anniversary in 2017. For the past 30 years, mitochondrial dysfunction has been hypothesized to play a central role in the pathobiology of this devastating neurodegenerative disease. The identifications of mutations in genes encoding PINK1 (PTEN-induced kinase 1) and Parkin (E3 ubiquitin ligase) in familial PD and their functional association with mitochondrial quality control provided further support to this hypothesis. Recent research focused mainly on their key involvement in the clearance of damaged mitochondria, a process known as mitophagy. It has become evident that there are many other aspects of this complex regulated, multifaceted pathway that provides neuroprotection. As such, numerous additional factors that impact PINK1/Parkin have already been identified including genes involved in other forms of PD. A great pathogenic overlap amongst different forms of familial, environmental and even sporadic disease is emerging that potentially converges at the level of mitochondrial quality control. Tremendous efforts now seek to further detail the roles and exploit PINK1 and Parkin, their upstream regulators and downstream signaling pathways for future translation. This review summarizes the latest findings on PINK1/Parkin-directed mitochondrial quality control, its integration and cross-talk with other disease factors and pathways as well as the implications for idiopathic PD. In addition, we highlight novel avenues for the development of biomarkers and disease-modifying therapies that are based on a detailed understanding of the PINK1/Parkin pathway.",
"title": ""
},
{
"docid": "6ad201e411520ff64881b49915415788",
"text": "What is the right supervisory signal to train visual representations? Current approaches in computer vision use category labels from datasets such as ImageNet to train ConvNets. However, in case of biological agents, visual representation learning does not require millions of semantic labels. We argue that biological agents use physical interactions with the world to learn visual representations unlike current vision systems which just use passive observations (images and videos downloaded from web). For example, babies push objects, poke them, put them in their mouth and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps and observes objects in a tabletop environment. It uses four different types of physical interactions to collect more than 130K datapoints, with each datapoint providing supervision to a shared ConvNet architecture allowing us to learn visual representations. We show the quality of learned representations by observing neuron activations and performing nearest neighbor retrieval on this learned representation. Quantitatively, we evaluate our learned ConvNet on image classification tasks and show improvements compared to learning without external data. Finally, on the task of instance retrieval, our network outperforms the ImageNet network on recall@1 by 3 %.",
"title": ""
},
{
"docid": "df6f6e52f97cfe2d7ff54d16ed9e2e54",
"text": "Example-based texture synthesis algorithms have gained widespread popularity for their ability to take a single input image and create a perceptually similar non-periodic texture. However, previous methods rely on single input exemplars that can capture only a limited band of spatial scales. For example, synthesizing a continent-like appearance at a variety of zoom levels would require an impractically high input resolution. In this paper, we develop a multiscale texture synthesis algorithm. We propose a novel example-based representation, which we call an exemplar graph, that simply requires a few low-resolution input exemplars at different scales. Moreover, by allowing loops in the graph, we can create infinite zooms and infinitely detailed textures that are impossible with current example-based methods. We also introduce a technique that ameliorates inconsistencies in the user's input, and show that the application of this method yields improved interscale coherence and higher visual quality. We demonstrate optimizations for both CPU and GPU implementations of our method, and use them to produce animations with zooming and panning at multiple scales, as well as static gigapixel-sized images with features spanning many spatial scales.",
"title": ""
},
{
"docid": "7526ae3542d1e922bd73be0da7c1af72",
"text": "Cooperative coevolutionary algorithms (CCEAs) rely on multiple coevolving populations for the evolution of solutions composed of coadapted components. CCEAs enable, for instance, the evolution of cooperative multiagent systems composed of heterogeneous agents, where each agent is modelled as a component of the solution. Previous works have, however, shown that CCEAs are biased toward stability: the evolutionary process tends to converge prematurely to stable states instead of (near-)optimal solutions. In this study, we show how novelty search can be used to avoid the counterproductive attraction to stable states in coevolution. Novelty search is an evolutionary technique that drives evolution toward behavioural novelty and diversity rather than exclusively pursuing a static objective. We evaluate three novelty-based approaches that rely on, respectively (1) the novelty of the team as a whole, (2) the novelty of the agents’ individual behaviour, and (3) the combination of the two. We compare the proposed approaches with traditional fitness-driven cooperative coevolution in three simulated multirobot tasks. Our results show that team-level novelty scoring is the most effective approach, significantly outperforming fitness-driven coevolution at multiple levels. Novelty-driven cooperative coevolution can substantially increase the potential of CCEAs while maintaining a computational complexity that scales well with the number of populations.",
"title": ""
},
{
"docid": "b5270bbcbe8ed4abf8ae5dabe02bb933",
"text": "We address the use of three-dimensional facial shape information for human face identification. We propose a new method to represent faces as 3D registered point clouds. Fine registration of facial surfaces is done by first automatically finding important facial landmarks and then, establishing a dense correspondence between points on the facial surface with the help of a 3D face template-aided thin plate spline algorithm. After the registration of facial surfaces, similarity between two faces is defined as a discrete approximation of the volume difference between facial surfaces. Experiments done on the 3D RMA dataset show that the proposed algorithm performs as good as the point signature method, and it is statistically superior to the point distribution model-based method and the 2D depth imagery technique. In terms of computational complexity, the proposed algorithm is faster than the point signature method.",
"title": ""
},
{
"docid": "ca21a20152eef5081fa51e7f3a5c2d87",
"text": "We review some of the most widely used patterns for the programming of microservices: circuit breaker, service discovery, and API gateway. By systematically analysing different deployment strategies for these patterns, we reach new insight especially for the application of circuit breakers. We also evaluate the applicability of Jolie, a language for the programming of microservices, for these patterns and report on other standard frameworks offering similar solutions. Finally, considerations for future developments are presented.",
"title": ""
},
{
"docid": "b75e9077cc745b15fa70267c3b0eba45",
"text": "This study explored the relation of shame proneness and guilt proneness to constructive versus destructive responses to anger among 302 children (Grades 4-6), adolescents (Grades 7-11), 176 college students, and 194 adults. Across all ages, shame proneness was clearly related to maladaptive response to anger, including malevolent intentions; direct, indirect, and displaced aggression; self-directed hostility; and negative long-term consequences. In contrast, guilt proneness was associated with constructive means of handling anger, including constructive intentions, corrective action and non-hostile discussion with the target of the anger, cognitive reappraisals of the target's role, and positive long-term consequences. Escapist-diffusing responses showed some interesting developmental trends. Among children, these dimensions were positively correlated with guilt and largely unrelated to shame; among older participants, the results were mixed.",
"title": ""
},
{
"docid": "a07472c2f086332bf0f97806255cb9d5",
"text": "The Learning Analytics Dashboard (LAD) is an application to show students’ online behavior patterns in a virtual learning environment. This supporting tool works by tracking students’ log-files, mining massive amounts of data to find meaning, and visualizing the results so they can be comprehended at a glance. This paper reviews previously developed applications to analyze their features. Based on the implications from the review of previous studies as well as a preliminary investigation on the need for such tools, an early version of the LAD was designed and developed. Also, in order to improve the LAD, a usability test incorporating a stimulus recall interview was conducted with 38 college students in two blended learning classes. Evaluation of this tool was performed in an experimental research setting with a control group and additional surveys were conducted asking students’ about perceived usefulness, conformity, level of understanding of graphs, and their behavioral changes. The results indicated that this newly developed learning analytics tool did not significantly impact on their learning achievement. However, lessons learned from the usability and pilot tests support that visualized information impacts on students’ understanding level; and the overall satisfaction with dashboard plays as a covariant that impacts on both the degree of understanding and students’ perceived change of behavior. Taking in the results of the tests and students’ openended responses, a scaffolding strategy to help them understand the meaning of the information displayed was included in each sub section of the dashboard. Finally, this paper discusses future directions in regard to improving LAD so that it better supports students’ learning performance, which might be helpful for those who develop learning analytics applications for students.",
"title": ""
},
{
"docid": "213862a47773c5ad34aa69b8b0a951d1",
"text": "The next generation wireless networks are expected to operate in fully automated fashion to meet the burgeoning capacity demand and to serve users with superior quality of experience. Mobile wireless networks can leverage spatio-temporal information about user and network condition to embed the system with end-to-end visibility and intelligence. Big data analytics has emerged as a promising approach to unearth meaningful insights and to build artificially intelligent models with assistance of machine learning tools. Utilizing aforementioned tools and techniques, this paper contributes in two ways. First, we utilize mobile network data (Big Data)—call detail record—to analyze anomalous behavior of mobile wireless network. For anomaly detection purposes, we use unsupervised clustering techniques namely k-means clustering and hierarchical clustering. We compare the detected anomalies with ground truth information to verify their correctness. From the comparative analysis, we observe that when the network experiences abruptly high (unusual) traffic demand at any location and time, it identifies that as anomaly. This helps in identifying regions of interest in the network for special action such as resource allocation, fault avoidance solution, etc. Second, we train a neural-network-based prediction model with anomalous and anomaly-free data to highlight the effect of anomalies in data while training/building intelligent models. In this phase, we transform our anomalous data to anomaly-free and we observe that the error in prediction, while training the model with anomaly-free data has largely decreased as compared to the case when the model was trained with anomalous data.",
"title": ""
},
{
"docid": "76a2bc6a8649ffe9111bfaa911572c9d",
"text": "URL shortening services have become extremely popular. However, it is still unclear whether they are an effective and reliable tool that can be leveraged to hide malicious URLs, and to what extent these abuses can impact the end users. With these questions in mind, we first analyzed existing countermeasures adopted by popular shortening services. Surprisingly, we found such countermeasures to be ineffective and trivial to bypass. This first measurement motivated us to proceed further with a large-scale collection of the HTTP interactions that originate when web users access live pages that contain short URLs. To this end, we monitored 622 distinct URL shortening services between March 2010 and April 2012, and collected 24,953,881 distinct short URLs. With this large dataset, we studied the abuse of short URLs. Despite short URLs are a significant, new security risk, in accordance with the reports resulting from the observation of the overall phishing and spamming activity, we found that only a relatively small fraction of users ever encountered malicious short URLs. Interestingly, during the second year of measurement, we noticed an increased percentage of short URLs being abused for drive-by download campaigns and a decreased percentage of short URLs being abused for spam campaigns. In addition to these security-related findings, our unique monitoring infrastructure and large dataset allowed us to complement previous research on short URLs and analyze these web services from the user's perspective.",
"title": ""
},
{
"docid": "c5eb252d17c2bec8ab168ca79ec11321",
"text": "Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. ∗A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260) 1 ar X iv :1 80 2. 08 67 4v 1 [ cs .L G ] 2 3 Fe b 20 18",
"title": ""
},
{
"docid": "f66ebffa2efda9a4728a85c0b3a94fc7",
"text": "The vulnerability of face recognition systems is a growing concern that has drawn the interest from both academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure or antispoofing) schemes, there exists no superior PAD technique due to evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective for face presentation attack detection by introducing light field camera (LFC). Since the use of a LFC can record the direction of each incoming ray in addition to the intensity, it exhibits an unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that involves exploring the variation of the focus between multiple depth (or focus) images rendered by the LFC that in turn can be used to reveal the presentation attacks. To this extent, we first collect a new face artefact database using LFC that comprises of 80 subjects. Face artefacts are generated by simulating two widely used attacks, such as photo print and electronic screen attack. Extensive experiments carried out on the light field face artefact database have revealed the outstanding performance of the proposed PAD scheme when benchmarked with various well established state-of-the-art schemes.",
"title": ""
},
{
"docid": "5654bea8e2fe999fe52ec7536edd0f52",
"text": "Mobile app developers constantly monitor feedback in user reviews with the goal of improving their mobile apps and better meeting user expectations. Thus, automated approaches have been proposed in literature with the aim of reducing the effort required for analyzing feedback contained in user reviews via automatic classification/prioritization according to specific topics. In this paper, we introduce SURF (Summarizer of User Reviews Feedback), a novel approach to condense the enormous amount of information that developers of popular apps have to manage due to user feedback received on a daily basis. SURF relies on a conceptual model for capturing user needs useful for developers performing maintenance and evolution tasks. Then it uses sophisticated summarisation techniques for summarizing thousands of reviews and generating an interactive, structured and condensed agenda of recommended software changes. We performed an end-to-end evaluation of SURF on user reviews of 17 mobile apps (5 of them developed by Sony Mobile), involving 23 developers and researchers in total. Results demonstrate high accuracy of SURF in summarizing reviews and the usefulness of the recommended changes. In evaluating our approach we found that SURF helps developers in better understanding user needs, substantially reducing the time required by developers compared to manually analyzing user (change) requests and planning future software changes.",
"title": ""
}
] |
scidocsrr
|
901a4c113ec10d01b934f80bb6ac0dc8
|
Software clones in scratch projects: on the presence of copy-and-paste in computational thinking learning
|
[
{
"docid": "c536e79078d7d5778895e5ac7f02c95e",
"text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.",
"title": ""
}
] |
[
{
"docid": "686e8892c22a740fbd781f0cc0150a9d",
"text": "Difficulty with handwriting is one of the most frequent reasons that children in the public schools are referred to occupational therapy. Current research on the influence of ergonomic factors, such as pencil grip and pressure, and perceptual-motor factors traditionally believed to affect handwriting, is reviewed. Factors such as visual perception show little relationship to handwriting, whereas tactile-kinesthetic, visual-motor, and motor planning appear to be more closely related to handwriting. By better understanding the ergonomic and perceptual-motor factors that contribute to and influence handwriting, therapists will be better able to design rationally based intervention programs.",
"title": ""
},
{
"docid": "189ecff4c6f01ba870908fa4abc8db91",
"text": "Graph processing is becoming increasingly prevalent across many application domains. In spite of this prevalence, there is little research about how graphs are actually used in practice. We conducted an online survey aimed at understanding: (i) the types of graphs users have; (ii) the graph computations users run; (iii) the types of graph software users use; and (iv) the major challenges users face when processing their graphs. We describe the responses of the participants to our questions, highlighting common patterns and challenges. The participants’ responses revealed surprising facts about graph processing in practice, which we hope can guide future research.",
"title": ""
},
{
"docid": "b1e4fb97e4b1d31e4064f174e50f17d3",
"text": "We propose an inverse reinforcement learning (IRL) approach using Deep QNetworks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.",
"title": ""
},
{
"docid": "ce0cfd1dd69e235f942b2e7583b8323b",
"text": "Increasing use of the World Wide Web as a B2C commercial tool raises interest in understanding the key issues in building relationships with customers on the Internet. Trust is believed to be the key to these relationships. Given the differences between a virtual and a conventional marketplace, antecedents and consequences of trust merit re-examination. This research identifies a number of key factors related to trust in the B2C context and proposes a framework based on a series of underpinning relationships among these factors. The findings in this research suggest that people are more likely to purchase from the web if they perceive a higher degree of trust in e-commerce and have more experience in using the web. Customer’s trust levels are likely to be influenced by the level of perceived market orientation, site quality, technical trustworthiness, and user’s web experience. People with a higher level of perceived site quality seem to have a higher level of perceived market orientation and trustworthiness towards e-commerce. Furthermore, people with a higher level of trust in e-commerce are more likely to participate in e-commerce. Positive ‘word of mouth’, money back warranty and partnerships with well-known business partners, rank as the top three effective risk reduction tactics. These findings complement the previous findings on e-commerce and shed light on how to establish a trust relationship on the World Wide Web. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9ace030a915a6ec8bf8f35b918c8c8aa",
"text": "Why are boys at risk? To address this question, I use the perspective of regulation theory to offer a model of the deeper psychoneurobiological mechanisms that underlie the vulnerability of the developing male. The central thesis of this work dictates that significant gender differences are seen between male and female social and emotional functions in the earliest stages of development, and that these result from not only differences in sex hormones and social experiences but also in rates of male and female brain maturation, specifically in the early developing right brain. I present interdisciplinary research which indicates that the stress-regulating circuits of the male brain mature more slowly than those of the female in the prenatal, perinatal, and postnatal critical periods, and that this differential structural maturation is reflected in normal gender differences in right-brain attachment functions. Due to this maturational delay, developing males also are more vulnerable over a longer period of time to stressors in the social environment (attachment trauma) and toxins in the physical environment (endocrine disruptors) that negatively impact right-brain development. In terms of differences in gender-related psychopathology, I describe the early developmental neuroendocrinological and neurobiological mechanisms that are involved in the increased vulnerability of males to autism, early onset schizophrenia, attention deficit hyperactivity disorder, and conduct disorders as well as the epigenetic mechanisms that can account for the recent widespread increase of these disorders in U.S. culture. I also offer a clinical formulation of early assessments of boys at risk, discuss the impact of early childcare on male psychopathogenesis, and end with a neurobiological model of optimal adult male socioemotional functions.",
"title": ""
},
{
"docid": "5bee5208fa2676b7a7abf4ef01f392b8",
"text": "Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's \"omics\". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application.",
"title": ""
},
{
"docid": "11ecb3df219152d33020ba1c4f8848bb",
"text": "Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, particularly for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy ossified Internet architecture resistant to any attempts at formal verification and to an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design-in particular, the software defined networking (SDN) paradigm-offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest of applying formal methods to specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and present a survey of its applications to networking.",
"title": ""
},
{
"docid": "704df193801e9cd282c0ce2f8a72916b",
"text": "We present our preliminary work in developing augmented reali ty systems to improve methods for the construction, inspection, and renovatio n of architectural structures. Augmented reality systems add virtual computer-generated mate rial to the surrounding physical world. Our augmented reality systems use see-through headworn displays to overlay graphics and sounds on a person’s naturally occurring vision and hearing. As the person moves about, the position and orientation of his or her head is tracked, allowing the overlaid material to remai n tied to the physical world. We describe an experimental augmented reality system tha t shows the location of columns behind a finished wall, the location of re-bar s inside one of the columns, and a structural analysis of the column. We also discuss our pre liminary work in developing an augmented reality system for improving the constructio n of spaceframes. Potential uses of more advanced augmented reality systems are presented.",
"title": ""
},
{
"docid": "a37aae87354ff25bf7937adc7a9f8e62",
"text": "Vectorizing hand-drawn sketches is an important but challenging task. Many businesses rely on fashion, mechanical or structural designs which, sooner or later, need to be converted in vectorial form. For most, this is still a task done manually. This paper proposes a complete framework that automatically transforms noisy and complex hand-drawn sketches with different stroke types in a precise, reliable and highly-simplified vectorized model. The proposed framework includes a novel line extraction algorithm based on a multi-resolution application of Pearson’s cross correlation and a new unbiased thinning algorithm that can get rid of scribbles and variable-width strokes to obtain clean 1-pixel lines. Other contributions include variants of pruning, merging and edge linking procedures to post-process the obtained paths. Finally, a modification of the original Schneider’s vectorization algorithm is designed to obtain fewer control points in the resulting Bézier splines. All the steps presented in this framework have been extensively tested and compared with state-of-the-art algorithms, showing (both qualitatively and quantitatively) their outperformance. Moreover they exhibit fast real-time performance, making them suitable for integration in any computer graphics toolset.",
"title": ""
},
{
"docid": "13e61389de352298bf9581bc8a8714cc",
"text": "A bacterial gene (neo) conferring resistance to neomycin-kanamycin antibiotics has been inserted into SV40 hybrid plasmid vectors and introduced into cultured mammalian cells by DNA transfusion. Whereas normal cells are killed by the antibiotic G418, those that acquire and express neo continue to grow in the presence of G418. In the course of the selection, neo DNA becomes associated with high molecular weight cellular DNA and is retained even when cells are grown in the absence of G418 for extended periods. Since neo provides a marker for dominant selections, cell transformation to G418 resistance is an efficient means for cotransformation of nonselected genes.",
"title": ""
},
{
"docid": "3fa30df910c964bb2bf27a885aa59495",
"text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.",
"title": ""
},
{
"docid": "6133ec98d838c576f1441e9d7fa58528",
"text": "Since repositories are a key tool in making scholarly knowledge open access (OA), determining their web presence and visibility on the Web (both are proxies of web impact) is essential, particularly in Google (search engine par excellence) and Google Scholar (a tool increasingly used by researchers to search for academic information). The few studies conducted so far have been limited to very specific geographic areas (USA), which makes it necessary to find out what is happening in other regions that are not part of mainstream academia, and where repositories play a decisive role in the visibility of scholarly production. The main objective of this study is to ascertain the web presence and visibility of Latin American repositories in Google and Google Scholar through the application of page count and web mention indicators respectively. For a sample of 137 repositories, the results indicate that the indexing ratio is low in Google, and virtually nonexistent in Google Scholar; they also indicate a complete lack of correspondence between the repository records and the data produced by these two search tools. These results are mainly attributable to limitations arising from the use of description schemas that are incompatible with Google Scholar (repository design) and the reliability of web mention indicators (search engines). We conclude that neither Google nor Google Scholar accurately represent the actual size of OA content published by Latin American repositories; this may indicate a non-indexed, hidden side to OA, which could be limiting the dissemination and consumption of OA scholarly literature.",
"title": ""
},
{
"docid": "c95b4720e567003c078b7858c3b43590",
"text": "The fate of differentiation of G1E cells is determined, among other things, by a handful of transcription factors (TFs) binding the neighborhood of appropriate gene targets. The problem of understanding the dynamics of gene expression regulation is a feature learning problem on high dimensional space determined by the sizes of gene neighborhoods, but that can be projected on a much lower dimensional manifold whose space depends on the number of TFs and the number of ways they interact. To learn this manifold, we train a deep convolutional network on the activity of TF binding on 20Kb gene neighborhoods labeled by binarized levels of target gene expression. After supervised training of the model we achieve 77% accuracy as estimated by 10-fold CV. We discuss methods for the representation of the model knowledge back into the input space. We use this representation to highlight important patterns and genome locations with biological importance.",
"title": ""
},
{
"docid": "a9f70ea201e17bca3b97f6ef7b2c1c15",
"text": "Network embedding task aims at learning low-dimension latent representations of vertices while preserving the structure of a network simultaneously. Most existing network embedding methods mainly focus on static networks, which extract and condense the network information without temporal information. However, in the real world, networks keep evolving, where the linkage states between the same vertex pairs at consequential timestamps have very close correlations. In this paper, we propose to study the network embedding problem and focus on modeling the linkage evolution in the dynamic network setting. To address this problem, we propose a deep dynamic network embedding method. More specifically, the method utilizes the historical information obtained from the network snapshots at past timestamps to learn latent representations of the future network. In the proposed embedding method, the objective function is carefully designed to incorporate both the network internal and network dynamic transition structures. Extensive empirical experiments prove the effectiveness of the proposed model on various categories of real-world networks, including a human contact network, a bibliographic network, and e-mail networks. Furthermore, the experimental results also demonstrate the significant advantages of the method compared with both the state-of-the-art embedding techniques and several existing baseline methods.",
"title": ""
},
{
"docid": "836bdb7960c7679c4d7b4285f04b65b4",
"text": "PURPOSE\nBendamustine hydrochloride is an alkylating agent with novel mechanisms of action. This phase II multicenter study evaluated the efficacy and toxicity of bendamustine in patients with B-cell non-Hodgkin's lymphoma (NHL) refractory to rituximab.\n\n\nPATIENTS AND METHODS\nPatients received bendamustine 120 mg/m(2) intravenously on days 1 and 2 of each 21-day cycle. Outcomes included response, duration of response, progression-free survival, and safety.\n\n\nRESULTS\nSeventy-six patients, ages 38 to 84 years, with predominantly stage III/IV indolent (80%) or transformed (20%) disease were treated; 74 were assessable for response. Twenty-four (32%) were refractory to chemotherapy. Patients received a median of two prior unique regimens. An overall response rate of 77% (15% complete response, 19% unconfirmed complete response, and 43% partial) was observed. The median duration of response was 6.7 months (95% CI, 5.1 to 9.9 months), 9.0 months (95% CI, 5.8 to 16.7) for patients with indolent disease, and 2.3 months (95% CI, 1.7 to 5.1) for those with transformed disease. Thirty-six percent of these responses exceeded 1 year. The most frequent nonhematologic adverse events included nausea and vomiting, fatigue, constipation, anorexia, fever, cough, and diarrhea. Grade 3 or 4 reversible hematologic toxicities included neutropenia (54%), thrombocytopenia (25%), and anemia (12%).\n\n\nCONCLUSION\nSingle-agent bendamustine produced durable objective responses with acceptable toxicity in heavily pretreated patients with rituximab-refractory, indolent NHL. These findings are promising and will serve as a benchmark for future clinical trials in this novel patient population.",
"title": ""
},
{
"docid": "1e1b5ae673204208a1afbca9267bfa69",
"text": "Article History Received: 19 March 2018 Revised: 30 April 2018 Accepted: 2 May 2018 Published: 5 May 2018",
"title": ""
},
{
"docid": "d7bf9a0b87a1062fd07794660d86f9dc",
"text": "Portraiture plays a substantial role in traditional painting, yet it has not been studied in depth in painterly rendering research. The difficulty in rendering human portraits is due to our acute visual perception to the structure of human face. To achieve satisfactory results, a portrait rendering algorithm should account for facial structure. In this paper, we present an example-based method to render portrait paintings from photographs, by transferring brush strokes from previously painted portrait templates by artists. These strokes carry rich information about not only the facial structure but also how artists depict the structure with large and decisive brush strokes and vibrant colors. With a dictionary of portrait painting templates for different types of faces, we show that this method can produce satisfactory results.",
"title": ""
},
{
"docid": "e3da610a131922990edaa6216ff4a025",
"text": "Learning high-level image representations using object proposals has achieved remarkable success in multi-label image recognition. However, most object proposals provide merely coarse information about the objects, and only carefully selected proposals can be helpful for boosting the performance of multi-label image recognition. In this paper, we propose an object-proposal-free framework for multi-label image recognition: random crop pooling (RCP). Basically, RCP performs stochastic scaling and cropping over images before feeding them to a standard convolutional neural network, which works quite well with a max-pooling operation for recognizing the complex contents of multi-label images. To better fit the multi-label image recognition task, we further develop a new loss function-the dynamic weighted Euclidean loss-for the training of the deep network. Our RCP approach is amazingly simple yet effective. It can achieve significantly better image recognition performance than the approaches using object proposals. Moreover, our adapted network can be easily trained in an end-to-end manner. Extensive experiments are conducted on two representative multi-label image recognition data sets (i.e., PASCAL VOC 2007 and PASCAL VOC 2012), and the results clearly demonstrate the superiority of our approach.",
"title": ""
},
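As a rough, illustrative sketch of the random crop pooling inference step described in the preceding passage (not the authors' code), the snippet below draws stochastic crops, resizes them, and max-pools the per-crop scores. The crop-scale range, the number of crops, the 224-pixel output size, and the `cnn_scores` callable standing in for the trained network are all assumptions introduced here.

```python
import numpy as np
from skimage.transform import resize

def rcp_predict(image, cnn_scores, n_crops=10, out_size=224, seed=None):
    """Stochastic scale-and-crop, then max-pool the per-crop label scores."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    per_crop = []
    for _ in range(n_crops):
        side = int(rng.uniform(0.5, 1.0) * min(H, W))   # stochastic scaling (range is an assumption)
        top = rng.integers(0, H - side + 1)
        left = rng.integers(0, W - side + 1)
        crop = image[top:top + side, left:left + side]
        crop = resize(crop, (out_size, out_size))        # feed a fixed-size crop to the network
        per_crop.append(cnn_scores(crop))                # placeholder for the trained CNN's label scores
    # max-pooling over crops: a label survives if any crop supports it strongly
    return np.max(np.stack(per_crop), axis=0)
```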
{
"docid": "5c444fcd85dd89280eee016fd1cbd175",
"text": "Over the last years, object detection has become a more and more active field of research in robotics. An important problem in object detection is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google’s 3D Warehouse to train an object detection system for 3D point clouds collected by robots navigating through both urban and indoor environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled point clouds and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real-world environments.",
"title": ""
},
{
"docid": "2eebc7477084b471f9e9872ba8751359",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
}
] |
scidocsrr
|
5a82d30b5b3db8ee29e44ca3b06f2aa1
|
Classification of design parameters for E-commerce websites: A novel fuzzy Kano approach
|
[
{
"docid": "1c0efa706f999ee0129d21acbd0ef5ab",
"text": "Ten years ago, we presented the DeLone and McLean Information Systems (IS) Success Model as a framework and model for measuring the complexdependent variable in IS research. In this paper, we discuss many of the important IS success research contributions of the last decade, focusing especially on research efforts that apply, validate, challenge, and propose enhancements to our original model. Based on our evaluation of those contributions, we propose minor refinements to the model and propose an updated DeLone and McLean IS Success Model. We discuss the utility of the updated model for measuring e-commerce system success. Finally, we make a series of recommendations regarding current and future measurement of IS success. 10 DELONE AND MCLEAN",
"title": ""
}
] |
[
{
"docid": "8eb2a660107b304caf574bdf7fad3f23",
"text": "To enhance torque density by harmonic current injection, optimal slot/pole combinations for five-phase permanent magnet synchronous motors (PMSM) with fractional-slot concentrated windings (FSCW) are chosen. The synchronous and the third harmonic winding factors are calculated for a series of slot/pole combinations. Two five-phase PMSM, with general FSCW (GFSCW) and modular stator and FSCW (MFSCW), are analyzed and compared in detail, including the stator structures, star of slots diagrams, and MMF harmonic analysis based on the winding function theory. The analytical results are verified by finite element method, the torque characteristics and phase back-EMF are also taken into considerations. Results show that the MFSCW PMSM can produce higher average torque, while characterized by more MMF harmonic contents and larger ripple torque.",
"title": ""
},
{
"docid": "1f7fa34fd7e0f4fd7ff9e8bba2a78e3c",
"text": "Today many multi-national companies or organizations are adopting the use of automation. Automation means replacing the human by intelligent robots or machines which are capable to work as human (may be better than human). Artificial intelligence is a way of making machines, robots or software to think like human. As the concept of artificial intelligence is use in robotics, it is necessary to understand the basic functions which are required for robots to think and work like human. These functions are planning, acting, monitoring, perceiving and goal reasoning. These functions help robots to develop its skills and implement it. Since robotics is a rapidly growing field from last decade, it is important to learn and improve the basic functionality of robots and make it more useful and user-friendly.",
"title": ""
},
{
"docid": "23305a36194ad3c9b6b3f667c79bd273",
"text": "Evidence used to reconstruct the morphology and function of the brain (and the rest of the central nervous system) in fossil hominin species comes from the fossil and archeological records. Although the details provided about human brain evolution are scarce, they benefit from interpretations informed by interspecific comparative studies and, in particular, human pathology studies. In recent years, new information has come to light about fossil DNA and ontogenetic trajectories, for which pathology research has significant implications. We briefly describe and summarize data from the paleoarcheological and paleoneurological records about the evolution of fossil hominin brains, including behavioral data most relevant to brain research. These findings are brought together to characterize fossil hominin taxa in terms of brain structure and function and to summarize brain evolution in the human lineage.",
"title": ""
},
{
"docid": "4d18ea8816e9e4abf428b3f413c82f9e",
"text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.",
"title": ""
},
{
"docid": "16a1f15e8e414b59a230fb4a28c53cc7",
"text": "In this study we examined whether the effects of mental fatigue on behaviour are due to reduced action monitoring as indexed by the error related negativity (Ne/ERN), N2 and contingent negative variation (CNV) event-related potential (ERP) components. Therefore, we had subjects perform a task, which required a high degree of action monitoring, continuously for 2h. In addition we tried to relate the observed behavioural and electrophysiological changes to motivational processes and individual differences. Changes in task performance due to fatigue were accompanied by a decrease in Ne/ERN and N2 amplitude, reflecting impaired action monitoring, as well as a decrease in CNV amplitude which reflects reduced response preparation with increasing fatigue. Increasing the motivational level of our subjects resulted in changes in behaviour and brain activity that were different for individual subjects. Subjects that increased their performance accuracy displayed an increase in Ne/ERN amplitude, while subjects that increased their response speed displayed an increase in CNV amplitude. We will discuss the effects prolonged task performance on the behavioural and physiological indices of action monitoring, as well as the relationship between fatigue, motivation and individual differences.",
"title": ""
},
{
"docid": "eaf1c419853052202cb90246e48a3697",
"text": "The objective of this document is to promote the use of dynamic daylight performance measures for sustainable building design. The paper initially explores the shortcomings of conventional, static daylight performance metrics which concentrate on individual sky conditions, such as the common daylight factor. It then provides a review of previously suggested dynamic daylight performance metrics, discussing the capability of these metrics to lead to superior daylighting designs and their accessibility to nonsimulation experts. Several example offices are examined to demonstrate the benefit of basing design decisions on dynamic performance metrics as opposed to the daylight factor. Keywords—–daylighting, dynamic, metrics, sustainable buildings",
"title": ""
},
{
"docid": "a20ba0bb564711edc201b0e021e0dee9",
"text": "We approach the task of human silhouette extraction from color and thermal image sequences using automatic image registration. Image registration between color and thermal images is a challenging problem due to the difficulties associated with finding correspondence. However, the moving people in a static scene provide cues to address this problem. In this paper, we propose a hierarchical scheme to automatically find the correspondence between the preliminary human silhouettes extracted from synchronous color and thermal image sequences for image registration. Next, we discuss strategies for probabilistically combining cues from registered color and thermal images for improved human silhouette detection. It is shown that the proposed approach achieves good results for image registration and human silhouette extraction. Experimental results also show a comparison of various sensor fusion strategies and demonstrate the improvement in performance over nonfused cases for human silhouette extraction. 2006 Published by Elsevier Ltd on behalf of Pattern Recognition Society.",
"title": ""
},
{
"docid": "04a15b226d2466ea03306e3f413b4bd0",
"text": "More and more people express their opinions on social media such as Facebook and Twitter. Predictive analysis on social media time-series allows the stake-holders to leverage this immediate, accessible and vast reachable communication channel to react and proact against the public opinion. In particular, understanding and predicting the sentiment change of the public opinions will allow business and government agencies to react against negative sentiment and design strategies such as dispelling rumors and post balanced messages to revert the public opinion. In this paper, we present a strategy of building statistical models from the social media dynamics to predict collective sentiment dynamics. We model the collective sentiment change without delving into micro analysis of individual tweets or users and their corresponding low level network structures. Experiments on large-scale Twitter data show that the model can achieve above 85% accuracy on directional sentiment prediction.",
"title": ""
},
{
"docid": "bd89993bebdbf80b516626881d459333",
"text": "Creating a mobile application often requires the developers to create one for Android och one for iOS, the two leading operating systems for mobile devices. The two applications may have the same layout and logic but several components of the user interface (UI) will differ and the applications themselves need to be developed in two different languages. This process is gruesome since it is time consuming to create two applications and it requires two different sets of knowledge. There have been attempts to create techniques, services or frameworks in order to solve this problem but these hybrids have not been able to provide a native feeling of the resulting applications. This thesis has evaluated the newly released framework React Native that can create both iOS and Android applications by compiling the code written in React. The resulting applications can share code and consists of the UI components which are unique for each platform. The thesis focused on Android and tried to replicate an existing Android application in order to measure user experience and performance. The result was surprisingly positive for React Native as some user could not tell the two applications apart and nearly all users did not mind using a React Native application. The performance evaluation measured GPU frequency, CPU load, memory usage and power consumption. Nearly all measurements displayed a performance advantage for the Android application but the differences were not protruding. The overall experience is that React Native a very interesting framework that can simplify the development process for mobile applications to a high degree. As long as the application itself is not too complex, the development is uncomplicated and one is able to create an application in very short time and be compiled to both Android and iOS. First of all I would like to express my deepest gratitude for Valtech who aided me throughout the whole thesis with books, tools and knowledge. They supplied me with two very competent consultants Alexander Lindholm and Tomas Tunström which made it possible for me to bounce off ideas and in the end having a great thesis. Furthermore, a big thanks to the other students at Talangprogrammet who have supported each other and me during this period of time and made it fun even when it was as most tiresome. Furthermore I would like to thank my examiner Erik Berglund at Linköpings university who has guided me these last months and provided with insightful comments regarding the paper. Ultimately I would like to thank my family who have always been there to support me and especially my little brother who is my main motivation in life.",
"title": ""
},
{
"docid": "ee9e24f38d7674e601ab13b73f3d37db",
"text": "This paper presents the design of an application specific hardware for accelerating High Frequency Trading applications. It is optimized to achieve the lowest possible latency for interpreting market data feeds and hence enable minimal round-trip times for executing electronic stock trades. The implementation described in this work enables hardware decoding of Ethernet, IP and UDP as well as of the FAST protocol which is a common protocol to transmit market feeds. For this purpose, we developed a microcode engine with a corresponding instruction set as well as a compiler which enables the flexibility to support a wide range of applied trading protocols. The complete system has been implemented in RTL code and evaluated on an FPGA. Our approach shows a 4x latency reduction in comparison to the conventional Software based approach.",
"title": ""
},
{
"docid": "5e503aaee94e2dc58f9311959d5a142e",
"text": "The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections. T INTRODLCTION HIS PAPER outlines a method for the application of the fast Fourier transform algorithm to the estimation of power spectra, which involves sectioning the record, taking modified periodograms of these sections, and averaging these modified periodo-grams. In many instances this method involves fewer computations than other methods. Moreover, it involves the transformation of sequences which are shorter than the whole record which is an advantage when computations are to be performed on a machine with limited core storage. Finally, it directly yields a potential resolution in the time dimension which is useful for testing and measuring nonstationarity. As will be pointed out, it is closely related to the method of complex demodulation described Let X(j), j= 0, N-1 be a sample from a stationary , second-order stochastic sequence. Assume for simplicity that E(X) 0. Let X(j) have spectral density Pcf), I f \\ 5%. We take segments, possibly overlapping, of length L with the starting points of these segments D units apart. Let X,(j),j=O, L 1 be the first such segment. Then Xdj) X($ and finally X&) X(j+ (K 1)D) j 0, ,L-1. We suppose we have K such segments; Xl(j), X,($, and that they cover the entire record, Le., that (K-1)DfL N. This segmenting is illustrated in Fig. 1. The method of estimation is as follows. For each segment of length L we calculate a modified periodo-gram. That is, we select a data window W(j), j= 0, L-1, and form the sequences Xl(j)W(j), X,(j) W(j). We then take the finite Fourier transforms A1(n), AK(~) of these sequences. Here ~k(n) xk(j) w(j)e-z~cijnlL 1 L-1 L j-0 and i= Finally, we obtain the K modified periodograms L U Ik(fn) I Ah(%) k 1, 2, K, where f n 0 , o-,L/2 n \" L and 1 Wyj). L j=o The spectral estimate is the average of these periodo",
"title": ""
},
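As a rough companion to the procedure in the preceding passage (segment, window, transform, average), here is a minimal Python/NumPy sketch. The segment length, hop, Hann window, and sampling-rate default are illustrative assumptions rather than values taken from the passage, and only the passage's L/U scaling is applied.

```python
import numpy as np

def welch_psd(x, L=256, D=128, fs=1.0):
    """Average of windowed-segment periodograms, following the passage's notation."""
    w = np.hanning(L)                          # data window W(j); the Hann window is an assumption
    U = np.sum(w ** 2) / L                     # U = (1/L) * sum_j W(j)^2
    periodograms = []
    for s in range(0, len(x) - L + 1, D):      # K segments whose starting points are D apart
        seg = x[s:s + L] * w                   # X_k(j) W(j)
        A = np.fft.rfft(seg) / L               # finite Fourier transform A_k(n)
        periodograms.append((L / U) * np.abs(A) ** 2)   # modified periodogram I_k(f_n)
    f = np.fft.rfftfreq(L, d=1.0 / fs)         # frequencies f_n = n/L (scaled by fs)
    return f, np.mean(periodograms, axis=0)    # spectral estimate: average over the K segments

# Illustrative use: zero-mean white noise should yield a roughly flat estimate.
freqs, psd = welch_psd(np.random.randn(4096))
```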
{
"docid": "8e7088af6940cf3c2baa9f6261b402be",
"text": "Empathy is an integral part of human social life, as people care about and for others who experience adversity. However, a specific “pathogenic” form of empathy, marked by automatic contagion of negative emotions, can lead to stress and burnout. This is particularly detrimental for individuals in caregiving professions who experience empathic states more frequently, because it can result in illness and high costs for health systems. Automatically recognizing pathogenic empathy from text is potentially valuable to identify at-risk individuals and monitor burnout risk in caregiving populations. We build a model to predict this type of empathy from social media language on a data set we collected of users’ Facebook posts and their answers to a new questionnaire measuring empathy. We obtain promising results in identifying individuals’ empathetic states from their social media (Pearson r = 0.252,",
"title": ""
},
{
"docid": "dcbec6eea7b3157285298f303eb78840",
"text": "Osteochondral tissue engineering has shown an increasing development to provide suitable strategies for the regeneration of damaged cartilage and underlying subchondral bone tissue. For reasons of the limitation in the capacity of articular cartilage to self-repair, it is essential to develop approaches based on suitable scaffolds made of appropriate engineered biomaterials. The combination of biodegradable polymers and bioactive ceramics in a variety of composite structures is promising in this area, whereby the fabrication methods, associated cells and signalling factors determine the success of the strategies. The objective of this review is to present and discuss approaches being proposed in osteochondral tissue engineering, which are focused on the application of various materials forming bilayered composite scaffolds, including polymers and ceramics, discussing the variety of scaffold designs and fabrication methods being developed. Additionally, cell sources and biological protein incorporation methods are discussed, addressing their interaction with scaffolds and highlighting the potential for creating a new generation of bilayered composite scaffolds that can mimic the native interfacial tissue properties, and are able to adapt to the biological environment.",
"title": ""
},
{
"docid": "5123d52a50b75e37e90ed7224d531a18",
"text": "Tarlov or perineural cysts are nerve root cysts found most commonly at the sacral spine level arising between covering layers of the perineurium and the endoneurium near the dorsal root ganglion. The cysts are relatively rare and most of them are asymptomatic. Some Tarlov cysts can exert pressure on nerve elements resulting in pain, radiculopathy and even multiple radiculopathy of cauda equina. There is no consensus on the appropriate therapeutic options of Tarlov cysts. The authors present a case of two sacral cysts diagnosed with magnetic resonance imaging. The initial symptoms were low back pain and sciatica and progressed to cauda equina syndrome. Surgical treatment was performed by sacral laminectomy and wide cyst fenestration. The neurological deficits were recovered and had not recurred after a follow-up period of nine months. The literature was reviewed and discussed. This is the first reported case in Thailand.",
"title": ""
},
{
"docid": "eebca83626e8568e8b92019541466873",
"text": "There is a need for new spectrum access protocols that are opportunistic, flexible and efficient, yet fair. Game theory provides a framework for analyzing spectrum access, a problem that involves complex distributed decisions by independent spectrum users. We develop a cooperative game theory model to analyze a scenario where nodes in a multi-hop wireless network need to agree on a fair allocation of spectrum. We show that in high interference environments, the utility space of the game is non-convex, which may make some optimal allocations unachievable with pure strategies. However, we show that as the number of channels available increases, the utility space becomes close to convex and thus optimal allocations become achievable with pure strategies. We propose the use of the Nash Bargaining Solution and show that it achieves a good compromise between fairness and efficiency, using a small number of channels. Finally, we propose a distributed algorithm for spectrum sharing and show that it achieves allocations reasonably close to the Nash Bargaining Solution.",
"title": ""
},
{
"docid": "ad78f226f21bd020e625659ad3ddbf74",
"text": "We study the approach to jamming in hard-sphere packings and, in particular, the pair correlation function g(2) (r) around contact, both theoretically and computationally. Our computational data unambiguously separate the narrowing delta -function contribution to g(2) due to emerging interparticle contacts from the background contribution due to near contacts. The data also show with unprecedented accuracy that disordered hard-sphere packings are strictly isostatic: i.e., the number of exact contacts in the jamming limit is exactly equal to the number of degrees of freedom, once rattlers are removed. For such isostatic packings, we derive a theoretical connection between the probability distribution of interparticle forces P(f) (f) , which we measure computationally, and the contact contribution to g(2) . We verify this relation for computationally generated isostatic packings that are representative of the maximally random jammed state. We clearly observe a maximum in P(f) and a nonzero probability of zero force, shedding light on long-standing questions in the granular-media literature. We computationally observe an unusual power-law divergence in the near-contact contribution to g(2) , persistent even in the jamming limit, with exponent -0.4 clearly distinguishable from previously proposed inverse-square-root divergence. Additionally, we present high-quality numerical data on the two discontinuities in the split-second peak of g(2) and use a shared-neighbor analysis of the graph representing the contact network to study the local particle clusters responsible for the peculiar features. Finally, we present the computational data on the contact contribution to g(2) for vacancy-diluted fcc crystal packings and also investigate partially crystallized packings along the transition from maximally disordered to fully ordered packings. We find that the contact network remains isostatic even when ordering is present. Unlike previous studies, we find that ordering has a significant impact on the shape of P(f) for small forces.",
"title": ""
},
{
"docid": "b57b392e89b92aecb03235eeaaf248c8",
"text": "Recent advances in semiconductor performance made possible by organic π-electron molecules, carbon-based nanomaterials, and metal oxides have been a central scientific and technological research focus over the past decade in the quest for flexible and transparent electronic products. However, advances in semiconductor materials require corresponding advances in compatible gate dielectric materials, which must exhibit excellent electrical properties such as large capacitance, high breakdown strength, low leakage current density, and mechanical flexibility on arbitrary substrates. Historically, conventional silicon dioxide (SiO2) has dominated electronics as the preferred gate dielectric material in complementary metal oxide semiconductor (CMOS) integrated transistor circuitry. However, it does not satisfy many of the performance requirements for the aforementioned semiconductors due to its relatively low dielectric constant and intransigent processability. High-k inorganics such as hafnium dioxide (HfO2) or zirconium dioxide (ZrO2) offer some increases in performance, but scientists have great difficulty depositing these materials as smooth films at temperatures compatible with flexible plastic substrates. While various organic polymers are accessible via chemical synthesis and readily form films from solution, they typically exhibit low capacitances, and the corresponding transistors operate at unacceptably high voltages. More recently, researchers have combined the favorable properties of high-k metal oxides and π-electron organics to form processable, structurally well-defined, and robust self-assembled multilayer nanodielectrics, which enable high-performance transistors with a wide variety of unconventional semiconductors. In this Account, we review recent advances in organic-inorganic hybrid gate dielectrics, fabricated by multilayer self-assembly, and their remarkable synergy with unconventional semiconductors. We first discuss the principals and functional importance of gate dielectric materials in thin-film transistor (TFT) operation. Next, we describe the design, fabrication, properties, and applications of solution-deposited multilayer organic-inorganic hybrid gate dielectrics, using self-assembly techniques, which provide bonding between the organic and inorganic layers. Finally, we discuss approaches for preparing analogous hybrid multilayers by vapor-phase growth and discuss the properties of these materials.",
"title": ""
},
{
"docid": "20fd36e287a631c82aa8527e6a36931f",
"text": "Creating a mesh is the first step in a wide range of applications, including scientific computing and computer graphics. An unstructured simplex mesh requires a choice of meshpoints (vertex nodes) and a triangulation. We want to offer a short and simple MATLAB code, described in more detail than usual, so the reader can experiment (and add to the code) knowing the underlying principles. We find the node locations by solving for equilibrium in a truss structure (using piecewise linear force-displacement relations) and we reset the topology by the Delaunay algorithm. The geometry is described implicitly by its distance function. In addition to being much shorter and simpler than other meshing techniques, our algorithm typically produces meshes of very high quality. We discuss ways to improve the robustness and the performance, but our aim here is simplicity. Readers can download (and edit) the codes from http://math.mit.edu/~persson/mesh.",
"title": ""
},
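A heavily reduced sketch of the loop described in the preceding passage (piecewise-linear truss forces for the node locations, topology reset by Delaunay triangulation) is given below for a unit disk. The signed distance function, the parameter values, and the omission of fixed boundary points, density-weighted rejection sampling, and termination tests are simplifying assumptions, not part of the original MATLAB code.

```python
import numpy as np
from scipy.spatial import Delaunay

def fd_disk(p):
    """Signed distance to the unit circle: negative inside, positive outside."""
    return np.sqrt((p ** 2).sum(axis=1)) - 1.0

def tiny_distmesh(n=300, h0=0.15, iters=60, dt=0.2, seed=0):
    rng = np.random.default_rng(seed)
    p = rng.uniform(-1.0, 1.0, size=(n, 2))
    p = p[fd_disk(p) < 0]                       # keep only points inside the geometry
    for _ in range(iters):
        tri = Delaunay(p)                       # reset the topology by Delaunay triangulation
        edges = set()                           # unique "bars" of the truss
        for a, b, c in tri.simplices:
            for i, j in ((a, b), (b, c), (c, a)):
                edges.add((min(i, j), max(i, j)))
        e = np.array(sorted(edges))
        bar = p[e[:, 0]] - p[e[:, 1]]
        length = np.maximum(np.sqrt((bar ** 2).sum(axis=1)), 1e-12)
        # piecewise-linear, repulsive-only force: bars shorter than h0 push their endpoints apart
        f = (np.maximum(h0 - length, 0.0) / length)[:, None] * bar
        move = np.zeros_like(p)
        np.add.at(move, e[:, 0], f)
        np.add.at(move, e[:, 1], -f)
        p = p + dt * move
        outside = fd_disk(p) > 0                # project escaped points back onto the circle
        norm = np.sqrt((p[outside] ** 2).sum(axis=1))[:, None]
        p[outside] = p[outside] / norm
    return p, Delaunay(p).simplices
```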
{
"docid": "5d934dd45e812336ad12cee90d1e8cdf",
"text": "As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post on Facebook more rather than Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
5b249be5ecd6332f3b560cd46fbf4d90
|
Chinese Grammatical Error Diagnosis with Long Short-Term Memory Networks
|
[
{
"docid": "b205346e003c429cd2b32dc759921643",
"text": "Sentence correction has been an important emerging issue in computer-assisted language learning. However, existing techniques based on grammar rules or statistical machine translation are still not robust enough to tackle the common errors in sentences produced by second language learners. In this paper, a relative position language model and a parse template language model are proposed to complement traditional language modeling techniques in addressing this problem. A corpus of erroneous English-Chinese language transfer sentences along with their corrected counterparts is created and manually judged by human annotators. Experimental results show that compared to a state-of-the-art phrase-based statistical machine translation system, the error correction performance of the proposed approach achieves a significant improvement using human evaluation.",
"title": ""
},
{
"docid": "aa80366addac8af9cc5285f98663b9b6",
"text": "Automatic detection of sentence errors is an important NLP task and is valuable to assist foreign language learners. In this paper, we investigate the problem of word ordering errors in Chinese sentences and propose classifiers to detect this type of errors. Word n-gram features in Google Chinese Web 5-gram corpus and ClueWeb09 corpus, and POS features in the Chinese POStagged ClueWeb09 corpus are adopted in the classifiers. The experimental results show that integrating syntactic features, web corpus features and perturbation features are useful for word ordering error detection, and the proposed classifier achieves 71.64% accuracy in the experimental datasets. 協助非中文母語學習者偵測中文句子語序錯誤 自動偵測句子錯誤是自然語言處理研究一項重要議題,對於協助外語學習者很有價值。在 這篇論文中,我們研究中文句子語序錯誤的問題,並提出分類器來偵測這種類型的錯誤。 在分類器中我們使用的特徵包括:Google 中文網路 5-gram 語料庫、與 ClueWeb09 語料庫 的中文詞彙 n-grams及中文詞性標注特徵。實驗結果顯示,整合語法特徵、網路語料庫特 徵、及擾動特徵對偵測中文語序錯誤有幫助。在實驗所用的資料集中,合併使用這些特徵 所得的分類器效能可達 71.64%。",
"title": ""
}
] |
[
{
"docid": "d5e54133fa5166f0e72884bd3501bbfb",
"text": "In order to explore the characteristics of the evolution behavior of the time-varying relationships between multivariate time series, this paper proposes an algorithm to transfer this evolution process to a complex network. We take the causality patterns as nodes and the succeeding sequence relations between patterns as edges. We used four time series as sample data. The results of the analysis reveal some statistical evidences that the causalities between time series is in a dynamic process. It implicates that stationary long-term causalities are not suitable for some special situations. Some short-term causalities that our model recognized can be referenced to the dynamic adjustment of the decisions. The results also show that weighted degree of the nodes obeys power law distribution. This implies that a few types of causality patterns play a major role in the process of the transition and that international crude oil market is statistically significantly not random. The clustering effect appears in the transition process and different clusters have different transition characteristics which provide probability information for predicting the evolution of the causality. The approach presents a potential to analyze multivariate time series and provides important information for investors and decision makers.",
"title": ""
},
{
"docid": "f6bb2c30fb95a8d120b525875bc2fda6",
"text": "We propose a method to learn deep ReLU-based classifiers that are provably robust against normbounded adversarial perturbations on the training data. For previously unseen examples, the approach is guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well. The basic idea is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a number of tasks to train classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a convolutional classifier that provably has less than 5.8% test error for any adversarial attack with bounded `∞ norm less than = 0.1), and code for all experiments is available at http://github.com/ locuslab/convex_adversarial. Machine Learning Department, Carnegie Mellon University, Pittsburgh PA, 15213, USA Computer Science Department, Carnegie Mellon University, Pittsburgh PA, 15213, USA. Correspondence to: Eric Wong <[email protected]>, J. Zico Kolter <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "4ad1aa5086c15be3d5ba9d692d1772a2",
"text": "We demonstrate the feasibility of detecting pathology in chest x-rays using deep learning approaches based on non-medical learning. Convolutional neural networks (CNN) learn higher level image representations. In this work we explore the features extracted from layers of the CNN along with a set of classical features, including GIST and bag-ofwords (BoW). We show results of classification using each feature set as well as fusing among the features. Finally, we perform feature selection on the collection of features to show the most informative feature set for the task. Results of 0.78-0.95 AUC for various pathologies are shown on a dataset of more than 600 radiographs. This study shows the strength and robustness of the CNN features. We conclude that deep learning with large scale nonmedical image databases may be a good substitute, or addition to domain specific representations which are yet to be available for general medical image recognition tasks.",
"title": ""
},
{
"docid": "980565c38859db2df10db238d8a4dc61",
"text": "Performing High Voltage (HV) tasks with a multi craft work force create a special set of safety circumstances. This paper aims to present vital information relating to when it is acceptable to use a single or a two-layer soil structure. Also it discusses the implication of the high voltage infrastructure on the earth grid and the safety of this implication under a single or a two-layer soil structure. A multiple case study is investigated to show the importance of using the right soil resistivity structure during the earthing system design. Keywords—Earth Grid, EPR, High Voltage, Soil Resistivity Structure, Step Voltage, Touch Voltage.",
"title": ""
},
{
"docid": "8b9e4490a1e9a70d9bb35a9c87a391d4",
"text": "The latest advances in eHealth and mHealth have propitiated the rapidly creation and expansion of mobile applications for health care. One of these types of applications are the clinical decision support systems, which nowadays are being implemented in mobile apps to facilitate the access to health care professionals in their daily clinical decisions. The aim of this paper is twofold. Firstly, to make a review of the current systems available in the literature and in commercial stores. Secondly, to analyze a sample of applications in order to obtain some conclusions and recommendations. Two reviews have been done: a literature review on Scopus, IEEE Xplore, Web of Knowledge and PubMed and a commercial review on Google play and the App Store. Five applications from each review have been selected to develop an in-depth analysis and to obtain more information about the mobile clinical decision support systems. Ninety-two relevant papers and 192 commercial apps were found. Forty-four papers were focused only on mobile clinical decision support systems. One hundred seventy-one apps were available on Google play and 21 on the App Store. The apps are designed for general medicine and 37 different specialties, with some features common in all of them despite of the different medical fields objective. The number of mobile clinical decision support applications and their inclusion in clinical practices has risen in the last years. However, developers must be careful with their interface or the easiness of use, which can impoverish the experience of the users.",
"title": ""
},
{
"docid": "5b545c14a8784383b8d921eb27991749",
"text": "In this chapter, neural networks are used to predict the future stock prices and develop a suitable trading system. Wavelet analysis is used to de-noise the time series and the results are compared with the raw time series prediction without wavelet de-noising. Standard and Poor 500 (S&P 500) is used in experiments. We use a gradual data sub-sampling technique, i.e., training the network mostly with recent data, but without neglecting past data. In addition, effects of NASDAQ 100 are studied on prediction of S&P 500. A daily trading strategy is employed to buy/sell according to the predicted prices and to calculate the directional efficiency and the rate of returns for different periods. There are numerous exchange traded funds (ETF’s), which attempt to replicate the performance of S&P 500 by holding the same stocks in the same proportions as the index, and therefore, giving the same percentage returns as S&P 500. Therefore, this study can be used to help invest in any of the various ETFs, which replicates the performance of S&P 500. The experimental results show that neural networks, with appropriate training and input data, can be used to achieve high profits by investing in ETFs based on S&P 500.",
"title": ""
},
{
"docid": "e8bf5fbe2ec29e0ea7ef6a368a54147e",
"text": "In this paper a combined Ground Penetrating Radar (GPR) and Synthetic Aperture Radar (SAR) technique is introduced, which considers the soil surface refraction and the wave propagation in the ground. By using Fermat's principle and the Sober operator, the SAR image of the GPR data is optimized, whereas the soil's permittivity is estimated. The theoretical approach is discussed thoroughly and measurements that were carried out on a test sand box verify the proposed technique.",
"title": ""
},
{
"docid": "1e7b1bbaba8b9f9a1e28db42e18c23bf",
"text": "To use their pool of resources efficiently, distributed stream-processing systems push query operators to nodes within the network. Currently, these operators, ranging from simple filters to custom business logic, are placed manually at intermediate nodes along the transmission path to meet application-specific performance goals. Determining placement locations is challenging because network and node conditions change over time and because streams may interact with each other, opening venues for reuse and repositioning of operators. This paper describes a stream-based overlay network (SBON), a layer between a stream-processing system and the physical network that manages operator placement for stream-processing systems. Our design is based on a cost space, an abstract representation of the network and on-going streams, which permits decentralized, large-scale multi-query optimization decisions. We present an evaluation of the SBON approach through simulation, experiments on PlanetLab, and an integration with Borealis, an existing stream-processing engine. Our results show that an SBON consistently improves network utilization, provides low stream latency, and enables dynamic optimization at low engineering cost.",
"title": ""
},
{
"docid": "4beac4e75474bdda0f0d005e5d235f90",
"text": "We present a neural transducer model with visual attention that learns to generate LATEX markup of a real-world math formula given its image. Applying sequence modeling and transduction techniques that have been very successful across modalities such as natural language, image, handwriting, speech and audio; we construct an image-to-markup model that learns to produce syntactically and semantically correct LATEX markup code over 150 words long and achieves a BLEU score of 89%; improving upon the previous state-of-art for the Im2Latex problem. We also demonstrate with heat-map visualization how attention helps in interpreting the model and can pinpoint (localize) symbols on the image accurately despite having been trained without any bounding box data.",
"title": ""
},
{
"docid": "986f55bb12d71e534e1e2fe970f610fb",
"text": "Code corpora, as observed in large software systems, are now known to be far more repetitive and predictable than natural language corpora. But why? Does the difference simply arise from the syntactic limitations of programming languages? Or does it arise from the differences in authoring decisions made by the writers of these natural and programming language texts? We conjecture that the differences are not entirely due to syntax, but also from the fact that reading and writing code is un-natural for humans, and requires substantial mental effort; so, people prefer to write code in ways that are familiar to both reader and writer. To support this argument, we present results from two sets of studies: 1) a first set aimed at attenuating the effects of syntax, and 2) a second, aimed at measuring repetitiveness of text written in other settings (e.g. second language, technical/specialized jargon), which are also effortful to write. We find that this repetition in source code is not entirely the result of grammar constraints, and thus some repetition must result from human choice. While the evidence we find of similar repetitive behavior in technical and learner corpora does not conclusively show that such language is used by humans to mitigate difficulty, it is consistent with that theory. This discovery of “non-syntactic” repetitive behaviour is actionable, and can be leveraged for statistically significant improvements on the code suggestion task. We discuss this finding, and other future implications on practice, and for research.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "e2a5f57497e57881092e33c6ab3ec817",
"text": "Doc2Sent2Vec is an unsupervised approach to learn low-dimensional feature vector (or embedding) for a document. This embedding captures the semantics of the document and can be fed as input to machine learning algorithms to solve a myriad number of applications in the field of data mining and information retrieval. Some of these applications include document classification, retrieval, and ranking.\n The proposed approach is two-phased. In the first phase, the model learns a vector for each sentence in the document using a standard word-level language model. In the next phase, it learns the document representation from the sentence sequence using a novel sentence-level language model. Intuitively, the first phase captures the word-level coherence to learn sentence embeddings, while the second phase captures the sentence-level coherence to learn document embeddings. Compared to the state-of-the-art models that learn document vectors directly from the word sequences, we hypothesize that the proposed decoupled strategy of learning sentence embeddings followed by document embeddings helps the model learn accurate and rich document representations.\n We evaluate the learned document embeddings by considering two classification tasks: scientific article classification and Wikipedia page classification. Our model outperforms the current state-of-the-art models in the scientific article classification task by ?12.07% and the Wikipedia page classification task by ?6.93%, both in terms of F1 score. These results highlight the superior quality of document embeddings learned by the Doc2Sent2Vec approach.",
"title": ""
},
{
"docid": "3405c4808237f8d348db27776d6e9b61",
"text": "Pheochromocytomas are catecholamine-releasing tumors that can be found in an extraadrenal location in 10% of the cases. Almost half of all pheochromocytomas are now discovered incidentally during cross-sectional imaging for unrelated causes. We present a case of a paragaglioma of the organ of Zuckerkandl that was discovered incidentally during a magnetic resonance angiogram performed for intermittent claudication. Subsequent investigation with computed tompgraphy and I-123 metaiodobenzylguanine scintigraphy as well as an overview of the literature are also presented.",
"title": ""
},
{
"docid": "cd13c8d9b950c35c73aeaadd2cfa1efb",
"text": "The significant worldwide increase in observed river runoff has been tentatively attributed to the stomatal \"antitranspirant\" response of plants to rising atmospheric CO(2) [Gedney N, Cox PM, Betts RA, Boucher O, Huntingford C, Stott PA (2006) Nature 439: 835-838]. However, CO(2) also is a plant fertilizer. When allowing for the increase in foliage area that results from increasing atmospheric CO(2) levels in a global vegetation model, we find a decrease in global runoff from 1901 to 1999. This finding highlights the importance of vegetation structure feedback on the water balance of the land surface. Therefore, the elevated atmospheric CO(2) concentration does not explain the estimated increase in global runoff over the last century. In contrast, we find that changes in mean climate, as well as its variability, do contribute to the global runoff increase. Using historic land-use data, we show that land-use change plays an additional important role in controlling regional runoff values, particularly in the tropics. Land-use change has been strongest in tropical regions, and its contribution is substantially larger than that of climate change. On average, land-use change has increased global runoff by 0.08 mm/year(2) and accounts for approximately 50% of the reconstructed global runoff trend over the last century. Therefore, we emphasize the importance of land-cover change in forecasting future freshwater availability and climate.",
"title": ""
},
{
"docid": "21e17ad2d2a441940309b7eacd4dec6e",
"text": "ÐWith a huge amount of data stored in spatial databases and the introduction of spatial components to many relational or object-relational databases, it is important to study the methods for spatial data warehousing and OLAP of spatial data. In this paper, we study methods for spatial OLAP, by integration of nonspatial OLAP methods with spatial database implementation techniques. A spatial data warehouse model, which consists of both spatial and nonspatial dimensions and measures, is proposed. Methods for computation of spatial data cubes and analytical processing on such spatial data cubes are studied, with several strategies proposed, including approximation and selective materialization of the spatial objects resulted from spatial OLAP operations. The focus of our study is on a method for spatial cube construction, called object-based selective materialization, which is different from cuboid-based selective materialization proposed in previous studies of nonspatial data cube construction. Rather than using a cuboid as an atomic structure during the selective materialization, we explore granularity on a much finer level, that of a single cell of a cuboid. Several algorithms are proposed for object-based selective materialization of spatial data cubes and the performance study has demonstrated the effectiveness of these techniques. Index TermsÐData warehouse, data mining, online analytical processing (OLAP), spatial databases, spatial data analysis, spatial",
"title": ""
},
{
"docid": "48f25218a45d12907dba7b42b2148a40",
"text": "Cross-site scripting (XSS) vulnerabilities are among the most common and serious web application vulnerabilities. It is challenging to eliminate XSS vulnerabilities because it is difficult for web applications to sanitize all user input appropriately. We present Noncespaces, a technique that enables web clients to distinguish between trusted and untrusted content to prevent exploitation of XSS vulnerabilities. Using Noncespaces, a web application randomizes the the (X)HTML tags and attributes in each document before delivering it to the client. As long as the attacker is unable to guess the random mapping, the client can distinguish between trusted content created by the web application and untrusted content provided by an attacker. To implement Noncespaces with minimal changes to web applications, we leverage a popular web application architecture to automatically apply Noncespaces to static content processed through a popular PHP template engine. We design a policy language for Noncespaces, implement a training mode to assist policy development, and conduct extensive security testing of a generated policy for two large web applications to show the effectiveness of our technique. a 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
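The randomization mechanism described in the Noncespaces abstract above lends itself to a small illustration: prefix every application-generated (X)HTML tag with a per-response secret so that injected markup, which lacks the secret, can be singled out by the client. The sketch below is our own simplified rendering (regex rewriting, a fixed whitelist, hypothetical names), not the Noncespaces implementation.

```python
import re
import secrets

TRUSTED_TAGS = {"html", "body", "div", "p", "a", "span", "ul", "li"}

def randomize_tags(html, nonce):
    """Prefix every whitelisted tag with a per-response random nonce.

    Content emitted by the application carries the nonce; a tag injected by
    an attacker (who cannot guess the nonce) will lack it, so a client-side
    policy can treat it as untrusted.
    """
    def rewrite(match):
        slash, tag = match.group(1), match.group(2)
        if tag.lower() in TRUSTED_TAGS:
            return "<{}r{}-{}".format(slash, nonce, tag)
        return match.group(0)

    return re.sub(r"<(/?)([A-Za-z][A-Za-z0-9]*)", rewrite, html)

nonce = secrets.token_hex(6)                 # fresh secret for each response
page = randomize_tags("<p>hello <blink>world</blink></p>", nonce)
# only the whitelisted <p> tags receive the nonce prefix; the injected
# <blink> element stays unprefixed and is therefore recognisable as untrusted
```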
{
"docid": "8970ace14fef5499de4bf810ab66c7ce",
"text": "Glioblastoma multiforme is the most common primary malignant brain tumour, with a median survival of about one year. This poor prognosis is due to therapeutic resistance and tumour recurrence after surgical removal. Precisely how recurrence occurs is unknown. Using a genetically engineered mouse model of glioma, here we identify a subset of endogenous tumour cells that are the source of new tumour cells after the drug temozolomide (TMZ) is administered to transiently arrest tumour growth. A nestin-ΔTK-IRES-GFP (Nes-ΔTK-GFP) transgene that labels quiescent subventricular zone adult neural stem cells also labels a subset of endogenous glioma tumour cells. On arrest of tumour cell proliferation with TMZ, pulse-chase experiments demonstrate a tumour re-growth cell hierarchy originating with the Nes-ΔTK-GFP transgene subpopulation. Ablation of the GFP+ cells with chronic ganciclovir administration significantly arrested tumour growth, and combined TMZ and ganciclovir treatment impeded tumour development. Thus, a relatively quiescent subset of endogenous glioma cells, with properties similar to those proposed for cancer stem cells, is responsible for sustaining long-term tumour growth through the production of transient populations of highly proliferative cells.",
"title": ""
},
{
"docid": "9bb86141611c54978033e2ea40f05b15",
"text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.",
"title": ""
},
{
"docid": "75177326b8408f755100bf86e1f8bd90",
"text": "We propose a general method for constructing Tanner graphs having a large girth by establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) algorithm. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. Simple variations of the PEG algorithm can also be applied to generate linear-time encodeable LDPC codes. Regular and irregular LDPC codes using PEG Tanner graphs and allowing symbol nodes to take values over GF(q) (q>2) are investigated. Simulation results show that the PEG algorithm is a powerful algorithm to generate good short-block-length LDPC codes.",
"title": ""
},
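As a rough illustration of the edge-by-edge construction idea behind PEG (not the exact procedure from the paper), the sketch below grows a bipartite Tanner graph by giving each symbol node edges to check nodes chosen to keep cycles long. Function and variable names are our own; the published algorithm also expands a subgraph to a fixed depth and uses more careful tie-breaking.

```python
from collections import deque

def peg_construct(n_symbols, n_checks, symbol_degrees):
    """Toy progressive edge-growth: returns a list of (symbol, check) edges.

    For each new edge of a symbol node we pick a check node that is not yet
    reachable from it in the graph built so far (keeping the local girth as
    large as possible); if every check is already reachable, we fall back to
    the lowest-degree check not yet connected to this symbol.
    """
    sym_adj = [[] for _ in range(n_symbols)]
    chk_adj = [[] for _ in range(n_checks)]
    edges = []

    def reachable_checks(s):
        """Breadth-first search over the bipartite graph starting at symbol s."""
        seen_s, seen_c = {s}, set()
        frontier = deque([("s", s)])
        while frontier:
            kind, v = frontier.popleft()
            neighbours = sym_adj[v] if kind == "s" else chk_adj[v]
            for w in neighbours:
                if kind == "s" and w not in seen_c:
                    seen_c.add(w)
                    frontier.append(("c", w))
                elif kind == "c" and w not in seen_s:
                    seen_s.add(w)
                    frontier.append(("s", w))
        return seen_c

    for s in range(n_symbols):
        for _ in range(symbol_degrees[s]):
            seen = reachable_checks(s)
            candidates = [c for c in range(n_checks) if c not in seen]
            if not candidates:  # everything reachable: avoid duplicate edges only
                candidates = [c for c in range(n_checks) if c not in sym_adj[s]]
            c = min(candidates, key=lambda c: len(chk_adj[c]))
            sym_adj[s].append(c)
            chk_adj[c].append(s)
            edges.append((s, c))
    return edges

# example: a small regular graph with 8 degree-2 symbol nodes and 4 check nodes
# edges = peg_construct(n_symbols=8, n_checks=4, symbol_degrees=[2] * 8)
```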
{
"docid": "3b9c658245726acdb246e984cae666c5",
"text": "In pursuing a refined Learning Styles Inventory (LSI), Kolb has moved away from the original cyclical nature of his model of experiential learning. Kolb’s model has not adapted to current research and has failed to increase understanding of learning. A critical examination of Kolb’s experiential learning theory in terms of epistemology, educational neuroscience, and model analysis reveals the need for an experiential learning theory that addresses these issues. This article re-conceptualizes experiential learning by building from cognitive neuroscience, Dynamic Skill Theory, and effective experiential education practices into a self-adjusting fractal-like cycle that we call CoConstructed Developmental Teaching Theory (CDTT). CDTT is a biologically driven model of teaching. It is a cohesive framework of ideas that have been presented before but not linked in a coherent manner to the biology of the learning process. In addition, it orders the steps in a neurobiologically supported sequence. CDTT opens new avenues of research utilizing evidenced-based teaching practices and provides a basis for a new conversation. However, thorough testing remains.",
"title": ""
}
] |
scidocsrr
|
8d5b3c58c516701d54f793797ccf132c
|
Splicebuster: A new blind image splicing detector
|
[
{
"docid": "80f88101ea4d095a0919e64b7db9cadb",
"text": "The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time in the manner of Video Google [28]. We make the following three contributions: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel method for query expansion where a richer model for the query is learnt discriminatively in a form suited to immediate retrieval through efficient use of the inverted index; (iii) an improvement of the image augmentation method proposed by Turcot and Lowe [29], where only the augmenting features which are spatially consistent with the augmented image are kept. We evaluate these three methods over a number of standard benchmark datasets (Oxford Buildings 5k and 105k, and Paris 6k) and demonstrate substantial improvements in retrieval performance whilst maintaining immediate retrieval speeds. Combining these complementary methods achieves a new state-of-the-art performance on these datasets.",
"title": ""
},
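The RootSIFT comparison mentioned in contribution (i) above is, in essence, an element-wise transform of the SIFT descriptor: L1-normalise and take the square root, so that Euclidean distance on the transformed vectors corresponds to the Hellinger kernel on the originals. A minimal NumPy sketch of that step (our own wording, with an assumed (N, 128) input layout):

```python
import numpy as np

def root_sift(descriptors, eps=1e-12):
    """Map SIFT descriptors to RootSIFT: L1-normalise, then element-wise sqrt.

    Euclidean distance between RootSIFT vectors equals the Hellinger distance
    between the original (L1-normalised) SIFT histograms.
    """
    d = np.asarray(descriptors, dtype=np.float64)
    d = d / (np.abs(d).sum(axis=1, keepdims=True) + eps)   # L1 normalisation
    return np.sqrt(d)

# usage: compute matching distances on root_sift(d1) and root_sift(d2)
# instead of on the raw SIFT descriptors; storage cost is unchanged
```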
{
"docid": "213387a29384a2974b09bfef3085e63e",
"text": "The ease of creating image forgery using image-splicing techniques will soon make our naive trust on image authenticity a tiling of the past. In prior work, we observed the capability of the bicoherence magnitude and phase features for image splicing detection. To bridge the gap between empirical observations and theoretical justifications, in this paper, an image-splicing model based on the idea of bipolar signal perturbation is proposed and studied. A theoretical analysis of the model leads to propositions and predictions consistent with the empirical observations.",
"title": ""
}
] |
[
{
"docid": "d67b7f0595bfa17a9c83c8c125eeef46",
"text": "Thanks to the proliferation of Online Social Networks (OSNs) and Linked Data, graph data have been constantly increasing, reaching massive scales and complexity. Thus, tools to store and manage such data efficiently are absolutely essential. To address this problem, various technologies have been employed, such as relational, object and graph databases. In this paper we present a benchmark that evaluates graph databases with a set of workloads, inspired from OSN mining use case scenarios. In addition to standard network operations, the paper focuses on the problem of community detection and we propose the adaptation of the Louvain method on top of graph databases. The paper reports a comprehensive comparative evaluation between three popular graph databases, Titan, OrientDB and Neo4j. Our experimental results show that, in the current development status, OrientDB is the fastest solution with respect to the Louvain method, while Neo4j performs the query workloads fastest. Moreover, Neo4j and Titan handle better massive and single insertion operations respectively.",
"title": ""
},
{
"docid": "52c7ac92b5da3b37e3d657afa3e06377",
"text": "Research on implicit cognition and addiction has expanded greatly during the past decade. This research area provides new ways to understand why people engage in behaviors that they know are harmful or counterproductive in the long run. Implicit cognition takes a different view from traditional cognitive approaches to addiction by assuming that behavior is often not a result of a reflective decision that takes into account the pros and cons known by the individual. Instead of a cognitive algebra integrating many cognitions relevant to choice, implicit cognition assumes that the influential cognitions are the ones that are spontaneously activated during critical decision points. This selective review highlights many of the consistent findings supporting predictive effects of implicit cognition on substance use and abuse in adolescents and adults; reveals a recent integration with dual-process models; outlines the rapid evolution of different measurement tools; and introduces new routes for intervention.",
"title": ""
},
{
"docid": "16c87d75564404d52fc2abac55297931",
"text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.",
"title": ""
},
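The Linear Population Size Reduction (LPSR) component named above reduces to a simple schedule: the population shrinks linearly from its initial size to a minimum as the evaluation budget is spent, and the worst individuals are discarded whenever the target drops. A hedged sketch of that schedule (variable names are ours, not the paper's):

```python
def lpsr_target_size(n_init, n_min, evals_used, evals_max):
    """Linear Population Size Reduction: target population size after
    `evals_used` of `evals_max` fitness evaluations."""
    frac = min(evals_used / evals_max, 1.0)
    return round(n_init - (n_init - n_min) * frac)

def shrink(population, fitness, target):
    """Keep the `target` best individuals (lower fitness = better, minimisation)."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    keep = order[:target]
    return [population[i] for i in keep], [fitness[i] for i in keep]

# per generation: target = lpsr_target_size(n_init, n_min, evals_used, evals_max)
# and, if len(population) > target, population, fitness = shrink(population, fitness, target)
```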
{
"docid": "f3471acc1405bbd9546cc8ec42267053",
"text": "The authors examined the association between semen quality and caffeine intake among 2,554 young Danish men recruited when they were examined to determine their fitness for military service in 2001-2005. The men delivered a semen sample and answered a questionnaire including information about caffeine intake from various sources, from which total caffeine intake was calculated. Moderate caffeine and cola intakes (101-800 mg/day and < or =14 0.5-L bottles of cola/week) compared with low intake (< or =100 mg/day, no cola intake) were not associated with semen quality. High cola (>14 0.5-L bottles/week) and/or caffeine (>800 mg/day) intake was associated with reduced sperm concentration and total sperm count, although only significant for cola. High-intake cola drinkers had an adjusted sperm concentration and total sperm count of 40 mill/mL (95% confidence interval (CI): 32, 51) and 121 mill (95% CI: 92, 160), respectively, compared with 56 mill/mL (95% CI: 50, 64) and 181 mill (95% CI: 156, 210) in non-cola-drinkers, which could not be attributed to the caffeine they consumed because it was <140 mg/day. Therefore, the authors cannot exclude the possibility of a threshold above which cola, and possibly caffeine, negatively affects semen quality. Alternatively, the less healthy lifestyle of these men may explain these findings.",
"title": ""
},
{
"docid": "a35987d8f93b12eca04350d3ec7e1b4a",
"text": "The volume and quality of data, but also their relevance, are crucial when performing data analysis. In this paper, a study of the influence of different types of data is presented, particularly in the context of educational data obtained from Learning Management Systems (LMSs). These systems provide a large amount of data from the student activity but they usually do not describe the results of the learning process, i.e., they describe the behaviour but not the learning results. The starting hypothesis states that complementing behavioural data with other more relevant data (regarding learning outcomes) can lead to a better analysis of the learning process, that is, in particular it is possible to early predict the student final performance. A learning platform has been specially developed to collect data not just from the usage but also related to the way students learn and progress in training activities. Data of both types are used to build a progressive predictive system for helping in the learning process. This model is based on a classifier that uses the Support Vector Machine technique. As a result, the system obtains a weekly classification of each student as the probability of belonging to one of three classes: high, medium and low performance. The results show that, supplementing behavioural data with learning data allows us to obtain better predictions about the results of the students in a learning system. Moreover, it can be deduced that the use of heterogeneous data enriches the final performance of the prediction algorithms.",
"title": ""
},
{
"docid": "b85330c2d0816abe6f28fd300e5f9b75",
"text": "This paper presents a novel dual polarized planar aperture antenna using the low-temperature cofired ceramics technology to realize a novel antenna-in-package for a 60-GHz CMOS differential transceiver chip. Planar aperture antenna technology ensures high gain and wide bandwidth. Differential feeding is adopted to be compatible with the chip. Dual polarization makes the antenna function as a pair of single polarized antennas but occupies much less area. The antenna is ±45° dual polarized, and each polarization acts as either a transmitting (TX) or receiving (RX) antenna. This improves the signal-to-noise ratio of the wireless channel in a point-to-point communication, because the TX/RX polarization of one antenna is naturally copolarized with the RX/TX polarization of the other antenna. A prototype of the proposed antenna is designed, fabricated, and measured, whose size is 12 mm × 12 mm × 1.128 mm (2.4λ0 × 2.4λ0 × 0.226λ0). The measurement shows that the -10 dB impedance bandwidth covers the entire 60 GHz unlicensed band (57-64 GHz) for both polarizations. Within the bandwidth, the isolation between the ports of the two polarizations is better than 26 dB, and the gain is higher than 10 dBi with a peak of around 12 dBi for both polarizations.",
"title": ""
},
{
"docid": "9dbb1b0b6a35bd78b35982a4957cdec4",
"text": "Many modern Web-services ignore existing Web-standards and develop their own interfaces to publish their services. This reduces interoperability and increases network latency, which in turn reduces scalability of the service. The Web grew from a few thousand requests per day to million requests per hour without significant loss of performance. Applying the same architecture underlying the modern Web to Web-services could improve existing and forthcoming applications. REST is the idealized model of the interactions within an Web-application and became the foundation of the modern Web-architecture, it has been designed to meet the needs of Internet-scale distributed hypermedia systems by emphasizing scalability, generality of interfaces, independent deployment and allowing intermediary components to reduce network latency.",
"title": ""
},
{
"docid": "67250ebd6a3c2c0e28182c8bc6ba57cc",
"text": "Financial support: None Conflict of interest: None ABSTRACT The observation of mucous membranes should be part of a dermatological examination. It is known that early diagnosis is critical for the prognosis of patients with malignant melanocytic lesions. Nevertheless, integrating this step into the examination routine and performing a differential diagnosis between benign and malignant mucosal lesions with only clinical signs, are great challenges. Dermoscopy is still seldom-used for pigmented lesions of mucous membranes, however recent studies have shown its potential. In light of a case of melanoma of the lip, the authors provide tips and data from the literature that highlight the usefulness of the technique, and support the use of dermoscopic examination in the dermatologist’s routine.",
"title": ""
},
{
"docid": "c7afa12d10877eb7397176f2c4ab143e",
"text": "Software-defined networking (SDN) has received a great deal of attention from both academia and industry in recent years. Studies on SDN have brought a number of interesting technical discussions on network architecture design, along with scientific contributions. Researchers, network operators, and vendors are trying to establish new standards and provide guidelines for proper implementation and deployment of such novel approach. It is clear that many of these research efforts have been made in the southbound of the SDN architecture, while the northbound interface still needs improvements. By focusing in the SDN northbound, this paper surveys the body of knowledge and discusses the challenges for developing SDN software. We investigate the existing solutions and identify trends and challenges on programming for SDN environments. We also discuss future developments on techniques, specifications, and methodologies for programmable networks, with the orthogonal view from the software engineering discipline.",
"title": ""
},
{
"docid": "03bddfeabe8f9a6e9f333659d028c038",
"text": "This paper presents a methodology for the evaluation of table understanding algorithms for PDF documents. The evaluation takes into account three major tasks: table detection, table structure recognition and functional analysis. We provide a general and flexible output model for each task along with corresponding evaluation metrics and methods. We also present a methodology for collecting and ground-truthing PDF documents based on consensus-reaching principles and provide a publicly available ground-truthed dataset.",
"title": ""
},
{
"docid": "c8d9ec6aa63b783e4c591dccdbececcf",
"text": "The use of context is critical for scene understanding in computer vision, where the recognition of an object is driven by both local appearance and the object’s relationship to other elements of the scene (context). Most current approaches rely on modeling the relationships between object categories as a source of context. In this paper we seek to move beyond categories to provide a richer appearancebased model of context. We present an exemplar-based model of objects and their relationships, the Visual Memex, that encodes both local appearance and 2D spatial context between object instances. We evaluate our model on Torralba’s proposed Context Challenge against a baseline category-based system. Our experiments suggest that moving beyond categories for context modeling appears to be quite beneficial, and may be the critical missing ingredient in scene understanding systems.",
"title": ""
},
{
"docid": "b4e3d746969860c7a3946487a2609a03",
"text": "People tracking is a key technology for autonomous systems, intelligent cars and social robots operating in populated environments. What makes the task di fficult is that the appearance of humans in range data can change drastically as a function of body pose, distance to the sensor, self-occlusion and occlusion by other objects. In this paper we propose a novel approach to pedestrian detection in 3D range data based on supervised learning techniques to create a bank of classifiers for different height levels of the human body. In particular, our approach applies AdaBoost to train a strong classifier from geometrical and statistical features of groups of neighboring points at the same height. In a second step, the AdaBoost classifiers mutually enforce their evidence across di fferent heights by voting into a continuous space. Pedestrians are finally found efficiently by mean-shift search for local maxima in the voting space. Experimental results carried out with 3D laser range data illustrate the robustness and e fficiency of our approach even in cluttered urban environments. The learned people detector reaches a classification rate up to 96% from a single 3D scan.",
"title": ""
},
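The final stage described above (height-level classifiers vote into a continuous space, pedestrians are found as local maxima via mean-shift) can be sketched generically. The code below is our own simplified rendering with a Gaussian kernel and assumed parameter names, not the authors' implementation.

```python
import numpy as np

def mean_shift_modes(votes, weights, bandwidth=0.5, iters=20):
    """Find local maxima of a weighted 2-D vote distribution by mean-shift.

    votes   : (N, 2) array of (x, y) positions voted for by the classifiers
    weights : (N,) classifier confidences
    Returns one converged mode per starting vote; nearby modes can then be
    merged into final pedestrian hypotheses.
    """
    modes = votes.astype(float).copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            d2 = ((votes - m) ** 2).sum(axis=1)
            k = weights * np.exp(-d2 / (2.0 * bandwidth ** 2))   # Gaussian kernel
            modes[i] = (k[:, None] * votes).sum(axis=0) / k.sum()
    return modes

# usage: modes = mean_shift_modes(vote_positions, vote_confidences, bandwidth=0.5)
```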
{
"docid": "b34dbcd4a852e55b698df76d73afe0e9",
"text": "We present a new method for automatically detecting circular objects in images: we detect an osculating circle to an elliptic arc using a Hough transform, iteratively deforming it into an ellipse, removing outlier pixels, and searching for a separate edge. The voting space is restricted to one and two dimensions for efficiency, and special weighting schemes are introduced to enhance the accuracy. We demonstrate the effectiveness of our method using real images. Finally, we apply our method to the calibration of a turntable for 3-D object shape reconstruction.",
"title": ""
},
{
"docid": "b5d54f10aebd99d898dfb52d75e468e6",
"text": "As the technology to secure information improves, hackers will employ less technical means to get access to unauthorized data. The use of Social Engineering as a non tech method of hacking has been increasingly used during the past few years. There are different types of social engineering methods reported but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them.",
"title": ""
},
{
"docid": "78586d909ba8f5f200ffb0a8dc8ecd0a",
"text": "Recent advances in the field of 3D printing have utilized embedded electronic interconnects in order to construct advanced electronic devices. This work builds on these advances in order to construct and characterize arbitrarily formed capacitive sensors using fine-pitch copper mesh and embedded copper wires. Three varieties of sensors were fabricated and tested, including a small area wire sensor (320μm width), a large area mesh sensor (2cm2), and a fully embedded demonstration model. In order to test and characterize these sensors in FDM materials, three distinct tests were explored. Specifically, the sensors were able to distinguish between three metallic materials and distinguish salt water from distilled water. These capacitive sensors have many potential sensing applications, such as biomedical sensing, human interface devices, material sensing, electronics characterization, and environmental sensing. As such, this work specifically examines optimum mesh/wire capacitive parameters as well as potential applications such as 3D printed integrated material sensing.",
"title": ""
},
{
"docid": "5acad83ce99c6403ef20bfa62672eafd",
"text": "A large class of sequential decision-making problems under uncertainty can be modeled as Markov and Semi-Markov Decision Problems, when their underlying probability structure has a Markov chain. They may be solved by using classical dynamic programming methods. However, dynamic programming methods suffer from the curse of dimensionality and break down rapidly in face of large state spaces. In addition, dynamic programming methods require the exact computation of the so-called transition probabilities, which are often hard to obtain and are hence said to suffer from the curse of modeling as well. In recent years, a simulation-based method, called reinforcement learning, has emerged in the literature. It can, to a great extent, alleviate stochastic dynamic programming of its curses by generating near-optimal solutions to problems having large state-spaces and complex transition mechanisms. In this paper, a simulation-based algorithm that solves Markov and Semi-Markov decision problems is presented, along with its convergence analysis. The algorithm involves a step-size based transformation on two time scales. Its convergence analysis is based on a recent result on asynchronous convergence of iterates on two time scales. We present numerical results from the new algorithm on a classical preventive maintenance case study of a reasonable size, where results on the optimal policy are also available. In addition, we present a tutorial that explains the framework of reinforcement learning in the context of semi-Markov decision problems for long-run average cost.",
"title": ""
},
{
"docid": "559be3dd29ae8f6f9a9c99951c82a8d3",
"text": "This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. A special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.",
"title": ""
},
{
"docid": "d9f1dd36a9acba7932fa08a7702fca39",
"text": "Automatic segmentation of liver lesions is a fundamental requirement towards the creation of computer aided diagnosis (CAD) and decision support systems (CDS). Traditional segmentation approaches depend heavily upon hand-crafted features and a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, deep learning methods based on fully convolutional networks (FCNs) have been successful in many segmentation problems primarily because they leverage a large labelled dataset to hierarchically learn the features that best correspond to the shallow visual appearance as well as the deep semantics of the areas to be segmented. However, FCNs based on a 16 layer VGGNet architecture have limited capacity to add additional layers. Therefore, it is challenging to learn more discriminative features among different classes for FCNs. In this study, we overcome these limitations using deep residual networks (ResNet) to segment liver lesions. ResNet contain skip connections between convolutional layers, which solved the problem of the training degradation of training accuracy in very deep networks and thereby enables the use of additional layers for learning more discriminative features. In addition, we achieve more precise boundary definitions through a novel cascaded ResNet architecture with multi-scale fusion to gradually learn and infer the boundaries of both the liver and the liver lesions. Our proposed method achieved 4th place in the ISBI 2017 Liver Tumor Segmentation Challenge by the submission deadline.",
"title": ""
},
{
"docid": "fa69a8a67ab695fd74e3bfc25206c94c",
"text": "Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.",
"title": ""
},
{
"docid": "4b5ac7d23ffcfc965f5f54ef227099bc",
"text": "In this brief, we propose a fast yet energy-efficient reconfigurable approximate carry look-ahead adder (RAP-CLA). This adder has the ability of switching between the approximate and exact operating modes making it suitable for both error-resilient and exact applications. The structure, which is more area and power efficient than state-of-the-art reconfigurable approximate adders, is achieved by some modifications to the conventional carry look ahead adder (CLA). The efficacy of the proposed RAP-CLA adder is evaluated by comparing its characteristics to those of two state-of-the-art reconfigurable approximate adders as well as the conventional (exact) CLA in a 15 nm FinFET technology. The results reveal that, in the approximate operating mode, the proposed 32-bit adder provides up to 55% and 28% delay and power reductions compared to those of the exact CLA, respectively, at the cost of up to 35.16% error rate. It also provides up to 49% and 19% lower delay and power consumption, respectively, compared to other approximate adders considered in this brief. Finally, the effectiveness of the proposed adder on two image processing applications of smoothing and sharpening is demonstrated.",
"title": ""
}
] |
scidocsrr
|
13cefe805419a1c3e889333347883769
|
A Joint Model of Language and Perception for Grounded Attribute Learning
|
[
{
"docid": "47faebac1eecb05bc749f3e820c55486",
"text": "Current approaches for semantic parsing take a supervised approach requiring a considerable amount of training data which is expensive and difficult to obtain. This supervision bottleneck is one of the major difficulties in scaling up semantic parsing. We argue that a semantic parser can be trained effectively without annotated data, and introduce an unsupervised learning algorithm. The algorithm takes a self training approach driven by confidence estimation. Evaluated over Geoquery, a standard dataset for this task, our system achieved 66% accuracy, compared to 80% of its fully supervised counterpart, demonstrating the promise of unsupervised approaches for this task.",
"title": ""
},
{
"docid": "0670d09e35907b1d2efd29370b117b4c",
"text": "Consumer depth cameras, such as the Microsoft Kinect, are capable of providing frames of dense depth values at real time. One fundamental question in utilizing depth cameras is how to best extract features from depth frames. Motivated by local descriptors on images, in particular kernel descriptors, we develop a set of kernel features on depth images that model size, 3D shape, and depth edges in a single framework. Through extensive experiments on object recognition, we show that (1) our local features capture different aspects of cues from a depth frame/view that complement one another; (2) our kernel features significantly outperform traditional 3D features (e.g. Spin images); and (3) we significantly improve the capabilities of depth and RGB-D (color+depth) recognition, achieving 10–15% improvement in accuracy over the state of the art.",
"title": ""
},
{
"docid": "6b7daba104f8e691dd32cba0b4d66ecd",
"text": "This paper presents the first empirical results to our knowledge on learning synchronous grammars that generate logical forms. Using statistical machine translation techniques, a semantic parser based on a synchronous context-free grammar augmented with λoperators is learned given a set of training sentences and their correct logical forms. The resulting parser is shown to be the bestperforming system so far in a database query domain.",
"title": ""
}
] |
[
{
"docid": "8fe7a08de96768ea04b89bd6eefd96bc",
"text": "This paper introduces a new, unsupervised algorithm for noun phrase coreference resolution. It differs from existing methods in that it views coreference resolution as a clustering task. In an evaluation on the MUC-6 coreference resolution corpus, the algorithm achieves an F-measure of 53.6%, placing it firmly between the worst (40%) and best (65%) systems in the MUC-6 evaluation. More importantly, the clustering approach outperforms the only MUC-6 system to treat coreference resolution as a learning problem. The clustering algorithm appears to provide a flexible mechanism for coordinating the application of context-independent and context-dependent constraints and preferences for accurate partitioning of noun phrases into coreference equivalence classes.",
"title": ""
},
{
"docid": "52da82decb732b3782ad1e3877fe6568",
"text": "Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.",
"title": ""
},
{
"docid": "4bd161b3e91dea05b728a72ade72e106",
"text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: [email protected] and [email protected]",
"title": ""
},
{
"docid": "b4978b2fbefc79fba6e69ad8fd55ebf9",
"text": "This paper proposes an approach based on Least Squares Suppo rt Vect r Machines (LS-SVMs) for solving second order parti al differential equations (PDEs) with variable coe fficients. Contrary to most existing techniques, the proposed m thod provides a closed form approximate solution. The optimal representat ion of the solution is obtained in the primal-dual setting. T he model is built by incorporating the initial /boundary conditions as constraints of an optimization prob lem. The developed method is well suited for problems involving singular, variable and const a t coefficients as well as problems with irregular geometrical domai ns. Numerical results for linear and nonlinear PDEs demonstrat e he efficiency of the proposed method over existing methods.",
"title": ""
},
{
"docid": "96bc9c8fa154d8e6cc7d0486c99b43d5",
"text": "A Transmission Line Transformer (TLT) can be used to transform high-voltage nanosecond pulses. These transformers rely on the fact that the length of the pulse is shorter than the transmission lines used. This allows connecting the transmission lines in parallel at the input and in series at the output. In the ideal case such structures achieve a voltage gain which equals the number of transmission lines used. To achieve maximum efficiency, mismatch and secondary modes must be suppressed. Here we describe a TLT based on parallel plate transmission lines. The chosen geometry results in a high efficiency, due to good matching and minimized secondary modes. A second advantage of this design is that the electric field strength between the conductors is the same throughout the entire TLT. This makes the design suitable for high voltage applications. To investigate the concept of this TLT design, measurements are done on two different TLT designs. One TLT consists of 4 transmission lines, while the other one has 8 lines. Both designs are constructed of DiBond™. This material consists of a flat polyethylene inner core with an aluminum sheet on both sides. Both TLT's have an input impedance of 3.125 Ω. Their output impedances are 50 and 200 Ω, respectively. The measurements show that, on a matched load, this structure achieves a voltage gain factor of 3.9 when using 4 transmission lines and 7.9 when using 8 lines.",
"title": ""
},
{
"docid": "528eded044a3567ed2a8b123767d473e",
"text": "In our previous study, we presented a nonverbal interface that used biopotential signals, such as electrooculargraphic (EOG) and electromyographic (EMG), captured by a simple brain-computer interface. In this paper, we apply the nonverbal interface to hands-free control of an electric wheelchair. Based on the biopotential signals, the interface recognizes the operator's gestures, such as closing the jaw, wrinkling the forehead, and looking towards left and right. By combining these gestures, the operator controls linear and turning motions, velocity, and the steering angle of the wheelchair. Experimental results for navigating the wheelchair in a hallway environment confirmed the feasibility of the proposed method.",
"title": ""
},
{
"docid": "f6783c1f37bb125fd35f4fbfedfde648",
"text": "This paper presents an attributed graph-based approach to an intricate data mining problem of revealing affiliated, interdependent entities that might be at risk of being tempted into fraudulent transfer pricing. We formalize the notions of controlled transactions and interdependent parties in terms of graph theory. We investigate the use of clustering and rule induction techniques to identify candidate groups (hot spots) of suspect entities. Further, we find entities that require special attention with respect to transfer pricing audits using network analysis and visualization techniques in IBM i2 Analyst's Notebook.",
"title": ""
},
{
"docid": "1a65b9d35bce45abeefe66882dcf4448",
"text": "Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.",
"title": ""
},
{
"docid": "eb92c76e00ed0970bbec416e49607394",
"text": "This paper proposes an air-core transformer integration method, which mounts the transformer straightly into the multi-layer PCB, and maintains the proper distance between the inner transformer and other components on the top layer. Compared with other 3D integration method, the air-core transformer is optimized and modeled carefully to avoid the electromagnetic interference (EMI) of the magnetic fields. The integration method reduces the PCB area significantly, ensuring higher power density and similar efficiency as the conventional planar layout because the air-core transformer magnetic field does not affect other components. Moreover, the converters with the integrated PCB transformer can be manufactured with high consistency. With the air-core transformer, the overall height is only the sum of twice the PCB thickness and components height. In addition, the proposed integration method reduces the power loop inductance by 64%. It is applied to two resonant flyback converters operating at 20 MHz with Si MOSFETs, and 30 MHz with eGaN HEMTs respectively. The full load efficiency of the 30 MHz prototype is 80.1% with 5 V input and 5 V/ 2 W output. It achieves the power density of 32 W/in3.",
"title": ""
},
{
"docid": "357a7c930f3beb730533e2220a94a022",
"text": "The fused Lasso penalty enforces sparsity in both the coefficients and their successive differences, which is desirable for applications with features ordered in some meaningful way. The resulting problem is, however, challenging to solve, as the fused Lasso penalty is both non-smooth and non-separable. Existing algorithms have high computational complexity and do not scale to large-size problems. In this paper, we propose an Efficient Fused Lasso Algorithm (EFLA) for optimizing this class of problems. One key building block in the proposed EFLA is the Fused Lasso Signal Approximator (FLSA). To efficiently solve FLSA, we propose to reformulate it as the problem of finding an \"appropriate\" subgradient of the fused penalty at the minimizer, and develop a Subgradient Finding Algorithm (SFA). We further design a restart technique to accelerate the convergence of SFA, by exploiting the special \"structures\" of both the original and the reformulated FLSA problems. Our empirical evaluations show that, both SFA and EFLA significantly outperform existing solvers. We also demonstrate several applications of the fused Lasso.",
"title": ""
},
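The penalty discussed above combines an L1 term on the coefficients with an L1 term on their successive differences; the Fused Lasso Signal Approximator (FLSA) used as a building block is the special case with an identity design matrix. A small sketch of the objective being minimised helps make the target of the EFLA/SFA machinery concrete (illustrative only, not the authors' solver):

```python
import numpy as np

def fused_lasso_objective(X, y, beta, lam1, lam2):
    """0.5 * ||y - X beta||^2 + lam1 * ||beta||_1 + lam2 * sum_i |beta_i - beta_{i-1}|"""
    resid = y - X @ beta
    return (0.5 * resid @ resid
            + lam1 * np.abs(beta).sum()          # sparsity of the coefficients
            + lam2 * np.abs(np.diff(beta)).sum())  # sparsity of successive differences

# FLSA is the special case X = I:
#   argmin_beta 0.5 * ||y - beta||^2 + lam1 * ||beta||_1 + lam2 * sum_i |beta_i - beta_{i-1}|
# which the paper solves via a subgradient-finding step inside EFLA.
```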
{
"docid": "7956e5fd3372716cb5ae16c6f9e846fb",
"text": "Understanding query intent helps modern search engines to improve search results as well as to display instant answers to the user. In this work, we introduce an accurate query classification method to detect the intent of a user search query. We propose using convolutional neural networks (CNN) to extract query vector representations as the features for the query classification. In this model, queries are represented as vectors so that semantically similar queries can be captured by embedding them into a vector space. Experimental results show that the proposed method can effectively detect intents of queries with higher precision and recall compared to current methods.",
"title": ""
},
{
"docid": "438ad24a900164555542b7dbec65b929",
"text": "This paper presents a method for sentiment analysis specifically designed to work with Twitter data (tweets), taking into account their structure, length and specific language. The approach employed makes it easily extendible to other languages and makes it able to process tweets in near real time. The main contributions of this work are: a) the pre-processing of tweets to normalize the language and generalize the vocabulary employed to express sentiment; b) the use minimal linguistic processing, which makes the approach easily portable to other languages; c) the inclusion of higher order n-grams to spot modifications in the polarity of the sentiment expressed; d) the use of simple heuristics to select features to be employed; e) the application of supervised learning using a simple Support Vector Machines linear classifier on a set of realistic data. We show that using the training models generated with the method described we can improve the sentiment classification performance, irrespective of the domain and distribution of the test sets.",
"title": ""
},
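Contribution (a) above, pre-processing tweets to normalise the language, usually boils down to a handful of substitution rules. The sketch below shows plausible rules of that kind; the exact rules used in the paper are not listed in the abstract, so every pattern here is an assumption.

```python
import re

def normalize_tweet(text):
    """Generic tweet normalisation: URLs, user mentions, hashtags,
    elongated words and casing; a plausible stand-in for step (a)."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " _url_ ", text)    # collapse links
    text = re.sub(r"@\w+", " _user_ ", text)           # collapse mentions
    text = re.sub(r"#(\w+)", r" \1 ", text)            # keep the hashtag word
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)         # cooool -> cool
    return re.sub(r"\s+", " ", text).strip()

print(normalize_tweet("LOVE this!!! cooool @bob http://t.co/x #happy"))
```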
{
"docid": "b2fb874fa2dadb8d3b2a23b111a85660",
"text": "The aim of the present research is to study the rel ationship between “internet addiction” and “meta-co gnitive skills” with “academic achievement” in students of Islamic Azad University, Hamedan branch. This is de criptive – correlational method is used. To measure meta-cogni tive skills and internet addiction of students Well s questionnaire and Young questionnaire are used resp ectively. The population of the study is students o f Islamic Azad University of Hamedan. Using proportional stra tified random sampling the sample size was 375 stud ents. The results of the study showed that there is no signif icant relationship between two variables of “meta-c ognition” and “Internet addiction”(P >0.184).However, there is a significant relationship at 5% level between the tw o variables \"meta-cognition\" and \"academic achievement\" (P<0.00 2). Also, a significant inverse relationship was ob served between the average of two variables of \"Internet a ddiction\" and \"academic achievement\" at 5% level (P <0.031). There is a significant difference in terms of metacognition among the groups of different fields of s tudies. Furthermore, there is a significant difference in t erms of internet addiction scores among students be longing to different field of studies. In explaining the acade mic achievement variable variance of “meta-cognitio ” and “Internet addiction” using combined regression, it was observed that the above mentioned variables exp lain 16% of variable variance of academic achievement simultane ously.",
"title": ""
},
{
"docid": "df4b4119653789266134cf0b7571e332",
"text": "Automatic detection of lymphocyte in H&E images is a necessary first step in lots of tissue image analysis algorithms. An accurate and robust automated lymphocyte detection approach is of great importance in both computer science and clinical studies. Most of the existing approaches for lymphocyte detection are based on traditional image processing algorithms and/or classic machine learning methods. In the recent years, deep learning techniques have fundamentally transformed the way that a computer interprets images and have become a matchless solution in various pattern recognition problems. In this work, we design a new deep neural network model which extends the fully convolutional network by combining the ideas in several recent techniques, such as shortcut links. Also, we design a new training scheme taking the prior knowledge about lymphocytes into consideration. The training scheme not only efficiently exploits the limited amount of free-form annotations from pathologists, but also naturally supports efficient fine-tuning. As a consequence, our model has the potential of self-improvement by leveraging the errors collected during real applications. Our experiments show that our deep neural network model achieves good performance in the images of different staining conditions or different types of tissues.",
"title": ""
},
{
"docid": "b52fb324287ec47860e189062f961ad8",
"text": "In this paper we reexamine the place and role of stable model semantics in logic programming and contrast it with a least Herbrand model approach to Horn programs. We demonstrate that inherent features of stable model semantics naturally lead to a logic programming system that offers an interesting alternative to more traditional logic programming styles of Horn logic programming, stratified logic programming and logic programming with well-founded semantics. The proposed approach is based on the interpretation of program clauses as constraints. In this setting programs do not describe a single intended model, but a family of stable models. These stable models encode solutions to the constraint satisfaction problem described by the program. Our approach imposes restrictions on the syntax of logic programs. In particular, function symbols are eliminated from the language. We argue that the resulting logic programming system is well-attuned to problems in the class NP, has a well-defined domain of applications, and an emerging methodology of programming. We point out that what makes the whole approach viable is recent progress in implementations of algorithms to compute stable models of propositional logic programs.",
"title": ""
},
{
"docid": "d4bd583808c9e105264c001cbcb6b4b0",
"text": "It is common for clinicians, researchers, and public policymakers to describe certain drugs or objects (e.g., games of chance) as “addictive,” tacitly implying that the cause of addiction resides in the properties of drugs or other objects. Conventional wisdom encourages this view by treating different excessive behaviors, such as alcohol dependence and pathological gambling, as distinct disorders. Evidence supporting a broader conceptualization of addiction is emerging. For example, neurobiological research suggests that addictive disorders might not be independent:2 each outwardly unique addiction disorder might be a distinctive expression of the same underlying addiction syndrome. Recent research pertaining to excessive eating, gambling, sexual behaviors, and shopping also suggests that the existing focus on addictive substances does not adequately capture the origin, nature, and processes of addiction. The current view of separate addictions is similar to the view espoused during the early days of AIDS diagnosis, when rare diseases were not",
"title": ""
},
{
"docid": "fb1a178c7c097fbbf0921dcef915dc55",
"text": "AIMS\nThe management of open lower limb fractures in the United Kingdom has evolved over the last ten years with the introduction of major trauma networks (MTNs), the publication of standards of care and the wide acceptance of a combined orthopaedic and plastic surgical approach to management. The aims of this study were to report recent changes in outcome of open tibial fractures following the implementation of these changes.\n\n\nPATIENTS AND METHODS\nData on all patients with an open tibial fracture presenting to a major trauma centre between 2011 and 2012 were collected prospectively. The treatment and outcomes of the 65 Gustilo Anderson Grade III B tibial fractures were compared with historical data from the same unit.\n\n\nRESULTS\nThe volume of cases, the proportion of patients directly admitted and undergoing first debridement in a major trauma centre all increased. The rate of limb salvage was maintained at 94% and a successful limb reconstruction rate of 98.5% was achieved. The rate of deep bone infection improved to 1.6% (one patient) in the follow-up period.\n\n\nCONCLUSION\nThe reasons for these improvements are multifactorial, but the major trauma network facilitating early presentation to the major trauma centre, senior orthopaedic and plastic surgical involvement at every stage and proactive microbiological management, may be important factors.\n\n\nTAKE HOME MESSAGE\nThis study demonstrates that a systemised trauma network combined with evidence based practice can lead to improvements in patient care.",
"title": ""
},
{
"docid": "fbffbfcd9121ae576879e4021696f020",
"text": "Segmenting semantic objects from images and parsing them into their respective semantic parts are fundamental steps towards detailed object understanding in computer vision. In this paper, we propose a joint solution that tackles semantic object and part segmentation simultaneously, in which higher object-level context is provided to guide part segmentation, and more detailed part-level localization is utilized to refine object segmentation. Specifically, we first introduce the concept of semantic compositional parts (SCP) in which similar semantic parts are grouped and shared among different objects. A two-stream fully convolutional network (FCN) is then trained to provide the SCP and object potentials at each pixel. At the same time, a compact set of segments can also be obtained from the SCP predictions of the network. Given the potentials and the generated segments, in order to explore long-range context, we finally construct an efficient fully connected conditional random field (FCRF) to jointly predict the final object and part labels. Extensive evaluation on three different datasets shows that our approach can mutually enhance the performance of object and part segmentation, and outperforms the current state-of-the-art on both tasks.",
"title": ""
},
{
"docid": "959352af8a9517da7e53347ecfa17585",
"text": "OBJECTIVE\nElectronic health records (EHRs) are an increasingly common data source for clinical risk prediction, presenting both unique analytic opportunities and challenges. We sought to evaluate the current state of EHR based risk prediction modeling through a systematic review of clinical prediction studies using EHR data.\n\n\nMETHODS\nWe searched PubMed for articles that reported on the use of an EHR to develop a risk prediction model from 2009 to 2014. Articles were extracted by two reviewers, and we abstracted information on study design, use of EHR data, model building, and performance from each publication and supplementary documentation.\n\n\nRESULTS\nWe identified 107 articles from 15 different countries. Studies were generally very large (median sample size = 26 100) and utilized a diverse array of predictors. Most used validation techniques (n = 94 of 107) and reported model coefficients for reproducibility (n = 83). However, studies did not fully leverage the breadth of EHR data, as they uncommonly used longitudinal information (n = 37) and employed relatively few predictor variables (median = 27 variables). Less than half of the studies were multicenter (n = 50) and only 26 performed validation across sites. Many studies did not fully address biases of EHR data such as missing data or loss to follow-up. Average c-statistics for different outcomes were: mortality (0.84), clinical prediction (0.83), hospitalization (0.71), and service utilization (0.71).\n\n\nCONCLUSIONS\nEHR data present both opportunities and challenges for clinical risk prediction. There is room for improvement in designing such studies.",
"title": ""
},
{
"docid": "3224233a8a91c8d44e366b7b2ab8e7a1",
"text": "In this work we describe the scenario of fully-immersive desktop VR, which serves the overall goal to seamlessly integrate with existing workflows and workplaces of data analysts and researchers, such that they can benefit from the gain in productivity when immersed in their data-spaces. Furthermore, we provide a literature review showing the status quo of techniques and methods available for realizing this scenario under the raised restrictions. Finally, we propose a concept of an analysis framework and the decisions made and the decisions still to be taken, to outline how the described scenario and the collected methods are feasible in a real use case.",
"title": ""
}
] |
scidocsrr
|
bfda19d343bdf1a8d9a29e47626de9a5
|
Towards a secure network architecture for smart grids in 5G era
|
[
{
"docid": "002aec0b09bbd2d0e3453c9b3aa8d547",
"text": "It is often appealing to assume that existing solutions can be directly applied to emerging engineering domains. Unfortunately, careful investigation of the unique challenges presented by new domains exposes its idiosyncrasies, thus often requiring new approaches and solutions. In this paper, we argue that the “smart” grid, replacing its incredibly successful and reliable predecessor, poses a series of new security challenges, among others, that require novel approaches to the field of cyber security. We will call this new field cyber-physical security. The tight coupling between information and communication technologies and physical systems introduces new security concerns, requiring a rethinking of the commonly used objectives and methods. Existing security approaches are either inapplicable, not viable, insufficiently scalable, incompatible, or simply inadequate to address the challenges posed by highly complex environments such as the smart grid. A concerted effort by the entire industry, the research community, and the policy makers is required to achieve the vision of a secure smart grid infrastructure.",
"title": ""
},
{
"docid": "e5de9d00055e011fbe25636f12b467e6",
"text": "The development of a trustworthy smart grid requires a deeper understanding of potential impacts resulting from successful cyber attacks. Estimating feasible attack impact requires an evaluation of the grid's dependency on its cyber infrastructure and its ability to tolerate potential failures. A further exploration of the cyber-physical relationships within the smart grid and a specific review of possible attack vectors is necessary to determine the adequacy of cybersecurity efforts. This paper highlights the significance of cyber infrastructure security in conjunction with power application security to prevent, mitigate, and tolerate cyber attacks. A layered approach is introduced to evaluating risk based on the security of both the physical power applications and the supporting cyber infrastructure. A classification is presented to highlight dependencies between the cyber-physical controls required to support the smart grid and the communication and computations that must be protected from cyber attack. The paper then presents current research efforts aimed at enhancing the smart grid's application and infrastructure security. Finally, current challenges are identified to facilitate future research efforts.",
"title": ""
}
] |
[
{
"docid": "62f52788757b0e9de06f124e162c3491",
"text": "Throughout the evolution process, Earth's magnetic field (MF, about 50 microT) was a natural component of the environment for living organisms. Biological objects, flying on planned long-term interplanetary missions, would experience much weaker magnetic fields, since galactic MF is known to be 0.1-1 nT. However, the role of weak magnetic fields and their influence on functioning of biological organisms are still insufficiently understood, and is actively studied. Numerous experiments with seedlings of different plant species placed in weak magnetic field have shown that the growth of their primary roots is inhibited during early germination stages in comparison with control. The proliferative activity and cell reproduction in meristem of plant roots are reduced in weak magnetic field. Cell reproductive cycle slows down due to the expansion of G1 phase in many plant species (and of G2 phase in flax and lentil roots), while other phases of cell cycle remain relatively stable. In plant cells exposed to weak magnetic field, the functional activity of genome at early pre-replicate period is shown to decrease. Weak magnetic field causes intensification of protein synthesis and disintegration in plant roots. At ultrastructural level, changes in distribution of condensed chromatin and nucleolus compactization in nuclei, noticeable accumulation of lipid bodies, development of a lytic compartment (vacuoles, cytosegresomes and paramural bodies), and reduction of phytoferritin in plastids in meristem cells were observed in pea roots exposed to weak magnetic field. Mitochondria were found to be very sensitive to weak magnetic field: their size and relative volume in cells increase, matrix becomes electron-transparent, and cristae reduce. Cytochemical studies indicate that cells of plant roots exposed to weak magnetic field show Ca2+ over-saturation in all organelles and in cytoplasm unlike the control ones. The data presented suggest that prolonged exposures of plants to weak magnetic field may cause different biological effects at the cellular, tissue and organ levels. They may be functionally related to systems that regulate plant metabolism including the intracellular Ca2+ homeostasis. However, our understanding of very complex fundamental mechanisms and sites of interactions between weak magnetic fields and biological systems is still incomplete and still deserve strong research efforts.",
"title": ""
},
{
"docid": "e054c2d3b52441eaf801e7d2dd54dce9",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0ff727ff06c02d2e371798ad657153c9",
"text": "Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.",
"title": ""
},
{
"docid": "114affaf4e25819aafa1c11da26b931f",
"text": "We propose a coherent mathematical model for human fingerprint images. Fingerprint structure is represented simply as a hologram - namely a phase modulated fringe pattern. The holographic form unifies analysis, classification, matching, compression, and synthesis of fingerprints in a self-consistent formalism. Hologram phase is at the heart of the method; a phase that uniquely decomposes into two parts via the Helmholtz decomposition theorem. Phase also circumvents the infinite frequency singularities that always occur at minutiae. Reliable analysis is possible using a recently discovered two-dimensional demodulator. The parsimony of this model is demonstrated by the reconstruction of a fingerprint image with an extreme compression factor of 239.",
"title": ""
},
{
"docid": "d7bb22eefbff0a472d3e394c61788be2",
"text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6838d497f81c594cb1760c075b0f5d48",
"text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.",
"title": ""
},
{
"docid": "c49ed75ce48fb92db6e80e4fe8af7127",
"text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains. We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.",
"title": ""
},
{
"docid": "124f40ccd178e6284cc66b88da98709d",
"text": "The tripeptide glutathione is the thiol compound present in the highest concentration in cells of all organs. Glutathione has many physiological functions including its involvement in the defense against reactive oxygen species. The cells of the human brain consume about 20% of the oxygen utilized by the body but constitute only 2% of the body weight. Consequently, reactive oxygen species which are continuously generated during oxidative metabolism will be generated in high rates within the brain. Therefore, the detoxification of reactive oxygen species is an essential task within the brain and the involvement of the antioxidant glutathione in such processes is very important. The main focus of this review article will be recent results on glutathione metabolism of different brain cell types in culture. The glutathione content of brain cells depends strongly on the availability of precursors for glutathione. Different types of brain cells prefer different extracellular glutathione precursors. Glutathione is involved in the disposal of peroxides by brain cells and in the protection against reactive oxygen species. In coculture astroglial cells protect other neural cell types against the toxicity of various compounds. One mechanism for this interaction is the supply by astroglial cells of glutathione precursors to neighboring cells. Recent results confirm the prominent role of astrocytes in glutathione metabolism and the defense against reactive oxygen species in brain. These results also suggest an involvement of a compromised astroglial glutathione system in the oxidative stress reported for neurological disorders.",
"title": ""
},
{
"docid": "eae04aa2942bfd3752fb596f645e2c2e",
"text": "PURPOSE\nHigh fasting blood glucose (FBG) can lead to chronic diseases such as diabetes mellitus, cardiovascular and kidney diseases. Consuming probiotics or synbiotics may improve FBG. A systematic review and meta-analysis of controlled trials was conducted to clarify the effect of probiotic and synbiotic consumption on FBG levels.\n\n\nMETHODS\nPubMed, Scopus, Cochrane Library, and Cumulative Index to Nursing and Allied Health Literature databases were searched for relevant studies based on eligibility criteria. Randomized or non-randomized controlled trials which investigated the efficacy of probiotics or synbiotics on the FBG of adults were included. Studies were excluded if they were review articles and study protocols, or if the supplement dosage was not clearly mentioned.\n\n\nRESULTS\nA total of fourteen studies (eighteen trials) were included in the analysis. Random-effects meta-analyses were conducted for the mean difference in FBG. Overall reduction in FBG observed from consumption of probiotics and synbiotics was borderline statistically significant (-0.18 mmol/L 95 % CI -0.37, 0.00; p = 0.05). Neither probiotic nor synbiotic subgroup analysis revealed a significant reduction in FBG. The result of subgroup analysis for baseline FBG level ≥7 mmol/L showed a reduction in FBG of 0.68 mmol/L (-1.07, -0.29; ρ < 0.01), while trials with multiple species of probiotics showed a more pronounced reduction of 0.31 mmol/L (-0.58, -0.03; ρ = 0.03) compared to single species trials.\n\n\nCONCLUSION\nThis meta-analysis suggests that probiotic and synbiotic supplementation may be beneficial in lowering FBG in adults with high baseline FBG (≥7 mmol/L) and that multispecies probiotics may have more impact on FBG than single species.",
"title": ""
},
{
"docid": "097f1a491b7266b5d3baf7c7d1331bbe",
"text": "A polysilicon transistor based active matrix organic light emitting diode (AMOLED) pixel with high pixel to pixel luminance uniformity is reported. The new pixel powers the OLEDs with small constant currents to ensure consistent brightness and extended life. Excellent pixel to pixel current drive uniformity is obtained despite the threshold voltage variation inherent in polysilicon transistors. Other considerations in the design of pixels for high information content AMOLED displays are discussed.",
"title": ""
},
{
"docid": "8ee3d3200ed95cad5ff4ed77c08bb608",
"text": "We present a rare case of a non-fatal impalement injury of the brain. A 13-year-old boy was found in his classroom unconsciously lying on floor. His classmates reported that they had been playing, and throwing building bricks, when suddenly the boy collapsed. The emergency physician did not find significant injuries. Upon admission to a hospital, CT imaging revealed a \"blood path\" through the brain. After clinical forensic examination, an impalement injury was diagnosed, with the entry wound just below the left eyebrow. Eventually, the police presented a variety of pointers that were suspected to have caused the injury. Forensic trace analysis revealed human blood on one of the pointers, and subsequent STR analysis linked the blood to the injured boy. Confronted with the results of the forensic examination, the classmates admitted that they had been playing \"sword fights\" using the pointers, and that the boy had been hit during the game. The case illustrates the difficulties of diagnosing impalement injuries, and identifying the exact cause of the injury.",
"title": ""
},
{
"docid": "4cdf0df648d3ee5e8cf07001924f73ae",
"text": "Electronic Health Records (EHR) narratives are a rich source of information, embedding high-resolution information of value to secondary research use. However, because the EHRs are mostly in natural language free-text and highly ambiguity-ridden, many natural language processing algorithms have been devised around them to extract meaningful structured information about clinical entities. The performance of the algorithms however, largely varies depending on the training dataset as well as the effectiveness of the use of background knowledge to steer the learning process.\n In this paper we study the impact of initializing the training of a neural network natural language processing algorithm with pre-defined clinical word embeddings to improve feature extraction and relationship classification between entities. We add our embedding framework to a bi-directional long short-term memory (Bi-LSTM) neural network, and further study the effect of using attention weights in neural networks for sequence labelling tasks to extract knowledge of Adverse Drug Reactions (ADRs). We incorporate unsupervised word embeddings using Word2Vec and GloVe from widely available medical resources such as Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) II corpora, Unified Medical Language System (UMLS) as well as embed pharmaco lexicon from available EHRs. Our algorithm, implemented using two datasets, shows that our architecture outperforms baseline Bi-LSTM or Bi-LSTM networks using linear chain and Skip-Chain conditional random fields (CRF).",
"title": ""
},
{
"docid": "73104192eb7d098d15d14c347ba4b60e",
"text": "The launching of Microsoft Kinect with skeleton tracking technique opens up new potentials for skeleton based human action recognition. However, the 3D human skeletons, generated via skeleton tracking from the depth map sequences, are generally very noisy and unreliable. In this paper, we introduce a robust informative joints based human action recognition method. Inspired by the instinct of the human vision system, we analyze the mean contributions of human joints for each action class via differential entropy of the joint locations. There is significant difference between most of the actions, and the contribution ratio is highly in accordance with common sense. We present a novel approach named skeleton context to measure similarity between postures and exploit it for action recognition. The similarity is calculated by extracting the multi-scale pairwise position distribution for each informative joint. Then feature sets are evaluated in a bag-of-words scheme using a linear CRFs. We report experimental results and validate the method on two public action dataset. Experiments results have shown that the proposed approach is discriminative for similar human action recognition and well adapted to the intra-class variation. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4c12d04ce9574aab071964e41f0c5f4e",
"text": "The complete genome sequence of Treponema pallidum was determined and shown to be 1,138,006 base pairs containing 1041 predicted coding sequences (open reading frames). Systems for DNA replication, transcription, translation, and repair are intact, but catabolic and biosynthetic activities are minimized. The number of identifiable transporters is small, and no phosphoenolpyruvate:phosphotransferase carbohydrate transporters were found. Potential virulence factors include a family of 12 potential membrane proteins and several putative hemolysins. Comparison of the T. pallidum genome sequence with that of another pathogenic spirochete, Borrelia burgdorferi, the agent of Lyme disease, identified unique and common genes and substantiates the considerable diversity observed among pathogenic spirochetes.",
"title": ""
},
{
"docid": "f53a2ca0fda368d0e90cbb38076658af",
"text": "RNAi therapeutics is a powerful tool for treating diseases by sequence-specific targeting of genes using siRNA. Since its discovery, the need for a safe and efficient delivery system for siRNA has increased. Here, we have developed and characterized a delivery platform for siRNA based on the natural polysaccharide starch in an attempt to address unresolved delivery challenges of RNAi. Modified potato starch (Q-starch) was successfully obtained by substitution with quaternary reagent, providing Q-starch with cationic properties. The results indicate that Q-starch was able to bind siRNA by self-assembly formation of complexes. For efficient and potent gene silencing we monitored the physical characteristics of the formed nanoparticles at increasing N/P molar ratios. The minimum ratio for complete entrapment of siRNA was 2. The resulting complexes, which were characterized by a small diameter (~30 nm) and positive surface charge, were able to protect siRNA from enzymatic degradation. Q-starch/siRNA complexes efficiently induced P-glycoprotein (P-gp) gene silencing in the human ovarian adenocarcinoma cell line, NCI-ADR/Res (NAR), over expressing the targeted gene and presenting low toxicity. Additionally, Q-starch-based complexes showed high cellular uptake during a 24-hour study, which also suggested that intracellular siRNA delivery barriers governed the kinetics of siRNA transfection. In this study, we have devised a promising siRNA delivery vector based on a starch derivative for efficient and safe RNAi application.",
"title": ""
},
{
"docid": "7267e5082c890dfa56a745d3b28425cc",
"text": "Natural Orifice Translumenal Endoscopic Surgery (NOTES) has recently attracted lots of attention, promising surgical procedures with fewer complications, better cosmesis, lower pains and faster recovery. Several robotic systems were developed aiming to enable abdominal surgeries in a NOTES manner. Although these robotic systems demonstrated the surgical concept, characteristics which could fully enable NOTES procedures remain unclear. This paper presents the development of an endoscopic continuum testbed for finalizing system characteristics of a surgical robot for NOTES procedures, which include i) deployability (the testbed can be deployed in a folded endoscope configuration and then be unfolded into a working configuration), ii) adequate workspace, iii) sufficient distal dexterity (e.g. suturing capability), and iv) desired mechanics properties (e.g. enough load carrying capability). Continuum mechanisms were implemented in the design and a diameter of 12mm of this testbed in its endoscope configuration was achieved. Results of this paper could be used to form design references for future development of NOTES robots.",
"title": ""
},
{
"docid": "25828231caaf3288ed4fdb27df7f8740",
"text": "This paper reports on an algorithm to support autonomous vehicles in reasoning about occluded regions of their environment to make safe, reliable decisions. In autonomous driving scenarios, other traffic participants are often occluded from sensor measurements by buildings or large vehicles like buses or trucks, which makes tracking dynamic objects challenging.We present a method to augment standard dynamic object trackers with means to 1) estimate the occluded state of other traffic agents and 2) robustly associate the occluded estimates with new observations after the tracked object reenters the visible region of the sensor horizon. We perform occluded state estimation using a dynamics model that accounts for the driving behavior of traffic agents and a hybrid Gaussian mixture model (hGMM) to capture multiple hypotheses over discrete behavior, such as driving along different lanes or turning left or right at an intersection. Upon new observations, we associate them to existing estimates in terms of the Kullback-Leibler divergence (KLD). We evaluate the proposed method in simulation and using a real-world traffic-tracking dataset from an autonomous vehicle platform. Results show that our method can handle significantly prolonged occlusions when compared to a standard dynamic object tracking system.",
"title": ""
},
{
"docid": "8a0ee163723b4e0c2fa531669af3ae39",
"text": "As the computer becomes more ubiquitous throughout society, the security of networks and information technologies is a growing concern. Recent research has found hackers making use of social media platforms to form communities where sharing of knowledge and tools that enable cybercriminal activity is common. However, past studies often report only generalized community behaviors and do not scrutinize individual members; in particular, current research has yet to explore the mechanisms in which some hackers become key actors within their communities. Here we explore two major hacker communities from the United States and China in order to identify potential cues for determining key actors. The relationships between various hacker posting behaviors and reputation are observed through the use of ordinary least squares regression. Results suggest that the hackers who contribute to the cognitive advance of their community are generally considered the most reputable and trustworthy among their peers. Conversely, the tenure of hackers and their discussion quality were not significantly correlated with reputation. Results are consistent across both forums, indicating the presence of a common hacker culture that spans multiple geopolitical regions.",
"title": ""
},
{
"docid": "3ce0ea80f7ae945a4fef8cbde458c644",
"text": "Deficits in 'executive function' (EF) are characteristic of several clinical disorders, most notably Autism Spectrum Disorders (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD). In this study, age- and IQ-matched groups with ASD, ADHD, or typical development (TD) were compared on a battery of EF tasks tapping three core domains: response selection/inhibition, flexibility, and planning/working memory. Relations between EF, age and everyday difficulties (rated by parents and teachers) were also examined. Both clinical groups showed significant EF impairments compared with TD peers. The ADHD group showed greater inhibitory problems on a Go-no-Go task, while the ASD group was significantly worse on response selection/monitoring in a cognitive estimates task. Age-related improvements were clearer in ASD and TD than in ADHD. At older (but not younger) ages, the ASD group outperformed the ADHD group, performing as well as the TD group on many EF measures. EF scores were related to specific aspects of communicative and social adaptation, and negatively correlated with hyperactivity in ASD and TD. Within the present groups, the overall findings suggested less severe and persistent EF deficits in ASD (including Asperger Syndrome) than in ADHD.",
"title": ""
}
] |
scidocsrr
|
468ca2613a1e5673aaaceaa50c2fed83
|
Leveraging Intra-User and Inter-User Representation Learning for Automated Hate Speech Detection
|
[
{
"docid": "a986826041730d953dfbf9fbc1b115a6",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
}
] |
[
{
"docid": "1b55f94f93a34ac1acf79cedfae10cfd",
"text": "PROBLEM/CONDITION\nEach year in the United States, an estimated one in six residents requires medical treatment for an injury, and an estimated one in 10 residents visits a hospital emergency department (ED) for treatment of a nonfatal injury. This report summarizes national data on fatal and nonfatal injuries in the United States for 2001, by age; sex; mechanism, intent, and type of injury; and other selected characteristics.\n\n\nREPORTING PERIOD COVERED\nJanuary-December 2001.\n\n\nDESCRIPTION OF SYSTEM\n\n\n\nDESCRIPTION OF THE SYSTEM\nFatal injury data are derived from CDC's National Vital Statistics System (NVSS) and include information obtained from official death certificates throughout the United States. Nonfatal injury data, other than gunshot injuries, are from the National Electronic Injury Surveillance System All Injury Program (NEISS-AIP), a national stratified probability sample of 66 U.S. hospital EDs. Nonfatal firearm and BB/pellet gunshot injury data are from CDC's Firearm Injury Surveillance Study, being conducted by using the National Electronic Injury Surveillance System (NEISS), a national stratified probability sample of 100 U.S. hospital EDs.\n\n\nRESULTS\nIn 2001, approximately 157,078 persons in the United States (age-adjusted injury death rate: 54.9/100,000 population; 95% confidence interval [CI] = 54.6-55.2/100,000) died from an injury, and an estimated 29,721,821 persons with nonfatal injuries (age-adjusted nonfatal injury rate: 10404.3/100,000; 95% CI = 10074.9-10733.7/ 100,000) were treated in U.S. hospital EDs. The overall injury-related case-fatality rate (CFR) was 0.53%, but CFRs varied substantially by age (rates for older persons were higher than rates for younger persons); sex (rates were higher for males than females); intent (rates were higher for self-harm-related than for assault and unintentional injuries); and mechanism (rates were highest for drowning, suffocation/inhalation, and firearm-related injury). Overall, fatal and nonfatal injury rates were higher for males than females and disproportionately affected younger and older persons. For fatal injuries, 101,537 (64.6%) were unintentional, and 51,326 (32.7%) were violence-related, including homicides, legal intervention, and suicide. For nonfatal injuries, 27,551,362 (92.7%) were unintentional, and 2,155,912 (7.3%) were violence-related, including assaults, legal intervention, and self-harm. Overall, the leading cause of fatal injury was unintentional motor-vehicle-occupant injuries. The leading cause of nonfatal injury was unintentional falls; however, leading causes vary substantially by sex and age. For nonfatal injuries, the majority of injured persons were treated in hospital EDs for lacerations (25.8%), strains/sprains (20.2%), and contusions/abrasions (18.3%); the majority of injuries were to the head/neck region (29.5%) and the extremities (47.9%). Overall, 5.5% of those treated for nonfatal injuries in hospital EDs were hospitalized or transferred to another facility for specialized care.\n\n\nINTERPRETATION\nThis report provides the first summary report of fatal and nonfatal injuries that combines death data from NVSS and nonfatal injury data from NEISS-AIP. These data indicate that mortality and morbidity associated with injuries affect all segments of the population, although the leading external causes of injuries vary substantially by age and sex of injured persons. 
Injury prevention efforts should include consideration of the substantial differences in fatal and nonfatal injury rates, CFRs, and the leading causes of unintentional and violence-related injuries, in regard to the sex and age of injured persons.",
"title": ""
},
{
"docid": "d1b6091e010cba3abc340efeab77a97b",
"text": "Recently, the term knowledge graph has been used frequently in research and business, usually in close association with Semantic Web technologies, linked data, large-scale data analytics and cloud computing. Its popularity is clearly influenced by the introduction of Google’s Knowledge Graph in 2012, and since then the term has been widely used without a definition. A large variety of interpretations has hampered the evolution of a common understanding of knowledge graphs. Numerous research papers refer to Google’s Knowledge Graph, although no official documentation about the used methods exists. The prerequisite for widespread academic and commercial adoption of a concept or technology is a common understanding, based ideally on a definition that is free from ambiguity. We tackle this issue by discussing and defining the term knowledge graph, considering its history and diversity in interpretations and use. Our goal is to propose a definition of knowledge graphs that serves as basis for discussions on this topic and contributes to a common vision.",
"title": ""
},
{
"docid": "faf3967b2287b8bdfdf1ebc55bcd5910",
"text": "As an essential step in many computer vision tasks, camera calibration has been studied extensively. In this paper, we propose a novel calibration technique that, based on geometric analysis, camera parameters can be estimated effectively and accurately from just one view of only five corresponding points. Our core contribution is the geometric analysis for deriving the basic equations to realize camera calibration from four coplanar corresponding points and a fifth noncoplanar one. The position, orientation, and focal length of a zooming camera can be directly estimated with unique solution. The estimated parameters are further optimized by the bundle adjustment technique. The proposed calibration method is examined and evaluated on both computer simulated data and real images. The experimental results confirm the validity of the proposed method that camera parameters can be estimated with sufficient accuracy using just five-point correspondences from a single image, even in the presence of image noise.",
"title": ""
},
{
"docid": "438a9e517a98c6f98f7c86209e601f1b",
"text": "One of the most challenging tasks in large-scale multi-label image retrieval is to map images into binary codes while preserving multilevel semantic similarity. Recently, several deep supervised hashing methods have been proposed to learn hash functions that preserve multilevel semantic similarity with deep convolutional neural networks. However, these triplet label based methods try to preserve the ranking order of images according to their similarity degrees to the queries while not putting direct constraints on the distance between the codes of very similar images. Besides, the current evaluation criteria are not able to measure the performance of existing hashing methods on preserving fine-grained multilevel semantic similarity. To tackle these issues, we propose a novel Deep Multilevel Semantic Similarity Preserving Hashing (DMSSPH) method to learn compact similarity-preserving binary codes for the huge body of multi-label image data with deep convolutional neural networks. In our approach, we make the best of the supervised information in the form of pairwise labels to maximize the discriminability of output binary codes. Extensive evaluations conducted on several benchmark datasets demonstrate that the proposed method significantly outperforms the state-of-the-art supervised and unsupervised hashing methods at the accuracies of top returned images, especially for shorter binary codes. Meanwhile, the proposed method shows better performance on preserving fine-grained multilevel semantic similarity according to the results under the Jaccard coefficient based evaluation criteria we propose.",
"title": ""
},
{
"docid": "6838d497f81c594cb1760c075b0f5d48",
"text": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson $x^{2}$ divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We train LSGANs on several datasets, and the experimental results show that the images generated by LSGANs are of better quality than regular GANs. Furthermore, we evaluate the stability of LSGANs in two groups. One is to compare between LSGANs and regular GANs without gradient penalty. The other one is to compare between LSGANs with gradient penalty and WGANs with gradient penalty. We conduct four experiments to illustrate the stability of LSGANs. The other one is to compare between LSGANs with gradient penalty (LSGANs-GP) and WGANs with gradient penalty (WGANs-GP). The experimental results show that LSGANs-GP succeed in training for all the difficult architectures used in WGANs-GP, including 101-layer ResNet.",
"title": ""
},
{
"docid": "a2d97c2b71e6424d3f458b7730be0c90",
"text": "Fault detection in solar photovoltaic (PV) arrays is an essential task for increasing reliability and safety in PV systems. Because of PV's nonlinear characteristics, a variety of faults may be difficult to detect by conventional protection devices, leading to safety issues and fire hazards in PV fields. To fill this protection gap, machine learning techniques have been proposed for fault detection based on measurements, such as PV array voltage, current, irradiance, and temperature. However, existing solutions usually use supervised learning models, which are trained by numerous labeled data (known as fault types) and therefore, have drawbacks: 1) the labeled PV data are difficult or expensive to obtain, 2) the trained model is not easy to update, and 3) the model is difficult to visualize. To solve these issues, this paper proposes a graph-based semi-supervised learning model only using a few labeled training data that are normalized for better visualization. The proposed model not only detects the fault, but also further identifies the possible fault type in order to expedite system recovery. Once the model is built, it can learn PV systems autonomously over time as weather changes. Both simulation and experimental results show the effective fault detection and classification of the proposed method.",
"title": ""
},
{
"docid": "14cb6aa11fae4c370542b58a20b93da4",
"text": "Stray-current corrosion has been a source of concern for the transit authorities and utility companies since the inception of the electrified rail transit system. The corrosion problem caused by stray current was noticed within ten years of the first dc-powered rail line in the United States in 1888 [1] in Richmond, Virginia, and ever since, the control of stray current has been a critical issue. Similarly, the effects of rail and utility-pipe corrosion caused by stray current had been observed in Europe.",
"title": ""
},
{
"docid": "9f5ab2f666eb801d4839fcf8f0293ceb",
"text": "In recent years, Wireless Sensor Networks (WSNs) have emerged as a new powerful technology used in many applications such as military operations, surveillance system, Intelligent Transport Systems (ITS) etc. These networks consist of many Sensor Nodes (SNs), which are not only used for monitoring but also capturing the required data from the environment. Most of the research proposals on WSNs have been developed keeping in view of minimization of energy during the process of extracting the essential data from the environment where SNs are deployed. The primary reason for this is the fact that the SNs are operated on battery which discharges quickly after each operation. It has been found in literature that clustering is the most common technique used for energy aware routing in WSNs. The most popular protocol for clustering in WSNs is Low Energy Adaptive Clustering Hierarchy (LEACH) which is based on adaptive clustering technique. This paper provides the taxonomy of various clustering and routing techniques in WSNs based upon metrics such as power management, energy management, network lifetime, optimal cluster head selection, multihop data transmission etc. A comprehensive discussion is provided in the text highlighting the relative advantages and disadvantages of many of the prominent proposals in this category which helps the designers to select a particular proposal based upon its merits over the others. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "98162dc86a2c70dd55e7e3e996dc492c",
"text": "PURPOSE\nTo evaluate gastroesophageal reflux disease (GERD) symptoms, patient satisfaction, and antisecretory drug use in a large group of GERD patients treated with the Stretta procedure (endoluminal temperature-controlled radiofrequency energy for the treatment of GERD) at multiple centers since February 1999.\n\n\nMETHODS\nAll subjects provided informed consent. A health care provider from each institution administered a standardized GERD survey to patients who had undergone Stretta. Subjects provided (at baseline and follow-up) (1) GERD severity (none, mild, moderate, severe), (2) percentage of GERD symptom control, (3) satisfaction, and (4) antisecretory medication use. Outcomes were compared with the McNemar test, paired t test, and Wilcoxon signed rank test.\n\n\nRESULTS\nSurveys of 558 patients were evaluated (33 institutions, mean follow-up of 8 months). Most patients (76%) were dissatisfied with baseline antisecretory therapy for GERD. After treatment, onset of GERD relief was less than 2 months (68.7%) or 2 to 6 months (14.6%). The median drug requirement improved from proton pump inhibitors twice daily to antacids as needed (P < .0001). The percentage of patients with satisfactory GERD control (absent or mild) improved from 26.3% at baseline (on drugs) to 77.0% after Stretta (P < .0001). Median baseline symptom control on drugs was 50%, compared with 90% at follow-up (P < .0001). Baseline patient satisfaction on drugs was 23.2%, compared with 86.5% at follow-up (P < .0001). Subgroup analysis (<1 year vs. >1 year of follow-up) showed a superior effect on symptom control and drug use in those patients beyond 1 year of follow-up, supporting procedure durability.\n\n\nCONCLUSIONS\nThe Stretta procedure results in significant GERD symptom control and patient satisfaction, superior to that derived from drug therapy in this study group. The treatment effect is durable beyond 1 year, and most patients were off all antisecretory drugs at follow-up. These results support the use of the Stretta procedure for patients with GERD, particularly those with inadequate control of symptoms on medical therapy.",
"title": ""
},
{
"docid": "5c227388ee404354692ffa0b2f3697f3",
"text": "Automotive surround view camera system is an emerging automotive ADAS (Advanced Driver Assistance System) technology that assists the driver in parking the vehicle safely by allowing him/her to see a top-down view of the 360 degree surroundings of the vehicle. Such a system normally consists of four to six wide-angle (fish-eye lens) cameras mounted around the vehicle, each facing a different direction. From these camera inputs, a composite bird-eye view of the vehicle is synthesized and shown to the driver in real-time during parking. In this paper, we present a surround view camera solution that consists of three key algorithm components: geometric alignment, photometric alignment, and composite view synthesis. Our solution produces a seamlessly stitched bird-eye view of the vehicle from four cameras. It runs real-time on DSP C66x producing an 880x1080 output video at 30 fps.",
"title": ""
},
{
"docid": "77371cfa61dbb3053f3106f5433d23a7",
"text": "We present a new noniterative approach to synthetic aperture radar (SAR) autofocus, termed the multichannel autofocus (MCA) algorithm. The key in the approach is to exploit the multichannel redundancy of the defocusing operation to create a linear subspace, where the unknown perfectly focused image resides, expressed in terms of a known basis formed from the given defocused image. A unique solution for the perfectly focused image is then directly determined through a linear algebraic formulation by invoking an additional image support condition. The MCA approach is found to be computationally efficient and robust and does not require prior assumptions about the SAR scene used in existing methods. In addition, the vector-space formulation of MCA allows sharpness metric optimization to be easily incorporated within the restoration framework as a regularization term. We present experimental results characterizing the performance of MCA in comparison with conventional autofocus methods and discuss the practical implementation of the technique.",
"title": ""
},
{
"docid": "edfc15795f1f69d31c36f73c213d2b7d",
"text": "Three studies tested whether adopting strong (relative to weak) approach goals in relationships (i.e., goals focused on the pursuit of positive experiences in one's relationship such as fun, growth, and development) predict greater sexual desire. Study 1 was a 6-month longitudinal study with biweekly assessments of sexual desire. Studies 2 and 3 were 2-week daily experience studies with daily assessments of sexual desire. Results showed that approach relationship goals buffered against declines in sexual desire over time and predicted elevated sexual desire during daily sexual interactions. Approach sexual goals mediated the association between approach relationship goals and daily sexual desire. Individuals with strong approach goals experienced even greater desire on days with positive relationship events and experienced less of a decrease in desire on days with negative relationships events than individuals who were low in approach goals. In two of the three studies, the association between approach relationship goals and sexual desire was stronger for women than for men. Implications of these findings for maintaining sexual desire in long-term relationships are discussed.",
"title": ""
},
{
"docid": "981b4977ed3524545d9ae5016d45c8d6",
"text": "Related to different international activities in the Optical Wireless Communications (OWC) field Graz University of Technology (TUG) has high experience on developing different high data rate transmission systems and is well known for measurements and analysis of the OWC-channel. In this paper, a novel approach for testing Free Space Optical (FSO) systems in a controlled laboratory condition is proposed. Based on fibre optics technology, TUG testbed could effectively emulate the operation of real wireless optical communication systems together with various atmospheric perturbation effects such as fog and clouds. The suggested architecture applies an optical variable attenuator as a main device representing the tropospheric influences over the launched Gaussian beam in the free space channel. In addition, the current scheme involves an attenuator control unit with an external Digital Analog Converter (DAC) controlled by self-developed software. To obtain optimal results in terms of the presented setup, a calibration process including linearization of the non-linear attenuation versus voltage graph is performed. Finally, analytical results of the attenuation based on real measurements with the hardware channel emulator under laboratory conditions are shown. The implementation can be used in further activities to verify OWC-systems, before testing under real conditions.",
"title": ""
},
{
"docid": "30e93cb20194b989b26a8689f06b8343",
"text": "We present a robust method for solving the map matching problem exploiting massive GPS trace data. Map matching is the problem of determining the path of a user on a map from a sequence of GPS positions of that user --- what we call a trajectory. Commonly obtained from GPS devices, such trajectory data is often sparse and noisy. As a result, the accuracy of map matching is limited due to ambiguities in the possible routes consistent with trajectory samples. Our approach is based on the observation that many regularity patterns exist among common trajectories of human beings or vehicles as they normally move around. Among all possible connected k-segments on the road network (i.e., consecutive edges along the network whose total length is approximately k units), a typical trajectory collection only utilizes a small fraction. This motivates our data-driven map matching method, which optimizes the projected paths of the input trajectories so that the number of the k-segments being used is minimized. We present a formulation that admits efficient computation via alternating optimization. Furthermore, we have created a benchmark for evaluating the performance of our algorithm and others alike. Experimental results demonstrate that the proposed approach is superior to state-of-art single trajectory map matching techniques. Moreover, we also show that the extracted popular k-segments can be used to process trajectories that are not present in the original trajectory set. This leads to a map matching algorithm that is as efficient as existing single trajectory map matching algorithms, but with much improved map matching accuracy.",
"title": ""
},
{
"docid": "3d2200cc6b71995c6a4f88897bb73ea0",
"text": "With biomedical literature increasing at a rate of several thousand papers per week, it is impossible to keep abreast of all developments; therefore, automated means to manage the information overload are required. Text mining techniques, which involve the processes of information retrieval, information extraction and data mining, provide a means of solving this. By adding meaning to text, these techniques produce a more structured analysis of textual knowledge than simple word searches, and can provide powerful tools for the production and analysis of systems biology models.",
"title": ""
},
{
"docid": "def6762457fd4e95a35e3c83990c4943",
"text": "The possibility of controlling dexterous hand prostheses by using a direct connection with the nervous system is particularly interesting for the significant improvement of the quality of life of patients, which can derive from this achievement. Among the various approaches, peripheral nerve based intrafascicular electrodes are excellent neural interface candidates, representing an excellent compromise between high selectivity and relatively low invasiveness. Moreover, this approach has undergone preliminary testing in human volunteers and has shown promise. In this paper, we investigate whether the use of intrafascicular electrodes can be used to decode multiple sensory and motor information channels with the aim to develop a finite state algorithm that may be employed to control neuroprostheses and neurocontrolled hand prostheses. The results achieved both in animal and human experiments show that the combination of multiple sites recordings and advanced signal processing techniques (such as wavelet denoising and spike sorting algorithms) can be used to identify both sensory stimuli (in animal models) and motor commands (in a human volunteer). These findings have interesting implications, which should be investigated in future experiments.",
"title": ""
},
{
"docid": "8750e04065d8f0b74b7fee63f4966e59",
"text": "The Customer churn is a crucial activity in rapidly growing and mature competitive telecommunication sector and is one of the greatest importance for a project manager. Due to the high cost of acquiring new customers, customer churn prediction has emerged as an indispensable part of telecom sectors’ strategic decision making and planning process. It is important to forecast customer churn behavior in order to retain those customers that will churn or possible may churn. This study is another attempt which makes use of rough set theory, a rule-based decision making technique, to extract rules for churn prediction. Experiments were performed to explore the performance of four different algorithms (Exhaustive, Genetic, Covering, and LEM2). It is observed that rough set classification based on genetic algorithm, rules generation yields most suitable performance out of the four rules generation algorithms. Moreover, by applying the proposed technique on publicly available dataset, the results show that the proposed technique can fully predict all those customers that will churn or possibly may churn and also provides useful information to strategic decision makers as well.",
"title": ""
},
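The passage above leans on rough set theory for rule extraction. As a rough sketch of the underlying idea (not the Exhaustive, Genetic, Covering, or LEM2 algorithms evaluated in the study), the lower approximation of a decision class collects the condition-attribute equivalence classes that fall entirely inside that class, and each such block reads directly as a "certain" rule. The attribute names and toy rows below are invented for illustration.

```python
from collections import defaultdict

def lower_approximation(rows, condition_attrs, decision_attr, target_class):
    """Return the condition-attribute blocks fully contained in the target
    decision class; each block corresponds to a 'certain' rule."""
    blocks = defaultdict(list)
    for row in rows:
        key = tuple(row[a] for a in condition_attrs)
        blocks[key].append(row)
    certain_rules = []
    for key, members in blocks.items():
        if all(m[decision_attr] == target_class for m in members):
            certain_rules.append(dict(zip(condition_attrs, key)))
    return certain_rules

# Toy churn table (attribute names are illustrative, not from the paper's dataset).
rows = [
    {"contract": "monthly", "complaints": "high", "churn": "yes"},
    {"contract": "monthly", "complaints": "low",  "churn": "no"},
    {"contract": "yearly",  "complaints": "high", "churn": "no"},
    {"contract": "monthly", "complaints": "high", "churn": "yes"},
]
print(lower_approximation(rows, ["contract", "complaints"], "churn", "yes"))
# -> [{'contract': 'monthly', 'complaints': 'high'}]
#    i.e. IF contract=monthly AND complaints=high THEN churn=yes
```

The complementary upper approximation (blocks that merely intersect the class) would yield the "possible" rules, which is where the four rule-generation algorithms differ.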
{
"docid": "e63836b5053b7f56d5ad5081a7ef79b7",
"text": "This paper presents interfaces for exploring large collections of fonts for design tasks. Existing interfaces typically list fonts in a long, alphabetically-sorted menu that can be challenging and frustrating to explore. We instead propose three interfaces for font selection. First, we organize fonts using high-level descriptive attributes, such as \"dramatic\" or \"legible.\" Second, we organize fonts in a tree-based hierarchical menu based on perceptual similarity. Third, we display fonts that are most similar to a user's currently-selected font. These tools are complementary; a user may search for \"graceful\" fonts, select a reasonable one, and then refine the results from a list of fonts similar to the selection. To enable these tools, we use crowdsourcing to gather font attribute data, and then train models to predict attribute values for new fonts. We use attributes to help learn a font similarity metric using crowdsourced comparisons. We evaluate the interfaces against a conventional list interface and find that our interfaces are preferred to the baseline. Our interfaces also produce better results in two real-world tasks: finding the nearest match to a target font, and font selection for graphic designs.",
"title": ""
},
{
"docid": "e812bed02753b807d1e03a2e05e87cb8",
"text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates. In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.",
"title": ""
},
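The law-of-large-numbers argument in the passage above is easy to check numerically. The sketch below assumes unbiased, independent estimation errors and made-up figures (1,000 person-hours split into 20 packages, 40% error per individual estimate); it only illustrates why bottom estimates tend to cancel, not any specific estimation method.

```python
import numpy as np

rng = np.random.default_rng(0)
true_total = 1000.0          # person-hours, arbitrary
n_tasks = 20                 # bottom-up: 20 packages of 50 h each
rel_sd = 0.4                 # each individual estimate is off by ~40% (1 s.d.)

runs = 10_000
# Top-down: one estimate of the whole project.
top_down = true_total * (1 + rng.normal(0, rel_sd, size=runs))
# Bottom-up: sum of 20 independent estimates of 50 h tasks.
bottom_up = (true_total / n_tasks * (1 + rng.normal(0, rel_sd, size=(runs, n_tasks)))).sum(axis=1)

for name, est in [("top-down", top_down), ("bottom-up", bottom_up)]:
    rel_err = np.abs(est - true_total) / true_total
    print(f"{name:10s} mean |relative error| = {rel_err.mean():.3f}")
# Independent errors partly cancel: the bottom-up error shrinks roughly as rel_sd / sqrt(n_tasks).
```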
{
"docid": "3509f90848c45ad34ebbd30b9d357c29",
"text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.",
"title": ""
}
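For the Granger-causality step described above, statsmodels provides a ready-made test. The sketch below builds a synthetic textual feature series that leads a target series by one step and checks whether the second column Granger-causes the first; the feature construction and lag order are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 200
# Hypothetical daily frequency of an N-gram/topic/sentiment feature extracted from text.
text_feature = rng.normal(size=n)
# Target time series that (by construction) lags the text feature by one day.
target = np.roll(text_feature, 1) * 0.8 + rng.normal(scale=0.3, size=n)

# grangercausalitytests checks whether column 2 Granger-causes column 1.
data = pd.DataFrame({"target": target, "text_feature": text_feature}).iloc[1:]
results = grangercausalitytests(data[["target", "text_feature"]], maxlag=3, verbose=False)
for lag, (tests, _) in results.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4g}")
```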
] |
scidocsrr
|
c902f944b7964751087a41e36f934f62
|
Looking Beyond Appearances: Synthetic Training Data for Deep CNNs in Re-identification
|
[
{
"docid": "30719d273f3966d80335db625792c3b7",
"text": "Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pretrained convnet with minimal setup. Published in the Deep Learning Workshop, 31 st International Conference on Machine Learning, Lille, France, 2015. Copyright 2015 by the author(s).",
"title": ""
},
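The first tool described above amounts to capturing and plotting intermediate activations as inputs stream through a convnet. A minimal PyTorch sketch of that idea (not the authors' released toolbox) uses a forward hook on an intermediate layer; the network, layer choice, and random input are stand-ins for a real pretrained model and webcam frame.

```python
import torch
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet18(weights=None).eval()    # untrained weights, demo only
activations = {}

def hook(name):
    def _capture(module, inputs, output):
        activations[name] = output.detach()
    return _capture

# Register a forward hook on an intermediate layer.
model.layer2.register_forward_hook(hook("layer2"))

x = torch.randn(1, 3, 224, 224)                 # stand-in for a webcam frame
with torch.no_grad():
    model(x)

fmap = activations["layer2"][0]                 # (channels, H, W)
fig, axes = plt.subplots(2, 4, figsize=(8, 4))
for ax, channel in zip(axes.flat, fmap[:8]):
    ax.imshow(channel.numpy(), cmap="viridis")  # one panel per feature map
    ax.axis("off")
plt.show()
```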
{
"docid": "8d19d251e31dd3564f7bcab33cc3c9b7",
"text": "The visual appearance of a person is easily affected by many factors like pose variations, viewpoint changes and camera parameter differences. This makes person Re-Identification (ReID) among multiple cameras a very challenging task. This work is motivated to learn mid-level human attributes which are robust to such visual appearance variations. And we propose a semi-supervised attribute learning framework which progressively boosts the accuracy of attributes only using a limited number of labeled data. Specifically, this framework involves a three-stage training. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. Then it is fine-tuned on another dataset only labeled with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. The predicted attributes, namely deep attributes exhibit superior generalization ability across different datasets. By directly using the deep attributes with simple Cosine distance, we have obtained surprisingly good accuracy on four person ReID datasets. Experiments also show that a simple metric learning modular further boosts our method, making it significantly outperform many recent works.",
"title": ""
},
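The re-identification step above reduces to a cosine-similarity ranking over predicted attribute vectors. A small sketch of that matching step, with random vectors standing in for the dCNN's "deep attributes":

```python
import numpy as np

def cosine_rank(query_attr, gallery_attrs):
    """Rank gallery entries by cosine similarity of their attribute vectors to the query."""
    q = query_attr / np.linalg.norm(query_attr)
    g = gallery_attrs / np.linalg.norm(gallery_attrs, axis=1, keepdims=True)
    sims = g @ q
    return np.argsort(-sims), sims

rng = np.random.default_rng(0)
query = rng.random(10)            # e.g. 10 attribute scores ("long hair", "backpack", ...)
gallery = rng.random((5, 10))     # 5 gallery persons
order, sims = cosine_rank(query, gallery)
print("gallery ranking:", order, "similarities:", np.round(sims[order], 3))
```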
{
"docid": "7d86abdf71d6c9dd05fc41e63952d7bf",
"text": "Crowdsourced 3D CAD models are easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. In a detailed analysis, we use synthetic CAD images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.",
"title": ""
}
] |
[
{
"docid": "0cec4473828bf542d97b20b64071a890",
"text": "The effectiveness of knowledge transfer using classification algorithms depends on the difference between the distribution that generates the training examples and the one from which test examples are to be drawn. The task can be especially difficult when the training examples are from one or several domains different from the test domain. In this paper, we propose a locally weighted ensemble framework to combine multiple models for transfer learning, where the weights are dynamically assigned according to a model's predictive power on each test example. It can integrate the advantages of various learning algorithms and the labeled information from multiple training domains into one unified classification model, which can then be applied on a different domain. Importantly, different from many previously proposed methods, none of the base learning method is required to be specifically designed for transfer learning. We show the optimality of a locally weighted ensemble framework as a general approach to combine multiple models for domain transfer. We then propose an implementation of the local weight assignments by mapping the structures of a model onto the structures of the test domain, and then weighting each model locally according to its consistency with the neighborhood structure around the test example. Experimental results on text classification, spam filtering and intrusion detection data sets demonstrate significant improvements in classification accuracy gained by the framework. On a transfer learning task of newsgroup message categorization, the proposed locally weighted ensemble framework achieves 97% accuracy when the best single model predicts correctly only on 73% of the test examples. In summary, the improvement in accuracy is over 10% and up to 30% across different problems.",
"title": ""
},
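The per-example weighting described above maps each model's decision boundary onto the local structure of the test domain. The sketch below is a much simplified proxy, not the paper's graph-based scheme: each model is weighted on a test point by how consistently it labels that point and its nearest test-set neighbors, and the weighted votes are then combined.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def locally_weighted_predict(models, X_test, k=10):
    """Combine per-model predictions with example-specific weights.

    Crude structure-consistency proxy: a model gets more weight on a test
    point when it labels that point and its k nearest test neighbors alike.
    """
    preds = np.stack([m.predict(X_test) for m in models])        # (n_models, n_test)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_test)
    _, idx = nn.kneighbors(X_test)                               # idx[:, 0] is the point itself
    weights = np.stack([
        (p[idx[:, 1:]] == p[:, None]).mean(axis=1) for p in preds
    ])                                                           # (n_models, n_test)
    weights = weights / np.clip(weights.sum(axis=0, keepdims=True), 1e-12, None)

    classes = np.unique(preds)
    votes = np.zeros((len(classes), X_test.shape[0]))
    for ci, c in enumerate(classes):
        votes[ci] = (weights * (preds == c)).sum(axis=0)
    return classes[votes.argmax(axis=0)]
```

The original framework additionally falls back to local structure (clustering) when no model is reliable around a test example, which this sketch omits.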
{
"docid": "8363252a8a2ad78b290ca892830b1648",
"text": "Similarity-based approaches represent a promising direction for time series analysis. However, many such methods rely on parameter tuning, and some have shortcomings if the time series are multivariate (MTS), due to dependencies between attributes, or the time series contain missing data. In this paper, we address these challenges within the powerful context of kernel methods by proposing the robust time series cluster kernel (TCK). The approach taken leverages the missing data handling properties of Gaussian mixture models (GMM) augmented with informative prior distributions. An ensemble learning approach is exploited to ensure robustness to parameters by combining the clustering results of many GMM to form the final kernel. We evaluate the TCK on synthetic and real data and compare to other state-of-the-art techniques. The experimental results demonstrate that the TCK is robust to parameter choices, provides competitive results for MTS without missing data and outstanding results for missing data.",
"title": ""
},
{
"docid": "c2c2ddb9a6e42edcc1c035636ec1c739",
"text": "As the interest in DevOps continues to grow, there is an increasing need for software organizations to understand how to adopt it successfully. This study has as objective to clarify the concept and provide insight into existing challenges of adopting DevOps. First, the existing literature is reviewed. A definition of DevOps is then formed based on the literature by breaking down the concept into its defining characteristics. We interview 13 subjects in a software company adopting DevOps and, finally, we present 11 impediments for the company’s DevOps adoption that were identified based on the interviews.",
"title": ""
},
{
"docid": "5762adf6fc9a0bf6da037cdb10191400",
"text": "Graphics Processing Unit (GPU) virtualization is an enabling technology in emerging virtualization scenarios. Unfortunately, existing GPU virtualization approaches are still suboptimal in performance and full feature support. This paper introduces gVirt, a product level GPU virtualization implementation with: 1) full GPU virtualization running native graphics driver in guest, and 2) mediated pass-through that achieves both good performance and scalability, and also secure isolation among guests. gVirt presents a virtual full-fledged GPU to each VM. VMs can directly access performance-critical resources, without intervention from the hypervisor in most cases, while privileged operations from guest are trap-and-emulated at minimal cost. Experiments demonstrate that gVirt can achieve up to 95% native performance for GPU intensive workloads, and scale well up to 7 VMs.",
"title": ""
},
{
"docid": "8689be57a5689e27ed952ba16a7e14f7",
"text": "Mission-critical applications require Ultra-Reliable Low Latency (URLLC) wireless connections, where the packet error rate (PER) goes down to 10. Fulfillment of the bold reliability figures becomes meaningful only if it can be related to a statistical model in which the URLLC system operates. However, this model is generally not known and needs to be learned by sampling the wireless environment. In this paper we treat this fundamental problem in the simplest possible communicationtheoretic setting: selecting a transmission rate over a dynamic wireless channel in order to guarantee high transmission reliability. We introduce a novel statistical framework for design and assessment of URLLC systems, consisting of three key components: (i) channel model selection; (ii) learning the model using training; (3) selecting the transmission rate to satisfy the required reliability. As it is insufficient to specify the URLLC requirements only through PER, two types of statistical constraints are introduced, Averaged Reliability (AR) and Probably Correct Reliability (PCR). The analysis and the evaluations show that adequate model selection and learning are indispensable for designing consistent physical layer that asymptotically behaves as if the channel was known perfectly, while maintaining the reliability requirements in URLLC systems.",
"title": ""
},
{
"docid": "530d384b4b5a78fe92cea4b917be8c77",
"text": "The intent of this study was to quantify spine loading during different kettlebell swings and carries. No previously published studies of tissue loads during kettlebell exercises could be found. Given the popularity of kettlebells, this study was designed to provide an insight into the resulting joint loads. Seven male subjects participated in this investigation. In addition, a single case study of the kettlebell swing was performed on an accomplished kettlebell master. Electromyography, ground reaction forces (GRFs), and 3D kinematic data were recorded during exercises using a 16-kg kettlebell. These variables were input into an anatomically detailed biomechanical model that used normalized muscle activation; GRF; and spine, hip, and knee motion to calculate spine compression and shear loads. It was found that kettlebell swings create a hip-hinge squat pattern characterized by rapid muscle activation-relaxation cycles of substantial magnitudes (∼50% of a maximal voluntary contraction [MVC] for the low back extensors and 80% MVC for the gluteal muscles with a 16-kg kettlebell) resulting in about 3,200 N of low back compression. Abdominal muscular pulses together with the muscle bracing associated with carries create kettlebell-specific training opportunities. Some unique loading patterns discovered during the kettlebell swing included the posterior shear of the L4 vertebra on L5, which is opposite in polarity to a traditional lift. Thus, quantitative analysis provides an insight into why many individuals credit kettlebell swings with restoring and enhancing back health and function, although a few find that they irritate tissues.",
"title": ""
},
{
"docid": "80faeaceefd3851b51feef2e50694ef7",
"text": "The sentiment detection of texts has been witnessed a booming interest in recent years, due to the increased availability of online reviews in digital form and the ensuing need to organize them. Till to now, there are mainly four different problems predominating in this research community, namely, subjectivity classification, word sentiment classification, document sentiment classification and opinion extraction. In fact, there are inherent relations between them. Subjectivity classification can prevent the sentiment classifier from considering irrelevant or even potentially misleading text. Document sentiment classification and opinion extraction have often involved word sentiment classification techniques. This survey discusses related issues and main approaches to these problems. 2009 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "cd3bbec4c7f83c9fb553056b1b593bec",
"text": "We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning. Recurrent Neural Networks and Music Many researchers are familiar with feedforward neural networks consisting of 2 or more layers of processing units, each with weighted connections to the next layer. Each unit passes the sum of its weighted inputs through a nonlinear sigmoid function. Each layer’s outputs are fed forward through the network to the next layer, until the output layer is reached. Weights are initialized to small initial random values. Via the back-propagation algorithm (Rumelhart et al. 1986), outputs are compared to targets, and the errors are propagated back through the connection weights. Weights are updated by gradient descent. Through an iterative training procedure, examples (inputs) and targets are presented repeatedly; the network learns a nonlinear function of the inputs. It can then generalize and produce outputs for new examples. These networks have been explored by the computer music community for classifying chords (Laden and Keefe 1991) and other musical tasks (Todd and Loy 1991, Griffith and Todd 1999). A recurrent network uses feedback from one or more of its units as input in choosing the next output. This means that values generated by units at time step t-1, say y(t-1), are part of the inputs x(t) used in selecting the next set of outputs y(t). A network may be fully recurrent; that is all units are connected back to each other and to themselves. Or part of the network may be fed back in recurrent links. Todd (Todd 1991) uses a Jordan recurrent network (Jordan 1986) to reproduce classical songs and then to produce new songs. The outputs are recurrently fed back as inputs as shown in Figure 1. In addition, self-recurrence on the inputs provides a decaying history of these inputs. The weight update algorithm is back-propagation, using teacher forcing (Williams and Zipser 1988). With teacher forcing, the target outputs are presented to the recurrent inputs from the output units (instead of the actual outputs, which are not correct yet during training). Pitches (on output or input) are represented in a localized binary representation, with one bit for each of the 12 chromatic notes. More bits can be added for more octaves. C is represented as 100000000000. C# is 010000000000, D is 001000000000. Time is divided into 16th note increments. Note durations are determined by how many increments a pitch’s output unit is on (one). E.g. an eighth note lasts for two time increments. Rests occur when all outputs are off (zero). Figure 1. Jordan network, with outputs fed back to inputs. (Mozer 1994)’s CONCERT uses a backpropagationthrough-time (BPTT) recurrent network to learn various musical tasks and to learn melodies with harmonic accompaniment. Then, CONCERT can run in generation mode to compose new music. The BPTT algorithm (Williams and Zipser 1992, Werbos 1988, Campolucci 1998) can be used with a fully recurrent network where the outputs of all units are connected to the inputs of all units, including themselves. 
The network can include external inputs and optionally, may include a regular feedforward output network (see Figure 2). The BPTT weight updates are proportional to the gradient of the sum of errors over every time step in the interval between start time t0 and end time t1, assuming the error at time step t is affected by the outputs at all previous time steps, starting with t0. BPTT requires saving all inputs, states, and errors for all time steps, and updating the weights in a batch operation at the end, time t1. One sequence (each example) requires one batch weight update. Figure 2. A fully self-recurrent network with external inputs, and optional feedforward output attachment. If there is no output attachment, one or more recurrent units are designated as output units. CONCERT is a combination of BPTT with a layer of output units that are probabilistically interpreted, and a maximum likelihood training criterion (rather than a squared error criterion). There are two sets of outputs (and two sets of inputs), one set for pitch and the other for duration. One pass through the network corresponds to a note, rather than a slice of time. We present only the pitch representation here since that is our focus. Mozer uses a psychologically based representation of musical notes. Figure 3 shows the chromatic circle (CC) and the circle of fifths (CF), used with a linear octave value for CONCERT’s pitch representation. Ignoring octaves, we refer to the rest of the representation as CCCF. Six digits represent the position of a pitch on CC and six more its position on CF. C is represented as 000000 000000, C# as 000001 111110, D as 000011 111111, and so on. Mozer uses -1,1 rather than 0,1 because of implementation details. Figure 3. Chromatic Circle on Left, Circle of Fifths on Right. Pitch position on each circle determines its representation. For chords, CONCERT uses the overlapping subharmonics representation of (Laden and Keefe, 1991). Each chord tone starts in Todd’s binary representation, but 5 harmonics (integer multiples of its frequency) are added. C3 is now C3, C4, G4, C5, E5 requiring a 3 octave representation. Because the 7th of the chord does not overlap with the triad harmonics, Laden and Keefe use triads only. C major triad C3, E3, G3, with harmonics, is C3, C4, G4, C5, E5, E3, E4, B4, E5, G#5, G3, G4, D4, G5, B5. The triad pitches and harmonics give an overlapping representation. Each overlapping pitch adds 1 to its corresponding input. CONCERT excludes octaves, leaving 12 highly overlapping chord inputs, plus an input that is positive when certain key-dependent chords appear, and learns waltzes over a harmonic chord structure. Eck and Schmidhuber (2002) use Long Short-term Memory (LSTM) recurrent networks to learn and compose blues music (Hochreiter and Schmidhuber 1997, and see Gers et al., 2000 for succinct pseudo-code for the algorithm). An LSTM network consists of input units, output units, and a set of memory blocks, each of which includes one or more memory cells. Blocks are connected to each other recurrently. Figure 4 shows an LSTM network on the left, and the contents of one memory block (this one with one cell) on the right. There may also be a direct connection from external inputs to the output units. This is the configuration found in Gers et al., and the one we use in our experiments. Eck and Schmidhuber also add recurrent connections from output units to memory blocks. Each block contains one or more memory cells that are self-recurrent. 
All other units in the block gate the inputs, outputs, and the memory cell itself. A memory cell can “cache” errors and release them for weight updates much later in time. The gates can learn to delay a block’s outputs, to reset the memory cells, and to inhibit inputs from reaching the cell or to allow inputs in. Figure 4. An LSTM network on the left and a one-cell memory block on the right, with input, forget, and output gates. Black squares on gate connections show that the gates can control whether information is passed to the cell, from the cell, or even within the cell. Weight updates are based on gradient descent, with multiplicative gradient calculations for gates, and approximations from the truncated BPTT (Williams and Peng 1990) and Real-Time Recurrent Learning (RTRL) (Robinson and Fallside 1987) algorithms. LSTM networks are able to perform counting tasks in time-series. Eck and Schmidhuber’s model of blues music is a 12-bar chord sequence over which music is composed/improvised. They successfully trained an LSTM network to learn a sequence of blues chords, with varying durations. Splitting time into 8th note increments, each chord’s duration is either 8 or 4 time steps (whole or half durations). Chords are sets of 3 or 4 tones (triads or triads plus sevenths), represented in a 12-bit localized binary representation with values of 1 for a chord pitch, and 0 for a non-chord pitch. Chords are inverted to fit in 1 octave. For example, C7 is represented as 100010010010 (C,E,G,B-flat), and F7 is 100101000100 (F,A,C,E-flat inverted to C,E-flat,F,A). The network has 4 memory blocks, each containing 2 cells. The outputs are considered probabilities of whether the corresponding note is on or off. The goal is to obtain an output of more that .5 for each note that should be on in a particular chord, with all other outputs below .5. Eck and Schmidhuber’s work includes learning melody and chords with two LSTM networks containing 4 blocks each. Connections are made from the chord network to the melody network, but not vice versa. The authors composed short 1-bar melodies over each of the 12 possible bars. The network is trained on concatenations of the short melodies over the 12-bar blues chord sequence. The melody network is trained until the chords network has learned according to the criterion. In music generation mode, the network can generate new melodies using this training. In a system called CHIME (Franklin 2000, 2001), we first train a Jordan recurrent network (Figure 1) to produce 3 Sonny Rollins jazz/blues melodies. The current chord and index number of the song are non-recurrent inputs to the network. Chords are represented as sets of 4 note values of 1 in a 12-note input layer, with non-chord note inputs set to 0 just as in Eck and Schmidhuber’s chord representation. Chords are also inverted to fit within one octave. 24 (2 octaves) of the outputs are notes, and the 25th is a rest. Of these 25, the unit with the largest value ",
"title": ""
},
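The pitch and chord encodings discussed above are easy to make concrete. The sketch below implements Todd's 12-bit localized binary pitch vector and the inverted-triad chord input used for the blues chords (the C7 example reproduces the 100010010010 pattern quoted above); the circle-of-fifths helper only hints at the CC/CF idea and is not a faithful reimplementation of Mozer's representation or of Circles of Thirds.

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_one_hot(name):
    """Todd-style localized binary encoding: one bit per chromatic pitch class."""
    vec = [0] * 12
    vec[PITCH_CLASSES.index(name)] = 1
    return vec

def chord_multi_hot(names):
    """Chord input: chord tones set to 1, inverted into one octave."""
    vec = [0] * 12
    for n in names:
        vec[PITCH_CLASSES.index(n)] = 1
    return vec

def circle_of_fifths_position(name):
    """Position of a pitch class on the circle of fifths (C=0, G=1, D=2, ...)."""
    return (PITCH_CLASSES.index(name) * 7) % 12

print(pitch_one_hot("C"))                      # [1,0,0,0,0,0,0,0,0,0,0,0]
print(chord_multi_hot(["C", "E", "G", "A#"]))  # C7 -> 100010010010 (C, E, G, B-flat)
print(circle_of_fifths_position("G"))          # 1
```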
{
"docid": "70cb3fed4ac11ae1fee4e56781c3aed2",
"text": "Affordances represent the behavior of objects in terms of the robot's motor and perceptual skills. This type of knowledge plays a crucial role in developmental robotic systems, since it is at the core of many higher level skills such as imitation. In this paper, we propose a general affordance model based on Bayesian networks linking actions, object features and action effects. The network is learnt by the robot through interaction with the surrounding objects. The resulting probabilistic model is able to deal with uncertainty, redundancy and irrelevant information. We evaluate the approach using a real humanoid robot that interacts with objects.",
"title": ""
},
{
"docid": "16e6acd62753e8c0c206bde20f3cbe52",
"text": "In this paper we focus our attention on the comparison of various lemmatization and stemming algorithms, which are often used in nature language processing (NLP). Sometimes these two techniques are considered to be identical, but there is an important difference. Lemmatization is generally more utilizable, because it produces the basic word form which is required in many application areas (i.e. cross-language processing and machine translation). However, lemmatization is a difficult task especially for highly inflected natural languages having a lot of words for the same normalized word form. We present a novel lemmatization algorithm which utilizes the multilingual semantic thesaurus Eurowordnet (EWN). We describe the algorithm in detail and compare it with other widely used algorithms for word normalization on two different corpora. We present promising results obtained by our EWN-based lemmatization approach in comparison to other techniques. We also discuss the influence of the word normalization on classification task in general. In overall, the performance of our method is good and it achieves similar precision and recall in comparison with other word normalization methods. However, our experiments indicate that word normalization does not affect the text classification task significantly.",
"title": ""
},
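The contrast drawn above between stemming and lemmatization is easy to demonstrate with off-the-shelf NLTK components. The EWN-based algorithm from the paper is not publicly packaged, so this sketch uses WordNet instead of EuroWordNet; the word list is arbitrary.

```python
# pip install nltk;  nltk.download("wordnet") is needed once for the lemmatizer.
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word, pos in [("studies", "v"), ("better", "a"), ("corpora", "n"), ("running", "v")]:
    print(f"{word:10s} stem={stemmer.stem(word):10s} lemma={lemmatizer.lemmatize(word, pos=pos)}")
# Typical output:
# studies    stem=studi      lemma=study
# better     stem=better     lemma=good      <- a normalization a suffix-stripping stemmer cannot produce
# corpora    stem=corpora    lemma=corpus
# running    stem=run        lemma=run
```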
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
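A paired bootstrap in the spirit of the passage above resamples the test set with replacement and counts how often one system outscores the other. The sketch below leaves the corpus-level metric as a placeholder callable (e.g., a BLEU implementation); the resample count and significance threshold are conventional choices, not prescriptions from the paper.

```python
import numpy as np

def paired_bootstrap(metric, sys_a, sys_b, refs, n_resamples=1000, seed=0):
    """Estimate P(system A beats system B) under resampling of the test set.

    `metric(hypotheses, references)` is any corpus-level score (e.g. BLEU);
    it is a placeholder here, not a specific library call.
    """
    rng = np.random.default_rng(seed)
    n = len(refs)
    wins = 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)            # sample sentences with replacement
        a = metric([sys_a[i] for i in idx], [refs[i] for i in idx])
        b = metric([sys_b[i] for i in idx], [refs[i] for i in idx])
        wins += a > b
    return wins / n_resamples                       # e.g. >= 0.95 -> A better at ~5% level

# usage sketch (with a hypothetical corpus_bleu-style callable):
# p = paired_bootstrap(corpus_bleu, hyps_a, hyps_b, refs)
```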
{
"docid": "0926e8e4dd33240c2ad4e028980f3f95",
"text": "The medical evaluation is an important part of the clinical and legal process when child sexual abuse is suspected. Practitioners who examine children need to be up to date on current recommendations regarding when, how, and by whom these evaluations should be conducted, as well as how the medical findings should be interpreted. A previously published article on guidelines for medical care for sexually abused children has been widely used by physicians, nurses, and nurse practitioners to inform practice guidelines in this field. Since 2007, when the article was published, new research has suggested changes in some of the guidelines and in the table that lists medical and laboratory findings in children evaluated for suspected sexual abuse and suggests how these findings should be interpreted with respect to sexual abuse. A group of specialists in child abuse pediatrics met in person and via online communication from 2011 through 2014 to review published research as well as recommendations from the Centers for Disease Control and Prevention and the American Academy of Pediatrics and to reach consensus on if and how the guidelines and approach to interpretation table should be updated. The revisions are based, when possible, on data from well-designed, unbiased studies published in high-ranking, peer-reviewed, scientific journals that were reviewed and vetted by the authors. When such studies were not available, recommendations were based on expert consensus.",
"title": ""
},
{
"docid": "afb573f1b5c7e442b98b3214dd73406c",
"text": "This paper seeks to analyze the phenomenon of wartime rape and sexual torture of Croatian and Iraqi men and to explore the avenues for its prosecution under international humanitarian and human rights law. Male rape, in time of war, is predominantly an assertion of power and aggression rather than an attempt on the part of the perpetrator to satisfy sexual desire. The effect of such a horrible attack is to damage the victim's psyche, rob him of his pride, and intimidate him. In Bosnia- Herzegovina, Croatia, and Iraq, therefore, male rape and sexual torture has been used as a weapon of war with dire consequences for the victim's mental, physical, and sexual health. Testimonies collected at the Medical Centre for Human Rights in Zagreb and reports received from Iraq make it clear that prisoners in these conflicts have been exposed to sexual humiliation, as well as to systematic and systemic sexual torture. This paper calls upon the international community to combat the culture of impunity in both dictator-ruled and democratic countries by bringing the crime of wartime rape into the international arena, and by removing all barriers to justice facing the victims. Moreover, it emphasizes the fact that wartime rape is the ultimate humiliation that can be inflicted on a human being, and it must be regarded as one of the most grievous crimes against humanity. The international community has to consider wartime rape a crime of war and a threat to peace and security. It is in this respect that civilian community associations can fulfill their duties by encouraging victims of male rape to break their silence and address their socio-medical needs, including reparations and rehabilitation.",
"title": ""
},
{
"docid": "353d6ed75f2a4bca5befb5fdbcea2bcc",
"text": "BACKGROUND\nThe number of mental health apps (MHapps) developed and now available to smartphone users has increased in recent years. MHapps and other technology-based solutions have the potential to play an important part in the future of mental health care; however, there is no single guide for the development of evidence-based MHapps. Many currently available MHapps lack features that would greatly improve their functionality, or include features that are not optimized. Furthermore, MHapp developers rarely conduct or publish trial-based experimental validation of their apps. Indeed, a previous systematic review revealed a complete lack of trial-based evidence for many of the hundreds of MHapps available.\n\n\nOBJECTIVE\nTo guide future MHapp development, a set of clear, practical, evidence-based recommendations is presented for MHapp developers to create better, more rigorous apps.\n\n\nMETHODS\nA literature review was conducted, scrutinizing research across diverse fields, including mental health interventions, preventative health, mobile health, and mobile app design.\n\n\nRESULTS\nSixteen recommendations were formulated. Evidence for each recommendation is discussed, and guidance on how these recommendations might be integrated into the overall design of an MHapp is offered. Each recommendation is rated on the basis of the strength of associated evidence. It is important to design an MHapp using a behavioral plan and interactive framework that encourages the user to engage with the app; thus, it may not be possible to incorporate all 16 recommendations into a single MHapp.\n\n\nCONCLUSIONS\nRandomized controlled trials are required to validate future MHapps and the principles upon which they are designed, and to further investigate the recommendations presented in this review. Effective MHapps are required to help prevent mental health problems and to ease the burden on health systems.",
"title": ""
},
{
"docid": "d60a5bbf4ca9beca8197e75845899e9a",
"text": "Learning from unbalanced datasets presents a convoluted problem in which traditional learning algorithms may perform poorly. The objective functions used for learning the classifiers typically tend to favor the larger, less important classes in such problems. This paper compares the performance of several popular decision tree splitting criteria – information gain, Gini measure, and DKM – and identifies a new skew insensitive measure in Hellinger distance. We outline the strengths of Hellinger distance in class imbalance, propose its application in forming decision trees, and perform a comprehensive comparative analysis between each decision tree construction method. In addition, we consider the performance of each tree within a powerful sampling wrapper framework to capture the interaction of the splitting metric and sampling. We evaluate over this wide range of datasets and determine which operate best under class imbalance.",
"title": ""
},
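For reference, the two-class, binary-split form of the Hellinger distance criterion discussed above fits in a few lines. The sketch follows the usual HDDT formulation (distance between the class-conditional branch distributions), with a toy label vector to show that a class-separating split scores higher than a random one; it is not the full tree-building or sampling-wrapper pipeline evaluated in the paper.

```python
import numpy as np

def hellinger_split_value(y, mask):
    """Hellinger distance between the class-conditional branch distributions
    induced by a binary split (mask = True -> left branch). Larger is better."""
    y = np.asarray(y)
    pos, neg = (y == 1), (y == 0)
    d = 0.0
    for branch in (mask, ~mask):
        p_branch_given_pos = branch[pos].mean() if pos.any() else 0.0
        p_branch_given_neg = branch[neg].mean() if neg.any() else 0.0
        d += (np.sqrt(p_branch_given_pos) - np.sqrt(p_branch_given_neg)) ** 2
    return np.sqrt(d)

# Toy check: a split that isolates the minority class scores higher than a random one.
y    = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 1])
good = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 1], dtype=bool)   # perfectly separates classes
rand = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
print(hellinger_split_value(y, good), ">", hellinger_split_value(y, rand))
```

Note that the criterion depends only on the class-conditional branch probabilities, not on the class priors, which is why it is insensitive to skew.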
{
"docid": "25b77292def9ba880fecb58a38897400",
"text": "In this paper, we present a successful operation of Gallium Nitride(GaN)-based three-phase inverter with high efficiency of 99.3% for driving motor at 900W under the carrier frequency of 6kHz. This efficiency well exceeds the value by IGBT (Insulated Gate Bipolar Transistor). This demonstrates that GaN has a great potential for power switching application competing with SiC. Fully reduced on-state resistance in a new normally-off GaN transistor called Gate Injection Transistor (GIT) greatly helps to increase the efficiency. In addition, use of the bidirectional operation of the lateral and compact GITs with synchronous gate driving, the inverter is operated free from fly-wheel diodes which have been connected in parallel with IGBTs in a conventional inverter system.",
"title": ""
},
{
"docid": "2e6b034cbb73d91b70e3574a06140621",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBitter melon (Momordica charantia L.) has been widely used as an traditional medicine treatment for diabetic patients in Asia. In vitro and animal studies suggested its hypoglycemic activity, but limited human studies are available to support its use.\n\n\nAIM OF STUDY\nThis study was conducted to assess the efficacy and safety of three doses of bitter melon compared with metformin.\n\n\nMATERIALS AND METHODS\nThis is a 4-week, multicenter, randomized, double-blind, active-control trial. Patients were randomized into 4 groups to receive bitter melon 500 mg/day, 1,000 mg/day, and 2,000 mg/day or metformin 1,000 mg/day. All patients were followed for 4 weeks.\n\n\nRESULTS\nThere was a significant decline in fructosamine at week 4 of the metformin group (-16.8; 95% CI, -31.2, -2.4 μmol/L) and the bitter melon 2,000 mg/day group (-10.2; 95% CI, -19.1, -1.3 μmol/L). Bitter melon 500 and 1,000 mg/day did not significantly decrease fructosamine levels (-3.5; 95% CI -11.7, 4.6 and -10.3; 95% CI -22.7, 2.2 μmol/L, respectively).\n\n\nCONCLUSIONS\nBitter melon had a modest hypoglycemic effect and significantly reduced fructosamine levels from baseline among patients with type 2 diabetes who received 2,000 mg/day. However, the hypoglycemic effect of bitter melon was less than metformin 1,000 mg/day.",
"title": ""
},
{
"docid": "3bab09c8759c0b7040c48003c7a745bc",
"text": "We describe an approach to coreference resolution that relies on the intuition that easy decisions should be made early, while harder decisions should be left for later when more information is available. We are inspired by the recent success of the rule-based system of Raghunathan et al. (2010), which relies on the same intuition. Our system, however, automatically learns from training data what constitutes an easy decision. Thus, we can utilize more features, learn more precise weights, and adapt to any dataset for which training data is available. Experiments show that our system outperforms recent state-of-the-art coreference systems including Raghunathan et al.’s system as well as a competitive baseline that uses a pairwise classifier.",
"title": ""
},
{
"docid": "8c4b1b74d21dcf6d10852deecccece36",
"text": "Trolley problems have been used in the development of moral theory and the psychological study of moral judgments and behavior. Most of this research has focused on people from the West, with implicit assumptions that moral intuitions should generalize and that moral psychology is universal. However, cultural differences may be associated with differences in moral judgments and behavior. We operationalized a trolley problem in the laboratory, with economic incentives and real-life consequences, and compared British and Chinese samples on moral behavior and judgment. We found that Chinese participants were less willing to sacrifice one person to save five others, and less likely to consider such an action to be right. In a second study using three scenarios, including the standard scenario where lives are threatened by an on-coming train, fewer Chinese than British participants were willing to take action and sacrifice one to save five, and this cultural difference was more pronounced when the consequences were less severe than death.",
"title": ""
},
{
"docid": "c85329b679be66d6738c305420d4a02a",
"text": "Energy Internet (EI) is proposed as the evolution of smart grid, aiming to integrate various forms of energy into a highly flexible and efficient grid that provides energy packing and routing functions, similar to the Internet. As an essential part in EI system, a scalable and interoperable communication infrastructure is critical in system construction and operation. In this article, we survey the recent research efforts on EI communications. The motivation and key concepts of EI are first introduced, followed by the key technologies and standardizations enabling the EI communications as well as security issues. Open challenges in system complexity, efficiency, reliability are explored and recent achievements in these research topics are summarized as well.",
"title": ""
}
] |
scidocsrr
|
16c415e08f3bc06c80b5184359e0d817
|
Active visual SLAM for robotic area coverage: Theory and experiment
|
[
{
"docid": "975019aa11bde7dfed5f8392f26260a7",
"text": "This paper reports a real-time monocular visual simultaneous localization and mapping (SLAM) algorithm and results for its application in the area of autonomous underwater ship hull inspection. The proposed algorithm overcomes some of the specific challenges associated with underwater visual SLAM, namely, limited field of view imagery and feature-poor regions. It does so by exploiting our SLAM navigation prior within the image registration pipeline and by being selective about which imagery is considered informative in terms of our visual SLAM map. A novel online bag-of-words measure for intra and interimage saliency are introduced and are shown to be useful for image key-frame selection, information-gain-based link hypothesis, and novelty detection. Results from three real-world hull inspection experiments evaluate the overall approach, including one survey comprising a 3.4-h/2.7-km-long trajectory.",
"title": ""
}
] |
[
{
"docid": "2e0190ff3874bcdb0cc129401f24a3ae",
"text": "End-to-end training makes the neural machine translation (NMT) architecture simpler, yet elegant compared to traditional statistical machine translation (SMT). However, little is known about linguistic patterns of morphology, syntax and semantics learned during the training of NMT systems, and more importantly, which parts of the architecture are responsible for learning each of these phenomena. In this paper we i) analyze how much morphology an NMT decoder learns, and ii) investigate whether injecting target morphology into the decoder helps it produce better translations. To this end we present three methods: i) joint generation, ii) joint-data learning, and iii) multi-task learning. Our results show that explicit morphological information helps the decoder learn target language morphology and improves the translation quality by 0.2–0.6 BLEU points.",
"title": ""
},
{
"docid": "a7e0ff324e4bf4884f0a6e35adf588a3",
"text": "Named Entity Recognition (NER) is a subtask of information extraction and aims to identify atomic entities in text that fall into predefined categories such as person, location, organization, etc. Recent efforts in NER try to extract entities and link them to linked data entities. Linked data is a term used for data resources that are created using semantic web standards such as DBpedia. There are a number of online tools that try to identify named entities in text and link them to linked data resources. Although one can use these tools via their APIs and web interfaces, they use different data resources and different techniques to identify named entities and not all of them reveal this information. One of the major tasks in NER is disambiguation that is identifying the right entity among a number of entities with the same names; for example \"apple\" standing for both \"Apple, Inc.\" the company and the fruit. We developed a similar tool called NERSO, short for Named Entity Recognition Using Semantic Open Data, to automatically extract named entities, disambiguating and linking them to DBpedia entities. Our disambiguation method is based on constructing a graph of linked data entities and scoring them using a graph-based centrality algorithm. We evaluate our system by comparing its performance with two publicly available NER tools. The results show that NERSO performs better.",
"title": ""
},
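The disambiguation step described above scores candidate entities by centrality in a graph built from linked data. A generic sketch with networkx and PageRank follows; the candidate entities, edges, and the choice of PageRank are illustrative assumptions rather than NERSO's exact graph construction or scoring.

```python
# pip install networkx -- a generic sketch of graph-based candidate scoring.
import networkx as nx

# Hypothetical candidate entities for surface forms found in a sentence,
# plus links between candidates (e.g. DBpedia page links / shared categories).
G = nx.Graph()
G.add_edges_from([
    ("Apple_Inc.", "Steve_Jobs"),
    ("Apple_Inc.", "IPhone"),
    ("Steve_Jobs", "IPhone"),
    ("Apple_(fruit)", "Orchard"),
])

scores = nx.pagerank(G)                 # centrality over the candidate graph
candidates_for_apple = ["Apple_Inc.", "Apple_(fruit)"]
best = max(candidates_for_apple, key=scores.get)
print(best, {c: round(scores[c], 3) for c in candidates_for_apple})
# In a sentence that also mentions "Steve Jobs" and "iPhone", Apple_Inc. is the
# better-connected candidate and therefore wins the disambiguation.
```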
{
"docid": "af63f1e1efbb15f2f41a91deb6ec1e32",
"text": "OBJECTIVES\n: A systematic review of the literature to determine the ability of dynamic changes in arterial waveform-derived variables to predict fluid responsiveness and compare these with static indices of fluid responsiveness. The assessment of a patient's intravascular volume is one of the most difficult tasks in critical care medicine. Conventional static hemodynamic variables have proven unreliable as predictors of volume responsiveness. Dynamic changes in systolic pressure, pulse pressure, and stroke volume in patients undergoing mechanical ventilation have emerged as useful techniques to assess volume responsiveness.\n\n\nDATA SOURCES\n: MEDLINE, EMBASE, Cochrane Register of Controlled Trials and citation review of relevant primary and review articles.\n\n\nSTUDY SELECTION\n: Clinical studies that evaluated the association between stroke volume variation, pulse pressure variation, and/or stroke volume variation and the change in stroke volume/cardiac index after a fluid or positive end-expiratory pressure challenge.\n\n\nDATA EXTRACTION AND SYNTHESIS\n: Data were abstracted on study design, study size, study setting, patient population, and the correlation coefficient and/or receiver operating characteristic between the baseline systolic pressure variation, stroke volume variation, and/or pulse pressure variation and the change in stroke index/cardiac index after a fluid challenge. When reported, the receiver operating characteristic of the central venous pressure, global end-diastolic volume index, and left ventricular end-diastolic area index were also recorded. Meta-analytic techniques were used to summarize the data. Twenty-nine studies (which enrolled 685 patients) met our inclusion criteria. Overall, 56% of patients responded to a fluid challenge. The pooled correlation coefficients between the baseline pulse pressure variation, stroke volume variation, systolic pressure variation, and the change in stroke/cardiac index were 0.78, 0.72, and 0.72, respectively. The area under the receiver operating characteristic curves were 0.94, 0.84, and 0.86, respectively, compared with 0.55 for the central venous pressure, 0.56 for the global end-diastolic volume index, and 0.64 for the left ventricular end-diastolic area index. The mean threshold values were 12.5 +/- 1.6% for the pulse pressure variation and 11.6 +/- 1.9% for the stroke volume variation. The sensitivity, specificity, and diagnostic odds ratio were 0.89, 0.88, and 59.86 for the pulse pressure variation and 0.82, 0.86, and 27.34 for the stroke volume variation, respectively.\n\n\nCONCLUSIONS\n: Dynamic changes of arterial waveform-derived variables during mechanical ventilation are highly accurate in predicting volume responsiveness in critically ill patients with an accuracy greater than that of traditional static indices of volume responsiveness. This technique, however, is limited to patients who receive controlled ventilation and who are not breathing spontaneously.",
"title": ""
},
{
"docid": "223b74ccdafcd3fafa372cd6a4fbb6cb",
"text": "Android OS experiences a blazing popularity since the last few years. This predominant platform has established itself not only in the mobile world but also in the Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequences classification using deep learning techniques. Starting from the raw sequence of the app's API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1 K to 33 K malware apps, and 38 K benign apps. The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%e99% and a false positive rate of 0.06% e2%, under all tested datasets and settings. © 2018 The Author(s). Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
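A generic sequence-classification sketch in Keras gives the flavor of the approach above: API method calls are assumed to be already mapped to integer ids, embedded, and fed through a 1D convolution. The vocabulary size, sequence length, and layer sizes are placeholders, not MalDozer's published architecture or hyperparameters.

```python
import tensorflow as tf

VOCAB = 1000        # number of distinct API methods (assumed)
MAX_LEN = 500       # truncated/padded length of an app's API call sequence

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(VOCAB, 64),               # learn an embedding per API method
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # detect local call patterns
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # malware vs benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(api_id_sequences, labels, ...)  # sequences: (n_apps, MAX_LEN) int array
```

A softmax output over family labels would turn the same sketch into the attribution task.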
{
"docid": "e9e0ae150dbbfd2aa4f79c1119aef1b0",
"text": "Modern datacenter (DC) workloads are characterized by increasing diversity and differentiated QoS requirements in terms of the average or worst-case performance. The shift towards DC calls for the new OS architectures that not only gracefully achieve disparate performance goals, but also protect software investments. This paper presents the \"isolate first, then share\" OS architecture. We decompose the OS into the supervisor and several subOSes running side by side: a subOS directly manages physical resources without intervention from the supervisor (isolate resources first), while the supervisor can create, destroy, resize a subOS on-the-fly (then share). We confine state sharing among the supervisor and SubOSes (isolate states first), and provide fast inter-subOS communication mechanisms on demand (then share). We present the first implementation—RainForest, which supports unmodified Linux binaries. Our comprehensive evaluations show RainForest outperforms Linux with three different kernels, LXC, and Xen in terms of improving resource utilization, throughput, scalability, and worst-case performance. The RainForest source code is soon available.",
"title": ""
},
{
"docid": "da694b74b3eaae46d15f589e1abef4b8",
"text": "Impaired water quality caused by human activity and the spread of invasive plant and animal species has been identified as a major factor of degradation of coastal ecosystems in the tropics. The main goal of this study was to evaluate the performance of AnnAGNPS (Annualized NonPoint Source Pollution Model), in simulating runoff and soil erosion in a 48 km watershed located on the Island of Kauai, Hawaii. The model was calibrated and validated using 2 years of observed stream flow and sediment load data. Alternative scenarios of spatial rainfall distribution and canopy interception were evaluated. Monthly runoff volumes predicted by AnnAGNPS compared well with the measured data (R 1⁄4 0.90, P < 0.05); however, up to 60% difference between the actual and simulated runoff were observed during the driest months (May and July). Prediction of daily runoff was less accurate (R 1⁄4 0.55, P < 0.05). Predicted and observed sediment yield on a daily basis was poorly correlated (R 1⁄4 0.5, P < 0.05). For the events of small magnitude, the model generally overestimated sediment yield, while the opposite was true for larger events. Total monthly sediment yield varied within 50% of the observed values, except for May 2004. Among the input parameters the model was most sensitive to the values of ground residue cover and canopy cover. It was found that approximately one third of the watershed area had low sediment yield (0e1 t ha 1 y ), and presented limited erosion threat. However, 5% of the area had sediment yields in excess of 5 t ha 1 y . Overall, the model performed reasonably well, and it can be used as a management tool on tropical watersheds to estimate and compare sediment loads, and identify ‘‘hot spots’’ on the landscape. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bdcc0547fe01857f524d6a295da70387",
"text": "[Context and motivation] Research on eliciting requirements from a large number of online reviews using automated means has focused on functional aspects. Assuring the quality of an app is vital for its success. This is why user feedback concerning quality issues should be considered as well [Question/problem] But to what extent do online reviews of apps address quality characteristics? And how much potential is there to extract such knowledge through automation? [Principal ideas/results] By tagging online reviews, we found that users mainly write about \"usability\" and \"reliability\", but the majority of statements are on a subcharacteristic level, most notably regarding \"operability\", \"adaptability\", \"fault tolerance\", and \"interoperability\". A set of 16 language patterns regarding \"usability\" correctly identified 1,528 statements from a large dataset far more efficiently than our manual analysis of a small subset. [Contribution] We found that statements can especially be derived from online reviews about qualities by which users are directly affected, although with some ambiguity. Language patterns can identify statements about qualities with high precision, though the recall is modest at this time. Nevertheless, our results have shown that online reviews are an unused Big Data source for quality requirements.",
"title": ""
},
{
"docid": "0cf67f363a2912b287ae0321d0a2097e",
"text": "We survey the most recent BIS proposals for the credit risk measurement of retail credits in capital regulations. We also describe the recent trend away from relationship lending toward transactional lending in the small business loan arena. These trends create the opportunity to adopt more analytical, data-based approaches to credit risk measurement. We survey proprietary credit scoring models (such as Fair Isaac), as well as options-theoretic structural models (such as KMV and Moody’s RiskCalc), and reduced-form models (such as Credit Risk Plus). These models allow lenders and regulators to develop techniques that rely on portfolio aggregation to measure retail credit risk exposure. 2003 Elsevier B.V. All rights reserved. JEL classification: G21; G28",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "ea9f5956e09833c107d79d5559367e0e",
"text": "This research is to search for alternatives to the resolution of complex medical diagnosis where human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation; offer an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to get an optimal size of a neural network. The MFNNCA was tested on several benchmarking classification problems including the cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce optimal neural network architecture with good generalization ability.",
"title": ""
},
{
"docid": "f122373d44be16dadd479c75cca34a2a",
"text": "This paper presents the design, fabrication, and evaluation of a novel type of valve that uses an electropermanent magnet [1]. This valve is then used to build actuators for a soft robot. The developed EPM valves require only a brief (5 ms) pulse of current to turn flow on or off for an indefinite period of time. EPMvalves are characterized and demonstrated to be well suited for the control of elastomer fluidic actuators. The valves drive the pressurization and depressurization of fluidic channels within soft actuators. Furthermore, the forward locomotion of a soft, multi-actuator rolling robot is driven by EPM valves. The small size and energy-efficiency of EPM valves may make them valuable in soft mobile robot applications.",
"title": ""
},
{
"docid": "7beb0fa9fa3519d291aa3d224bfc1b74",
"text": "In comparisons among Chicago neighbourhoods, homicide rates in 1988-93 varied more than 100-fold, while male life expectancy at birth ranged from 54 to 77 years, even with effects of homicide mortality removed. This \"cause deleted\" life expectancy was highly correlated with homicide rates; a measure of economic inequality added significant additional prediction, whereas median household income did not. Deaths from internal causes (diseases) show similar age patterns, despite different absolute levels, in the best and worst neighbourhoods, whereas deaths from external causes (homicide, accident, suicide) do not. As life expectancy declines across neighbourhoods, women reproduce earlier; by age 30, however, neighbourhood no longer affects age specific fertility. These results support the hypothesis that life expectancy itself may be a psychologically salient determinant of risk taking and the timing of life transitions.",
"title": ""
},
{
"docid": "d4aaea0107cbebd7896f4cb57fa39c05",
"text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs",
"title": ""
},
{
"docid": "98b536786ecfeab870467c5951924662",
"text": "An historical discussion is provided of the intellectual trends that caused nineteenth century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data are emphasized. Three sources of contemporary neural network research—the binary, linear, and continuous-nonlinear models—are noted. The remainder of the article describes results about continuous-nonlinear models: Many models of contentaddressable memory are shown to be special cases of the Cohen-Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch-Pitts, Boltzmann machine, Hartline-Ratliff-Miller, shunting, masking field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems Purchase Export",
"title": ""
},
{
"docid": "be70a14152656eb886c8a28e7e0dd613",
"text": "OBJECTIVES\nTranscutaneous electrical nerve stimulation (TENS) is an analgesic current that is used in many acute and chronic painful states. The aim of this study was to investigate central pain modulation by low-frequency TENS.\n\n\nMETHODS\nTwenty patients diagnosed with subacromial impingement syndrome of the shoulder were enrolled in the study. Patients were randomized into 2 groups: low-frequency TENS and sham TENS. Painful stimuli were delivered during which functional magnetic resonance imaging scans were performed, both before and after treatment. Ten central regions of interest that were reported to have a role in pain perception were chosen and analyzed bilaterally on functional magnetic resonance images. Perceived pain intensity during painful stimuli was evaluated using visual analog scale (VAS).\n\n\nRESULTS\nIn the low-frequency TENS group, there was a statistically significant decrease in the perceived pain intensity and pain-specific activation of the contralateral primary sensory cortex, bilateral caudal anterior cingulate cortex, and of the ipsilateral supplementary motor area. There was a statistically significant correlation between the change of VAS value and the change of activity in the contralateral thalamus, prefrontal cortex, and the ipsilateral posterior parietal cortex. In the sham TENS group, there was no significant change in VAS value and activity of regions of interest.\n\n\nDISCUSSION\nWe suggest that a 1-session low-frequency TENS may induce analgesic effect through modulation of discriminative, affective, and motor aspects of central pain perception.",
"title": ""
},
{
"docid": "2eb344b6701139be184624307a617c1b",
"text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .",
"title": ""
},
{
"docid": "0fd37a459c95b20e3d80021da1bb281d",
"text": "Social media data are increasingly used as the source of research in a variety of domains. A typical example is urban analytics, which aims at solving urban problems by analyzing data from different sources including social media. The potential value of social media data in tourism studies, which is one of the key topics in urban research, however has been much less investigated. This paper seeks to understand the relationship between social media dynamics and the visiting patterns of visitors to touristic locations in real-world cases. By conducting a comparative study, we demonstrate how social media characterizes touristic locations differently from other data sources. Our study further shows that social media data can provide real-time insights of tourists’ visiting patterns in big events, thus contributing to the understanding of social media data utility in tourism studies.",
"title": ""
},
{
"docid": "8d6b3e28ba335f2c3c98d18994610319",
"text": "We study a sensor node with an energy harvesting source. The generated energy can be stored in a buffer. The sensor node periodically senses a random field and generates a packet. These packets are stored in a queue and transmitted using the energy available at that time. We obtain energy management policies that are throughput optimal, i.e., the data queue stays stable for the largest possible data rate. Next we obtain energy management policies which minimize the mean delay in the queue. We also compare performance of several easily implementable sub-optimal energy management policies. A greedy policy is identified which, in low SNR regime, is throughput optimal and also minimizes mean delay.",
"title": ""
},
{
"docid": "58f39c555b96cb7bbc4d2bc76a19e937",
"text": "A corona discharge generator for surface treatment without the use of a step-up transformer with a high-voltage secondary is presented. The oil bath for high-voltage components is eliminated and still a reasonable volume, efficiency, and reliability of the generator are obtained. The voltage multiplication is achieved by an LC series resonant circuit. The resonant circuit is driven by a bridge type voltage-source resonant inverter. First, feasibility of the proposed method is proved by calculations. Closed form design expressions for key components of the electronic generator are provided. Second, a prototype of the electronic generator is built and efficiency measurements are performed. For power measurement, Lissajous figures and direct averaging of the instantaneous voltage-current product are used. The overall efficiency achieved is in the range between 80% and 90%.",
"title": ""
},
{
"docid": "6a2a1f6ff3fea681c37b19ac51c17fe6",
"text": "The present research investigates the influence of culture on telemedicine adoption and patient information privacy, security, and policy. The results, based on the SEM analysis of the data collected in the United States, demonstrate that culture plays a significant role in telemedicine adoption. The results further show that culture also indirectly influences telemedicine adoption through information security, information privacy, and information policy. Our empirical results further indicate that information security, privacy, and policy impact telemedicine adoption.",
"title": ""
}
] |
scidocsrr
|
31d8115fb763f004268d00aa1ef3ef48
|
A case study-based comparison of web testing techniques applied to AJAX web applications
|
[
{
"docid": "a46b1219945ddf41022073fa29729a10",
"text": "The economic relevance of Web applications increases the importance of controlling and improving their quality. Moreover, the new available technologies for their development allow the insertion of sophisticated functions, but often leave the developers responsible for their organization and evolution. As a consequence, a high demand is emerging for methodologies and tools for quality assurance of Web based systems.\nIn this paper, a UML model of Web applications is proposed for their high level representation. Such a model is the starting point for several analyses, which can help in the assessment of the static site structure. Moreover, it drives Web application testing, in that it can be exploited to define white box testing criteria and to semi-automatically generate the associated test cases.\nThe proposed techniques were applied to several real world Web applications. Results suggest that an automatic support to the verification and validation activities can be extremely beneficial. In fact, it guarantees that all paths in the site which satisfy a selected criterion are properly exercised before delivery. The high level of automation that is achieved in test case generation and execution increases the number of tests that are conducted and simplifies the regression checks.",
"title": ""
}
] |
[
{
"docid": "e9e5252319b5c62ba18628abc53a727b",
"text": "This paper proposes a robust and fast scheme to detect moving objects in a non-stationary camera. The state-of-the art methods still do not give a satisfactory performance due to drastic frame changes in a non-stationary camera. To improve the robustness in performance, we additionally use the spatio-temporal properties of moving objects. We build the foreground probability map which reflects the spatio-temporal properties, then we selectively apply the detection procedure and update the background model only to the selected pixels using the foreground probability. The foreground probability is also used to refine the initial detection results to obtain a clear foreground region. We compare our scheme quantitatively and qualitatively to the state-of-the-art methods in the detection quality and speed. The experimental results show that our scheme outperforms all other compared methods.",
"title": ""
},
{
"docid": "54d293423026d84bce69e8e073ebd6ac",
"text": "AIMS\nPredictors of Response to Cardiac Resynchronization Therapy (CRT) (PROSPECT) was the first large-scale, multicentre clinical trial that evaluated the ability of several echocardiographic measures of mechanical dyssynchrony to predict response to CRT. Since response to CRT may be defined as a spectrum and likely influenced by many factors, this sub-analysis aimed to investigate the relationship between baseline characteristics and measures of response to CRT.\n\n\nMETHODS AND RESULTS\nA total of 286 patients were grouped according to relative reduction in left ventricular end-systolic volume (LVESV) after 6 months of CRT: super-responders (reduction in LVESV > or =30%), responders (reduction in LVESV 15-29%), non-responders (reduction in LVESV 0-14%), and negative responders (increase in LVESV). In addition, three subgroups were formed according to clinical and/or echocardiographic response: +/+ responders (clinical improvement and a reduction in LVESV > or =15%), +/- responders (clinical improvement or a reduction in LVESV > or =15%), and -/- responders (no clinical improvement and no reduction in LVESV > or =15%). Differences in clinical and echocardiographic baseline characteristics between these subgroups were analysed. Super-responders were more frequently females, had non-ischaemic heart failure (HF), and had a wider QRS complex and more extensive mechanical dyssynchrony at baseline. Conversely, negative responders were more frequently in New York Heart Association class IV and had a history of ventricular tachycardia (VT). Combined positive responders after CRT (+/+ responders) had more non-ischaemic aetiology, more extensive mechanical dyssynchrony at baseline, and no history of VT.\n\n\nCONCLUSION\nSub-analysis of data from PROSPECT showed that gender, aetiology of HF, QRS duration, severity of HF, a history of VT, and the presence of baseline mechanical dyssynchrony influence clinical and/or LV reverse remodelling after CRT. Although integration of information about these characteristics would improve patient selection and counselling for CRT, further randomized controlled trials are necessary prior to changing the current guidelines regarding patient selection for CRT.",
"title": ""
},
{
"docid": "6329341da2a7e0957f2abde7f98764f9",
"text": "\"Enterprise Information Portals are applications that enable companies to unlock internally and externally stored information, and provide users a single gateway to personalized information needed to make informed business decisions. \" They are: \". . . an amalgamation of software applications that consolidate, manage, analyze and distribute information across and outside of an enterprise (including Business Intelligence, Content Management, Data Warehouse & Mart and Data Management applications.)\"",
"title": ""
},
{
"docid": "a67574d560911af698b7dddac4e8dd8a",
"text": "Ciliates are an ancient and diverse group of microbial eukaryotes that have emerged as powerful models for RNA-mediated epigenetic inheritance. They possess extensive sets of both tiny and long noncoding RNAs that, together with a suite of proteins that includes transposases, orchestrate a broad cascade of genome rearrangements during somatic nuclear development. This Review emphasizes three important themes: the remarkable role of RNA in shaping genome structure, recent discoveries that unify many deeply diverged ciliate genetic systems, and a surprising evolutionary \"sign change\" in the role of small RNAs between major species groups.",
"title": ""
},
{
"docid": "f5532b33092d22c97d1b6ebe69de051f",
"text": "Automatic personality recognition is useful for many computational applications, including recommendation systems, dating websites, and adaptive dialogue systems. There have been numerous successful approaches to classify the “Big Five” personality traits from a speaker’s utterance, but these have largely relied on judgments of personality obtained from external raters listening to the utterances in isolation. This work instead classifies personality traits based on self-reported personality tests, which are more valid and more difficult to identify. Our approach, which uses lexical and acoustic-prosodic features, yields predictions that are between 6.4% and 19.2% more accurate than chance. This approach predicts Opennessto-Experience and Neuroticism most successfully, with less accurate recognition of Extroversion. We compare the performance of classification and regression techniques, and also explore predicting personality clusters.",
"title": ""
},
{
"docid": "c44b101ca284790ddd845535c0a48fc0",
"text": "Understanding joint kinetics during activities of daily living furthers our understanding of the factors involved in joint pathology and the effects of treatment. In this study, we examined hip and knee joint kinetics during stair climbing in 35 young healthy subjects using a subject-specific knee model to estimate bone-on-bone tibiofemoral and patello-femoral joint contact forces. The net knee forces were below one body weight while the peak posterior-anterior contact force was close to one body weight. The peak distal-proximal contact force was on average 3 times body weight and could be as high as 6 times body weight. These contact forces occurred at a high degree of knee flexion where there is a smaller joint contact area resulting in high contact stresses. The peak knee adduction moment was 0.42 (0.15) Nm/kg while the flexion moment was 1.16 (0.24) Nm/kg. Similar peak moment values, but different curve profiles, were found for the hip. The hip and knee posterior-anterior shear forces and the knee flexion moment were higher during stair climbing than during level walking. The most striking difference between stair ascent and level walking was that the peak patello-femoral contact force was 8 times higher during stair ascent. These data can be used as baseline measures in pathology studies, as input to theoretical joint models, and as input to mechanical joint simulators.",
"title": ""
},
{
"docid": "6f0d9f383c0142b43ea440e6efb2a59a",
"text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.",
"title": ""
},
{
"docid": "3eef0b6dee8d62e58a9369ed1e03d8ba",
"text": "Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically we show that given the discriminator objective, good semisupervised learning indeed requires a bad generator, and propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets2.",
"title": ""
},
{
"docid": "57801e8c3134a8466f6b487892c3e865",
"text": "In this paper, we propose and analyze an equivalent RLGC model of silicone rubber socket with single-ended signaling. A silicone rubber socket consists of highly dense metal powders in elastic silicone rubber. When the silicone rubber is compressed, metal powders form a column which corresponds to a pad of package. Thus, it can be modeled as a pair of cylinders. We have successfully verified the proposed model using a 3D electromagnetic (EM) solver in frequency domain and the eye diagram measurement in time domain. As a result, the verified model can be used to determine whether socket is applicable to the test system in the simulation level concurrently with offering physical insight and reducing time spent 3D EM simulating socket.",
"title": ""
},
{
"docid": "4432f16defbb62b841c96423be29e930",
"text": "Drones are becoming increasingly used in a wide variety of industries and services and are delivering profound socioeconomic benefits. Technology needs to be in place to ensure safe operation and management of the growing fleet of drones. Mobile networks have connected tens of billions of devices on the ground in the past decades and are now ready to connect the flying drones in the sky. In this article, we share some of our findings in cellular connectivity for low altitude drones. We first present and analyze field measurement data collected during drone flights in a commercial Long-Term Evolution (LTE) network. We then present simulation results to shed light on the performance of a network when the network is serving many drones simultaneously over a wide area. The results, analysis, and design insights presented in this article help enhance the understanding of the applicability and performance of mobile network connectivity to low altitude drones.",
"title": ""
},
{
"docid": "9cb02161eb65b06f474a8a263bd93d88",
"text": "BACKGROUND\nIdentifying key variables such as disorders within the clinical narratives in electronic health records has wide-ranging applications within clinical practice and biomedical research. Previous research has demonstrated reduced performance of disorder named entity recognition (NER) and normalization (or grounding) in clinical narratives than in biomedical publications. In this work, we aim to identify the cause for this performance difference and introduce general solutions.\n\n\nMETHODS\nWe use closure properties to compare the richness of the vocabulary in clinical narrative text to biomedical publications. We approach both disorder NER and normalization using machine learning methodologies. Our NER methodology is based on linear-chain conditional random fields with a rich feature approach, and we introduce several improvements to enhance the lexical knowledge of the NER system. Our normalization method - never previously applied to clinical data - uses pairwise learning to rank to automatically learn term variation directly from the training data.\n\n\nRESULTS\nWe find that while the size of the overall vocabulary is similar between clinical narrative and biomedical publications, clinical narrative uses a richer terminology to describe disorders than publications. We apply our system, DNorm-C, to locate disorder mentions and in the clinical narratives from the recent ShARe/CLEF eHealth Task. For NER (strict span-only), our system achieves precision=0.797, recall=0.713, f-score=0.753. For the normalization task (strict span+concept) it achieves precision=0.712, recall=0.637, f-score=0.672. The improvements described in this article increase the NER f-score by 0.039 and the normalization f-score by 0.036. We also describe a high recall version of the NER, which increases the normalization recall to as high as 0.744, albeit with reduced precision.\n\n\nDISCUSSION\nWe perform an error analysis, demonstrating that NER errors outnumber normalization errors by more than 4-to-1. Abbreviations and acronyms are found to be frequent causes of error, in addition to the mentions the annotators were not able to identify within the scope of the controlled vocabulary.\n\n\nCONCLUSION\nDisorder mentions in text from clinical narratives use a rich vocabulary that results in high term variation, which we believe to be one of the primary causes of reduced performance in clinical narrative. We show that pairwise learning to rank offers high performance in this context, and introduce several lexical enhancements - generalizable to other clinical NER tasks - that improve the ability of the NER system to handle this variation. DNorm-C is a high performing, open source system for disorders in clinical text, and a promising step toward NER and normalization methods that are trainable to a wide variety of domains and entities. (DNorm-C is open source software, and is available with a trained model at the DNorm demonstration website: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/#DNorm.).",
"title": ""
},
{
"docid": "d1d71c8a059e5f67f1f46ab35df20eda",
"text": "Recent years have witnessed some convergence in the architecture of entity search systems driven by a knowledge graph (KG) and a corpus with annotated entity mentions. However, each specific system has some limitations. We present AQQUCN, an entity search system that combines the best design principles into a public reference implementation. AQQUCN does not depend on well-formed question syntax, but works equally well with syntax-poor keyword queries. It uses several convolutional networks (convnets) to extract subtle, overlapping roles of query words. Instead of ranking structured query interpretations, which are then executed on the KG to return unranked sets, AQQUCN directly ranks response entities, by closely integrating coarse-grained predicates from the KG with fine-grained scoring from the corpus, into a single ranking model. Over and above competitive F1 score, AQQUCN gets the best entity ranking accuracy on two syntax-rich and two syntaxpoor public query workloads amounting to over 8,000 queries, with 16– 18% absolute improvement in mean average precision (MAP), compared to recent systems.",
"title": ""
},
{
"docid": "d274a98efb4568c5c320fc66cab56efd",
"text": "This paper presents the design and development of autonomous attitude stabilization, navigation in unstructured, GPS-denied environments, aggressive landing on inclined surfaces, and aerial gripping using onboard sensors on a low-cost, custom-built quadrotor. The development of a multi-functional micro air vehicle (MAV) that utilizes inexpensive off-the-shelf components presents multiple challenges due to noise and sensor accuracy, and there are control challenges involved with achieving various capabilities beyond navigation. This paper addresses these issues by developing a complete system from the ground up, addressing the attitude stabilization problem using extensive filtering and an attitude estimation filter recently developed in the literature. Navigation in both indoor and outdoor environments is achieved using a visual Simultaneous Localization and Mapping (SLAM) algorithm that relies on an onboard monocular camera. The system utilizes nested controllers for attitude stabilization, vision-based navigation, and guidance, with the navigation controller implemented using a This research was supported by the National Science Foundation under CAREER Award ECCS-0748287. Electronic supplementary material The online version of this article (doi:10.1007/s10514-012-9286-z) contains supplementary material, which is available to authorized users. V. Ghadiok ( ) · W. Ren Department of Electrical Engineering, University of California, Riverside, Riverside, CA 92521, USA e-mail: [email protected] W. Ren e-mail: [email protected] J. Goldin Electronic Systems Center, Hanscom Air Force Base, Bedford, MA 01731, USA e-mail: [email protected] nonlinear controller based on the sigmoid function. The efficacy of the approach is demonstrated by maintaining a stable hover even in the presence of wind gusts and when manually hitting and pulling on the quadrotor. Precision landing on inclined surfaces is demonstrated as an example of an aggressive maneuver, and is performed using only onboard sensing. Aerial gripping is accomplished with the addition of a secondary camera, capable of detecting infrared light sources, which is used to estimate the 3D location of an object, while an under-actuated and passively compliant manipulator is designed for effective gripping under uncertainty. The quadrotor is therefore able to autonomously navigate inside and outside, in the presence of disturbances, and perform tasks such as aggressively landing on inclined surfaces and locating and grasping an object, using only inexpensive, onboard sensors.",
"title": ""
},
{
"docid": "6a77bf61126da7a6aa61c10652e40650",
"text": "abnormality. Asymmetrical limb defects are characteristic and affect about 84% of patients. Hypoplastic or absent distal phalanges have been the most commonly reported limb anomalies, but some patients lack hands or lower legs [6]. At the other end of the spectrum, there may be hypoplastic nails. The lower limbs are generally more severely affected than the upper limbs. Congenital heart disease is another characteristic component of this syndrome, affecting about 8% of cases. Other clinical features seen in Adams Oliver syndrome include short stature, kidney (renal) malformations, cleft palate, small eyes (micropthalmia), spinal bifida occulta, accessory nipples, undescended testis, skin lesions and neurological abnormalities. Mental retardation is present in a few cases [7]. Though Adams Oliver syndrome does not usually alter the life span, various neurological complications can develop. To date our patient has manifested clinically with delayed milestones and epileptic fits. This patient requires a long term follow up for neurological manifestations like neuroectodermal tumours, hydrocephalus and various congenital anomalies which are likely to manifest at a later stage in life. ■ Acknowledgements. Conflict of interest: none. Financial support: none.",
"title": ""
},
{
"docid": "ea64ba0b1c3d4ed506fb3605893fef92",
"text": "We explore frame-level audio feature learning for chord recognition using artificial neural networks. We present the argument that chroma vectors potentially hold enough information to model harmonic content of audio for chord recognition, but that standard chroma extractors compute too noisy features. This leads us to propose a learned chroma feature extractor based on artificial neural networks. It is trained to compute chroma features that encode harmonic information important for chord recognition, while being robust to irrelevant interferences. We achieve this by feeding the network an audio spectrum with context instead of a single frame as input. This way, the network can learn to selectively compensate noise and resolve harmonic ambiguities. We compare the resulting features to hand-crafted ones by using a simple linear frame-wise classifier for chord recognition on various data sets. The results show that the learned feature extractor produces superior chroma vectors for chord recognition.",
"title": ""
},
{
"docid": "e399fd670b8b1f460d99ed06f04be41b",
"text": "Although the advantages of case study design are widely recognised, its original positivist underlying assumptions may mislead interpretive researchers aiming at theory building. The paper discusses the limitations of the case study design for theory building and explains how grounded theory systemic process adds to the case study design. The author reflects upon his experience in conducting research on the articulation of both traditional social networks and new virtual networks in six rural communities in Peru, using both case study design and grounded theory in a combined fashion in order to discover an emergent theory.",
"title": ""
},
{
"docid": "939f05a2265c6ab21b273a8127806279",
"text": "Acne is a common inflammatory disease. Scarring is an unwanted end point of acne. Both atrophic and hypertrophic scar types occur. Soft-tissue augmentation aims to improve atrophic scars. In this review, we will focus on the use of dermal fillers for acne scar improvement. Therefore, various filler types are characterized, and available data on their use in acne scar improvement are analyzed.",
"title": ""
},
{
"docid": "a4e122d0b827d25bea48d41487437d74",
"text": "We introduce UniAuth, a set of mechanisms for streamlining authentication to devices and web services. With UniAuth, a user first authenticates himself to his UniAuth client, typically his smartphone or wearable device. His client can then authenticate to other services on his behalf. In this paper, we focus on exploring the user experiences with an early iPhone prototype called Knock x Knock. To manage a variety of accounts securely in a usable way, Knock x Knock incorporates features not supported in existing password managers, such as tiered and location-aware lock control, authentication to laptops via knocking, and storing credentials locally while working with laptops seamlessly. In two field studies, 19 participants used Knock x Knock for one to three weeks with their own devices and accounts. Our participants were highly positive about Knock x Knock, demonstrating the desirability of our approach. We also discuss interesting edge cases and design implications.",
"title": ""
},
{
"docid": "54d242cf31eaa27823217d34ea3b5c0a",
"text": "In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA) task. Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for the classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA datasets, which are two benchmark datasets for image QA, with the performances significantly outperforming the state-of-the-art.",
"title": ""
},
{
"docid": "018a8b222d1fa5d41a64f3f77fbb860a",
"text": "The classical music traditions of the Indian subcontinent, Hindustani and Carnatic, offer an excellent ground on which to test the limitations of current music information research approaches. At the same time, studies based on these music traditions can shed light on how to solve new and complex music modeling problems. Both traditions have very distinct characteristics, specially compared with western ones: they have developed unique instruments, musical forms, performance practices, social uses and context. In this article, we focus on the Carnatic music tradition of south India, especially on its melodic characteristics. We overview the theoretical aspects that are relevant for music information research and discuss the scarce computational approaches developed so far. We put emphasis on the limitations of the current methodologies and we present open issues that have not yet been addressed and that we believe are important to be worked on.",
"title": ""
}
] |
scidocsrr
|
8d85aff41a0720f16283c7128bc1f3b3
|
Automated Short Answer Scoring using Weighted Cosine Coefficient
|
[
{
"docid": "bd73a86a9b67ba26eeeecb2f582fd10a",
"text": "Many of UCLES' academic examinations make extensive use of questions that require candidates to write one or two sentences. For example, questions often ask candidates to state, to suggest, to describe, or to explain. These questions are a highly regarded and integral part of the examinations, and are also used extensively by teachers. A system that could partially or wholly automate valid marking of short, free text answers would therefore be valuable, but until The UCLES Group provides assessment services worldwide through three main business units. • Cambridge-ESOL (English for speakers of other languages) provides examinations in English as a foreign language and qualifications for language teachers throughout the world. • CIE (Cambridge International Examinations) provides international school examinations and international vocational awards. • OCR (Oxford, Cambridge and RSA Examinations) provides general and vocational qualifications to schools, colleges, employers, and training providers in the UK. For more information please visit http://www.ucles.org.uk",
"title": ""
},
{
"docid": "6df5a18a9ee6d2f035c69d7dc15ae9c6",
"text": "Automatic Essay Grading (AEG) system is defined as the computer technology that evaluates and grades written prose. The short essay answer, where the es say i written in short sentences where it has two types the open ended short answer and the close ended sho rt answer where it is our research domain based on the computer subject. The Marking of short essay answer automatically is one of the most complicated domains because it is relying heavily on the semant ic similarity in meaning refers to the degree to wh ich two sentences are similar in the meaning where both used similar words in the meaning, in this case Humans are able to easily judge if a concepts are r elated to each other, there for is a problem when S tudent use a synonym words during the answer in case they forget the target answer and they use their alterna ive words in the answer which will be different from th e Model answer that prepared by the structure. The Standard text similarity measures perform poorly on such tasks. Short answer only provides a limited content, because the length of the text is typicall y short, ranging from a single word to a dozen word s. This research has two propose; the first propose is Alternative Sentence Generator Method in order to generate the alternative model answer by connecting the method with the synonym dictionary. The second proposed three algorithms combined together in matching phase, Commons Words (COW), Longest Common Subsequence (LCS) and Semantic Dista nce (SD), these algorithms have been successfully used in many Natural Language Processi ng ystems and have yielded efficient results. The system was manually tested on 40 questions answ ered by three students and evaluated by teacher in class. The proposed system has yielded %82 corre lation-style with human grading, which has made the system significantly better than the other stat e of the art systems.",
"title": ""
},
{
"docid": "b74b4bf924478e6a70a2da33bc47ea23",
"text": "Most automatic scoring systems use pattern based that requires a lot of hard and tedious work. These systems work in a supervised manner where predefined patterns and scoring rules are generated. This paper presents a different unsupervised approach which deals with students’ answers holistically using text to text similarity. Different String-based and Corpus-based similarity measures were tested separately and then combined to achieve a maximum correlation value of 0.504. The achieved correlation is the best value achieved for unsupervised approach Bag of Words (BOW) when compared to previous work. Keywords-Automatic Scoring; Short Answer Grading; Semantic Similarity; String Similarity; Corpus-Based Similarity.",
"title": ""
},
{
"docid": "8e1935dd175b29142db7458d492bf698",
"text": "A similarity coefficient represents the similarity between two documents, two queries, or one document and one query. The retrieved documents can also be ranked in the order of presumed importance. A similarity coefficient is a function which computes the degree of similarity between a pair of text objects. There are a large number of similarity coefficients proposed in the literature, because the best similarity measure doesn't exist (yet !). In this paper we do a comparative analysis for finding out the most relevant document for the given set of keyword by using three similarity coefficients viz Jaccard, Dice and Cosine coefficients. This we perform using genetic algorithm approach. Due to the randomized nature of genetic algorithm the best fitness value is the average of 10 runs of the same code for a fixed number of iterations.The similarity coefficient for a set of documents retrieved for a given query from Google are find out then average relevancy in terms of fitness values using similarity coefficients is calculated. In this paper we have averaged 10 different generations for each query by running the program 10 times for the fixed value of Probability of Crossover Pc=0.7 and Probability of Mutation Pm=0.01. The same experiment was conducted for 10 queries.",
"title": ""
}
] |
[
{
"docid": "884121d37d1b16d7d74878fb6aff9cdb",
"text": "All models are wrong, but some are useful. 2 Acknowledgements The authors of this guide would like to thank David Warde-Farley, Guillaume Alain and Caglar Gulcehre for their valuable feedback. Special thanks to Ethan Schoonover, creator of the Solarized color scheme, 1 whose colors were used for the figures. Feedback Your feedback is welcomed! We did our best to be as precise, informative and up to the point as possible, but should there be anything you feel might be an error or could be rephrased to be more precise or com-prehensible, please don't refrain from contacting us. Likewise, drop us a line if you think there is something that might fit this technical report and you would like us to discuss – we will make our best effort to update this document. Source code and animations The code used to generate this guide along with its figures is available on GitHub. 2 There the reader can also find an animated version of the figures.",
"title": ""
},
{
"docid": "da3ba9c7e5000b5e957c961382da8409",
"text": "This paper presents a design, fabrication and characterization of a low-cost capacitive tilt sensor. The proposed sensor consists of a three-electrode capacitor, which contains two-phase of the air and liquid as the dielectric media. The three electrodes hold a plastic tube and the tube is positioned on a printed circuit board (PCB) which consists of a 127 kHz sine wave generator, a pre-amplifier, a rectifier and a low pass filter. The proposed sensor structure can measure tilt angles in the rage of 0° to 75°, where the linear relationship between the angle to be measured and the output signal was observed in the range of 0° to 50°. The sensitivity and resolution of the sensor are measured to be 40mV/degree and 0.5 degree, respectively.",
"title": ""
},
{
"docid": "6021388395ddd784422a22d30dac8797",
"text": "Introduction: The European Directive 2013/59/EURATOM requires patient radiation dose information to be included in the medical report of radiological procedures. To provide effective communication to the patient, it is necessary to first assess the patient's level of knowledge regarding medical exposure. The goal of this work is to survey patients’ current knowledge level of both medical exposure to ionizing radiation and professional disciplines and communication means used by patients to garner information. Material and Methods: A questionnaire was designed comprised of thirteen questions: 737 patients participated in the survey. The data were analysed based on population age, education, and number of radiological procedures received in the three years prior to survey. Results: A majority of respondents (56.4%) did not know which modality uses ionizing radiation. 74.7% had never discussed with healthcare professionals the risk concerning their medical radiological procedures. 70.1% were not aware of the professionals that have expertise to discuss the use of ionizing radiation for medical purposes, and 84.7% believe it is important to have the radiation dose information stated in the medical report. Conclusion: Patients agree with new regulations that it is important to know the radiation level related to the medical exposure, but there is little awareness in terms of which modalities use X-Rays and the professionals and channels that can help them to better understand the exposure information. To plan effective communication, it is essential to devise methods and adequate resources for key professionals (medical physicists, radiologists, referring physicians) to convey correct and effective information.",
"title": ""
},
{
"docid": "589a96c8932c9657b2a2854de6390b1f",
"text": "In this paper, proactive resource allocation based on user location for point-to-point communication over fading channels is introduced, whereby the source must transmit a packet when the user requests it within a deadline of a single time slot. We introduce a prediction model in which the source predicts the request arrival $T_p$ slots ahead, where $T_p$ denotes the prediction window (PW) size. The source allocates energy to transmit some bits proactively for each time slot of the PW with the objective of reducing the transmission energy over the non-predictive case. The requests are predicted based on the user location utilizing the prior statistics about the user requests at each location. We also assume that the prediction is not perfect. We propose proactive scheduling policies to minimize the expected energy consumption required to transmit the requested packets under two different assumptions on the channel state information at the source. In the first scenario, offline scheduling, we assume the channel states are known a-priori at the source at the beginning of the PW. In the second scenario, online scheduling, it is assumed that the source has causal knowledge of the channel state. Numerical results are presented showing the gains achieved by using proactive scheduling policies compared with classical (reactive) networks. Simulation results also show that increasing the PW size leads to a significant reduction in the consumed transmission energy even with imperfect prediction.",
"title": ""
},
{
"docid": "83355e7d2db67e42ec86f81909cfe8c1",
"text": "everal protocols for routing and forwarding in Wireless Mesh Networks (WMN) have been proposed, such as AODV, OLSR or B.A.T.M.A.N. However, providing support for e.g. flow-based routing where flows of one source take different paths through the network is hard to implement in a unified way using traditional routing protocols. OpenFlow is an emerging technology which makes network elements such as routers or switches programmable via a standardized interface. By using virtualization and flow-based routing, OpenFlow enables a rapid deployment of novel packet forwarding and routing algorithms, focusing on fixed networks. We propose an architecture that integrates OpenFlow with WMNs and provides such flow-based routing and forwarding capabilities. To demonstrate the feasibility of our OpenFlow based approach, we have implemented a simple solution to solve the problem of client mobility in a WMN which handles the fast migration of client addresses (e.g. IP addresses) between Mesh Access Points and the interaction with re-routing without the need for tunneling. Measurements from a real mesh testbed (KAUMesh) demonstrate the feasibility of our approach based on the evaluation of forwarding performance, control traffic and rule activation time.",
"title": ""
},
{
"docid": "15cf17abd1e8c19de94befe81bb36cac",
"text": "BACKGROUND\nAs preventing cancer with the help of a vaccine is a comparatively new concept, awareness and education about it will have important implication in the implementation of this strategy.\n\n\nMATERIALS AND METHODS\nPresent explorative questionnaire based survey included 618 MBBS students for final analysis.\n\n\nRESULTS\nMajority of participants (89.6%) were well aware of the preventable nature of cervical cancer. Most of them (89.2%) knew that necessary factor responsible for cervical cancer is infection with high risk HPV. Awareness regarding the availability of vaccine against cervical cancer was 75.6%. Females had a better awareness regarding availability of vaccine, target population for vaccination and about the catch up program. Overall acceptance of HPV vaccine among the population studied was 67.8%. Medical teaching had a definitive impact on the understanding of this important public health issue. Females seemed to be more ready to accept the vaccine and recommend it to others. For our study population the most common source of information was medical school teaching. Majority of participants agreed that the most important obstacle in implementation of HPV vaccination program in our country is inadequate information and 86.2% wanted to be educated by experts in this regard.\n\n\nCONCLUSION\nHPV vaccine for primary prevention of cervical cancer is a relatively new concept. Health professional will be able to play a pivotal role in popularizing this strategy.",
"title": ""
},
{
"docid": "0fc5441a3e8589b1bd15d56830c4ef79",
"text": "DevOps is an emerging paradigm to actively foster the collaboration between system developers and operations in order to enable efficient end-to-end automation of software deployment and management processes. DevOps is typically combined with Cloud computing, which enables rapid, on-demand provisioning of underlying resources such as virtual servers, storage, or database instances using APIs in a self-service manner. Today, an ever-growing amount of DevOps tools, reusable artifacts such as scripts, and Cloud services are available to implement DevOps automation. Thus, informed decision making on the appropriate approach (es) for the needs of an application is hard. In this work we present a collaborative and holistic approach to capture DevOps knowledge in a knowledgebase. Beside the ability to capture expert knowledge and utilize crowd sourcing approaches, we implemented a crawling framework to automatically discover and capture DevOps knowledge. Moreover, we show how this knowledge is utilized to deploy and operate Cloud applications.",
"title": ""
},
{
"docid": "6356a0272b95ade100ad7ececade9e36",
"text": "We describe a browser extension, PwdHash, that transparently produces a different password for each site, improving web password security and defending against password phishing and other attacks. Since the browser extension applies a cryptographic hash function to a combination of the plaintext password entered by the user, data associated with the web site, and (optionally) a private salt stored on the client machine, theft of the password received at one site will not yield a password that is useful at another site. While the scheme requires no changes on the server side, implementing this password method securely and transparently in a web browser extension turns out to be quite difficult. We describe the challenges we faced in implementing PwdHash and some techniques that may be useful to anyone facing similar security issues in a browser environment.",
"title": ""
},
{
"docid": "925fb99c8b63b9acc9d7313a4a766095",
"text": "WordNet proved that it is possible to construct a large-sc ale electronic lexical database on the principles of lexical semantics. It has been accept ed and used extensively by computational linguists ever since it was released. Some of its a pplications include information retrieval, language generation, question answering, text categ orization, text classification and word sense disambiguation. Inspired by WordNet's success, we propose as an alternative a similar re sou ce, based on the 1987 Penguin edition of R get’s Thesaurus of English Words and Phrases . Peter Mark Roget published his first Thesaurus over 150 years ago. Countless writers, orators and students of the English language have used it. Computational linguists have employed Roget’s for almost 50 years in Natural Language Processing . Some of the tasks they have used it for include machine translation, computing lexical ohesion in texts and constructing databases that can infer common sense knowledge. This dissert ation presents Roget’s merits by explaining what it really is and how it has been used, while c omparing its applications to those of WordNet . The NLP community has hesitated in accepting Roget’s Thesaurus because a proper machinetractable version was not available. This dissertation presents an implementation of a m achine-tractable version of the 1987 Penguin edition of Roget’s Thesaurus – the first implementation of its kind to use an e tir current edition. It explains the steps necessary for taking a machine-readable file and transforming it into a tractable system. This involves converting the le xical material into a format that can be more easily exploited, identifying data structures and d esigning classes to computerize the Thesaurus . Roget’s organization is studied in detail and contrasted w ith WordNet’s. We show two applications of the computerized Thesaurus : computing semantic similarity between words and phrases, and building lexical cha ins in a text. The experiments are performed using well-known benchmarks and the results are com pared to those of other systems that use Roget’s, WordNet and statistical techniques. Roget’s has turned out to be an excellent resource for measuring semantic similarity; lexical chains a re easily built but more difficult to evaluate. We also explain ways in which Roget’s Thesaurus and WordNet can be combined. To my parents, who are my most valued treasure.",
"title": ""
},
{
"docid": "5b76ef357e706d81b31fd9fabb8ea685",
"text": "This paper reports the design and development of aluminum nitride (AlN) piezoelectric RF resonant voltage amplifiers for Internet of Things (IoT) applications. These devices can provide passive and highly frequency selective voltage gain to RF backends with a capacitive input to drastically enhance sensitivity and to reduce power consumption of the transceiver. Both analytical and finite element models (FEM) have been utilized to identify the optimal designs. Consequently, an AlN voltage amplifier with an open circuit gain of 7.27 and a fractional bandwidth (FBW) of 0.11 % has been demonstrated. This work provides a material-agnostic framework for analytically optimizing piezoelectric voltage amplifiers.",
"title": ""
},
{
"docid": "8df98bd1576f3de19c1626322b3c66ef",
"text": "Image segmentation is the most important part in digital image processing. Segmentation is nothing but a portion of any image and object. In image segmentation, digital image is divided into multiple set of pixels. Image segmentation is generally required to cut out region of interest (ROI) from an image. Currently there are many different algorithms available for image segmentation. Each have their own advantages and purpose. In this paper, different image segmentation algorithms with their prospects are reviewed.",
"title": ""
},
{
"docid": "3a98eec0c3c9d9b5e99f44c6ae932686",
"text": "This letter proposes an ensemble neural network (Ensem-NN) for skeleton-based action recognition. The Ensem-NN is introduced based on the idea of ensemble learning, “two heads are better than one.” According to the property of skeleton sequences, we design one-dimensional convolution neural network with residual structure as Base-Net. From entirety to local, from focus to motion, we designed four different subnets based on the Base-Net to extract diverse features. The first subnet is a Two-stream Entirety Net , which performs on the entirety skeleton and explores both temporal and spatial features. The second is a Body-part Net, which can extract fine-grained spatial and temporal features. The third is an Attention Net, in which a channel-wised attention mechanism can learn important frames and feature channels. Frame-difference Net, as the fourth subnet, aims at exploring motion features. Finally, the four subnets are fused as one ensemble network. Experimental results show that the proposed Ensem-NN performs better than state-of-the-art methods on three widely used datasets.",
"title": ""
},
{
"docid": "b89d42f836730a782a9b0f5df5bbd5bd",
"text": "This paper proposes a new usability evaluation checklist, UseLearn, and a related method for eLearning systems. UseLearn is a comprehensive checklist which incorporates both quality and usability evaluation perspectives in eLearning systems. Structural equation modeling is deployed to validate the UseLearn checklist quantitatively. The experimental results show that the UseLearn method supports the determination of usability problems by criticality metric analysis and the definition of relevant improvement strategies. The main advantage of the UseLearn method is the adaptive selection of the most influential usability problems, and thus significant reduction of the time and effort for usability evaluation can be achieved. At the sketching and/or design stage of eLearning systems, it will provide an effective guidance to usability analysts as to what problems should be focused on in order to improve the usability perception of the end-users. Relevance to industry: During the sketching or design stage of eLearning platforms, usability problems should be revealed and eradicated to create more usable and quality eLearning systems to satisfy the end-users. The UseLearn checklist along with its quantitative methodology proposed in this study would be helpful for usability experts to achieve this goal. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c5e8ddfd076377992848f3032d9dff93",
"text": "Speech Activity Detection(SAD) is a well researched problem for communication, command and control applications, where audio segments are short duration and solution proposed for noisy as well as clean environments. In this study, we investigate the SAD problem using NASA’s Apollo space mission data [1]. Unlike traditional speech corpora, the audio recordings in Apollo are extensive from a longitudinal perspective (i.e., 612 days each). From SAD perspective, the data offers many challenges: (i) noise distortion with variable SNR, (ii) channel distortion, and (iii) extended periods of non-speech activity. Here, we use the recently proposed Combo-SAD, which has performed remarkably well in DARPA RATS evaluations, as our baseline system [2]. Our analysis reveals that the ComboSAD performs well when speech-pause durations are balanced in the audio segment, but deteriorates significantly when speech is sparse or absent. In order to mitigate this problem, we propose a simple yet efficient technique which builds an alternative model of speech using data from a separate corpora, and embeds this new information within the Combo-SAD framework. Our experiments show that the proposed approach has a major impact on SAD performance (i.e., +30% absolute), especially in audio segments that contain sparse or no speech information.",
"title": ""
},
{
"docid": "d82c11c5a6981f1d3496e0838519704d",
"text": "This paper presents a detailed study of the nonuniform bipolar conduction phenomenon under electrostatic discharge (ESD) events in single-finger NMOS transistors and analyzes its implications for the design of ESD protection for deep-submicron CMOS technologies. It is shown that the uniformity of the bipolar current distribution under ESD conditions is severely degraded depending on device finger width ( ) and significantly influenced by the substrate and gate-bias conditions as well. This nonuniform current distribution is identified as a root cause of the severe reduction in ESD failure threshold current for the devices with advanced silicided processes. Additionally, the concept of an intrinsic second breakdown triggering current ( 2 ) is introduced, which is substrate-bias independent and represents the maximum achievable ESD failure strength for a given technology. With this improved understanding of ESD behavior involved in advanced devices, an efficient design window can be constructed for robust deep submicron ESD protection.",
"title": ""
},
{
"docid": "c73d635d686c73cdd702c54cdf7da82b",
"text": "Eliminating disparities in health is a primary goal of the federal government and many states. Our overarching objective should be to improve population health for all groups to the maximum extent. Ironically, enhancing population health and even the health of the disadvantaged can conflict with efforts to reduce disparities. This paper presents data showing that interventions that offer some of the largest possible gains for the disadvantaged may also increase disparities, and it examines policies that offer the potential to decrease disparities while improving population health. Enhancement of educational attainment and access to health services and income support for those in greatest need appear to be particularly important pathways to improved population health.",
"title": ""
},
{
"docid": "ca10e68cfb62ee0af7aae702801658dd",
"text": "Technology utilization in distance education has demonstrated its significance in the transfer of knowledge for both the instructors and the learners. This is also made possible through the use of the Internet which helps change the traditional teaching approaches into more modern methods when integrated with the pedagogical instruction. Mobile devices together with other forms of technology-based tools in education have established their potential in language teaching. In this regards, the Teaching of English as a Second Language (TESL) has become easier and more attractive via mobile learning. The aim of this study is to review the mobile-based teaching and learning in the English language classroom. Such integration of mobile learning with English language teaching may offer great innovations in the pedagogical delivery.",
"title": ""
},
{
"docid": "a78782e389313600620bfb68fc57a81f",
"text": "Online consumer reviews reflect the testimonials of real people, unlike advertisements. As such, they have critical impact on potential consumers, and indirectly on businesses. According to a Harvard study (Luca 2011), +1 rise in star-rating increases revenue by 5–9%. Problematically, such financial incentives have created a market for spammers to fabricate reviews, to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). A vast majority of existing work on this problem have formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on datasets from two different review sites show that our approach is fast, effective, and practical to be deployed in real-world systems.",
"title": ""
},
{
"docid": "fe5a716280415427c3e040e6abfde677",
"text": "As classic and intrinsic requirements, synthetic speech need to convey correct information with good quality of naturalness to listeners. Fundamental frequency (F0) contours need to be controlled to meet these requirements. Additional challenges have been introduced to tonal languages because the F0 contour reflects both intelligibility and naturalness of the speech. According to the fact that the F0 contour in a syllable conveys information asymmetrically, Tone nucleus model has been successfully established. In this study, Tone nucleus model is applied in order to generate F0 contours for Thai speech synthesis. This is among the first that has introduced the model to other tonal languages other than Mandarin. All tone nuclei for five distinctive tones are defined according to the underlying targets. The full process of F0 contour generation is presented from the nucleus extraction until the F0 contour generation for continuous speech. The efficiency and adaptability of the model in Thai language were confirmed by the objective and subjective tests. The model outperformed a baseline without applying the model. The generated F0 contours showed less distortion, more tone intelligibility and more naturalness. The modified method is also introduced for enhancement. The results showed significant improvement on the generated F0 contours.",
"title": ""
},
{
"docid": "611b755f959d542603057683706a1cd2",
"text": "The Net Promoter Score (NPS) is still a popular customer loyalty measurement despite recent studies arguing that customer loyalty is multidimensional. Therefore, firms require new data-driven methods that combine behavioral and attitudinal data sources. This paper provides a framework that holistically assesses and predicts customer loyalty using attitudinal and behavioral data sources. We built a novel customer loyalty predictive model that employs a big data approach to assessing and predicting customer loyalty in a B2B context. We demonstrate the use of varying big data sources, confirming that NPS measurement does not necessarily correspond to actual behavior. Our model utilises customers’ verbatim comments to understand why customers are churning.",
"title": ""
}
] |
scidocsrr
|
b915a3d4289c57ae8b2054d18bc8475e
|
Fully Connected Object Proposals for Video Segmentation
|
[
{
"docid": "3ae5e7ac5433f2449cd893e49f1b2553",
"text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.",
"title": ""
}
] |
[
{
"docid": "a98f643c2a0e40a767f5ef57b0152adb",
"text": "Techniques for recognizing high-level events in consumer videos on the Internet have many applications. Systems that produced state-of-the-art recognition performance usually contain modules requiring extensive computation, such as the extraction of the temporal motion trajectories, which cannot be deployed on large-scale datasets. In this paper, we provide a comprehensive study on efficient methods in this area and identify technical options for super fast event recognition in Internet videos. We start from analyzing a multimodal baseline that has produced good performance on popular benchmarks, by systematically evaluating each component in terms of both computational cost and contribution to recognition accuracy. After that, we identify alternative features, classifiers, and fusion strategies that can all be efficiently computed. In addition, we also provide a study on the following interesting question: for event recognition in Internet videos, what is the minimum number of visual and audio frames needed to obtain a comparable accuracy to that of using all the frames? Results on two rigorously designed datasets indicate that similar results can be maintained by using only a small portion of the visual frames. We also find that, different from the visual frames, the soundtracks contain little redundant information and thus sampling is always harmful. Integrating all the findings, our suggested recognition system is 2,350-fold faster than a baseline approach with even higher recognition accuracies. It recognizes 20 classes on a 120-second video sequence in just 1.78 seconds, using a regular desktop computer.",
"title": ""
},
{
"docid": "f78534a09317be5097963d068c6af2cd",
"text": "Example-based single image super-resolution (SISR) methods use external training datasets and have recently attracted a lot of interest. Self-example based SISR methods exploit redundant non-local self-similar patterns in natural images and because of that are more able to adapt to the image at hand to generate high quality super-resolved images. In this paper, we propose to combine the advantages of example-based SISR and self-example based SISR. A novel hierarchical random forests based super-resolution (SRHRF) method is proposed to learn statistical priors from external training images. Each layer of random forests reduce the estimation error due to variance by aggregating prediction models from multiple decision trees. The hierarchical structure further boosts the performance by pushing the estimation error due to bias towards zero. In order to further adaptively improve the super-resolved image, a self-example random forests (SERF) is learned from an image pyramid pair constructed from the down-sampled SRHRF generated result. Extensive numerical results show that the SRHRF method enhanced using SERF (SRHRF+) achieves the state-of-the-art performance on natural images and yields substantially superior performance for image with rich self-similar patterns.",
"title": ""
},
{
"docid": "028eb67d71987c33c4a331cf02c6ff00",
"text": "We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.",
"title": ""
},
{
"docid": "afadbcb8c025ad6feca693c05ce7b43f",
"text": "A data structure that implements a mergeable double-ended priority queue, namely therelaxed min-max heap, is presented. A relaxed min-max heap ofn items can be constructed inO(n) time. In the worst case, operationsfind_min() andfind_max() can be performed in constant time, while each of the operationsmerge(),insert(),delete_min(),delete_max(),decrease_key(), anddelete_key() can be performed inO(logn) time. Moreover,insert() hasO(1) amortized running time. If lazy merging is used,merge() will also haveO(1) worst-case and amortized time. The relaxed min-max heap is the first data structure that achieves these bounds using only two pointers (puls one bit) per item.",
"title": ""
},
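The abstract above lists the operations of a mergeable double-ended priority queue. As a rough illustration of that interface only — not of the relaxed min-max heap itself, whose two-pointer-per-item structure is considerably more involved — the sketch below pairs a min-heap and a max-heap with lazy invalidation. The class name and the O(log n) amortized bounds of this simpler scheme are assumptions for illustration, not the bounds reported in the abstract.

```python
import heapq
import itertools

class TwoHeapDEPQ:
    """Minimal double-ended priority queue: paired min-heap and max-heap with lazy deletion."""

    def __init__(self):
        self._min = []                      # entries: (key, uid, record)
        self._max = []                      # entries: (-key, uid, record)
        self._counter = itertools.count()   # tie-breaker so records are never compared

    def insert(self, key):
        record = [key, True]                # [key, alive?] shared by both heaps
        uid = next(self._counter)
        heapq.heappush(self._min, (key, uid, record))
        heapq.heappush(self._max, (-key, uid, record))

    def _purge(self, heap):
        # Drop entries whose shared record was already removed via the other heap.
        while heap and not heap[0][2][1]:
            heapq.heappop(heap)

    def find_min(self):
        self._purge(self._min)
        return self._min[0][2][0] if self._min else None

    def find_max(self):
        self._purge(self._max)
        return self._max[0][2][0] if self._max else None

    def delete_min(self):
        self._purge(self._min)
        key, _, record = heapq.heappop(self._min)
        record[1] = False                   # invalidate the twin entry in the max-heap
        return key

    def delete_max(self):
        self._purge(self._max)
        negkey, _, record = heapq.heappop(self._max)
        record[1] = False
        return -negkey

q = TwoHeapDEPQ()
for x in (5, 1, 9, 3):
    q.insert(x)
print(q.delete_min(), q.delete_max())       # 1 9
```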
{
"docid": "65baa2316024ca738f566a53818fc626",
"text": "The proper usage and creation of transfer functions for time-varying data sets is an often ignored problem in volume visualization. Although methods and guidelines exist for time-invariant data, little formal study for the timevarying case has been performed. This paper examines this problem, and reports the study that we have conducted to determine how the dynamic behavior of time-varying data may be captured by a single or small set of transfer functions. The criteria which dictate when more than one transfer function is needed were also investigated. Four data sets with different temporal characteristics were used for our study. Results obtained using two different classes of methods are discussed, along with lessons learned. These methods, including a new multiresolution opacity map approach, can be used for semi-automatic generation of transfer functions to explore large-scale time-varying data sets.",
"title": ""
},
{
"docid": "52ef7357fa379b7eede3c4ceee448a81",
"text": "(Note: This is a completely revised version of the article that was originally published in ACM Crossroads, Volume 13, Issue 4. Revisions were needed because of major changes to the Natural Language Toolkit project. The code in this version of the article will always conform to the very latest version of NLTK (v2.0b9 as of November 2010). Although the code is always tested, it is possible that a bug or two may have been introduced in the code during the course of this revision. If you find any, please report them to the author. If you are still using version 0.7 of the toolkit for some reason, please refer to http://www.acm.org/crossroads/xrds13-4/natural_language.html).",
"title": ""
},
{
"docid": "5409b6586b89bd3f0b21e7984383e1e1",
"text": "The dream of creating artificial devices that reach or outperform human intelligence is many centuries old. In this talk I present an elegant parameter-free theory of an optimal reinforcement learning agent embedded in an arbitrary unknown environment that possesses essentially all aspects of rational intelligence. The theory reduces all conceptual AI problems to pure computational questions. The necessary and sufficient ingredients are Bayesian probability theory; algorithmic information theory; universal Turing machines; the agent framework; sequential decision theory; and reinforcement learning, which are all important subjects in their own right. I also present some recent approximations, implementations, and applications of this modern top-down approach to AI. Marcus Hutter 3 Universal Artificial Intelligence Overview Goal: Construct a single universal agent that learns to act optimally in any environment. State of the art: Formal (mathematical, non-comp.) definition of such an agent. Accomplishment: Well-defines AI. Formalizes rational intelligence. Formal “solution” of the AI problem in the sense of ... =⇒ Reduces the conceptional AI problem to a (pure) computational problem. Evidence: Mathematical optimality proofs and some experimental results. Marcus Hutter 4 Universal Artificial Intelligence",
"title": ""
},
{
"docid": "8f177b79f0b89510bd84e1f503b5475f",
"text": "We propose a distributed cooperative framework among base stations (BS) with load balancing (dubbed as inter-BS for simplicity) for improving energy efficiency of OFDMA-based cellular access networks. Proposed inter-BS cooperation is formulated following the principle of ecological self-organization. Based on the network traffic, BSs mutually cooperate for distributing traffic among themselves and thus, the number of active BSs is dynamically adjusted for energy savings. For reducing the number of inter-BS communications, a three-step measure is taken by using estimated load factor (LF), initializing the algorithm with only the active BSs and differentiating neighboring BSs according to their operating modes for distributing traffic. An exponentially weighted moving average (EWMA)-based technique is proposed for estimating the LF in advance based on the historical data. Various selection schemes for finding the best BSs to distribute traffic are also explored. Furthermore, we present an analytical formulation for modeling the dynamic switching of BSs. A thorough investigation under a wide range of network settings is carried out in the context of an LTE system. Results demonstrate a significant enhancement in network energy efficiency yielding a much higher savings than the compared schemes. Moreover, frequency of inter-BS correspondences can be reduced by over 80%.",
"title": ""
},
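The abstract above mentions an exponentially weighted moving average (EWMA) for estimating the load factor in advance from historical data. Below is a minimal sketch of that general idea; the smoothing constant `alpha` and the treatment of the load factor as a single scalar per time slot are assumptions for illustration, not the paper's exact formulation.

```python
def ewma_forecast(history, alpha=0.3):
    """One-step-ahead EWMA forecast of a load-factor series.

    history: iterable of past load-factor observations (e.g., fraction of
             occupied resources per time slot).
    Returns the smoothed value used as the estimate for the next slot.
    """
    estimate = None
    for lf in history:
        estimate = lf if estimate is None else alpha * lf + (1 - alpha) * estimate
    return estimate

# Example: rising traffic during the morning hours
print(ewma_forecast([0.21, 0.25, 0.31, 0.40, 0.47]))
```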
{
"docid": "c42aaf64a6da2792575793a034820dcb",
"text": "Psychologists and psychiatrists commonly rely on self-reports or interviews to diagnose or treat behavioral addictions. The present study introduces a novel source of data: recordings of the actual problem behavior under investigation. A total of N = 58 participants were asked to fill in a questionnaire measuring problematic mobile phone behavior featuring several questions on weekly phone usage. After filling in the questionnaire, all participants received an application to be installed on their smartphones, which recorded their phone usage for five weeks. The analyses revealed that weekly phone usage in hours was overestimated; in contrast, numbers of call and text message related variables were underestimated. Importantly, several associations between actual usage and being addicted to mobile phones could be derived exclusively from the recorded behavior, but not from self-report variables. The study demonstrates the potential benefit to include methods of psychoinformatics in the diagnosis and treatment of problematic mobile phone use.",
"title": ""
},
{
"docid": "31045b2c3709102abe66906a0e8ae706",
"text": "Tandem mass spectrometry fragments a large number of molecules of the same peptide sequence into charged molecules of prefix and suffix peptide subsequences and then measures mass/charge ratios of these ions. The de novo peptide sequencing problem is to reconstruct the peptide sequence from a given tandem mass spectral data of k ions. By implicitly transforming the spectral data into an NC-spectrum graph G (V, E) where /V/ = 2k + 2, we can solve this problem in O(/V//E/) time and O(/V/2) space using dynamic programming. For an ideal noise-free spectrum with only b- and y-ions, we improve the algorithm to O(/V/ + /E/) time and O(/V/) space. Our approach can be further used to discover a modified amino acid in O(/V//E/) time. The algorithms have been implemented and tested on experimental data.",
"title": ""
},
{
"docid": "4e19a7342ff32f82bc743f40b3395ee3",
"text": "The face image is the most accessible biometric modality which is used for highly accurate face recognition systems, while it is vulnerable to many different types of presentation attacks. Face anti-spoofing is a very critical step before feeding the face image to biometric systems. In this paper, we propose a novel two-stream CNN-based approach for face anti-spoofing, by extracting the local features and holistic depth maps from the face images. The local features facilitate CNN to discriminate the spoof patches independent of the spatial face areas. On the other hand, holistic depth map examine whether the input image has a face-like depth. Extensive experiments are conducted on the challenging databases (CASIA-FASD, MSU-USSA, and Replay Attack), with comparison to the state of the art.",
"title": ""
},
{
"docid": "0b1b4c8d501c3b1ab350efe4f2249978",
"text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. The trajectory tracking control scheme is validated using an iRobot Packbot's parameteric model estimated from experimental data.",
"title": ""
},
{
"docid": "ec4dae5e2aa5a5ef67944d82a6324c9d",
"text": "Parallel collection processing based on second-order functions such as map and reduce has been widely adopted for scalable data analysis. Initially popularized by Google, over the past decade this programming paradigm has found its way in the core APIs of parallel dataflow engines such as Hadoop's MapReduce, Spark's RDDs, and Flink's DataSets. We review programming patterns typical of these APIs and discuss how they relate to the underlying parallel execution model. We argue that fixing the abstraction leaks exposed by these patterns will reduce the cost of data analysis due to improved programmer productivity. To achieve that, we first revisit the algebraic foundations of parallel collection processing. Based on that, we propose a simplified API that (i) provides proper support for nested collection processing and (ii) alleviates the need of certain second-order primitives through comprehensions -- a declarative syntax akin to SQL. Finally, we present a metaprogramming pipeline that performs algebraic rewrites and physical optimizations which allow us to target parallel dataflow engines like Spark and Flink with competitive performance.",
"title": ""
},
{
"docid": "b100ca202f99e3ee086cd61f01349a30",
"text": "This paper is concerned with inertial-sensor-based tracking of the gravitation direction in mobile devices such as smartphones. Although this tracking problem is a classical one, choosing a good state-space for this problem is not entirely trivial. Even though for many other orientation related tasks a quaternion-based representation tends to work well, for gravitation tracking their use is not always advisable. In this paper we present a convenient linear quaternion-free state-space model for gravitation tracking. We also discuss the efficient implementation of the Kalman filter and smoother for the model. Furthermore, we propose an adaption mechanism for the Kalman filter which is able to filter out shot-noises similarly as has been proposed in context of adaptive and robust Kalman filtering. We compare the proposed approach to other approaches using measurement data collected with a smartphone.",
"title": ""
},
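The gravitation-tracking abstract above argues for a linear, quaternion-free state-space model filtered with a Kalman filter and smoother. The sketch below is a generic linear Kalman filter predict/update step, not the paper's specific model; the matrices F, Q, H, R are placeholders the reader would supply (for instance, a gyroscope-driven transition for a three-component gravity state and an identity accelerometer measurement model).

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One predict + update step of a linear Kalman filter.

    x, P : prior state mean and covariance
    z    : new measurement (e.g., an accelerometer sample)
    F, Q : state transition matrix and process noise covariance
    H, R : measurement matrix and measurement noise covariance
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```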
{
"docid": "aeac0766cc4e29fa0614649279970276",
"text": "Over the last two releases SQL Server has integrated two specialized engines into the core system: the Apollo column store engine for analytical workloads and the Hekaton in-memory engine for high-performance OLTP workloads. There is an increasing demand for real-time analytics, that is, for running analytical queries and reporting on the same system as transaction processing so as to have access to the freshest data. SQL Server 2016 will include enhancements to column store indexes and in-memory tables that significantly improve performance on such hybrid workloads. This paper describes four such enhancements: column store indexes on inmemory tables, making secondary column store indexes on diskbased tables updatable, allowing B-tree indexes on primary column store indexes, and further speeding up the column store scan operator.",
"title": ""
},
{
"docid": "e44d7f7668590726def631c5ec5f5506",
"text": "Today thanks to low cost and high performance DSP's, Kalman filtering (KF) becomes an efficient candidate to avoid mechanical sensors in motor control. We present in this work experimental results by using a steady state KF method to estimate the speed and rotor position for hybrid stepper motor. With this method the computing time is reduced. The Kalman gain is pre-computed from numerical simulation and introduced as a constant in the real time algorithm. The load torque is also on-line estimated by the same algorithm. At start-up the initial rotor position is detected by the impulse current method.",
"title": ""
},
{
"docid": "a0071f44de7741eb914c1fdb0e21026d",
"text": "This study examined relationships between mindfulness and indices of happiness and explored a fivefactor model of mindfulness. Previous research using this mindfulness model has shown that several facets predicted psychological well-being (PWB) in meditating and non-meditating individuals. The current study tested the hypothesis that the prediction of PWB by mindfulness would be augmented and partially mediated by self-compassion. Participants were 27 men and 96 women (mean age = 20.9 years). All completed self-report measures of mindfulness, PWB, personality traits (NEO-PI-R), and self-compassion. Results show that mindfulness is related to psychologically adaptive variables and that self-compassion is a crucial attitudinal factor in the mindfulness–happiness relationship. Findings are interpreted from the humanistic perspective of a healthy personality. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "018d855cdd9a5e95beba0ae39dddf4ce",
"text": "Citation Agrawal, Ajay K., Catalini, Christian, and Goldfarb, Avi. \"Some Simple Economics of Crowdfunding.\" Innovation Policy and the Economy 2013, ed. Josh Lerner and Scott Stern, Univeristy of Chicago Press, 2014, 1-47. © 2014 National Bureau of Economic Research Innovation Policy and the Economy As Published http://press.uchicago.edu/ucp/books/book/distributed/I/bo185081 09.html Publisher University of Chicago Press",
"title": ""
},
{
"docid": "61998885a181e074eadd41a2f067f697",
"text": "Introduction. Opinion mining has been receiving increasing attention from a broad range of scientific communities since early 2000s. The present study aims to systematically investigate the intellectual structure of opinion mining research. Method. Using topic search, citation expansion, and patent search, we collected 5,596 bibliographic records of opinion mining research. Then, intellectual landscapes, emerging trends, and recent developments were identified. We also captured domain-level citation trends, subject category assignment, keyword co-occurrence, document co-citation network, and landmark articles. Analysis. Our study was guided by scientometric approaches implemented in CiteSpace, a visual analytic system based on networks of co-cited documents. We also employed a dual-map overlay technique to investigate epistemological characteristics of the domain. Results. We found that the investigation of algorithmic and linguistic aspects of opinion mining has been of the community’s greatest interest to understand, quantify, and apply the sentiment orientation of texts. Recent thematic trends reveal that practical applications of opinion mining such as the prediction of market value and investigation of social aspects of product feedback have received increasing attention from the community. Conclusion. Opinion mining is fast-growing and still developing, exploring the refinements of related techniques and applications in a variety of domains. We plan to apply the proposed analytics to more diverse domains and comprehensive publication materials to gain more generalized understanding of the true structure of a science.",
"title": ""
},
{
"docid": "61ecbc652cf9f57136e8c1cd6fed2fb0",
"text": "Recent advancements in digital technology have attracted the interest of educators and researchers to develop technology-assisted inquiry-based learning environments in the domain of school science education. Traditionally, school science education has followed deductive and inductive forms of inquiry investigation, while the abductive form of inquiry has previously been sparsely explored in the literature related to computers and education. We have therefore designed a mobile learning application ‘ThinknLearn’, which assists high school students in generating hypotheses during abductive inquiry investigations. The M3 evaluation framework was used to investigate the effectiveness of using ‘ThinknLearn’ to facilitate student learning. The results indicated in this paper showed improvements in the experimental group’s learning performance as compared to a control group in pre-post tests. In addition, the experimental group also maintained this advantage during retention tests as well as developing positive attitudes toward mobile learning. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
251f643a1520bee0962922d0a60bab59
|
An Integrated UAV Navigation System Based on Aerial Image Matching
|
[
{
"docid": "5157063545b7ec7193126951c3bdb850",
"text": "This paper presents an integrated system for navigation parameter estimation using sequential aerial images, where navigation parameters represent the position and velocity information of an aircraft for autonomous navigation. The proposed integrated system is composed of two parts: relative position estimation and absolute position estimation. Relative position estimation recursively computes the current position of an aircraft by accumulating relative displacement estimates extracted from two successive aerial images. Simple accumulation of parameter values decreases the reliability of the extracted parameter estimates as an aircraft goes on navigating, resulting in a large position error. Therefore, absolute position estimation is required to compensate for the position error generated in relative position estimation. Absolute position estimation algorithms by image matching and digital elevation model (DEM) matching are presented. In image matching, a robust-oriented Hausdorff measure (ROHM) is employed, whereas in DEM matching the algorithm using multiple image pairs is used. Experiments with four real aerial image sequences show the effectiveness of the proposed integrated position estimation algorithm.",
"title": ""
},
{
"docid": "08bd4d2c48ebde047a8b36ce72fe61b6",
"text": "S imultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association , and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Auomation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsifica-tion in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance based methods, and multihypothesis techniques. The third development discussed in this tutorial is …",
"title": ""
}
] |
[
{
"docid": "0dbad8ca53615294bc25f7a2d8d41d99",
"text": "Faceted search is becoming a popular method to allow users to interactively search and navigate complex information spaces. A faceted search system presents users with key-value metadata that is used for query refinement. While popular in e-commerce and digital libraries, not much research has been conducted on which metadata to present to a user in order to improve the search experience. Nor are there repeatable benchmarks for evaluating a faceted search engine. This paper proposes the use of collaborative filtering and personalization to customize the search interface to each user's behavior. This paper also proposes a utility based framework to evaluate the faceted interface. In order to demonstrate these ideas and better understand personalized faceted search, several faceted search algorithms are proposed and evaluated using the novel evaluation methodology.",
"title": ""
},
{
"docid": "6844c0ab63ee51775f311bd63d05a455",
"text": "In a first step toward the development of an efficient and accurate protocol to estimate amino acids' pKa's in proteins, we present in this work how to reproduce the pKa's of alcohol and thiol based residues (namely tyrosine, serine, and cysteine) in aqueous solution from the knowledge of the experimental pKa's of phenols, alcohols, and thiols. Our protocol is based on the linear relationship between computed atomic charges of the anionic form of the molecules (being either phenolates, alkoxides, or thiolates) and their respective experimental pKa values. It is tested with different environment approaches (gas phase or continuum solvent-based approaches), with five distinct atomic charge models (Mulliken, Löwdin, NPA, Merz-Kollman, and CHelpG), and with nine different DFT functionals combined with 16 different basis sets. Moreover, the capability of semiempirical methods (AM1, RM1, PM3, and PM6) to also predict pKa's of thiols, phenols, and alcohols is analyzed. From our benchmarks, the best combination to reproduce experimental pKa's is to compute NPA atomic charge using the CPCM model at the B3LYP/3-21G and M062X/6-311G levels for alcohols (R(2) = 0.995) and thiols (R(2) = 0.986), respectively. The applicability of the suggested protocol is tested with tyrosine and cysteine amino acids, and precise pKa predictions are obtained. The stability of the amino acid pKa's with respect to geometrical changes is also tested by MM-MD and DFT-MD calculations. Considering its strong accuracy and its high computational efficiency, these pKa prediction calculations using atomic charges indicate a promising method for predicting amino acids' pKa in a protein environment.",
"title": ""
},
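The pKa abstract above rests on a linear relationship between a computed atomic charge of the anionic form and the experimental pKa. Below is a minimal least-squares sketch of that calibration/prediction step; the charge and pKa values are made-up placeholders, and the real protocol of course depends on the charge model, functional, basis set, and solvent model discussed in the abstract.

```python
import numpy as np

# Hypothetical training data: computed atomic charge on the deprotonated oxygen
# of a few phenolates (x) versus the experimental pKa of the parent phenol (y).
charges = np.array([-0.812, -0.796, -0.835, -0.788, -0.820])
pkas    = np.array([ 9.95,   9.38,  10.26,   7.15,  10.02])

slope, intercept = np.polyfit(charges, pkas, deg=1)   # pKa ≈ slope * q + intercept

def predict_pka(charge):
    """Predict a pKa from a computed atomic charge using the fitted line."""
    return slope * charge + intercept

print(round(predict_pka(-0.805), 2))
```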
{
"docid": "d2e434f472b60e17ab92290c78706945",
"text": "In recent years, a variety of review-based recommender systems have been developed, with the goal of incorporating the valuable information in user-generated textual reviews into the user modeling and recommending process. Advanced text analysis and opinion mining techniques enable the extraction of various types of review elements, such as the discussed topics, the multi-faceted nature of opinions, contextual information, comparative opinions, and reviewers’ emotions. In this article, we provide a comprehensive overview of how the review elements have been exploited to improve standard content-based recommending, collaborative filtering, and preference-based product ranking techniques. The review-based recommender system’s ability to alleviate the well-known rating sparsity and cold-start problems is emphasized. This survey classifies state-of-the-art studies into two principal branches: review-based user profile building and review-based product profile building. In the user profile sub-branch, the reviews are not only used to create term-based profiles, but also to infer or enhance ratings. Multi-faceted opinions can further be exploited to derive the weight/value preferences that users place on particular features. In another sub-branch, the product profile can be enriched with feature opinions or comparative opinions to better reflect its assessment quality. The merit of each branch of work is discussed in terms of both algorithm development and the way in which the proposed algorithms are evaluated. In addition, we discuss several future trends based on the survey, which may inspire investigators to pursue additional studies in this area.",
"title": ""
},
{
"docid": "26b5d72d3135623765b389c8a2f40625",
"text": "Data preprocessing is a fundamental part of any machine learning application and frequently the most time-consuming aspect when developing a machine learning solution. Preprocessing for deep learning is characterized by pipelines that lazily load data and perform data transformation, augmentation, batching and logging. Many of these functions are common across applications but require different arrangements for training, testing or inference. Here we introduce a novel software framework named nuts-flow/ml that encapsulates common preprocessing operations as components, which can be flexibly arranged to rapidly construct efficient preprocessing pipelines for deep learning.",
"title": ""
},
{
"docid": "8bf1b97320a6b7319e4b36dfc11b6c7b",
"text": "In recent years, virtual reality exposure therapy (VRET) has become an interesting alternative for the treatment of anxiety disorders. Research has focused on the efficacy of VRET in treating anxiety disorders: phobias, panic disorder, and posttraumatic stress disorder. In this systematic review, strict methodological criteria are used to give an overview of the controlled trials regarding the efficacy of VRET in patients with anxiety disorders. Furthermore, research into process variables such as the therapeutic alliance and cognitions and enhancement of therapy effects through cognitive enhancers is discussed. The implications for implementation into clinical practice are considered.",
"title": ""
},
{
"docid": "7e38ba11e394acd7d5f62d6a11253075",
"text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.",
"title": ""
},
{
"docid": "4301af5b0c7910480af37f01847fb1fe",
"text": "Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.",
"title": ""
},
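The C2MLR abstract above frames cross-modal embedding learning as a pairwise, max-margin learning-to-rank problem. A bare-bones sketch of that kind of objective is below: a hinge loss asking a matching image-text pair to score higher than any mismatched pair by a margin. It is a generic formulation for illustration, not the paper's full model with local and global alignment.

```python
import numpy as np

def pairwise_hinge_loss(img_vecs, txt_vecs, margin=0.2):
    """Max-margin ranking loss over embedded image/text pairs.

    img_vecs, txt_vecs: arrays of shape (n, d); row i of each forms a matching pair.
    Each matched pair's similarity should beat every mismatched pair's by `margin`.
    """
    sims = img_vecs @ txt_vecs.T          # dot-product scores (assume rows are normalized)
    pos = np.diag(sims)                   # scores of the true pairs
    n = sims.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                loss += max(0.0, margin - pos[i] + sims[i, j])   # image i vs wrong text j
                loss += max(0.0, margin - pos[i] + sims[j, i])   # text i vs wrong image j
    return loss / (2 * n * (n - 1))
```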
{
"docid": "85f126fe22e74e3f5b1f1ad3adec0036",
"text": "Debate is open as to whether social media communities resemble real-life communities, and to what extent. We contribute to this discussion by testing whether established sociological theories of real-life networks hold in Twitter. In particular, for 228,359 Twitter profiles, we compute network metrics (e.g., reciprocity, structural holes, simmelian ties) that the sociological literature has found to be related to parts of one’s social world (i.e., to topics, geography and emotions), and test whether these real-life associations still hold in Twitter. We find that, much like individuals in real-life communities, social brokers (those who span structural holes) are opinion leaders who tweet about diverse topics, have geographically wide networks, and express not only positive but also negative emotions. Furthermore, Twitter users who express positive (negative) emotions cluster together, to the extent of having a correlation coefficient between one’s emotions and those of friends as high as 0.45. Understanding Twitter’s social dynamics does not only have theoretical implications for studies of social networks but also has practical implications, including the design of self-reflecting user interfaces that make people aware of their emotions, spam detection tools, and effective marketing campaigns.",
"title": ""
},
{
"docid": "d0a6ca9838f8844077fdac61d1d75af1",
"text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-",
"title": ""
},
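The note above describes one-pass depth-first search algorithms that maintain only a representation of the DFS path rather than LOWPOINT values. The sketch below is a standard path-based strong components routine in that spirit (two stacks: unassigned vertices and path boundaries). It is a textbook formulation offered for illustration rather than the note's exact pseudocode, and the recursion would need to be made iterative for very deep graphs.

```python
import sys
sys.setrecursionlimit(10_000)   # the recursive sketch is depth-limited

def strong_components(graph):
    """Path-based (one-pass) strongly connected components.

    graph: dict mapping each vertex to an iterable of its successors.
    Returns a dict mapping each vertex to a component number.
    """
    preorder, comp = {}, {}
    S, B = [], []          # S: vertices not yet assigned; B: boundaries of the DFS path
    count = 0

    def dfs(v):
        nonlocal count
        preorder[v] = len(preorder)
        S.append(v)
        B.append(preorder[v])
        for w in graph.get(v, ()):
            if w not in preorder:
                dfs(w)
            elif w not in comp:
                # edge back into the current DFS path: contract the cycle on B
                while preorder[w] < B[-1]:
                    B.pop()
        if B[-1] == preorder[v]:
            # v is the root of a component: pop it off both stacks
            B.pop()
            while True:
                u = S.pop()
                comp[u] = count
                if u == v:
                    break
            count += 1

    for v in list(graph):
        if v not in preorder:
            dfs(v)
    return comp

# Example: two components, {a, b, c} and {d}
print(strong_components({'a': ['b'], 'b': ['c'], 'c': ['a', 'd'], 'd': []}))
```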
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "7a7e0363ca4ad5c83a571449f53834ca",
"text": "Principal component analysis (PCA) minimizes the mean square error (MSE) and is sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective. At each iteration, the complex optimization problem is reduced to a quadratic problem that can be efficiently solved by a standard optimization method. The proposed method exhibits the following benefits: 1) it is robust to outliers through the mechanism of MCC which can be more theoretically solid than a heuristic rule based on MSE; 2) it requires no assumption about the zero-mean of data for processing and can estimate data mean during optimization; and 3) its optimal solution consists of principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are further introduced in the proposed method to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on L1 norm when outliers occur.",
"title": ""
},
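The abstract above optimizes PCA under a maximum correntropy criterion via half-quadratic optimization, where each iteration reduces to a weighted quadratic problem. The sketch below shows that general pattern (reweight samples by a Gaussian kernel of their reconstruction error, then redo a weighted eigen-decomposition). It is an illustrative iteratively-reweighted PCA with an arbitrarily chosen kernel width and iteration count, not the authors' exact algorithm.

```python
import numpy as np

def correntropy_weighted_pca(X, n_components=2, sigma=1.0, n_iter=20):
    """Iteratively reweighted PCA in the half-quadratic spirit of correntropy-based PCA.

    X: data matrix of shape (n_samples, n_features).
    Returns (components, mean), with components of shape (n_components, n_features).
    """
    n = X.shape[0]
    w = np.ones(n)                                    # per-sample weights
    U, mean = None, X.mean(axis=0)
    for _ in range(n_iter):
        w = np.maximum(w, 1e-12)                      # keep the weighted sums well defined
        # Weighted quadratic sub-problem: weighted mean, covariance, top eigenvectors
        mean = (w[:, None] * X).sum(axis=0) / w.sum()
        Xc = X - mean
        cov = (w[:, None] * Xc).T @ Xc / w.sum()
        _, vecs = np.linalg.eigh(cov)
        U = vecs[:, ::-1][:, :n_components]           # eigenvectors, descending eigenvalues
        # Correntropy-style reweighting: Gaussian kernel of the reconstruction error
        resid = Xc - (Xc @ U) @ U.T
        err2 = (resid ** 2).sum(axis=1)
        w = np.exp(-err2 / (2.0 * sigma ** 2))
    return U.T, mean
```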
{
"docid": "2fc645ec4f9fe757be65f3f02b803b50",
"text": "Multicast communication plays a crucial role in Mobile Adhoc Networks (MANETs). MANETs provide low cost, self configuring devices for multimedia data communication in military battlefield scenarios, disaster and public safety networks (PSN). Multicast communication improves the network performance in terms of bandwidth consumption, battery power and routing overhead as compared to unicast for same volume of data communication. In recent past, a number of multicast routing protocols (MRPs) have been proposed that tried to resolve issues and challenges in MRP. Multicast based group communication demands dynamic construction of efficient and reliable route for multimedia data communication during high node mobility, contention, routing and channel overhead. This paper gives an insight into the merits and demerits of the currently known research techniques and provides a better environment to make reliable MRP. It presents a ample study of various Quality of Service (QoS) techniques and existing enhancement in mesh based MRPs. Mesh topology based MRPs are classified according to their enhancement in routing mechanism and QoS modification on On-Demand Multicast Routing Protocol (ODMRP) protocol to improve performance metrics. This paper covers the most recent, robust and reliable QoS and Mesh based MRPs, classified based on their operational features, with their advantages and limitations, and provides comparison of their performance parameters.",
"title": ""
},
{
"docid": "1cc81fa2fbfc2a47eb07bb7ef969d657",
"text": "Wind Turbines (WT) are one of the fastest growing sources of power production in the world today and there is a constant need to reduce the costs of operating and maintaining them. Condition monitoring (CM) is a tool commonly employed for the early detection of faults/failures so as to minimise downtime and maximize productivity. This paper provides a review of the state-of-the-art in the CM of wind turbines, describing the different maintenance strategies, CM techniques and methods, and highlighting in a table the various combinations of these that have been reported in the literature. Future research opportunities in fault diagnostics are identified using a qualitative fault tree analysis. Crown Copyright 2012 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b8b7abcef8e23f774bd4e74067a27e6f",
"text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright 1989 Digital Equipment Corporation d i g i t a l Western Research Laboratory 100 Hamilton Avenue Palo Alto, California 94301 USA",
"title": ""
},
{
"docid": "6168c4c547dca25544eedf336e369d95",
"text": "Big Data means a very large amount of data and includes a range of methodologies such as big data collection, processing, storage, management, and analysis. Since Big Data Text Mining extracts a lot of features and data, clustering and classification can result in high computational complexity and the low reliability of the analysis results. In particular, a TDM (Term Document Matrix) obtained through text mining represents term-document features but features a sparse matrix. In this paper, the study focuses on selecting a set of optimized features from the corpus. A Genetic Algorithm (GA) is used to extract terms (features) as desired according to term importance calculated by the equation found. The study revolves around feature selection method to lower computational complexity and to increase analytical performance.We designed a new genetic algorithm to extract features in text mining. TF-IDF is used to reflect document-term relationships in feature extraction. Through the repetitive process, features are selected as many as the predetermined number. We have conducted clustering experiments on a set of spammail documents to verify and to improve feature selection performance. And we found that the proposal FSGA algorithm shown better performance of Text Clustering and Classification than using all of features.",
"title": ""
},
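The abstract above selects a fixed-size subset of TF-IDF features with a genetic algorithm. The sketch below is a toy GA of that shape (binary masks over terms, tournament selection, uniform crossover, repair to keep the subset size fixed); the fitness function here is simply the summed TF-IDF importance of the selected terms, a stand-in for the paper's own scoring equation, and all parameter values are illustrative.

```python
import random

def ga_select_features(importance, k, pop_size=30, generations=50, p_mut=0.1, seed=0):
    """Pick k feature indices maximizing a (stand-in) fitness with a simple GA.

    importance: list of per-term importance scores (e.g., aggregated TF-IDF).
    """
    rng = random.Random(seed)
    n = len(importance)

    def fitness(mask):
        return sum(importance[i] for i in mask)

    def repair(mask):
        mask = set(mask)
        while len(mask) > k:                      # too many terms: drop random ones
            mask.remove(rng.choice(tuple(mask)))
        while len(mask) < k:                      # too few: add random terms
            mask.add(rng.randrange(n))
        return mask

    pop = [set(rng.sample(range(n), k)) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)             # two size-2 tournaments pick the parents
            p1 = a if fitness(a) >= fitness(b) else b
            a, b = rng.sample(pop, 2)
            p2 = a if fitness(a) >= fitness(b) else b
            # Uniform crossover over the union: keep shared terms, others with prob 0.5
            child = {i for i in p1 | p2 if (i in p1 and i in p2) or rng.random() < 0.5}
            if rng.random() < p_mut:              # mutation: add a random term (repair trims back)
                child.add(rng.randrange(n))
            new_pop.append(repair(child))
        pop = new_pop
    return max(pop, key=fitness)

scores = [0.1, 0.9, 0.4, 0.8, 0.05, 0.7, 0.3]
print(sorted(ga_select_features(scores, k=3)))    # likely [1, 3, 5]
```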
{
"docid": "9a4dab93461185ea98ccea7733081f73",
"text": "This article discusses two standards operating on principles of cognitive radio in television white space (TV WS) frequencies 802.22and 802.11af. The comparative analysis of these systems will be presented and the similarities as well as the differences among these two perspective standards will be discussed from the point of view of physical (PHY), medium access control (MAC) and cognitive layers.",
"title": ""
},
{
"docid": "8549f04362f52ddec78e48dd6e1cadce",
"text": "In recent years both the number and the size of organisational databases have increased rapidly. However, although available processing power has also grown, the increase in stored data has not necessarily led to a corresponding increase in useful information and knowledge. This has led to a growing interest in the development of tools capable of harnessing the increased processing power available to better utilise the potential of stored data. The terms “Knowledge Discovery in Databases” and “Data Mining” have been adopted for a field of research dealing with the automatic discovery of knowledge implicit within databases. Data mining is useful in situations where the volume of data is either too large or too complicated for manual processing or, to a lesser extent, where human experts are unavailable to provide knowledge. The success already attained by a wide range of data mining applications has continued to prompt further investigation into alternative data mining techniques and the extension of data mining to new domains. This paper surveys, from the standpoint of the database systems community, current issues in data mining research by examining the architectural and process models adopted by knowledge discovery systems, the different types of discovered knowledge, the way knowledge discovery systems operate on different data types, various techniques for knowledge discovery and the ways in which discovered knowledge is used.",
"title": ""
},
{
"docid": "00daf995562570c89901ca73e23dd29d",
"text": "As advances in technology make payloads and instruments for space missions smaller, lighter, and more power efficient, a niche market is emerging from the university community to perform rapidly developed, low-cost missions on very small spacecraft - micro, nano, and picosatellites. Among this class of spacecraft, are CubeSats, with a basic form of 10 times 10 times 10 cm, weighing a maximum of 1kg. In order to serve as viable alternative to larger spacecraft, small satellite platforms must provide the end user with access to space and similar functionality to mainstream missions. However, despite recent advances, small satellites have not been able to reach their full potential. Without launch vehicles dedicated to launching small satellites as primary payloads, launch opportunities only exist in the form of co-manifest or secondary payload missions, with launches often subsidized by the government. In addition, power, size, and mass constraints create additional hurdles for small satellites. To date, the primary method of increasing a small satellite's capability has been focused on miniaturization of technology. The CubeSat Program embraces this approach, but has also focused on developing an infrastructure to offset unavoidable limitations caused by the constraints of small satellite missions. The main components of this infrastructure are: an extensive developer community, standards for spacecraft and launch vehicle interfaces, and a network of ground stations. This paper will focus on the CubeSat Program, its history, and the philosophy behind the various elements that make it a practical an enabling alternative for access to space.",
"title": ""
},
{
"docid": "ebf8c89f326b0c1e9b0d2f565b5b30a6",
"text": "OBJECTIVE\nTo identify the cross-national prevalence of psychotic symptoms in the general population and to analyze their impact on health status.\n\n\nMETHOD\nThe sample was composed of 256,445 subjects (55.9% women), from nationally representative samples of 52 countries worldwide participating in the World Health Organization's World Health Survey. Standardized and weighted prevalence of psychotic symptoms were calculated in addition to the impact on health status as assessed by functioning in multiple domains.\n\n\nRESULTS\nOverall prevalences for specific symptoms ranged from 4.80% (SE = 0.14) for delusions of control to 8.37% (SE = 0.20) for delusions of reference and persecution. Prevalence figures varied greatly across countries. All symptoms of psychosis produced a significant decline in health status after controlling for potential confounders. There was a clear change in health impact between subjects not reporting any symptom and those reporting at least one symptom (effect size of 0.55).\n\n\nCONCLUSIONS\nThe prevalence of the presence of at least one psychotic symptom has a wide range worldwide varying as much as from 0.8% to 31.4%. Psychotic symptoms signal a problem of potential public health concern, independent of the presence of a full diagnosis of psychosis, as they are common and are related to a significant decrement in health status. The presence of at least one psychotic symptom is related to a significant poorer health status, with a regular linear decrement in health depending on the number of symptoms.",
"title": ""
},
{
"docid": "27c2c015c6daaac99b34d00845ec646c",
"text": "Virtual worlds, such as Second Life and Everquest, have grown into virtual game communities that have economic potential. In such communities, virtual items are bought and sold between individuals for real money. The study detailed in this paper aims to identify, model and test the individual determinants for the decision to purchase virtual items within virtual game communities. A comprehensive understanding of these key determinants will enable researchers to further the understanding of player behavior towards virtual item transactions, which are an important aspect of the economic system within virtual games and often raise one of the biggest challenges for game community operators. A model will be developed via a mixture of new constructs and established theories, including the theory of planned behavior (TPB), the technology acceptance model (TAM), trust theory and unified theory of acceptance and use of technology (UTAUT). For this purpose the research uses a sequential, multi-method approach in two phases: combining the use of inductive, qualitative data from focus groups and expert interviews in phase one; and deductive, quantitative survey data in phase two. The final model will hopefully provide an impetus to further research in the area of virtual game community transaction behavior. The paper rounds off with a discussion of further research challenges in this area over the next seven years.",
"title": ""
}
] |
scidocsrr
|
75efc265cc6cf400edf09c3b305b0939
|
Supply Chain Object Discovery with Semantic-enhanced Blockchain
|
[
{
"docid": "ce871576011a3dfc99bc613e86fddc80",
"text": "Digital supply chain integration is becoming increasingly dynamic. Access to customer demand needs to be shared effectively, and product and service deliveries must be tracked to provide visibility in the supply chain. Business process integration is based on standards and reference architectures, which should offer end-to-end integration of product data. Companies operating in supply chains establish process and data integration through the specialized intermediate companies, whose role is to establish interoperability by mapping and integrating companyspecific data for various organizations and systems. This has typically caused high integration costs, and diffusion is slow. This paper investigates the requirements and functionalities of supply chain integration. Cloud integration can be expected to offer a cost-effective business model for interoperable digital supply chains. We explain how supply chain integration through the blockchain technology can achieve disruptive transformation in digital supply chains and networks.",
"title": ""
},
{
"docid": "4a811a48f913e1529f70937c771d01da",
"text": "An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of provenance of physical goods--e.g. tracking ingredients of a pharmaceutical or demonstrating authenticity of luxury goods--has often not been possible with today's items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies give promise at better supply chain provenance. We are particularly interested in the blockchain as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.",
"title": ""
}
] |
[
{
"docid": "ca906d18fca3f4ee83224b7728cbd379",
"text": "AIM\nTo investigate the effect of some psychosocial variables on nurses' job satisfaction.\n\n\nBACKGROUND\nNurses' job satisfaction is one of the most important factors in determining individuals' intention to stay or leave a health-care organisation. Literature shows a predictive role of work climate, professional commitment and work values on job satisfaction, but their conjoint effect has rarely been considered.\n\n\nMETHODS\nA cross-sectional questionnaire survey was adopted. Participants were hospital nurses and data were collected in 2011.\n\n\nRESULTS\nProfessional commitment and work climate positively predicted nurses' job satisfaction. The effect of intrinsic vs. extrinsic work value orientation on job satisfaction was completely mediated by professional commitment.\n\n\nCONCLUSIONS\nNurses' job satisfaction is influenced by both contextual and personal variables, in particular work climate and professional commitment. According to a more recent theoretical framework, work climate, work values and professional commitment interact with each other in determining nurses' job satisfaction.\n\n\nIMPLICATIONS FOR NURSING MANAGEMENT\nNursing management must be careful to keep the context of work tuned to individuals' attitude and vice versa. Improving the work climate can have a positive effect on job satisfaction, but its effect may be enhanced by favouring strong professional commitment and by promoting intrinsic more than extrinsic work values.",
"title": ""
},
{
"docid": "1c4fc20b2cfda58d9c3e22ecf97af506",
"text": "Cognitive function requires the coordination of neural activity across many scales, from neurons and circuits to large-scale networks. As such, it is unlikely that an explanatory framework focused upon any single scale will yield a comprehensive theory of brain activity and cognitive function. Modelling and analysis methods for neuroscience should aim to accommodate multiscale phenomena. Emerging research now suggests that multi-scale processes in the brain arise from so-called critical phenomena that occur very broadly in the natural world. Criticality arises in complex systems perched between order and disorder, and is marked by fluctuations that do not have any privileged spatial or temporal scale. We review the core nature of criticality, the evidence supporting its role in neural systems and its explanatory potential in brain health and disease.",
"title": ""
},
{
"docid": "9e5eead043459905bd9c4af981c5d587",
"text": "The chapter gives general information about graphene, namely its structure, properties and methods of preparation, and highlights the methods for the preparation of graphene-based polymer nanocomposites.",
"title": ""
},
{
"docid": "223252b8bf99671eedd622c99bc99aaf",
"text": "We present a novel dataset for natural language generation (NLG) in spoken dialogue systems which includes preceding context (user utterance) along with each system response to be generated, i.e., each pair of source meaning representation and target natural language paraphrase. We expect this to allow an NLG system to adapt (entrain) to the user’s way of speaking, thus creating more natural and potentially more successful responses. The dataset has been collected using crowdsourcing, with several stages to obtain natural user utterances and corresponding relevant, natural, and contextually bound system responses. The dataset is available for download under the Creative Commons 4.0 BY-SA license.",
"title": ""
},
{
"docid": "99982ebadc1913bfb0ee99270dedfae7",
"text": "As a consequence of optimal investment choices, a firm’s assets and growth options change in predictable ways. Using a dynamic model, we show that this imparts predictability to changes in a firm’s systematic risk, and its expected return. Simulations show that the model simultaneously reproduces: ~i! the time-series relation between the book-to-market ratio and asset returns; ~ii! the cross-sectional relation between book-to-market, market value, and return; ~iii! contrarian effects at short horizons; ~iv! momentum effects at longer horizons; and ~v! the inverse relation between interest rates and the market risk premium. RECENT EMPIRICAL RESEARCH IN FINANCE has focused on regularities in the cross section of expected returns that appear anomalous relative to traditional models. Stock returns are related to book-to-market, and market value.1 Past returns have also been shown to predict relative performance, through the documented success of contrarian and momentum strategies.2 Existing explanations for these results are that they are due to behavioral biases or risk premia for omitted state variables.3 These competing explanations are difficult to evaluate without models that explicitly tie the characteristics of interest to risks and risk premia. For example, with respect to book-to-market, Lakonishok et al. ~1994! argue: “The point here is simple: although the returns to the B0M strategy are impressive, B0M is not a ‘clean’ variable uniquely associated with eco* Berk is at the University of California, Berkeley, and NBER; Green is at Carnegie Mellon University; and Naik is with the University of British Columbia. We acknowledge the research assistance of Robert Mitchell and Dave Peterson. We have benefited from and are grateful for comments by seminar participants at Berkeley, British Columbia, Carnegie Mellon, Dartmouth, Duke, Michigan, Minnesota, North Carolina, Northwestern, Rochester, Utah, Washington at St. Louis, Washington, Wharton, Wisconsin, Yale, the 1996 meetings of the Western Finance Association, and the 1997 Utah Winter Finance Conference and the suggestions from an anonymous referee and from the editor, René Stulz. We also acknowledge financial support for this research from the Social Sciences and Humanities Research Council of Canada and the Bureau of Asset Management at University of British Columbia. The computer programs used in this paper are available on this journal’s web page: http:00www.afajof.org 1 See Fama and French ~1992! for summary evidence. 2 See Conrad and Kaul ~1998! for a recent summary of evidence on this subject. 3 See Lakonishok, Shleifer, and Vishny ~1994! for arguments in favor of behavioral biases and Fama and French ~1993! for an interpretation in terms of state variable risks. THE JOURNAL OF FINANCE • VOL. LIV, NO. 5 • OCTOBER 1999",
"title": ""
},
{
"docid": "1e5202850748b0f613807b0452eb89a2",
"text": "This paper introduces a hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. Some applications of the method are indicated.",
"title": ""
},
{
"docid": "dd1f7671025d79dead0a87fef6cec409",
"text": "PURPOSE This article summarizes prior work in the learning sciences and discusses one perspective—situative learning—in depth. Situativity refers to the central role of context, including the physical and social aspects of the environment, on learning. Furthermore, it emphasizes the socially and culturally negotiated nature of thought and action of persons in interaction. The aim of the article is to provide a foundation for future work on engineering learning and to suggest ways in which the learning sciences and engineering education research communities might work to their mutual benefit.",
"title": ""
},
{
"docid": "6e60d6b878c35051ab939a03bdd09574",
"text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.",
"title": ""
},
{
"docid": "bc5c008b5e443b83b2a66775c849fffb",
"text": "Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time almost continuously for several days and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. However, CGM sensors' lack of accuracy and reliability limited their usability in the clinical practice, calling upon the academic community for the development of suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify the possible future ones.",
"title": ""
},
{
"docid": "ddc3241c09a33bde1346623cf74e6866",
"text": "This paper presents a new technique for predicting wind speed and direction. This technique is based on using a linear time-series-based model relating the predicted interval to its corresponding one- and two-year old data. The accuracy of the model for predicting wind speeds and directions up to 24 h ahead have been investigated using two sets of data recorded during winter and summer season at Madison weather station. Generated results are compared with their corresponding values when using the persistent model. The presented results validate the effectiveness and accuracy of the proposed prediction model for wind speed and direction.",
"title": ""
},
{
"docid": "12b115e3b759fcb87956680d6e89d7aa",
"text": "The calibration system presented in this article enables to calculate optical parameters i.e. intrinsic and extrinsic of both thermal and visual cameras used for 3D reconstruction of thermal images. Visual cameras are in stereoscopic set and provide a pair of stereo images of the same object which are used to perform 3D reconstruction of the examined object [8]. The thermal camera provides information about temperature distribution on the surface of an examined object. In this case the term of 3D reconstruction refers to assigning to each pixel of one of the stereo images (called later reference image) a 3D coordinate in the respective camera reference frame [8]. The computed 3D coordinate is then re-projected on to the thermograph and thus to the known 3D position specific temperature is assigned. In order to remap the 3D coordinates on to thermal image it is necessary to know the position of thermal camera against visual camera and therefore a calibration of the set of the three cameras must be performed. The presented calibration system includes special calibration board (fig.1) whose characteristic points of well known position are recognizable both by thermal and visual cameras. In order to detect calibration board characteristic points’ image coordinates, especially in thermal camera, a new procedure was designed.",
"title": ""
},
{
"docid": "e83873daee4f8dae40c210987d9158e8",
"text": "Domain ontologies are important information sources for knowledge-based systems. Yet, building domain ontologies from scratch is known to be a very labor-intensive process. In this study, we present our semi-automatic approach to building an ontology for the domain of wind energy which is an important type of renewable energy with a growing share in electricity generation all over the world. Related Wikipedia articles are first processed in an automated manner to determine the basic concepts of the domain together with their properties and next the concepts, properties, and relationships are organized to arrive at the ultimate ontology. We also provide pointers to other engineering ontologies which could be utilized together with the proposed wind energy ontology in addition to its prospective application areas. The current study is significant as, to the best of our knowledge, it proposes the first considerably wide-coverage ontology for the wind energy domain and the ontology is built through a semi-automatic process which makes use of the related Web resources, thereby reducing the overall cost of the ontology building process.",
"title": ""
},
{
"docid": "d6ffefe59311865aab98dede1cc2c602",
"text": "We develop a 3D object detection algorithm that uses latent support surfaces to capture contextual relationships in indoor scenes. Existing 3D representations for RGB-D images capture the local shape and appearance of object categories, but have limited power to represent objects with different visual styles. The detection of small objects is also challenging because the search space is very large in 3D scenes. However, we observe that much of the shape variation within 3D object categories can be explained by the location of a latent support surface, and smaller objects are often supported by larger objects. Therefore, we explicitly use latent support surfaces to better represent the 3D appearance of large objects, and provide contextual cues to improve the detection of small objects. We evaluate our model with 19 object categories from the SUN RGB-D database, and demonstrate state-of-the-art performance.",
"title": ""
},
{
"docid": "efd87c8a9570944a0cd2bff16d75ffc5",
"text": "Deep neural networks show very good performance in phoneme and speech recognition applications when compared to previously used GMM (Gaussian Mixture Model)-based ones. However, efficient implementation of deep neural networks is difficult because the network size needs to be very large when high recognition accuracy is demanded. In this work, we develop a digital VLSI for phoneme recognition using deep neural networks and assess the design in terms of throughput, chip size, and power consumption. The developed VLSI employs a fixed-point optimization method that only uses +Δ, 0, and -Δ for representing each of the weight. The design employs 1,024 simple processing units in each layer, which however can be scaled easily according to the needed throughput, and the throughput of the architecture varies from 62.5 to 1,000 times of the real-time processing speed.",
"title": ""
},
{
"docid": "1b34ce669b77895322ee677605b9880a",
"text": "This paper presents a series of new augmented reality user interaction techniques to support the capture and creation of 3D geometry of large outdoor structures, part of an overall concept we have named construction at a distance. We use information about the user's physical presence, along with hand and head gestures, to allow the user to capture and create the geometry of objects that are orders of magnitude larger than themselves, with no prior information or assistance. Using augmented reality and these new techniques, users can enter geometry and verify its accuracy in real time. This paper includes a number of examples showing objects that have been modelled in the physical world, demonstrating the usefulness of the techniques.",
"title": ""
},
{
"docid": "66b088871549d5ec924dbe500522d6f8",
"text": "Being able to effectively measure similarity between patents in a complex patent citation network is a crucial task in understanding patent relatedness. In the past, techniques such as text mining and keyword analysis have been applied for patent similarity calculation. The drawback of these approaches is that they depend on word choice and writing style of authors. Most existing graph-based approaches use common neighbor-based measures, which only consider direct adjacency. In this work we propose new similarity measures for patents in a patent citation network using only the patent citation network structure. The proposed similarity measures leverage direct and indirect co-citation links between patents. A challenge is when some patents receive a large number of citations, thus are considered more similar to many other patents in the patent citation network. To overcome this challenge, we propose a normalization technique to account for the case where some pairs are ranked very similar to each other because they both are cited by many other patents. We validate our proposed similarity measures using US class codes for US patents and the well-known Jaccard similarity index. Experiments show that the proposed methods perform well when compared to the Jaccard similarity index.",
"title": ""
},
{
"docid": "abbb210122d470215c5a1d0420d9db06",
"text": "Ensemble clustering, also known as consensus clustering, is emerging as a promising solution for multi-source and/or heterogeneous data clustering. The co-association matrix based method, which redefines the ensemble clustering problem as a classical graph partition problem, is a landmark method in this area. Nevertheless, the relatively high time and space complexity preclude it from real-life large-scale data clustering. We therefore propose SEC, an efficient Spectral Ensemble Clustering method based on co-association matrix. We show that SEC has theoretical equivalence to weighted K-means clustering and results in vastly reduced algorithmic complexity. We then derive the latent consensus function of SEC, which to our best knowledge is among the first to bridge co-association matrix based method to the methods with explicit object functions. The robustness and generalizability of SEC are then investigated to prove the superiority of SEC in theory. We finally extend SEC to meet the challenge rising from incomplete basic partitions, based on which a scheme for big data clustering can be formed. Experimental results on various real-world data sets demonstrate that SEC is an effective and efficient competitor to some state-of-the-art ensemble clustering methods and is also suitable for big data clustering.",
"title": ""
},
{
"docid": "03bf4029ef68b58162abc15d0a0d702c",
"text": "In searching for a general \"zero-current-Switching\" technique for DC-DC converters, the concept of resonant switches is developed. As a combination of switching device and LC network, the resonant switch offers advantages of quasi-sinusoidal current waveforms, zero switching stresses, zero switching losses, self-commutation, and reduced EMI. Furthermore, application of the resonant switch concept to conventional converters leads to the discovery of a host of new converter circuits.",
"title": ""
},
{
"docid": "314722d112f5520f601ed6917f519466",
"text": "In this work we propose an online multi person pose tracking approach which works on two consecutive frames It−1 and It . The general formulation of our temporal network allows to rely on any multi person pose estimation approach as spatial network. From the spatial network we extract image features and pose features for both frames. These features serve as input for our temporal model that predicts Temporal Flow Fields (TFF). These TFF are vector fields which indicate the direction in which each body joint is going to move from frame It−1 to frame It . This novel representation allows to formulate a similarity measure of detected joints. These similarities are used as binary potentials in a bipartite graph optimization problem in order to perform tracking of multiple poses. We show that these TFF can be learned by a relative small CNN network whilst achieving state-of-the-art multi person pose tracking results.",
"title": ""
},
{
"docid": "6e7a43826490fe80692da334ef38f5a4",
"text": "We present a modular system for detection and correction of errors made by nonnative (English as a Second Language = ESL) writers. We focus on two error types: the incorrect use of determiners and the choice of prepositions. We use a decisiontree approach inspired by contextual spelling systems for detection and correction suggestions, and a large language model trained on the Gigaword corpus to provide additional information to filter out spurious suggestions. We show how this system performs on a corpus of non-native English text and discuss strategies for future enhancements.",
"title": ""
}
] |
scidocsrr
|
d4cd30380124355a15e8cf4e2ec5f356
|
Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity
|
[
{
"docid": "3112c11544c9dfc5dc5cf67e74e4ba4b",
"text": "How long does it take for the human visual system to process a complex natural image? Subjectively, recognition of familiar objects and scenes appears to be virtually instantaneous, but measuring this processing time experimentally has proved difficult. Behavioural measures such as reaction times can be used1, but these include not only visual processing but also the time required for response execution. However, event-related potentials (ERPs) can sometimes reveal signs of neural processing well before the motor output2. Here we use a go/no-go categorization task in which subjects have to decide whether a previously unseen photograph, flashed on for just 20 ms, contains an animal. ERP analysis revealed a frontal negativity specific to no-go trials that develops roughly 150 ms after stimulus onset. We conclude that the visual processing needed to perform this highly demanding task can be achieved in under 150 ms.",
"title": ""
},
{
"docid": "642a52ad5f774fa92cd5073577549ff2",
"text": "It is often supposed that the messages sent to the visual cortex by the retinal ganglion cells are encoded by the mean firing rates observed on spike trains generated with a Poisson process. Using an information transmission approach, we evaluate the performances of two such codes, one based on the spike count and the other on the mean interspike interval, and compare the results with a rank order code, where the first ganglion cells to emit a spike are given a maximal weight. Our results show that the rate codes are far from optimal for fast information transmission and that the temporal structure of the spike train can be efficiently used to maximize the information transfer rate under conditions where each cell needs to fire only one spike.",
"title": ""
}
] |
[
{
"docid": "f4e25ab5cf4df27f9aa198ff25b1d9c1",
"text": "The pattern of hair growth, morphology of the hair shafts, and the hair root state are described in four girls and two boys with prepubertal hypertrichosis. The exact nosology of this form of excessive hair growth is discussed in relation to hirsuties and the possibility of it representing an 'atavistic' trait.",
"title": ""
},
{
"docid": "9d28e5b6ad14595cd2d6b4071a867f6f",
"text": "This paper presents the analysis and the comparison study of a High-voltage High-frequency Ozone Generator using PWM and Phase-Shifted PWM full-bridge inverter as a power supply. The circuits operations of the inverters are fully described. In order to ensure that zero voltage switching (ZVS) mode always operated over a certain range of a frequency variation, a series-compensated resonant inductor is included. The comparison study are ozone quantity and output voltage that supplied by the PWM and Phase-Shifted PWM full-bridge inverter. The ozone generator fed by Phase-Shifted PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency and phase shift angle of the converter whilst the applied voltage to the electrode is kept constant. However, the ozone generator fed by PWM full-bridge inverter, is capability of varying ozone gas production quantity by varying the frequency of the converter whilst the applied voltage to the electrode is decreased. As a consequence, the absolute ozone quantity affected by the frequency is possibly achieved.",
"title": ""
},
{
"docid": "b9d8442482b3f44a741f4531e49af840",
"text": "Optimal wound healing requires adequate nutrition. Nutrition deficiencies impede the normal processes that allow progression through stages of wound healing. Malnutrition has also been related to decreased wound tensile strength and increased infection rates. Malnourished patients can develop pressure ulcers, infections, and delayed wound healing that result in chronic nonhealing wounds. Chronic wounds are a significant cause of morbidity and mortality for many patients and therefore constitute a serious clinical concern. Because most patients with chronic skin ulcers suffer micronutrient status alterations and malnutrition to some degree, current nutrition therapies are aimed at correcting nutrition deficiencies responsible for delayed wound healing. This review provides current information on nutrition management for simple acute wounds and complex nonhealing wounds and offers some insights into innovative future treatments.",
"title": ""
},
{
"docid": "439f0f7b6ee2773b3eb53a52abad2594",
"text": "We address the problem of Foreground/Background segmentation of “unconstrained” video. By “unconstrained” we mean that the moving objects and the background scene may be highly non-rigid (e.g., waves in the sea); the camera may undergo a complex motion with 3D parallax; moving objects may suffer from motion blur, large scale and illumination changes, etc. Most existing segmentation methods fail on such unconstrained videos, especially in the presence of highly non-rigid motion and low resolution. We propose a computationally efficient algorithm which is able to produce accurate results on a large variety of unconstrained videos. This is obtained by casting the video segmentation problem as a voting scheme on the graph of similar (‘re-occurring’) regions in the video sequence. We start from crude saliency votes at each pixel, and iteratively correct those votes by ‘consensus voting’ of re-occurring regions across the video sequence. The power of our consensus voting comes from the non-locality of the region re-occurrence, both in space and in time – enabling propagation of diverse and rich information across the entire video sequence. Qualitative and quantitative experiments indicate that our approach outperforms current state-of-the-art methods.",
"title": ""
},
{
"docid": "c20e8853b7cb7b1ae55eb09732a1543f",
"text": "Activated carbon was prepared from coirpith by a chemical activation method and characterized. The adsorption of toxic heavy metals, Hg(II), Pb(II), Cd(II), Ni(II), and Cu(II) was studied using synthetic solutions and was reported elsewhere. In the present work the adsorption of toxic heavy metals from industrial wastewaters onto coirpith carbon was studied. The percent adsorption increased with increase in pH from 2 to 6 and remained constant up to 10. As coirpith is discarded as waste from coir processing industries, the resulting carbon is expected to be an economical product for the removal of toxic heavy metals from industrial wastewaters.",
"title": ""
},
{
"docid": "37b1c27feba98bfdeff0c048a1527b7e",
"text": "In this paper we study the problem of key phrase extraction from short texts written in Russian. As texts we consider messages posted on Internet car forums related to the purchase or repair of cars. The main assumption made is: the construction of lists of stop words for key phrase extraction can be effective if performed on the basis of a small, expert-marked collection. The results show that even a small number of texts marked by an expert can be enough to build an extended list of stop words. Extracted stop words allow to improve the quality of the key phrase extraction algorithm. Prior, we used a similar approach for key phrase extraction from scientific abstracts in the English language. In this paper we work with Russian texts. The obtained results show that the proposed approach works not only for texts that are appropriate in terms of structure and literacy, such as abstracts, but also for short texts, such as forum messages, in which many words may be misspelled and the text itself is poorly structured. Moreover, the results show that proposed approach works well not only with English texts, but also with texts in the Russian language.",
"title": ""
},
{
"docid": "367406644a29b4894df011b95add5985",
"text": "Graphs have long been proposed as a tool to browse and navigate in a collection of documents in order to support exploratory search. Many techniques to automatically extract different types of graphs, showing for example entities or concepts and different relationships between them, have been suggested. While experimental evidence that they are indeed helpful exists for some of them, it is largely unknown which type of graph is most helpful for a specific exploratory task. However, carrying out experimental comparisons with human subjects is challenging and time-consuming. Towards this end, we present the GraphDocExplore framework. It provides an intuitive web interface for graph-based document exploration that is optimized for experimental user studies. Through a generic graph interface, different methods to extract graphs from text can be plugged into the system. Hence, they can be compared at minimal implementation effort in an environment that ensures controlled comparisons. The system is publicly available under an open-source license.1",
"title": ""
},
{
"docid": "b4b0cbc448b45d337627b39029b6c60e",
"text": "Multi-task learning (MTL) improves the prediction performance on multiple, different but related, learning problems through shared parameters or representations. One of the most prominent multi-task learning algorithms is an extension to support vector machines (svm) by Evgeniou et al. [15]. Although very elegant, multi-task svm is inherently restricted by the fact that support vector machines require each class to be addressed explicitly with its own weight vector which, in a multi-task setting, requires the different learning tasks to share the same set of classes. This paper proposes an alternative formulation for multi-task learning by extending the recently published large margin nearest neighbor (lmnn) algorithm to the MTL paradigm. Instead of relying on separating hyperplanes, its decision function is based on the nearest neighbor rule which inherently extends to many classes and becomes a natural fit for multi-task learning. We evaluate the resulting multi-task lmnn on real-world insurance data and speech classification problems and show that it consistently outperforms single-task kNN under several metrics and state-of-the-art MTL classifiers.",
"title": ""
},
{
"docid": "eb271acef996a9ba0f84a50b5055953b",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "78921cbdbc80f714598d8fb9ae750c7e",
"text": "Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog, the so-called warded Datalog±, under set semantics. From a theoretical point of view, this allows us to reason on bag semantics by making use of the well-established theoretical foundations of set semantics. From a practical point of view, this allows us to handle the bag semantics of Datalog by powerful, existing query engines for the required extension of Datalog. This use of Datalog± is extended to give a set semantics to duplicates in Datalog± itself. We investigate the properties of the resulting Datalog± programs, the problem of deciding multiplicities, and expressibility of some bag operations. Moreover, the proposed translation has the potential for interesting applications such as to Multiset Relational Algebra and the semantic web query language SPARQL with bag semantics. 2012 ACM Subject Classification Information systems → Query languages; Theory of computation → Logic; Theory of computation → Semantics and reasoning",
"title": ""
},
{
"docid": "0fcdd0dabb19ad2f45a5422caff6f8ff",
"text": "Message transmission through internet as medium, is becoming increasingly popular. Hence issues like information security are becoming more relevant than earlier. This necessitates for a secure communication method to transmit messages via internet. Steganography is the science of communicating secret data in several multimedia carriers like audio, text, video or image. A modified technique to enhance the security of secret information over the network is presented in this paper. In this technique, we generate stegnokey with the help of slave image. Proposed technique provides multi-level secured message transmission. Experimental results show that the proposed technique is robust and maintains image quality. Index Terms – Steganography, Least-significant-bit (LSB) substitution, XORing pixel bits, master-slave image.",
"title": ""
},
{
"docid": "717b685f6d0ac94555dcf1b3d209b2be",
"text": "Human faces in surveillance videos often suffer from severe image blur, dramatic pose variations, and occlusion. In this paper, we propose a comprehensive framework based on Convolutional Neural Networks (CNN) to overcome challenges in video-based face recognition (VFR). First, to learn blur-robust face representations, we artificially blur training data composed of clear still images to account for a shortfall in real-world video training data. Using training data composed of both still images and artificially blurred data, CNN is encouraged to learn blur-insensitive features automatically. Second, to enhance robustness of CNN features to pose variations and occlusion, we propose a Trunk-Branch Ensemble CNN model (TBE-CNN), which extracts complementary information from holistic face images and patches cropped around facial components. TBE-CNN is an end-to-end model that extracts features efficiently by sharing the low- and middle-level convolutional layers between the trunk and branch networks. Third, to further promote the discriminative power of the representations learnt by TBE-CNN, we propose an improved triplet loss function. Systematic experiments justify the effectiveness of the proposed techniques. Most impressively, TBE-CNN achieves state-of-the-art performance on three popular video face databases: PaSC, COX Face, and YouTube Faces. With the proposed techniques, we also obtain the first place in the BTAS 2016 Video Person Recognition Evaluation.",
"title": ""
},
{
"docid": "3db1b5b7e0d1d52957343d58edbcad45",
"text": "This paper shows how we can combine logical representations of actions and decision theory in such a manner that seems natural for both. In particular we assume an axiomatization of the domain in terms of situation calculus, using what is essentially Reiter’s solution to the frame problem, in terms of the completion of the axioms defining the state change. Uncertainty is handled in terms of the independent choice logic, which allows for independent choices and a logic program that gives the consequences of the choices. As part of the consequences are a specification of the utility of (final) states. The robot adopts robot plans, similar to the GOLOG programming language. Within this logic, we can define the expected utility of a conditional plan, based on the axiomatization of the actions, the uncertainty and the utility. The ‘planning’ problem is to find the plan with the highest expected utility. This is related to recent structured representations for POMDPs; here we use stochastic situation calculus rules to specify the state transition function and the reward/value function. Finally we show that with stochastic frame axioms, actions representations in probabilistic STRIPS are exponentially larger than using the representation proposed here.",
"title": ""
},
{
"docid": "7a56c53ad149198c6a142ebaab2150f8",
"text": "OBJECTIVE\nTo determine the effectiveness of nutrition education intervention based on Pender's Health Promotion Model in improving the frequency and nutrient intake of breakfast consumption among female Iranian students.\n\n\nDESIGN\nThe quasi-experimental study based on Pender's Health Promotion Model was conducted during April-June 2011. Information (data) was collected by self-administered questionnaire. In addition, a 3 d breakfast record was analysed. P < 0·05 was considered significant.\n\n\nSETTING\nTwo middle schools in average-income areas of Qom, Iran.\n\n\nSUBJECTS\nOne hundred female middle-school students.\n\n\nRESULTS\nThere was a significant reduction in immediate competing demands and preferences, perceived barriers and negative activity-related affect constructs in the experimental group after education compared with the control group. In addition, perceived benefit, perceived self-efficacy, positive activity-related affect, interpersonal influences, situational influences, commitment to a plan of action, frequency and intakes of macronutrients and most micronutrients of breakfast consumption were also significantly higher in the experimental group compared with the control group after the nutrition education intervention.\n\n\nCONCLUSIONS\nConstructs of Pender's Health Promotion Model provide a suitable source for designing strategies and content of a nutrition education intervention for improving the frequency and nutrient intake of breakfast consumption among female students.",
"title": ""
},
{
"docid": "25ae93b0714ad39fafbb743caaf83c3a",
"text": "This paper investigates the effect of winding angle on composite overwrapped pressure vessel (COPV) manufactured by filament winding process where continuous fibers impregnated in resin and wound over a liner. Three dimensional shell model is considered for the structural analysis of pressure vessel. The study on COPV is carried out by considering carbon T300/epoxy material. The thickness of composite vessel is calculated by using netting analysis. The study focused on optimum winding angle, total deformation, stress generation and failure analysis of composite pressure vessel. The failure of COPV is predicted by using Tsai-Wu failure criteria. The classical laminate theory (CLT) and failure criteria is considered for analytical method and obtained results are compared with numerical results which are obtained from ANSYS workbench (ACP) for validation. This comparison further helps in predicting behavior of COPV for change in winding angle and operating internal pressure.",
"title": ""
},
{
"docid": "77cfb72acbc2f077c3d9b909b0a79e76",
"text": "In this paper, we analyze two general-purpose encoding types, trees and graphs systematically, focusing on trends over increasingly complex problems. Tree and graph encodings are similar in application but offer distinct advantages and disadvantages in genetic programming. We describe two implementations and discuss their evolvability. We then compare performance using symbolic regression on hundreds of random nonlinear target functions of both 1-dimensional and 8-dimensional cases. Results show the graph encoding has less bias for bloating solutions but is slower to converge and deleterious crossovers are more frequent. The graph encoding however is found to have computational benefits, suggesting it to be an advantageous trade-off between regression performance and computational effort.",
"title": ""
},
{
"docid": "0048b244bd55a724f9bcf4dbf5e551a8",
"text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.",
"title": ""
},
{
"docid": "db3c5c93daf97619ad927532266b3347",
"text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.",
"title": ""
},
{
"docid": "1b47dffdff3825ad44a0430311e2420b",
"text": "The present paper describes the SSM algorithm of protein structure comparison in three dimensions, which includes an original procedure of matching graphs built on the protein's secondary-structure elements, followed by an iterative three-dimensional alignment of protein backbone Calpha atoms. The SSM results are compared with those obtained from other protein comparison servers, and the advantages and disadvantages of different scores that are used for structure recognition are discussed. A new score, balancing the r.m.s.d. and alignment length Nalign, is proposed. It is found that different servers agree reasonably well on the new score, while showing considerable differences in r.m.s.d. and Nalign.",
"title": ""
},
{
"docid": "147a6ce22db736f475408d28d0398651",
"text": "Curating labeled training data has become the primary bottleneck in machine learning. Recent frameworks address this bottleneck with generative models to synthesize labels at scale from weak supervision sources. The generative model's dependency structure directly affects the quality of the estimated labels, but selecting a structure automatically without any labeled data is a distinct challenge. We propose a structure estimation method that maximizes the ℓ 1-regularized marginal pseudolikelihood of the observed data. Our analysis shows that the amount of unlabeled data required to identify the true structure scales sublinearly in the number of possible dependencies for a broad class of models. Simulations show that our method is 100× faster than a maximum likelihood approach and selects 1/4 as many extraneous dependencies. We also show that our method provides an average of 1.5 F1 points of improvement over existing, user-developed information extraction applications on real-world data such as PubMed journal abstracts.",
"title": ""
}
] |
scidocsrr
|
c6acf2a4f84f17af6c7c08abf5c9b079
|
Object-Oriented Modeling and Coordination of Mobile Robots
|
[
{
"docid": "290b56471b64e150e40211f7a51c1237",
"text": "Industrial robots are flexible machines that can be equipped with various sensors and tools to perform complex tasks. However, current robot programming languages are reaching their limits. They are not flexible and powerful enough to master the challenges posed by the intended future application areas. In the research project SoftRobot, a consortium of science and industry partners developed a software architecture that enables object-oriented software development for industrial robot systems using general-purpose programming languages. The requirements of current and future applications of industrial robots have been analysed and are reflected in the developed architecture. In this paper, an overview is given about this architecture as well as the goals that guided its development. A special focus is put on the design of the object-oriented Robotics API, which serves as a framework for developing complex robotic applications. It allows specifying real-time critical operations of robots and tools, including advanced concepts like sensor-based motions and multi-robot synchronization. The power and usefulness of the architecture is illustrated by several application examples. Its extensibility and reusability is evaluated and a comparison to other robotics frameworks is drawn.",
"title": ""
},
{
"docid": "d54168a9d8f10b43e24ff9d2cf87c2f0",
"text": "Mobile manipulators are of high interest to industry because of the increased flexibility and effectiveness they offer. The combination and coordination of the mobility provided by a mobile platform and of the manipulation capabilities provided by a robot arm leads to complex analytical problems for research. These problems can be studied very well on the KUKA youBot, a mobile manipulator designed for education and research applications. Issues still open in research include solving the inverse kinematics problem for the unified kinematics of the mobile manipulator, including handling the kinematic redundancy introduced by the holonomic platform of the KUKA youBot. As the KUKA youBot arm has only 5 degrees of freedom, a unified platform and manipulator system is needed to compensate for the missing degree of freedom. We present the KUKA youBot as an 8 degree of freedom serial kinematic chain, suggest appropriate redundancy parameters, and solve the inverse kinematics for the 8 degrees of freedom. This enables us to perform manipulation tasks more efficiently. We discuss implementation issues, present example applications and some preliminary experimental evaluation along with discussion about redundancies.",
"title": ""
}
] |
[
{
"docid": "e1a41e2c9ed279c0997c0ba87b8c2558",
"text": "Foot morphology and function has received increasing attention from both biomechanics researchers and footwear manufacturers. In this study, 168 habitually unshod runners (90 males whose age, weight & height were 23±2.4 years, 66±7.1 kg & 1.68±0.13 m and 78 females whose age, weight & height were 22±1.8 years, 55±4.7 kg & 1.6±0.11 m) (Indians) and 196 shod runners (130 males whose age, weight & height were 24±2.6 years, 66±8.2 kg & 1.72±0.18 m and 66 females whose age, weight & height were 23±1.5 years, 54±5.6 kg & 1.62±0.15 m) (Chinese) participated in a foot scanning test using the easy-foot-scan (a three-dimensional foot scanning system) to obtain 3D foot surface data and 2D footprint imaging. Foot length, foot width, hallux angle and minimal distance from hallux to second toe were calculated to analyze foot morphological differences. This study found that significant differences exist between groups (shod Chinese and unshod Indians) for foot length (female p = 0.001), width (female p = 0.001), hallux angle (male and female p = 0.001) and the minimal distance (male and female p = 0.001) from hallux to second toe. This study suggests that significant differences in morphology between different ethnicities could be considered for future investigation of locomotion biomechanics characteristics between ethnicities and inform last shape and design so as to reduce injury risks and poor performance from mal-fit shoes.",
"title": ""
},
{
"docid": "bfa05618da56c23cca87cd820c674fdf",
"text": "Mobile and location-based media refer to technologies that can openly and dynamically portray the characteristics of the users and their mundane life. Facebook check-ins highlights physical and informational mobility of the users relating individual activities into spaces. This study explored how personality traits like extraversion and narcissism function to influence self-disclosure that, in turn, impacts the intensity of check-ins on Facebook. Using survey data collected through Facebook check-in users in Taiwan (N 1⁄4 523), the results demonstrated that although extraversion and narcissism might not directly impact check-in intensity on Facebook, the indirect effects of selfdisclosure and exhibitionism were particularly salient. Moreover, a complete path from extraversion to Facebook check-in through self-disclosure and exhibitionism was discovered. Theoretical implications on human mobility and selective self-presentation are also discussed.",
"title": ""
},
{
"docid": "0939a703cb2eeb9396c4e681f95e1e4d",
"text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.",
"title": ""
},
{
"docid": "e6332552fb29765414020ee97184cc07",
"text": "In A History of God, Karen Armstrong describes a division, made by fourth century Christians, between kerygma and dogma: 'religious truth … capable of being expressed and defined clearly and logically,' versus 'religious insights [that] had an inner resonance that could only be apprehended by each individual in his own time during … contemplation' (Armstrong, 1993, p.114). This early dual-process theory had its roots in Plato and Aristotle, who suggested a division between 'philosophy,' which could be 'expressed in terms of reason and thus capable of proof,' and knowledge contained in myths, 'which eluded scientific demonstration' (Armstrong, 1993, 113–14). This division—between what can be known and reasoned logically versus what can only be experienced and apprehended—continued to influence Western culture through the centuries, and arguably underlies our current dual-process theories of reasoning. In psychology, the division between these two forms of understanding have been described in many different ways. The underlying theme of 'overtly reasoned' versus 'perceived, intuited' often ties these dual process theories together. In Western culture, the latter form of thinking has often been maligned (Dijksterhuis and Nordgren, 2006; Gladwell, 2005; Lieberman, 2000). Recently, cultural psychologists have suggested that although the distinction itself—between reasoned and intuited knowl-edge—may have precedents in the intellectual traditions of other cultures, the privileging of the former rather than the latter may be peculiar to Western cultures The Chinese philosophical tradition illustrates this difference of emphasis. Instead of an epistemology that was guided by abstract rules, 'the Chinese in esteeming what was immediately percepti-ble—especially visually perceptible—sought intuitive instantaneous understanding through direct perception' (Nakamura, 1960/1988, p.171). Taoism—the great Chinese philosophical school besides Confucianism—developed an epistemology that was particularly oriented towards concrete perception and direct experience (Fung, 1922; Nakamura, 1960/1988). Moreover, whereas the Greeks were concerned with definitions and devising rules for the purposes of classification, for many influential Taoist philosophers, such as Chuang Tzu, '… the problem of … how terms and attributes are to be delimited, leads one in precisely the wrong direction. Classifying or limiting knowledge fractures the greater knowledge' (Mote, 1971, p.102).",
"title": ""
},
{
"docid": "9c8e773dde5e999ac31a1a4bd279c24d",
"text": "The efficiency of wireless power transfer (WPT) systems is highly dependent on the load, which may change in a wide range in field applications. Besides, the detuning of WPT systems caused by the component tolerance and aging of inductors and capacitors can also decrease the system efficiency. In order to track the maximum system efficiency under varied loads and detuning conditions in real time, an active single-phase rectifier (ASPR) with an auxiliary measurement coil (AMC) and its corresponding control method are proposed in this paper. Both the equivalent load impedance and the output voltage can be regulated by the ASPR and the inverter, separately. First, the fundamental harmonic analysis model is established to analyze the influence of the load and the detuning on the system efficiency. Second, the soft-switching conditions and the equivalent input impedance of ASPR with different phase shifts and pulse widths are investigated in detail. Then, the analysis of the AMC and the maximum efficiency control strategy are provided in detail. Finally, an 800-W prototype is set up to validate the performance of the proposed method. The experimental results show that with 10% tolerance of the resonant capacitor in the receiver side, the system efficiency with the proposed approach reaches 91.7% at rated 800-W load and 91.1% at 300-W light load, which has an improvement by 2% and 10% separately compared with the traditional diode rectifier.",
"title": ""
},
{
"docid": "91cf217b2c5fa968bc4e893366ec53e1",
"text": "Importance\nPostpartum hypertension complicates approximately 2% of pregnancies and, similar to antepartum severe hypertension, can have devastating consequences including maternal death.\n\n\nObjective\nThis review aims to increase the knowledge and skills of women's health care providers in understanding, diagnosing, and managing hypertension in the postpartum period.\n\n\nResults\nHypertension complicating pregnancy, including postpartum, is defined as systolic blood pressure 140 mm Hg or greater and/or diastolic blood pressure 90 mm Hg or greater on 2 or more occasions at least 4 hours apart. Severe hypertension is defined as systolic blood pressure 160 mm Hg or greater and/or diastolic blood pressure 110 mm Hg or greater on 2 or more occasions repeated at a short interval (minutes). Workup for secondary causes of hypertension should be pursued, especially in patients with severe or resistant hypertension, hypokalemia, abnormal creatinine, or a strong family history of renal disease. Because severe hypertension is known to cause maternal stroke, women with severe hypertension sustained over 15 minutes during pregnancy or in the postpartum period should be treated with fast-acting antihypertension medication. Labetalol, hydralazine, and nifedipine are all effective for acute management, although nifedipine may work the fastest. For persistent postpartum hypertension, a long-acting antihypertensive agent should be started. Labetalol and nifedipine are also both effective, but labetalol may achieve control at a lower dose with fewer adverse effects.\n\n\nConclusions and Relevance\nProviders must be aware of the risks associated with postpartum hypertension and educate women about the symptoms of postpartum preeclampsia. Severe acute hypertension should be treated in a timely fashion to avoid morbidity and mortality. Women with persistent postpartum hypertension should be administered a long-acting antihypertensive agent.\n\n\nTarget Audience\nObstetricians and gynecologists, family physicians.\n\n\nLearning Objectives\nAfter completing this activity, the learner should be better able to assist patients and providers in identifying postpartum hypertension; provide a framework for the evaluation of new-onset postpartum hypertension; and provide instructions for the management of acute severe and persistent postpartum hypertension.",
"title": ""
},
{
"docid": "c42d1ee7a6b947e94eeb6c772e2b638f",
"text": "As mobile devices are equipped with more memory and computational capability, a novel peer-to-peer communication model for mobile cloud computing is proposed to interconnect nearby mobile devices through various short range radio communication technologies to form mobile cloudlets, where every mobile device works as either a computational service provider or a client of a service requester. Though this kind of computation offloading benefits compute-intensive applications, the corresponding service models and analytics tools are remaining open issues. In this paper we categorize computation offloading into three modes: remote cloud service mode, connected ad hoc cloudlet service mode, and opportunistic ad hoc cloudlet service mode. We also conduct a detailed analytic study for the proposed three modes of computation offloading at ad hoc cloudlet.",
"title": ""
},
{
"docid": "42af6ec7bc66a2ff9aa0d7bc90f9d76a",
"text": "In this paper, we propose a novel scene detection algorithm which employs semantic, visual, textual, and audio cues. We also show how the hierarchical decomposition of the storytelling video structure can improve retrieval results presentation with semantically and aesthetically effective thumbnails. Our method is built upon two advancements of the state of the art: first is semantic feature extraction which builds video-specific concept detectors; and second is multimodal feature embedding learning that maps the feature vector of a shot to a space in which the Euclidean distance has task specific semantic properties. The proposed method is able to decompose the video in annotated temporal segments which allow us for a query specific thumbnail extraction. Extensive experiments are performed on different data sets to demonstrate the effectiveness of our algorithm. An in-depth discussion on how to deal with the subjectivity of the task is conducted and a strategy to overcome the problem is suggested.",
"title": ""
},
{
"docid": "bd963a55c28304493118028fe5f47bab",
"text": "Tables are a common structuring element in many documents, s uch as PDF files. To reuse such tables, appropriate methods need to b e develop, which capture the structure and the content information. We have d e loped several heuristics which together recognize and decompose tables i n PDF files and store the extracted data in a structured data format (XML) for easi er reuse. Additionally, we implemented a prototype, which gives the user the ab ility of making adjustments on the extracted data. Our work shows that purel y heuristic-based approaches can achieve good results, especially for lucid t ables.",
"title": ""
},
{
"docid": "f59096137378d49c81bcb1de0be832b2",
"text": "Here the transformation related to the fast Fourier strategy mainly used in the field oriented well effective operations of the strategy elated to the scenario of the design oriented fashion in its implementation related to the well efficient strategy of the processing of the signal in the digital domain plays a crucial role in its analysis point of view in well oriented fashion respectively. It can also be applicable for the processing of the images and there is a crucial in its analysis in terms of the pixel wise process takes place in the system in well effective manner respectively. There is a vast number of the applications oriented strategy takes place in the system in w ell effective manner in the system based implementation followed by the well efficient analysis point of view in well stipulated fashion of the transformation related to the fast Fourier strategy plays a crucial role and some of them includes analysis of the signal, Filtering of the sound and also the compression of the data equations of the partial differential strategy plays a major role and the responsibility in its implementation scenario in a well oriented fashion respectively. There is a huge amount of the efficient analysis of the system related to the strategy of the transformation of the fast Fourier environment plays a crucial role and the responsibility for the effective implementation of the DFT in well respective fashion. Here in the present system oriented strategy DFT implementation takes place in a well explicit manner followed by the well effective analysis of the system where domain related to the time based strategy of the decimation plays a crucial role in its implementation aspect in well effective fashion respectively. Experiments have been conducted on the present method where there is a lot of analysis takes place on the large number of the huge datasets in a well oriented fashion with respect to the different environmental strategy and there is an implementation of the system in a well effective manner in terms of the improvement in the performance followed by the outcome of the entire system in well oriented fashion respectively.",
"title": ""
},
{
"docid": "4017069ba9b79f316d8cab584c06f853",
"text": "We examine the scenario in which a mobile network of robots must search, survey, or cover an environment and communication is restricted by relative location. While many algorithms choose to maintain a connected network at all times while performing such tasks, we relax this requirement and examine the use of periodic connectivity, where the network must regain connectivity at a fixed interval. We propose an online algorithm that scales linearly in the number of robots and allows for arbitrary periodic connectivity constraints. To complement the proposed algorithm, we provide theoretical inapproximability results for connectivity-constrained planning. Finally, we validate our approach in the coordinated search domain in simulation and in real-world experiments.",
"title": ""
},
{
"docid": "70260a7ce550830c7771b3e6004ebd41",
"text": "Due to the increasing requirements for transmission of images in computer, mobile environments, the research in the field of image compression has increased significantly. Image compression plays a crucial role in digital image processing, it is also very important for efficient transmission and storage of images. When we compute the number of bits per image resulting from typical sampling rates and quantization methods, we find that Image compression is needed. Therefore development of efficient techniques for image compression has become necessary .This paper is a survey for lossy image compression using Discrete Cosine Transform, it covers JPEG compression algorithm which is used for full-colour still image applications and describes all the components of it.",
"title": ""
},
{
"docid": "9bff76e87f4bfa3629e38621060050f7",
"text": "Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction. In this paper, we induce high-quality training labels for the task of figure extraction in a large number of scientific documents, with no human intervention. To accomplish this we leverage the auxiliary data provided in two large web collections of scientific documents (arXiv and PubMed) to locate figures and their associated captions in the rasterized PDF. We share the resulting dataset of over 5.5 million induced labels---4,000 times larger than the previous largest figure extraction dataset---with an average precision of 96.8%, to enable the development of modern data-driven methods for this task. We use this dataset to train a deep neural network for end-to-end figure detection, yielding a model that can be more easily extended to new domains compared to previous work. The model was successfully deployed in Semantic Scholar,\\footnote\\urlhttps://www.semanticscholar.org/ a large-scale academic search engine, and used to extract figures in 13 million scientific documents.\\footnoteA demo of our system is available at \\urlhttp://labs.semanticscholar.org/deepfigures/,and our dataset of induced labels can be downloaded at \\urlhttps://s3-us-west-2.amazonaws.com/ai2-s2-research-public/deepfigures/jcdl-deepfigures-labels.tar.gz. Code to run our system locally can be found at \\urlhttps://github.com/allenai/deepfigures-open.",
"title": ""
},
{
"docid": "422183692a08138189271d4d7af407c7",
"text": "Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data. Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.",
"title": ""
},
{
"docid": "0815549f210c57b28a7e2fc87c20f616",
"text": "Portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time–frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.",
"title": ""
},
{
"docid": "d31c6830ee11fc73b53c7930ad0e638f",
"text": "This paper proposes two rectangular ring planar monopole antennas for wideband and ultra-wideband applications. Simple planar rectangular rings are used to design the planar antennas. These rectangular rings are designed in a way to achieve the wideband operations. The operating frequency band ranges from 1.85 GHz to 4.95 GHz and 3.12 GHz to 14.15 GHz. The gain varies from 1.83 dBi to 2.89 dBi for rectangular ring wideband antenna and 1.89 dBi to 5.2 dBi for rectangular ring ultra-wideband antenna. The design approach and the results are discussed.",
"title": ""
},
{
"docid": "b47d863479f1912ed8be154df188d4af",
"text": "This paper describes a new approach t o probabilistic roadmap planners (PRMs). The overall theme of the algorithm, called Lazy PRM, i s to minimize the number of collision checks performed during planning and hence minimize the running t ime of the planner. Our algorithm builds a roadmap in the configuration space, whose nodes are the user-defined initial and goal configurations and a number of randomly generated nodes. Neighboring nodes are connected by edges representing paths between the nodes. In contrast with PRMs, our planner initially assumes that all nodes and edges in the roadmap are collision-free, and searches the roadmap at hand for a shortest path between the initial and the goal node. The nodes and edges along the path are then checked for collision. If a collision with the obstacles occurs, the corresponding nodes and edges are removed fFom the roadmap. Our planner either finds a new shortest path, or first updates the roadmap with new nodes and edges, and then searches for a shortest path. The above process i s repeated until a collision-free path is returned. Lazy P R M is tailored to eficiently answer single planning queries, but can also be used for multiple queries. Experimental results presented in this paper show that our lazy method i s very eficient in practice.",
"title": ""
},
{
"docid": "0ff3e49a700a776c1a8f748d78bc4b73",
"text": "Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.",
"title": ""
},
{
"docid": "5d934dd45e812336ad12cee90d1e8cdf",
"text": "As research on the connection between narcissism and social networking site (SNS) use grows, definitions of SNS and measurements of their use continue to vary, leading to conflicting results. To improve understanding of the relationship between narcissism and SNS use, as well as the implications of differences in definition and measurement, we examine two ways of measuring Facebook and Twitter use by testing the hypothesis that SNS use is positively associated with narcissism. We also explore the relation between these types of SNS use and different components of narcissism within college students and general adult samples. Our findings suggest that for college students, posting on Twitter is associated with the Superiority component of narcissistic personality while Facebook posting is associated with the Exhibitionism component. Conversely, adults high in Superiority post on Facebook more rather than Twitter. For adults, Facebook and Twitter are both used more by those focused on their own appearances but not as a means of showing off, as is the case with college students. Given these differences, it is essential for future studies of SNS use and personality traits to distinguish between different types of SNS, different populations, and different types of use. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fe3dfb844ec09b743032c0475c669b2c",
"text": "The significant changes enabled by the fog computing had demonstrated that Internet of Things (IoT) urgently needs more evolutional reforms. Limited by the inflexible design philosophy; the traditional structure of a network is hard to meet the latest demands. However, Information-Centric Networking (ICN) is a promising option to bridge and cover these enormous gaps. In this paper, a Smart Collaborative Caching (SCC) scheme is established by leveraging high-level ICN principles for IoT within fog computing paradigm. The proposed solution is supposed to be utilized in resource pooling, content storing, node locating and other related situations. By investigating the available characteristics of ICN, some challenges of such combination are reviewed in depth. The details of building SCC, including basic model and advanced algorithms, are presented based on theoretical analysis and simplified examples. The validation focuses on two typical scenarios: simple status inquiry and complex content sharing. The number of clusters, packet loss probability and other parameters are also considered. The analytical results demonstrate that the performance of our scheme, regarding total packet number and average transmission latency, can outperform that of the original ones. We expect that the SCC will contribute an efficient solution to the related studies.",
"title": ""
}
] |
scidocsrr
|
bbbd2e882ecb8f9a470c41153423311a
|
Investigations on the tensile force at the multi-wire needle winding process
|
[
{
"docid": "aaa28b62e4b7ce103c6c26a38c66eb9c",
"text": "Slowly but steadily, more and more electric vehicles push onto the consumer market. For a cost efficient production of electrical engines, in first-class quality and in sufficient quantity, it is indispensable to understand the process of coil winding. Thereby the prediction of the wire behavior is one of the key challenges. Therefore, a detailed model is built to investigate wire behavior during the linear winding process. The finite element based simulation tool LS-DYNA serves as explicit dynamics tool. To represent the high dynamic process of winding within this simulation, some first adaptions have to be made. This means, that dynamic influences such as rotational speed or acceleration of the coil body are definable. Within process simulation, the given boundary conditions are applied to the model. The material properties of the wire under scrutiny are validated by a tensile test and by the values out of datasheets in previous research. In order to achieve the best convergence, different contact algorithms are selected for each individual contact behavior. Furthermore, specific adjustments to the mesh are necessary to gain significant results. State of the art in coil winding is an experimental procedure, which delivers adequate process parameters and, thus, expertise in winding technology. Nevertheless, there are a lot of different, interacting parameters, which have to be optimized in terms of boundary conditions. The simulation model of winding process, in which varying parameters can be optimized pertaining to the optimal winding result, calls for extensive research in this field. The generated model enables the user not only to influence the process parameters but also to modify the geometry of a winding body. To make the simulation scientifically sound, it is validated by previous experiments and simulations",
"title": ""
},
{
"docid": "3653e29e71d70965317eb4c450bc28da",
"text": "This paper comprises an overview of different aspects for wire tension control devices and algorithms according to the state of industrial use and state of research. Based on a typical winding task of an orthocyclic winding scheme, possible new principles for an alternative piezo-electric actuator and an electromechanical tension control will be derived and presented.",
"title": ""
},
{
"docid": "06cfb7d14b50c24dc84ae14be8d525d1",
"text": "Distributed round-wire windings are usually manufactured using the insertion technology. If the needle winding technology is applied instead the end windings have to be conducted in a three-layer axial arrangement. This leads to differing coil lengths and thus to a phase asymmetry which is much more distinct than the one resulting from the insertion technology. In addition, it is possible that the first phase exhibits a higher end winding leakage inductance than the other phases if the distance between the first phase and the front end side of the stator core is too short. In this case the magnetic flux lines of the end windings partially close across the stator core producing an increase of the end winding leakage inductance. Therefore, in this paper the impact of the needle winding technology on the operational behavior of an asynchronous machine is investigated. For this purpose a needle wound electrical machine with three-layer end windings is compared to an electrical machine with very symmetric windings built up using manual insertion. By the use of the no load and blocked rotor test as well as a static stator measurement the machine parameters are determined and the impact of the phase asymmetry is investigated. In addition, load measurements are conducted in order to quantify the impact of the production related differences.",
"title": ""
}
] |
[
{
"docid": "a456e0d4a421fbae34cbbb3ca6217fa1",
"text": "Software-Defined Networking (SDN) is an emerging network architecture, centralized in the SDN controller entity, that decouples the control plane from the data plane. This controller-based solution allows programmability, and dynamic network reconfigurations, providing decision taking with global knowledge of the network. Currently, there are more than thirty SDN controllers with different features, such as communication protocol version, programming language, and architecture. Beyond that, there are also many studies about controller performance with the goal to identify the best one. However, some conclusions have been unjust because benchmark tests did not follow the same methodology, or controllers were not in the same category. Therefore, a standard benchmark methodology is essential to compare controllers fairly. The standardization can clarify and help us to understand the real behavior and weaknesses of an SDN controller. The main goal of this work-in-progress is to show existing benchmark methodologies, bringing a discussion about the need SDN controller benchmark standardization.",
"title": ""
},
{
"docid": "c1b955d77936e641f2ac05cb57fa91ed",
"text": "A theoretical model describing interpersonal trust in close relationships is presented. Three dimensions of trust are identified, based on the type of attributions drawn about a partner's motives. These dimensions are also characterized by a developmental progression in the relationship. The validity of this theoretical perspective was examined through evidence obtained from a survey of a heterogeneous sample of established couples. An analysis of the Trust Scale in this sample was consistent with the notion that the predictability, dependability, and faith components represent distinct and coherent dimensions. A scale to measure interpersonal motives was also developed. The perception of intrinsic motives in a partner emerged as a dimension, as did instrumental and extrinsic motives. As expected, love and happiness were closely tied to feelings of faith and the attribution of intrinsic motivation to both self and partner. Women appeared to have more integrated, complex views of their relationships than men: All three forms of trust were strongly related and attributions of instrumental motives in their partners seemed to be self-affirming. Finally, there was a tendency for people to view their own motives as less self-centered and more exclusively intrinsic in flavor than their partner's motives.",
"title": ""
},
{
"docid": "caad330df7dd6feb957af45a5dcfc524",
"text": "FPGA-based hardware accelerator for convolutional neural networks (CNNs) has obtained great attentions due to its higher energy efficiency than GPUs. However, it has been a challenge for FPGA-based solutions to achieve a higher throughput than GPU counterparts. In this paper, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline (temporal parallelism) stages. Experiment results show that the proposed architecture running at 90 MHz on a Xilinx Virtex-7 FPGA achieves a computing throughput of 7.663 TOPS with a power consumption of 8.2 W regardless of the batch size of input data. This is 8.3x faster and 75x more energy-efficient than a Titan X GPU for processing online individual requests (in small batch size). For processing static data (in large batch size), the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5x higher energy efficiency.",
"title": ""
},
{
"docid": "3833e548f316f7c4e93cb49ec278379e",
"text": "Computational thinking (CT) is increasingly seen as a core literacy skill for the modern world on par with the longestablished skills of reading, writing, and arithmetic. To promote the learning of CT at a young age we capitalized on children's interest in play. We designed RabBit EscApe, a board game that challenges children, ages 610, to orient tangible, magnetized manipulatives to complete or create paths. We also ran an informal study to investigate the effectiveness of the game in fostering children's problemsolving capacity during collaborative game play. We used the results to inform our instructional interaction design that we think will better support the learning activities and help children hone the involved CT skills. Overall, we believe in the power of such games to challenge children to grow their understanding of CT in a focused and engaging activity.",
"title": ""
},
{
"docid": "3daa9fc7d434f8a7da84dd92f0665564",
"text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. Graph cut based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).",
"title": ""
},
{
"docid": "933f8ba333e8cbef574b56348872b313",
"text": "Automatic image annotation has been an important research topic in facilitating large scale image management and retrieval. Existing methods focus on learning image-tag correlation or correlation between tags to improve annotation accuracy. However, most of these methods evaluate their performance using top-k retrieval performance, where k is fixed. Although such setting gives convenience for comparing different methods, it is not the natural way that humans annotate images. The number of annotated tags should depend on image contents. Inspired by the recent progress in machine translation and image captioning, we propose a novel Recurrent Image Annotator (RIA) model that forms image annotation task as a sequence generation problem so that RIA can natively predict the proper length of tags according to image contents. We evaluate the proposed model on various image annotation datasets. In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high quality baseline for the arbitrary length image tagging task. Moreover, the results of our experiments show that the order of tags in training phase has a great impact on the final annotation performance.",
"title": ""
},
{
"docid": "951c29150649a6ea8342b722bf39855c",
"text": "A method is proposed to enhance vascular structures within the framework of scale space theory. We combine a smooth vessel filter which is based on a geometrical analysis of the Hessian's eigensystem, with a non-linear anisotropic diffusion scheme. The amount and orientation of diffusion depend on the local vessel likeliness. Vessel enhancing diffusion (VED) is applied to patient and phantom data and compared to linear, regularized Perona-Malik, edge and coherence enhancing diffusion. The method performs better than most of the existing techniques in visualizing vessels with varying radii and in enhancing vessel appearance. A diameter study on phantom data shows that VED least affects the accuracy of diameter measurements. It is shown that using VED as a preprocessing step improves level set based segmentation of the cerebral vasculature, in particular segmentation of the smaller vessels of the vasculature.",
"title": ""
},
{
"docid": "efe2372e5b15ec1d04ac2b0d787a3c4e",
"text": "Social media records the thoughts and activities of countless cultures and subcultures around the globe. Yet institutional efforts to archive social media content remain controversial. We report on 988 responses across six surveys of social media users that included questions to explore this controversy. The quantitative and qualitative results show that the way people think about the issue depends on how personal and ephemeral they view the content to be. They use concepts such as creator privacy, content characteristics, technological capabilities, perceived legal rights, and intrinsic social good to reason about the boundaries of institutional social media archiving efforts.",
"title": ""
},
{
"docid": "ae800ced5663d320fcaca2df6f6bf793",
"text": "Stowage planning for container vessels concerns the core competence of the shipping lines. As such, automated stowage planning has attracted much research in the past two decades, but with few documented successes. In an ongoing project, we are developing a prototype stowage planning system aiming for large containerships. The system consists of three modules: the stowage plan generator, the stability adjustment module, and the optimization engine. This paper mainly focuses on the stability adjustment module. The objective of the stability adjustment module is to check the global ship stability of the stowage plan produced by the stowage plan generator and resolve the stability issues by applying a heuristic algorithm to search for alternative feasible locations for containers that violate some of the stability criteria. We demonstrate that the procedure proposed is capable of solving the stability problems for a large containership with more than 5000 TEUs. Keywords— Automation, Stowage Planning, Local Search, Heuristic algorithm, Stability Optimization",
"title": ""
},
{
"docid": "91414c022cad78ee98b7662647253340",
"text": "Biometric based authentication, particularly for fingerprint authentication systems play a vital role in identifying an individual. The existing fingerprint authentication systems depend on specific points known as minutiae for recognizing an individual. Designing a reliable automatic fingerprint authentication system is still very challenging, since not all fingerprint information is available. Further, the information obtained is not always accurate due to cuts, scars, sweat, distortion and various skin conditions. Moreover, the existing fingerprint authentication systems do not utilize other significant minutiae information, which can improve the accuracy. Various local feature detectors such as Difference-of-Gaussian, Hessian, Hessian Laplace, Harris Laplace, Multiscale Harris, and Multiscale Hessian have been extensively used for feature detection. However, these detectors have not been employed for detecting fingerprint image features. In this article, a versatile local feature fingerprint matching scheme is proposed. The local features are obtained by exploiting these local geometric detectors and SIFT descriptor. This scheme considers local characteristic features of the fingerprint image, thus eliminating the issues caused in existing fingerprint feature based matching techniques. Computer simulations of the proposed algorithm on specific databases show significant improvements when compared to existing fingerprint matchers, such as minutiae matcher, hierarchical matcher and graph based matcher. Computer simulations conducted on the Neurotechnology database demonstrates a very low Equal Error Rate (EER) of 0.8%. The proposed system a) improves the accuracy of the fingerprint authentication system, b) works when the minutiae information is sparse, and c) produces satisfactory matching accuracy in the case when minutiae information is unavailable. The proposed system can also be employed for partial fingerprint authentication.",
"title": ""
},
{
"docid": "16fc6497979fd2a3cde2f133792be32e",
"text": "Craniofacial duplication (diprosopus) is a rare form of conjoined twins. A case of monocephalus diprosopus with anencephaly, cervicothoracolumbar rachischisis, and duplication of the respiratory tract and upper gastrointestinal tract is reported. The cardiovascular system remained single but the heart showed transposition of the great vessels. We present this case due to its rarity, and compare our pathologic findings with those already reported.",
"title": ""
},
{
"docid": "62b8b95579e387913198cd4adc77eb84",
"text": "This paper aims to solve a fundamental problem in intensitybased 2D/3D registration, which concerns the limited capture range and need for very good initialization of state-of-the-art image registration methods. We propose a regression approach that learns to predict rotation and translations of arbitrary 2D image slices from 3D volumes, with respect to a learned canonical atlas co-ordinate system. To this end, we utilize Convolutional Neural Networks (CNNs) to learn the highly complex regression function that maps 2D image slices into their correct position and orientation in 3D space. Our approach is attractive in challenging imaging scenarios, where significant subject motion complicates reconstruction performance of 3D volumes from 2D slice data. We extensively evaluate the effectiveness of our approach quantitatively on simulated MRI brain data with extreme random motion. We further demonstrate qualitative results on fetal MRI where our method is integrated into a full reconstruction and motion compensation pipeline. With our CNN regression approach we obtain an average prediction error of 7mm on simulated data, and convincing reconstruction quality of images of very young fetuses where previous methods fail. We further discuss applications to Computed Tomography and X-ray projections. Our approach is a general solution to the 2D/3D initialization problem. It is computationally efficient, with prediction times per slice of a few milliseconds, making it suitable for real-time scenarios.",
"title": ""
},
{
"docid": "9f660caf74f1708339f7ca2ee067dc95",
"text": "Abstruct-Vehicle following and its effects on traffic flow has been an active area of research. Human driving involves reaction times, delays, and human errors that affect traffic flow adversely. One way to eliminate human errors and delays in vehicle following is to replace the human driver with a computer control system and sensors. The purpose of this paper is to develop an autonomous intelligent cruise control (AICC) system for automatic vehicle following, examine its effect on traffic flow, and compare its performance with that of the human driver models. The AICC system developed is not cooperative; Le., it does not exchange information with other vehicles and yet is not susceptible to oscillations and \" slinky \" effects. The elimination of the \" slinky \" effect is achieved by using a safety distance separation rule that is proportional to the vehicle velocity (constant time headway) and by designing the control system appropriately. The performance of the AICC system is found to be superior to that of the human driver models considered. It has a faster and better transient response that leads to a much smoother and faster traffic flow. Computer simulations are used to study the performance of the proposed AICC system and analyze vehicle following in a single lane, without passing, under manual and automatic control. In addition, several emergency situations that include emergency stopping and cut-in cases were simulated. The simulation results demonstrate the effectiveness of the AICC system and its potentially beneficial effects on traffic flow.",
"title": ""
},
{
"docid": "5f70d96454e4a6b8d2ce63bc73c0765f",
"text": "The Natural Language Processing group at the University of Szeged has been involved in human language technology research since 1998, and by now, it has become one of the leading workshops of Hungarian computational linguistics. Both computer scientists and linguists enrich the team with their knowledge, moreover, MSc and PhD students are also involved in research activities. The team has gained expertise in the fields of information extraction, implementing basic language processing toolkits and creating language resources. The Group is primarily engaged in processing Hungarian and English texts and its general objective is to develop language-independent or easily adaptable technologies. With the creation of the manually annotated Szeged Corpus and TreeBank, as well as the Hungarian WordNet, SzegedNE and other corpora it has become possible to apply machine learning based methods for the syntactic and semantic analysis of Hungarian texts, which is one of the strengths of the group. They have also implemented novel solutions for the morphological and syntactic parsing of morphologically rich languages and they have also published seminal papers on computational semantics, i.e. uncertainty detection and multiword expressions. They have developed tools for basic linguistic processing of Hungarian, for named entity recognition and for keyphrase extraction, which can all be easily integrated into large-scale systems and are optimizable for the specific needs of the given application. Currently, the group’s research activities focus on the processing of non-canonical texts (e.g. social media texts) and on the implementation of a syntactic parser for Hungarian, among others.",
"title": ""
},
{
"docid": "47317a0decf7a211b5550027e650c35e",
"text": "RT-qPCR is the accepted technique for the quantification of microRNA (miR) expression: however, stem-loop RT-PCR, the most frequently used method for quantification of miRs, is time- and reagent-consuming as well as inconvenient for scanning. We established a new method called 'universal stem-loop primer' (USLP) with 8 random nucleotides instead of a specific sequence at the 3' end of the traditional stem-loop primer (TSLP), for screening miR profile and to semi-quantify expression of miRs. Peripheral blood samples were cultured with phytohaemagglutinin (PHA), and then 87 candidate miRs were scanned in cultured T cells. By USLP, our study revealed that the expression of miR-150-5p (miR-150) decreased nearly 10-fold, and miR-155-5p (miR-155) increased more than 7-fold after treated with PHA. The results of the dissociation curve and gel electrophoresis showed that the PCR production of the USLP and TSLP were specificity. The USLP method has high precision because of its low ICV (ICV<2.5%). The sensitivity of the USLP is up to 103 copies/µl miR. As compared with the TSLP, USLP saved 75% the cost of primers and 60% of the test time. The USLP method is a simple, rapid, precise, sensitive, and cost-effective approach that is suitable for screening miR profiles.",
"title": ""
},
{
"docid": "48c9877043b59f3ed69aef3cbd807de7",
"text": "This paper presents an ontology-based approach for data quality inference on streaming observation data originating from large-scale sensor networks. We evaluate this approach in the context of an existing river basin monitoring program called the Intelligent River®. Our current methods for data quality evaluation are compared with the ontology-based inference methods described in this paper. We present an architecture that incorporates semantic inference into a publish/subscribe messaging middleware, allowing data quality inference to occur on real-time data streams. Our preliminary benchmark results indicate delays of 100ms for basic data quality checks based on an existing semantic web software framework. We demonstrate how these results can be maintained under increasing sensor data traffic rates by allowing inference software agents to work in parallel. These results indicate that data quality inference using the semantic sensor network paradigm is viable solution for data intensive, large-scale sensor networks.",
"title": ""
},
{
"docid": "216a65890d4256f56069e75879156550",
"text": "We address how listeners perceive temporal regularity in music performances, which are rich in temporal irregularities. A computational model is described in which a small system of internal self-sustained oscillations, operating at different periods with specific phase and period relations, entrains to the rhythms of music performances. Based on temporal expectancies embodied by the oscillations, the model predicts the categorization of temporally changing event intervals into discrete metrical categories, as well as the perceptual salience of deviations from these categories. The model’s predictions are tested in two experiments using piano performances of the same music with different phrase structure interpretations (Experiment 1) or different melodic interpretations (Experiment 2). The model successfully tracked temporal regularity amidst the temporal fluctuations found in the performances. The model’s sensitivity to performed deviations from its temporal expectations compared favorably with the performers’ structural (phrasal and melodic) intentions. Furthermore, the model tracked normal performances (with increased temporal variability) better than performances in which temporal fluctuations associated with individual voices were removed (with decreased variability). The small, systematic temporal irregularities characteristic of human performances (chord asynchronies) improved tracking, but randomly generated temporal irregularities did not. These findings suggest that perception of temporal regularity in complex musical sequences is based on temporal expectancies that adapt in response to temporally fluctuating input. © 2002 Cognitive Science Society, Inc. All rights reserved.",
"title": ""
},
{
"docid": "fab47ba2ca0b1fe26ae4aa11f7be4450",
"text": "Matrix approximation is a common tool in recommendation systems, text mining, and computer vision. A prevalent assumption in constructing matrix approximations is that the partially observed matrix is of low-rank. We propose a new matrix approximation model where we assume instead that the matrix is locally of low-rank, leading to a representation of the observed matrix as a weighted sum of low-rank matrices. We analyze the accuracy of the proposed local lowrank modeling. Our experiments show improvements in prediction accuracy over classical approaches for recommendation tasks.",
"title": ""
},
{
"docid": "abb45e408cb37a0ad89f0b810b7f583b",
"text": "In a mobile computing environment, a user carrying a portable computer can execute a mobile t11m,,· action by submitting the ope.rations of the transaction to distributed data servers from different locations. M a result of this mobility, the operations of the transaction may be executed at different servers. The distribution oC operations implies that the transmission of messages (such as those involved in a two phase commit protocol) may be required among these data servers in order to coordinate the execution ofthese operations. In this paper, we will address the distribution oC operations that update partitioned data in mobile environments. We show that, for operations pertaining to resource allocation, the message overhead (e.g., for a 2PC protocol) introduced by the distribution of operations is undesirable and unnecessary. We introduce a new algorithm, the RenlnJation Algorithm (RA), that does not necessitate the incurring of message overheads Cor the commitment of mobile transactions. We address two issues related to the RA algorithm: a termination protocol and a protocol for non_partition.commutotive operation\". We perform a comparison between the proposed RA algorithm and existing solutions that use a 2PC protocol.",
"title": ""
},
{
"docid": "12d565f0aaa6960e793b96f1c26cb103",
"text": "The new western Mode 5 IFF (Identification Foe or Friend) system is introduced. Based on analysis of signal features and format characteristics of Mode 5, a new signal detection method using phase and Amplitude correlation is put forward. This method utilizes odd and even channels to separate the signal, and then the separated signals are performed correlation with predefined mask. Through detecting preamble, the detection of Mode 5 signal is implemented. Finally, simulation results show the validity of the proposed method.",
"title": ""
}
] |
scidocsrr
|
f31980efca049a4c733792b83a36613b
|
Team vs . Team : Success Factors in a Multiplayer Online Battle Arena Game
|
[
{
"docid": "13bd8d8f7ae0295e2b2bba26f02ea378",
"text": "Teamwork plays an important role in many areas of today's society, such as business activities. Thus, the question of how to form an effective team is of increasing interest. In this paper we use the team-oriented multiplayer online game Dota 2 to study cooperation within teams and the success of teams. Making use of game log data, we choose a statistical approach to identify factors that increase the chance of a team to win. The factors that we analyze are related to the roles that players can take within the game, the experiences of the players and friendship ties within a team. Our results show that such data can be used to infer social behavior patterns.",
"title": ""
}
] |
[
{
"docid": "014de32885e6f7df0607fba6a170e404",
"text": "In spite of their remarkable success in signal processing applications, it is now widely acknowledged that traditional wavelets are not very effective in dealing multidimensional signals containing distributed discontinuities such as edges. To overcome this limitation, one has to use basis elements with much higher directional sensitivity and of various shapes, to be able to capture the intrinsic geometrical features of multidimensional phenomena. This paper introduces a new discrete multiscale directional representation called the Discrete Shearlet Transform. This approach, which is based on the shearlet transform, combines the power of multiscale methods with a unique ability to capture the geometry of multidimensional data and is optimally efficient in representing images containing edges. We describe two different methods of implementing the shearlet transform. The numerical experiments presented in this paper demonstrate that the Discrete Shearlet Transform is very competitive in denoising applications both in terms of performance and computational efficiency.",
"title": ""
},
{
"docid": "b120095067684a67fe3327d18860e760",
"text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.",
"title": ""
},
{
"docid": "7411ae149016be794566261d7362f7d3",
"text": "BACKGROUND\nProcrastination, to voluntarily delay an intended course of action despite expecting to be worse-off for the delay, is a persistent behavior pattern that can cause major psychological suffering. Approximately half of the student population and 15%-20% of the adult population are presumed having substantial difficulties due to chronic and recurrent procrastination in their everyday life. However, preconceptions and a lack of knowledge restrict the availability of adequate care. Cognitive behavior therapy (CBT) is often considered treatment of choice, although no clinical trials have previously been carried out.\n\n\nOBJECTIVE\nThe aim of this study will be to test the effects of CBT for procrastination, and to investigate whether it can be delivered via the Internet.\n\n\nMETHODS\nParticipants will be recruited through advertisements in newspapers, other media, and the Internet. Only people residing in Sweden with access to the Internet and suffering from procrastination will be included in the study. A randomized controlled trial with a sample size of 150 participants divided into three groups will be utilized. The treatment group will consist of 50 participants receiving a 10-week CBT intervention with weekly therapist contact. A second treatment group with 50 participants receiving the same treatment, but without therapist contact, will also be employed. The intervention being used for the current study is derived from a self-help book for procrastination written by one of the authors (AR). It includes several CBT techniques commonly used for the treatment of procrastination (eg, behavioral activation, behavioral experiments, stimulus control, and psychoeducation on motivation and different work methods). A control group consisting of 50 participants on a wait-list control will be used to evaluate the effects of the CBT intervention. For ethical reasons, the participants in the control group will gain access to the same intervention following the 10-week treatment period, albeit without therapist contact.\n\n\nRESULTS\nThe current study is believed to result in three important findings. First, a CBT intervention is assumed to be beneficial for people suffering from problems caused by procrastination. Second, the degree of therapist contact will have a positive effect on treatment outcome as procrastination can be partially explained as a self-regulatory failure. Third, an Internet based CBT intervention is presumed to be an effective way to administer treatment for procrastination, which is considered highly important, as the availability of adequate care is limited. The current study is therefore believed to render significant knowledge on the treatment of procrastination, as well as providing support for the use of Internet based CBT for difficulties due to delayed tasks and commitments.\n\n\nCONCLUSIONS\nTo our knowledge, the current study is the first clinical trial to examine the effects of CBT for procrastination, and is assumed to render significant knowledge on the treatment of procrastination, as well as investigating whether it can be delivered via the Internet.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov: NCT01842945; http://clinicaltrials.gov/show/NCT01842945 (Archived by WebCite at http://www.webcitation.org/6KSmaXewC).",
"title": ""
},
{
"docid": "8001e848f42df09e9e240599de307fec",
"text": "Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven “clips” together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.",
"title": ""
},
{
"docid": "75aa71e270d85df73fa97336d2a6b713",
"text": "Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning that are capable of analyzing them. In this paper, we propose a cross-modal retrieval model aligning visual and textual data (like pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme, capable of tackling large-scale problems, and validate it on the Recipe1M dataset containing nearly 1 million picture-recipe pairs. We show the effectiveness of our approach regarding previous state-of-the-art models and present qualitative results over computational cooking use cases.",
"title": ""
},
{
"docid": "a45e7855be4a99ef2d382e914650e8bc",
"text": "We propose a novel type inference technique for Python programs. Type inference is difficult for Python programs due to their heavy dependence on external APIs and the dynamic language features. We observe that Python source code often contains a lot of type hints such as attribute accesses and variable names. However, such type hints are not reliable. We hence propose to use probabilistic inference to allow the beliefs of individual type hints to be propagated, aggregated, and eventually converge on probabilities of variable types. Our results show that our technique substantially outperforms a state-of-the-art Python type inference engine based on abstract interpretation.",
"title": ""
},
{
"docid": "35293c16985878fca24b5a327fd52c72",
"text": "In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method – which we dub categorical generative adversarial networks (or CatGAN) – on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).",
"title": ""
},
{
"docid": "5a9d8c0531a06b5542e8f02b2673b26d",
"text": "Given that e-tailing service failure is inevitable, a better understanding of how service failure and recovery affect customer loyalty represents an important topic for academics and practitioners. This study explores the relationship of service failure severity, service recovery justice (i.e., interactional justice, procedural justice, and distributive justice), and perceived switching costs with customer loyalty; as well, the moderating relationship of service recovery justice and perceived switching costs on the link between service failure severity and customer loyalty in the context of e-tailing are investigated. Data collected from 221 erceived switching costs ustomer loyalty useful respondents are tested against the research model using the partial least squares (PLS) approach. The results indicate that service failure severity, interactional justice, procedural justice and perceived switching costs have a significant relationship with customer loyalty, and that interactional justice can mitigate the negative relationship between service failure severity and customer loyalty. These findings provide several important theoretical and practical implications in terms of e-tailing service failure and",
"title": ""
},
{
"docid": "1e8e4364427d18406594af9ad3a73a28",
"text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.",
"title": ""
},
{
"docid": "625f1f11e627c570e26da9f41f89a28b",
"text": "In this paper, we propose an approach to realize substrate integrated waveguide (SIW)-based leaky-wave antennas (LWAs) supporting continuous beam scanning from backward to forward above the cutoff frequency. First, through phase delay analysis, it was found that SIWs with straight transverse slots support backward and forward radiation of the -1-order mode with an open-stopband (OSB) in between. Subsequently, by introducing additional longitudinal slots as parallel components, the OSB can be suppressed, leading to continuous beam scanning at least from -40° through broadside to 35°. The proposed method only requires a planar structure and obtains less dispersive beam scanning compared with a composite right/left-handed (CRLH) LWA. Both simulations and measurements verify the intended beam scanning operation while verifying the underlying theory.",
"title": ""
},
{
"docid": "73d9461101dc15f93f52d2ab9b8c0f39",
"text": "The need for mining structured data has increased in the past few years. One of the best studied data structures in computer science and discrete mathematics are graphs. It can therefore be no surprise that graph based data mining has become quite popular in the last few years.This article introduces the theoretical basis of graph based data mining and surveys the state of the art of graph-based data mining. Brief descriptions of some representative approaches are provided as well.",
"title": ""
},
{
"docid": "5ce4f8227c5eebfb8b7b1dffc5557712",
"text": "In this paper, we propose a novel approach for face spoofing detection using the high-order Local Derivative Pattern from Three Orthogonal Planes (LDP-TOP). The proposed method is not only simple to derive and implement, but also highly efficient, since it takes into account both spatial and temporal information in different directions of subtle face movements. According to experimental results, the proposed approach outperforms state-of-the-art methods on three reference datasets, namely Idiap REPLAY-ATTACK, CASIA-FASD, and MSU MFSD. Moreover, it requires only 25 video frames from each video, i.e., only one second, and thus potentially can be performed in real time even on low-cost devices.",
"title": ""
},
{
"docid": "7e4c283766a18a12bda4c5990a5ae310",
"text": "In Genome Projects, biological sequences are aligned thousands of times, in a daily basis. The Smith-Waterman algorithm is able to retrieve the optimal local alignment with quadratic time and space complexity. So far, aligning huge sequences, such as whole chromosomes, with the Smith-Waterman algorithm has been regarded as unfeasible, due to huge computing and memory requirements. However, high-performance computing platforms such as GPUs are making it possible to obtain the optimal result for huge sequences in reasonable time. In this paper, we propose and evaluate CUDAlign 2.1, a parallel algorithm that uses GPU to align huge sequences, executing the Smith-Waterman algorithm combined with Myers-Miller, with linear space complexity. In order to achieve that, we propose optimizations which are able to reduce significantly the amount of data processed, while enforcing full parallelism most of the time. Using the NVIDIA GTX 560 Ti board and comparing real DNA sequences that range from 162 KBP (Thousand Base Pairs) to 59 MBP (Million Base Pairs), we show that CUDAlign 2.1 is scalable. Also, we show that CUDAlign 2.1 is able to produce the optimal alignment between the chimpanzee chromosome 22 (33 MBP) and the human chromosome 21 (47 MBP) in 8.4 hours and the optimal alignment between the chimpanzee chromosome Y (24 MBP) and the human chromosome Y (59 MBP) in 13.1 hours.",
"title": ""
},
{
"docid": "1306d2579c8af0af65805da887d283b0",
"text": "Traceability allows tracking products through all stages of a supply chain, which is crucial for product quality control. To provide accountability and forensic information, traceability information must be secured. This is challenging because traceability systems often must adapt to changes in regulations and to customized traceability inspection processes. OriginChain is a real-world traceability system using a blockchain. Blockchains are an emerging data storage technology that enables new forms of decentralized architectures. Components can agree on their shared states without trusting a central integration point. OriginChain’s architecture provides transparent tamper-proof traceability information, automates regulatory compliance checking, and enables system adaptability.",
"title": ""
},
{
"docid": "fd19dc1f6ca2616364c1f5b5e755118d",
"text": "Airplane detection in remote sensing images remains a challenging problem due to the complexity of backgrounds. In recent years, with the development of deep learning, object detection has also obtained great breakthroughs. For object detection tasks in natural images, such as the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) VOC (Visual Object Classes) Challenge, the major trend of current development is to use a large amount of labeled classification data to pre-train the deep neural network as a base network, and then use a small amount of annotated detection data to fine-tune the network for detection. In this paper, we use object detection technology based on deep learning for airplane detection in remote sensing images. In addition to using some characteristics of remote sensing images, some new data augmentation techniques have been proposed. We also use transfer learning and adopt a single deep convolutional neural network and limited training samples to implement end-to-end trainable airplane detection. Classification and positioning are no longer divided into multistage tasks; end-to-end detection attempts to combine them for optimization, which ensures an optimal solution for the final stage. In our experiment, we use remote sensing images of airports collected from Google Earth. The experimental results show that the proposed algorithm is highly accurate and meaningful for remote sensing object detection.",
"title": ""
},
{
"docid": "8ae28438f9fbeb9fa22188f37d7b91a3",
"text": "Supply Chain Management systems provide information sharing and analysis to companies and support their planning activities. They are not based on the real data because there is asymmetric information between companies, then leading to disturbance of the planning algorithms. On the other hand, sharing data between manufacturers, suppliers and customers becomes very important to ensure reactivity towards markets variability. Especially, double marginalization is a widespread and serious problem in supply chain management. Decentralized systems under wholesale price contracts are investigated, with double marginalization effects shown to lead to supply insufficiencies, in the cases of both deterministic and random demands. This paper proposes a blockchain based solution to address the problems of supply chain such as Double Marginalization and Information Asymmetry etc.",
"title": ""
},
{
"docid": "9ac00559a52851ffd2e33e376dd58b62",
"text": "ARM servers are becoming increasingly common, making server technologies such as virtualization for ARM of growing importance. We present the first study of ARM virtualization performance on server hardware, including multicore measurements of two popular ARM and x86 hypervisors, KVM and Xen. We show how ARM hardware support for virtualization can enable much faster transitions between VMs and the hypervisor, a key hypervisor operation. However, current hypervisor designs, including both Type 1 hypervisors such as Xen and Type 2 hypervisors such as KVM, are not able to leverage this performance benefit for real application workloads. We discuss the reasons why and show that other factors related to hypervisor software design and implementation have a larger role in overall performance. Based on our measurements, we discuss changes to ARM's hardware virtualization support that can potentially bridge the gap to bring its faster VM-to-hypervisor transition mechanism to modern Type 2 hypervisors running real applications. These changes have been incorporated into the latest ARM architecture.",
"title": ""
},
{
"docid": "9809521909e01140c367dbfbf3a4aacd",
"text": "Understanding how housing values evolve over time is important to policy makers, consumers and real estate professionals. Existing methods for constructing housing indices are computed at a coarse spatial granularity, such as metropolitan regions, which can mask or distort price dynamics apparent in local markets, such as neighborhoods and census tracts. A challenge in moving to estimates at, for example, the census tract level is the scarcity of spatiotemporally localized house sales observations. Our work aims to address this challenge by leveraging observations from multiple census tracts discovered to have correlated valuation dynamics. Our proposed Bayesian nonparametric approach builds on the framework of latent factor models to enable a flexible, data-driven method for inferring the clustering of correlated census tracts. We explore methods for scalability and parallelizability of computations, yielding a housing valuation index at the level of census tract rather than zip code, and on a monthly basis rather than quarterly. Our analysis is provided on a large Seattle metropolitan housing dataset.",
"title": ""
},
{
"docid": "0df006400924b05117a6d5b12fedfbb0",
"text": "The lack of data authentication and integrity guarantees in the Domain Name System (DNS) facilitates a wide variety of malicious activity on the Internet today. DNSSec, a set of cryptographic extensions to DNS, has been proposed to address these threats. While DNSSec does provide certain security guarantees, here we argue that it does not provide what users really need, namely end-to-end authentication and integrity. Even worse, DNSSec makes DNS much less efficient and harder to administer, thus significantly compromising DNS’s availability—arguably its most important characteristic. In this paper we explain the structure of DNS, examine the threats against it, present the details of DNSSec, and analyze the benefits of DNSSec relative to its costs. This cost-benefit analysis clearly shows that DNSSec deployment is a futile effort, one that provides little long-term benefit yet has distinct, perhaps very significant costs.",
"title": ""
},
{
"docid": "45176f43660f5a92fdccccfc4e9a328c",
"text": "This review article summarizes the basic knowledge from the field of sleep research. The emphasis is on the exploration of the rules of polysomnographic recording and scoring sleep stages as well as on results and opinions about the nature of sleep EEG. History of sleep research, sleep physiology, functions of sleep and mostly used experiments are briefly mentioned. Relevant spectral methods and methods inspired by dynamical systems theory are listed.",
"title": ""
}
] |
scidocsrr
|
caeba50304535d1b67ad333cc1ca0e71
|
Mining Twitter big data to predict 2013 Pakistan election winner
|
[
{
"docid": "76ae2082a4ab35fa3046f3f0af54bfe2",
"text": "Electoral prediction from Twitter data is an appealing research topic. It seems relatively straightforward and the prevailing view is overly optimistic. This is problematic because while simple approaches are assumed to be good enough, core problems are not addressed. Thus, this paper aims to (1) provide a balanced and critical review of the state of the art; (2) cast light on the presume predictive power of Twitter data; and (3) depict a roadmap to push forward the field. Hence, a scheme to characterize Twitter prediction methods is proposed. It covers every aspect from data collection to performance evaluation, through data processing and vote inference. Using that scheme, prior research is analyzed and organized to explain the main approaches taken up to date but also their weaknesses. This is the first meta-analysis of the whole body of research regarding electoral prediction from Twitter data. It reveals that its presumed predictive power regarding electoral prediction has been somewhat exaggerated: although social media may provide a glimpse on electoral outcomes current research does not provide strong evidence to support it can currently replace traditional polls. Finally, future lines of work are suggested.",
"title": ""
},
{
"docid": "cd2fb4278f1c2da581708d961bd7aa93",
"text": "Twitter messages are increasingly used to determine consumer sentiment towards a brand. The existing literature on Twitter sentiment analysis uses various feature sets and methods, many of which are adapted from more traditional text classification problems. In this research, we introduce an approach to supervised feature reduction using n-grams and statistical analysis to develop a Twitter-specific lexicon for sentiment analysis. We augment this reduced Twitter-specific lexicon with brand-specific terms for brand-related tweets. We show that the reduced lexicon set, while significantly smaller (only 187 features), reduces modeling complexity, maintains a high degree of coverage over our Twitter corpus, and yields improved sentiment classification accuracy. To demonstrate the effectiveness of the devised Twitter-specific lexicon compared to a traditional sentiment lexicon, we develop comparable sentiment classification models using SVM. We show that the Twitter-specific lexicon is significantly more effective in terms of classification recall and accuracy metrics. We then develop sentiment classification models using the Twitter-specific lexicon and the DAN2 machine learning approach, which has demonstrated success in other text classification problems. We show that DAN2 produces more accurate sentiment classification results than SVM while using the same Twitter-specific lexicon. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "09da8d98929dded2b6ed30810e61f441",
"text": "The FG-NET aging database was released in 2004 in an attempt to support research activities related to facial aging. Since then a number of researchers used the database for carrying out research in various disciplines related to facial aging. Based on the analysis of published work where the FG-NET aging database was used, conclusions related to the type of research carried out in relation to the impact of the dataset in shaping up the research topic of facial aging, are presented. In particular we focus our attention on the topic of age estimation that proved to be the most popular among users of the FG-NET aging database. Through the review of key papers in age estimation and the presentation of benchmark results the main approaches/directions in facial aging are outlined and future trends, requirements and research directions are drafted.",
"title": ""
},
{
"docid": "46545de8429e6e7363a2b41676fc9e91",
"text": "BACKGROUND\nThe scapula osteocutaneous free flap is frequently used to reconstruct complex head and neck defects given its tissue versatility. Because of minimal atherosclerotic changes in its vascular pedicle, this flap also may be used as a second choice when other osseous flaps are not available because of vascular disease at a preferred donor site.\n\n\nMETHODS\nWe performed a retrospective chart review evaluating flap outcome as well as surgical and medical complications based upon the flap choice.\n\n\nRESULTS\nThe flap survival rate was 97%. The surgical complication rate was similar for the 21 first-choice flaps (57.1%) and the 12 second-choice flaps (41.7%; p = .481). However, patients having second-choice flaps had a higher rate of medical complications (66.7%) than those with first-choice flaps (28.6%; p = .066). Age and the presence of comorbidities were associated with increased medical complications. All patients with comorbidities that had a second-choice flap experienced medical complications, with most being severe.\n\n\nCONCLUSIONS\nThe scapula osteocutaneous free flap has a high success rate in head and neck reconstruction. Surgical complications occur frequently regardless of whether the flap is used as a first or second choice. However, medical complications are more frequent and severe in patients undergoing second-choice flaps.",
"title": ""
},
{
"docid": "2b9fa788e7ccacf14fcdc295ba387e25",
"text": "In this paper, two kinds of methods, namely additional momentum method and self-adaptive learning rate adjustment method, are used to improve the BP algorithm. Considering the diversity of factors which affect stock prices, Single-input and Multi-input Prediction Model (SIPM and MIPM) are established respectively to implement short-term forecasts for SDIC Electric Power (600886) shares and Bank of China (601988) shares in 2009. Experiments indicate that the improved BP model has superior performance to the basic BP model, and MIPM is also better than SIPM. However, the best performance is obtained by using MIPM and improved prediction model cohesively.",
"title": ""
},
{
"docid": "2c6c8703d7be507e15066d2a3fbd813c",
"text": "This paper presents a novel and effective audio based method on depression classification. It focuses on two important issues, \\emph{i.e.} data representation and sample imbalance, which are not well addressed in literature. For the former one, in contrast to traditional shallow hand-crafted features, we propose a deep model, namely DepAudioNet, to encode the depression related characteristics in the vocal channel, combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to deliver a more comprehensive audio representation. For the latter one, we introduce a random sampling strategy in the model training phase to balance the positive and negative samples, which largely alleviates the bias caused by uneven sample distribution. Evaluations are carried out on the DAIC-WOZ dataset for the Depression Classification Sub-challenge (DCC) at the 2016 Audio-Visual Emotion Challenge (AVEC), and the experimental results achieved clearly demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "875548b7dc303bef8efa8284216e010d",
"text": "BACKGROUND\nGigantomastia is a breast disorder marked by exaggerated rapid growth of the breasts, generally bilaterally. Since this disorder is very rare and has been reported only in sparse case reports its etiology has yet to be fully established. Treatment is aimed at improving the clinical and psychological symptoms and reducing the treatment side effects; however, the best therapeutic option varies from case to case.\n\n\nCASE PRESENTATION\nThe present report described a case of gestational gigantomastia in a 30-year-old woman, gravida 2, parity 1, 17 week pregnant admitted to Pars Hospital, Tehran, Iran, on May 2014. The patient was admitted to hospital at week 17 of pregnancy, although her breasts initially had begun to enlarge from the first trimester. The patient developed hypercalcemia in her 32nd week of pregnancy. The present report followed this patient from diagnosis until the completion of treatment.\n\n\nCONCLUSION\nAlthough gestational gigantomastia is a rare condition, its timely prognosis and careful examination of some conditions like hyperprolactinemia and hypercalcemia is essential in successful management of this condition.",
"title": ""
},
{
"docid": "287d1e603f7d677cff93aa0601a9bfef",
"text": "Frameworks are an object-oriented reuse technique that are widely used in industry but not discussed much by the software engineering research community. They are a way of reusing design that is part of the reason that some object-oriented developers are so productive. This paper compares and contrasts frameworks with other reuse techniques, and describes how to use them, how to evaluate them, and how to develop them. It describe the tradeo s involved in using frameworks, including the costs and pitfalls, and when frameworks are appropriate.",
"title": ""
},
{
"docid": "4053bbaf8f9113bef2eb3b15e34a209a",
"text": "With the recent availability of commodity Virtual Reality (VR) products, immersive video content is receiving a significant interest. However, producing high-quality VR content often requires upgrading the entire production pipeline, which is costly and time-consuming. In this work, we propose using video feeds from regular broadcasting cameras to generate immersive content. We utilize the motion of the main camera to generate a wide-angle panorama. Using various techniques, we remove the parallax and align all video feeds. We then overlay parts from each video feed on the main panorama using Poisson blending. We examined our technique on various sports including basketball, ice hockey and volleyball. Subjective studies show that most participants rated their immersive experience when viewing our generated content between Good to Excellent. In addition, most participants rated their sense of presence to be similar to ground-truth content captured using a GoPro Omni 360 camera rig.",
"title": ""
},
{
"docid": "70bce8834a23bc84bea7804c58bcdefe",
"text": "This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180/spl deg/ CRLH CPW power splitter with a phase error of less than 10/spl deg/ and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180/spl deg/ delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180/spl deg/ CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band.",
"title": ""
},
{
"docid": "4191648ada97ecc5a906468369c12bf4",
"text": "Dermoscopy is a widely used technique whose role in the clinical (and preoperative) diagnosis of melanocytic and non-melanocytic skin lesions has been well established in recent years. The aim of this paper is to clarify the correlations between the \"local\" dermoscopic findings in melanoma and the underlying histology, in order to help clinicians in routine practice.",
"title": ""
},
{
"docid": "577b0b3215fbd6a6b6fd0d8882967a1e",
"text": "Generating texts of different sentiment labels is getting more and more attention in the area of natural language generation. Recently, Generative Adversarial Net (GAN) has shown promising results in text generation. However, the texts generated by GAN usually suffer from the problems of poor quality, lack of diversity and mode collapse. In this paper, we propose a novel framework SentiGAN, which has multiple generators and one multi-class discriminator, to address the above problems. In our framework, multiple generators are trained simultaneously, aiming at generating texts of different sentiment labels without supervision. We propose a penalty based objective in the generators to force each of them to generate diversified examples of a specific sentiment label. Moreover, the use of multiple generators and one multi-class discriminator can make each generator focus on generating its own examples of a specific sentiment label accurately. Experimental results on four datasets demonstrate that our model consistently outperforms several state-of-the-art text generation methods in the sentiment accuracy and quality of generated texts.",
"title": ""
},
{
"docid": "a0c15895a455c07b477d4486d32582ef",
"text": "PURPOSE\nTo evaluate the efficacy of α-lipoic acid (ALA) in reducing scarring after trabeculectomy.\n\n\nMATERIALS AND METHODS\nEighteen adult New Zealand white rabbits underwent trabeculectomy. During trabeculectomy, thin sponges were placed between the sclera and Tenon's capsule for 3 minutes, saline solution, mitomycin-C (MMC) and ALA was applied to the control group (CG) (n=6 eyes), MMC group (MMCG) (n=6 eyes), and ALA group (ALAG) (n=6 eyes), respectively. After surgery, topical saline and ALA was applied for 28 days to the control and ALAGs, respectively. Filtrating bleb patency was evaluated by using 0.1% trepan blue. Hematoxylin and eosin and Masson trichrome staining for toxicity, total cellularity, and collagen organization; α-smooth muscle actin immunohistochemistry staining performed for myofibroblast phenotype identification.\n\n\nRESULTS\nClinical evaluation showed that all 6 blebs (100%) of the CG had failed, whereas there were only 2 failures (33%) in the ALAG and no failures in the MMCG on day 28. Histologic evaluation showed significantly lower inflammatory cell infiltration in the ALAGs and CGs than the MMCG. Toxicity change was more significant in the MMCG than the control and ALAGs. Collagen was better organized in the ALAG than control and MMCGs. In immunohistochemistry evaluation, ALA significantly reduced the population of cells expressing α-smooth muscle action.\n\n\nCONCLUSIONS\nΑLA prevents and/or reduces fibrosis by inhibition of inflammation pathways, revascularization, and accumulation of extracellular matrix. It can be used as an agent for delaying tissue regeneration and for providing a more functional-permanent fistula.",
"title": ""
},
{
"docid": "3309e09d16e74f87a507181bd82cd7f0",
"text": "The goal of this work is to overview and summarize the grasping taxonomies reported in the literature. Our long term goal is to understand how to reduce mechanical complexity of anthropomorphic hands and still preserve their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. They were then arranged in a hierarchical manner, resulting in 17 grasp types.",
"title": ""
},
{
"docid": "e2d8da3d28f560c4199991dbdffb8c2c",
"text": "Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as the and of. Other words that may seem visual can often be predicted reliably just from the language model e.g., sign after behind a red stop or phone following talking on a cell. In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.",
"title": ""
},
{
"docid": "1b82ef890fbbf033781ea65202b2f4b9",
"text": "We present a fast GPU-based streaming algorithm to perform collision queries between deformable models. Our approach is based on hierarchical culling and reduces the computation to generating different streams. We present a novel stream registration method to compact the streams and efficiently compute the potentially colliding pairs of primitives. We also use a deferred front tracking method to lower the memory overhead. The overall algorithm has been implemented on different GPUs and we have evaluated its performance on non-rigid and deformable simulations. We highlight our speedups over prior CPU-based and GPU-based algorithms. In practice, our algorithm can perform inter-object and intra-object computations on models composed of hundreds of thousands of triangles in tens of milliseconds.",
"title": ""
},
{
"docid": "1104df035599f5f890e9b8650ea336be",
"text": "A new digital programmable CMOS analog front-end (AFE) IC for measuring electroencephalograph or electrocardiogram signals in a portable instrumentation design approach is presented. This includes a new high-performance rail-to-rail instrumentation amplifier (IA) dedicated to the low-power AFE IC. The measurement results have shown that the proposed biomedical AFE IC, with a die size of 4.81 mm/sup 2/, achieves a maximum stable ac gain of 10 000 V/V, input-referred noise of 0.86 /spl mu/ V/sub rms/ (0.3 Hz-150 Hz), common-mode rejection ratio of at least 115 dB (0-1 kHz), input-referred dc offset of less than 60 /spl mu/V, input common mode range from -1.5 V to 1.3 V, and current drain of 485 /spl mu/A (excluding the power dissipation of external clock oscillator) at a /spl plusmn/1.5-V supply using a standard 0.5-/spl mu/m CMOS process technology.",
"title": ""
},
{
"docid": "c01fbc8bd278b06e0476c6fbffca0ad1",
"text": "Memristors can be optimally used to implement logic circuits. In this paper, a logic circuit based on Memristor Ratioed Logic (MRL) is proposed. Specifically, a hybrid CMOS-memristive logic family by a suitable combination of 4 memristor and a complementary inverter CMOS structure is presented. The proposed structure by having outputs of AND, OR and XOR gates of inputs at the same time, reducing the area and connections and fewer power consumption can be appropriate for implementation of more complex circuits. Circuit design of a single-bit Full Adder is considered as a case study. The Full Adder proposed is implemented using 10 memristors and 4 transistors comparing to 18 memristors and 8 transistors in the other related work.",
"title": ""
},
{
"docid": "0b705fc98638cf042e84417849259074",
"text": "G et al. [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. CORC Technical Report TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York.] recently proposed a choice-based deterministic linear programming model (CDLP) for network revenue management (RM) that parallels the widely used deterministic linear programming (DLP) model. While they focused on analyzing “flexible products”—a situation in which the provider has the flexibility of using a collection of products (e.g., different flight times and/or itineraries) to serve the same market demand (e.g., an origin-destination connection)—their approach has broader implications for understanding choice-based RM on a network. In this paper, we explore the implications in detail. Specifically, we characterize optimal offer sets (sets of available network products) by extending to the network case a notion of “efficiency” developed by Talluri and van Ryzin [Talluri, K. T., G. J. van Ryzin. 2004. Revenue management under a general discrete choice model of consumer behavior. Management Sci. 50 15–33.] for the single-leg, choice-based RM problem. We show that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy. This analysis suggests that efficiency is a potentially useful approach for identifying “good” offer sets on networks, as it is in the case of single-leg problems. Second, we propose a practical decomposition heuristic for converting the static CDLP solution into a dynamic control policy. The heuristic is quite similar to the familiar displacement-adjusted virtual nesting (DAVN) approximation used in traditional network RM, and it significantly improves on the performance of the static LP solution. We illustrate the heuristic on several numerical examples.",
"title": ""
},
{
"docid": "9a30008cc270ac7a0bb1a0f12dca6187",
"text": "Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.",
"title": ""
},
{
"docid": "1d8653708c06f27433dc57844550bb4c",
"text": "Because of the nonlinearity of digital PWM generator and the effect of power supply noise in power stage, the error is introduced into digital class D power amplifier. A method used to eliminate the error is presented in this paper, and it is easy to implement. Based on this method, a digital class D power amplifier is designed and simulated, the simulation results indicate this method can basically eliminate the error produced by digital PWM generator and power stage, and improve the performance of the system.",
"title": ""
},
{
"docid": "bd3016195482f7fbd41f03a25d1a9e83",
"text": "Evaluating in Massive Open Online Courses (MOOCs) is a difficult task because of the huge number of students involved in the courses. Peer grading is an effective method to cope with this problem, but something must be done to lessen the effect of the subjective evaluation. In this paper we present a matrix factorization approach able to learn from the order of the subset of exams evaluated by each grader. We tested this method on a data set provided by a real peer review process. By using a tailored graphical representation, the induced model could also allow the detection of peculiarities in the peer review process.",
"title": ""
}
] |
scidocsrr
|
f2be7280227d473ef0dbe3d6c97783ef
|
Study of a T-Shaped Slot With a Capacitor for High Isolation Between MIMO Antennas
|
[
{
"docid": "b3c9bc55f5a9d64a369ec67e1364c4fc",
"text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20times40times1.6 mm3. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 lambdao or edge to edge separations of just 3.6 mm (0.0294 lambdao). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.",
"title": ""
}
] |
[
{
"docid": "2232d02a700d412c61cab20b98b6a6c2",
"text": "Intranasal drug delivery (INDD) systems offer a route to the brain that bypasses problems related to gastrointestinal absorption, first-pass metabolism, and the blood-brain barrier; onset of therapeutic action is rapid, and the inconvenience and discomfort of parenteral administration are avoided. INDD has found several applications in neuropsychiatry, such as to treat migraine, acute and chronic pain, Parkinson disease, disorders of cognition, autism, schizophrenia, social phobia, and depression. INDD has also been used to test experimental drugs, such as peptides, for neuropsychiatric indications; these drugs cannot easily be administered by other routes. This article examines the advantages and applications of INDD in neuropsychiatry; provides examples of test, experimental, and approved INDD treatments; and focuses especially on the potential of intranasal ketamine for the acute and maintenance therapy of refractory depression.",
"title": ""
},
{
"docid": "8464328ecb1fcfbd6d727af489de5188",
"text": "Recent deep learning (DL) models have moved beyond static network architectures to dynamic ones, handling data where the network structure changes every example, such as sequences of variable lengths, trees, and graphs. Existing dataflow-based programming models for DL—both static and dynamic declaration—either cannot readily express these dynamic models, or are inefficient due to repeated dataflow graph construction and processing, and difficulties in batched execution. We present Cavs, a vertexcentric programming interface and optimized system implementation for dynamic DL models. Cavs represents dynamic network structure as a static vertex function F and a dynamic instance-specific graph G, and performs backpropagation by scheduling the execution of F following the dependencies in G. Cavs bypasses expensive graph construction and preprocessing overhead, allows for the use of static graph optimization techniques on pre-defined operations in F , and naturally exposes batched execution opportunities over different graphs. Experiments comparing Cavs to two state-of-the-art frameworks for dynamic NNs (TensorFlow Fold and DyNet) demonstrate the efficacy of this approach: Cavs achieves a near one order of magnitude speedup on training of various dynamic NN architectures, and ablations demonstrate the contribution of our proposed batching and memory management strategies.",
"title": ""
},
{
"docid": "9a2ab1d198468819f32a2b74334528ae",
"text": "This paper introduces GeoSpark an in-memory cluster computing framework for processing large-scale spatial data. GeoSpark consists of three layers: Apache Spark Layer, Spatial RDD Layer and Spatial Query Processing Layer. Apache Spark Layer provides basic Spark functionalities that include loading / storing data to disk as well as regular RDD operations. Spatial RDD Layer consists of three novel Spatial Resilient Distributed Datasets (SRDDs) which extend regular Apache Spark RDDs to support geometrical and spatial objects. GeoSpark provides a geometrical operations library that accesses Spatial RDDs to perform basic geometrical operations (e.g., Overlap, Intersect). System users can leverage the newly defined SRDDs to effectively develop spatial data processing programs in Spark. The Spatial Query Processing Layer efficiently executes spatial query processing algorithms (e.g., Spatial Range, Join, KNN query) on SRDDs. GeoSpark also allows users to create a spatial index (e.g., R-tree, Quad-tree) that boosts spatial data processing performance in each SRDD partition. Preliminary experiments show that GeoSpark achieves better run time performance than its Hadoop-based counterparts (e.g., SpatialHadoop).",
"title": ""
},
{
"docid": "7359794d213f095d3429f114748545c3",
"text": "Purpose: To investigate the impact of residual astigmatism on visual acuity (VA) after the implantation of a novel extended range of vision (ERV) intraocular lens (IOL) based on the correction of spherical and chromatic aberration. Method: The study enrolled 411 patients bilaterally implanted with the ERV IOL Tecnis Symfony. Visual acuity and subjective refraction were analyzed during the 4to 6-month follow-up. The sample of eyes was stratified for four groups according to the magnitude of postoperative refractive astigmatism and postoperative spherical equivalent. Results: The astigmatism analysis included 386 eyes of 193 patients with both eyes of each patient within the same cylinder range. Uncorrected VAs for distance, intermediate and near were better in the group of eyes with lower level of postoperative astigmatism, but even in eyes with residual cylinders up to 0.75 D, the loss of VA lines was clinically not relevant. The orientation of astigmatism did not seem to have an impact on the tolerance to the residual cylinder. The SE evaluation included 810 eyes of 405 patients, with both eyes of each patient in the same SE range. Uncorrected VAs for distance, intermediate and near, were very similar in all SE groups. Conclusion: Residual cylinders up to 0.75 D after the implantation of the Tecnis Symfony IOL have a very mild impact on monocular and binocular VA. The Tecnis Symfony IOL shows a good tolerance to unexpected refractive surprises and thus a better “sweet spot”.",
"title": ""
},
{
"docid": "fc0a8bffb77dd7498658eb1319edd566",
"text": "There continues to be debate about the long-term neuropsychological impact of mild traumatic brain injury (MTBI). A meta-analysis of the relevant literature was conducted to determine the impact of MTBI across nine cognitive domains. The analysis was based on 39 studies involving 1463 cases of MTBI and 1191 control cases. The overall effect of MTBI on neuropsychological functioning was moderate (d = .54). However, findings were moderated by cognitive domain, time since injury, patient characteristics, and sampling methods. Acute effects (less than 3 months postinjury) of MTBI were greatest for delayed memory and fluency (d = 1.03 and .89, respectively). In unselected or prospective samples, the overall analysis revealed no residual neuropsychological impairment by 3 months postinjury (d = .04). In contrast, clinic-based samples and samples including participants in litigation were associated with greater cognitive sequelae of MTBI (d = .74 and .78, respectively at 3 months or greater). Indeed, litigation was associated with stable or worsening of cognitive functioning over time. The implications and limitations of these findings are discussed.",
"title": ""
},
{
"docid": "91136fd0fd8e15ed1d6d6bf7add489f0",
"text": "Microelectromechanical Systems (MEMS) technology has already led to advances in optical imaging, scanning, communications and adaptive applications. Many of these efforts have been approached without the use of feedback control techniques that are common in macro-scale operations to ensure repeatable and precise performance. This paper examines control techniques and related issues of precision performance as applied to a one-degree-of-freedom electrostatic MEMS micro mirror.",
"title": ""
},
{
"docid": "81b82ae24327c7d5c0b0bf4a04904826",
"text": "AIM\nTo identify key predictors and moderators of mental health 'help-seeking behavior' in adolescents.\n\n\nBACKGROUND\nMental illness is highly prevalent in adolescents and young adults; however, individuals in this demographic group are among the least likely to seek help for such illnesses. Very little quantitative research has examined predictors of help-seeking behaviour in this demographic group.\n\n\nDESIGN\nA cross-sectional design was used.\n\n\nMETHODS\nA group of 180 volunteers between the ages of 17-25 completed a survey designed to measure hypothesized predictors and moderators of help-seeking behaviour. Predictors included a range of health beliefs, personality traits and attitudes. Data were collected in August 2010 and were analysed using two standard and three hierarchical multiple regression analyses.\n\n\nFINDINGS\nThe standard multiple regression analyses revealed that extraversion, perceived benefits of seeking help, perceived barriers to seeking help and social support were direct predictors of help-seeking behaviour. Tests of moderated relationships (using hierarchical multiple regression analyses) indicated that perceived benefits were more important than barriers in predicting help-seeking behaviour. In addition, perceived susceptibility did not predict help-seeking behaviour unless individuals were health conscious to begin with or they believed that they would benefit from help.\n\n\nCONCLUSION\nA range of personality traits, attitudes and health beliefs can predict help-seeking behaviour for mental health problems in adolescents. The variable 'Perceived Benefits' is of particular importance as it is: (1) a strong and robust predictor of help-seeking behaviour; and (2) a factor that can theoretically be modified based on health promotion programmes.",
"title": ""
},
{
"docid": "6f265af3f4f93fcce13563cac14b5774",
"text": "Inorganic pyrophosphate (PP(i)) produced by cells inhibits mineralization by binding to crystals. Its ubiquitous presence is thought to prevent \"soft\" tissues from mineralizing, whereas its degradation to P(i) in bones and teeth by tissue-nonspecific alkaline phosphatase (Tnap, Tnsalp, Alpl, Akp2) may facilitate crystal growth. Whereas the crystal binding properties of PP(i) are largely understood, less is known about its effects on osteoblast activity. We have used MC3T3-E1 osteoblast cultures to investigate the effect of PP(i) on osteoblast function and matrix mineralization. Mineralization in the cultures was dose-dependently inhibited by PP(i). This inhibition could be reversed by Tnap, but not if PP(i) was bound to mineral. PP(i) also led to increased levels of osteopontin (Opn) induced via the Erk1/2 and p38 MAPK signaling pathways. Opn regulation by PP(i) was also insensitive to foscarnet (an inhibitor of phosphate uptake) and levamisole (an inhibitor of Tnap enzymatic activity), suggesting that increased Opn levels did not result from changes in phosphate. Exogenous OPN inhibited mineralization, but dephosphorylation by Tnap reversed this effect, suggesting that OPN inhibits mineralization via its negatively charged phosphate residues and that like PP(i), hydrolysis by Tnap reduces its mineral inhibiting potency. Using enzyme kinetic studies, we have shown that PP(i) inhibits Tnap-mediated P(i) release from beta-glycerophosphate (a commonly used source of organic phosphate for culture mineralization studies) through a mixed type of inhibition. In summary, PP(i) prevents mineralization in MC3T3-E1 osteoblast cultures by at least three different mechanisms that include direct binding to growing crystals, induction of Opn expression, and inhibition of Tnap activity.",
"title": ""
},
{
"docid": "0c79db142f913564654f53b6519f2927",
"text": "For software process improvement -SPIthere are few small organizations using models that guide the management and deployment of their improvement initiatives. This is largely because a lot of these models do not consider the special characteristics of small businesses, nor the appropriate strategies for deploying an SPI initiative in this type of organization. It should also be noted that the models which direct improvement implementation for small settings do not present an explicit process with which to organize and guide the internal work of the employees involved in the implementation of the improvement opportunities. In this paper we propose a lightweight process, which takes into account appropriate strategies for this type of organization. Our proposal, known as a “Lightweight process to incorporate improvements” uses the philosophy of the Scrum agile",
"title": ""
},
{
"docid": "1b638147b80419c6a4c472b02cd9916f",
"text": "Herein, we report the development of highly water dispersible nanocomposite of conducting polyaniline and multiwalled carbon nanotubes (PANI-MWCNTs) via novel, `dynamic' or `stirred' liquid-liquid interfacial polymerization method using sulphonic acid as a dopant. MWCNTs were functionalized prior to their use and then dispersed in water. The nanocomposite was further subjected for physico-chemical characterization using spectroscopic (UV-Vis and FT-IR), FE-SEM analysis. The UV-VIS spectrum of the PANI-MWCNTs nanocomposite shows a free carrier tail with increasing absorption at higher wavelength. This confirms the presence of conducting emeraldine salt phase of the polyaniline and is further supported by FT-IR analysis. The FE-SEM images show that the thin layer of polyaniline is coated over the functionalized MWCNTs forming a `core-shell' like structure. The synthesized nanocomposite was found to be highly dispersible in water and shows beautiful colour change from dark green to blue with change in pH of the solution from 1 to 12 (i.e. from acidic to basic pH). The change in colour of the polyaniline-MWCNTs nanocomposite is mainly due to the pH dependent chemical transformation /change of thin layer of polyaniline.",
"title": ""
},
{
"docid": "2c61a29907ad3d2d6f1bbd090f33cd08",
"text": "Evolvability is the capacity to evolve. This paper introduces a simple computational model of evolvability and demonstrates that, under certain conditions, evolvability can increase indefinitely, even when there is no direct selection for evolvability. The model shows that increasing evolvability implies an accelerating evolutionary pace. It is suggested that the conditions for indefinitely increasing evolvability are satisfied in biological and cultural evolution. We claim that increasing evolvability is a large-scale trend in evolution. This hypothesis leads to testable predictions about biological and cultural evolution.",
"title": ""
},
{
"docid": "9cc2dfde38bed5e767857b1794d987bc",
"text": "Smartphones providing proprietary encryption schemes, albeit offering a novel paradigm to privacy, are becoming a bone of contention for certain sovereignties. These sovereignties have raised concerns about their security agencies not having any control on the encrypted data leaving their jurisdiction and the ensuing possibility of it being misused by people with malicious intents. Such smartphones have typically two types of customers, independent users who use it to access public mail servers and corporates/enterprises whose employees use it to access corporate emails in an encrypted form. The threat issues raised by security agencies concern mainly the enterprise servers where the encrypted data leaves the jurisdiction of the respective sovereignty while on its way to the global smartphone router. In this paper, we have analyzed such email message transfer mechanisms in smartphones and proposed some feasible solutions, which, if accepted and implemented by entities involved, can lead to a possible win-win situation for both the parties, viz., the smartphone provider who does not want to lose the customers and these sovereignties who can avoid the worry of encrypted data leaving their jurisdiction.",
"title": ""
},
{
"docid": "ec501a4ff57e812a68def82f185f4d19",
"text": "The photosynthetic light-harvesting apparatus moves energy from absorbed photons to the reaction center with remarkable quantum efficiency. Recently, long-lived quantum coherence has been proposed to influence efficiency and robustness of photosynthetic energy transfer in light-harvesting antennae. The quantum aspect of these dynamics has generated great interest both because of the possibility for efficient long-range energy transfer and because biology is typically considered to operate entirely in the classical regime. Yet, experiments to date show only that coherence persists long enough that it can influence dynamics, but they have not directly shown that coherence does influence energy transfer. Here, we provide experimental evidence that interaction between the bacteriochlorophyll chromophores and the protein environment surrounding them not only prolongs quantum coherence, but also spawns reversible, oscillatory energy transfer among excited states. Using two-dimensional electronic spectroscopy, we observe oscillatory excited-state populations demonstrating that quantum transport of energy occurs in biological systems. The observed population oscillation suggests that these light-harvesting antennae trade energy reversibly between the protein and the chromophores. Resolving design principles evident in this biological antenna could provide inspiration for new solar energy applications.",
"title": ""
},
{
"docid": "7bf5aaa12c9525909f39dc8af8774927",
"text": "Certain deterministic non-linear systems may show chaotic behaviour. Time series derived from such systems seem stochastic when analyzed with linear techniques. However, uncovering the deterministic structure is important because it allows constructing more realistic and better models and thus improved predictive capabilities. This paper provides a review of two main key features of chaotic systems, the dimensions of their strange attractors and the Lyapunov exponents. The emphasis is on state space reconstruction techniques that are used to estimate these properties, given scalar observations. Data generated from equations known to display chaotic behaviour are used for illustration. A compilation of applications to real data from widely di erent elds is given. If chaos is found to be present, one may proceed to build non-linear models, which is the topic of the second paper in this series.",
"title": ""
},
{
"docid": "658ff079f4fc59ee402a84beecd77b55",
"text": "Mitochondria are master regulators of metabolism. Mitochondria generate ATP by oxidative phosphorylation using pyruvate (derived from glucose and glycolysis) and fatty acids (FAs), both of which are oxidized in the Krebs cycle, as fuel sources. Mitochondria are also an important source of reactive oxygen species (ROS), creating oxidative stress in various contexts, including in the response to bacterial infection. Recently, complex changes in mitochondrial metabolism have been characterized in mouse macrophages in response to varying stimuli in vitro. In LPS and IFN-γ-activated macrophages (M1 macrophages), there is decreased respiration and a broken Krebs cycle, leading to accumulation of succinate and citrate, which act as signals to alter immune function. In IL-4-activated macrophages (M2 macrophages), the Krebs cycle and oxidative phosphorylation are intact and fatty acid oxidation (FAO) is also utilized. These metabolic alterations in response to the nature of the stimulus are proving to be determinants of the effector functions of M1 and M2 macrophages. Furthermore, reprogramming of macrophages from M1 to M2 can be achieved by targeting metabolic events. Here, we describe the role that metabolism plays in macrophage function in infection and immunity, and propose that reprogramming with metabolic inhibitors might be a novel therapeutic approach for the treatment of inflammatory diseases.",
"title": ""
},
{
"docid": "421261547adfa6c47c6ced492e7e3463",
"text": "Purpose – Conventional street lighting systems in areas with a low frequency of passersby are online most of the night without purpose. The consequence is that a large amount of power is wasted meaninglessly. With the broad availability of flexible-lighting technology like light-emitting diode lamps and everywhere available wireless internet connection, fast reacting, reliably operating, and power-conserving street lighting systems become reality. The purpose of this work is to describe the Smart Street Lighting (SSL) system, a first approach to accomplish the demand for flexible public lighting systems. Design/methodology/approach – This work presents the SSL system, a framework developed for a dynamic switching of street lamps based on pedestrians’ locations and desired safety (or “fear”) zones. In the developed system prototype, each pedestrian is localized via his/her smartphone, periodically sending location and configuration information to the SSL server. For street lamp control, each and every lamppost is equipped with a ZigBee-based radio device, receiving control information from the SSL server via multi-hop routing. Findings – This research paper confirms that the application of the proposed SSL system has great potential to revolutionize street lighting, particularly in suburban areas with low-pedestrian frequency. More important, the broad utilization of SSL can easily help to overcome the regulatory requirement for CO2 emission reduction by switching off lampposts whenever they are not required. Research limitations/implications – The paper discusses in detail the implementation of SSL, and presents results of its application on a small scale. Experiments have shown that objects like trees can interrupt wireless communication between lampposts and that inaccuracy of global positioning system position detection can lead to unexpected lighting effects. Originality/value – This paper introduces the novel SSL framework, a system for fast, reliable, and energy efficient street lamp switching based on a pedestrian’s location and personal desires of safety. Both safety zone definition and position estimation in this novel approach is accomplished using standard smartphone capabilities. Suggestions for overcoming these issues are discussed in the last part of the paper.",
"title": ""
},
{
"docid": "a93e0e98e6367606a8bb72000b0bbe8a",
"text": "Programming by Demonstration: a Machine Learning Approach",
"title": ""
},
{
"docid": "29734bed659764e167beac93c81ce0a7",
"text": "Fashion classification encompasses the identification of clothing items in an image. The field has applications in social media, e-commerce, and criminal law. In our work, we focus on four tasks within the fashion classification umbrella: (1) multiclass classification of clothing type; (2) clothing attribute classification; (3) clothing retrieval of nearest neighbors; and (4) clothing object detection. We report accuracy measurements for clothing style classification (50.2%) and clothing attribute classification (74.5%) that outperform baselines in the literature for the associated datasets. We additionally report promising qualitative results for our clothing retrieval and clothing object detection tasks.",
"title": ""
},
{
"docid": "688848d25ef154a797f85e03987b795f",
"text": "In this paper, we propose an omnidirectional mobile mechanism with surface contact. This mechanism is expected to perform on rough terrain and weak ground at disaster sites. In the discussion on the drive mechanism, we explain how a two axes orthogonal drive transmission system is important and we propose a principle drive mechanism for omnidirectional motion. In addition, we demonstrated that the proposed drive mechanism has potential for omnidirectional movement on rough ground by conducting experiments with prototypes.",
"title": ""
}
] |
scidocsrr
|
1d30f7381f8928527f017b85057db2bf
|
Feature Detector Using Adaptive Accelerated Segment Test
|
[
{
"docid": "e32f77e31a452ae6866652ce69c5faaa",
"text": "The efficient detection of interesting features is a crucial step for various tasks in Computer Vision. Corners are favored cues due to their two dimensional constraint and fast algorithms to detect them. Recently, a novel corner detection approach, FAST, has been presented which outperforms previous algorithms in both computational performance and repeatability. We will show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space, and demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and so unlike FAST does not have to be adapted to a specific scene structure. We will also discuss how different test patterns affect the corner response of the accelerated segment test.",
"title": ""
},
{
"docid": "83ad3f9cce21b2f4c4f8993a3d418a44",
"text": "Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.",
"title": ""
}
] |
[
{
"docid": "6fab2f7c340b6edbffe30b061bcd991e",
"text": "A Majority-Inverter Graph (MIG) is a recently introduced logic representation form whose algebraic and Boolean properties allow for efficient logic optimization. In particular, when considering logic depth reduction, MIG algorithms obtained significantly superior synthesis results as compared to the state-of-the-art approaches based on AND-inverter graphs and commercial tools. In this paper, we present a new MIG optimization algorithm targeting size minimization based on functional hashing. The proposed algorithm makes use of minimum MIG representations which are precomputed for functions up to 4 variables using an approach based on Satisfiability Modulo Theories (SMT). Experimental results show that heavily-optimized MIGs can be further minimized also in size, thanks to our proposed methodology. When using the optimized MIGs as starting point for technology mapping, we were able to improve both depth and area for the arithmetic instances of the EPFL benchmarks beyond the current results achievable by state-of-the-art logic synthesis algorithms.",
"title": ""
},
{
"docid": "e9189d7d310a8c0a45cc1c59be6fbb2d",
"text": "The technological evolution emerges a unified (Industrial) Internet of Things network, where loosely coupled smart manufacturing devices build smart manufacturing systems and enable comprehensive collaboration possibilities that increase the dynamic and volatility of their ecosystems. On the one hand, this evolution generates a huge field for exploitation, but on the other hand also increases complexity including new challenges and requirements demanding for new approaches in several issues. One challenge is the analysis of such systems that generate huge amounts of (continuously generated) data, potentially containing valuable information useful for several use cases, such as knowledge generation, key performance indicator (KPI) optimization, diagnosis, predication, feedback to design or decision support. This work presents a review of Big Data analysis in smart manufacturing systems. It includes the status quo in research, innovation and development, next challenges, and a comprehensive list of potential use cases and exploitation possibilities.",
"title": ""
},
{
"docid": "5d6cb50477423bf9fc1ea6c27ad0f1b9",
"text": "We propose a framework for general probabilistic multi-step time series regression. Specifically, we exploit the expressiveness and temporal nature of Sequence-to-Sequence Neural Networks (e.g. recurrent and convolutional structures), the nonparametric nature of Quantile Regression and the efficiency of Direct Multi-Horizon Forecasting. A new training scheme, forking-sequences, is designed for sequential nets to boost stability and performance. We show that the approach accommodates both temporal and static covariates, learning across multiple related series, shifting seasonality, future planned event spikes and coldstarts in real life large-scale forecasting. The performance of the framework is demonstrated in an application to predict the future demand of items sold on Amazon.com, and in a public probabilistic forecasting competition to predict electricity price and load.",
"title": ""
},
{
"docid": "c55c339eb53de3a385df7d831cb4f24b",
"text": "Massive Open Online Courses (MOOCs) have gained tremendous popularity in the last few years. Thanks to MOOCs, millions of learners from all over the world have taken thousands of high-quality courses for free. Putting together an excellent MOOC ecosystem is a multidisciplinary endeavour that requires contributions from many different fields. Artificial intelligence (AI) and data mining (DM) are two such fields that have played a significant role in making MOOCs what they are today. By exploiting the vast amount of data generated by learners engaging in MOOCs, DM improves our understanding of the MOOC ecosystem and enables MOOC practitioners to deliver better courses. Similarly, AI, supported by DM, can greatly improve student experience and learning outcomes. In this survey paper, we first review the state-of-the-art artificial intelligence and data mining research applied to MOOCs, emphasising the use of AI and DM tools and techniques to improve student engagement, learning outcomes, and our understanding of the MOOC ecosystem. We then offer an overview of key trends and important research to carry out in the fields of AI and DM so that MOOCs can reach their full potential.",
"title": ""
},
{
"docid": "f52dca1ec4b77059639f6faf7c79746a",
"text": "We present an automatic approach to tree annotation in which basic nonterminal symbols are alternately split and merged to maximize the likelihood of a training treebank. Starting with a simple Xbar grammar, we learn a new grammar whose nonterminals are subsymbols of the original nonterminals. In contrast with previous work, we are able to split various terminals to different degrees, as appropriate to the actual complexity in the data. Our grammars automatically learn the kinds of linguistic distinctions exhibited in previous work on manual tree annotation. On the other hand, our grammars are much more compact and substantially more accurate than previous work on automatic annotation. Despite its simplicity, our best grammar achieves an F1 of 90.2% on the Penn Treebank, higher than fully lexicalized systems.",
"title": ""
},
{
"docid": "dcfc6f3c1eba7238bd6c6aa18dcff6df",
"text": "With the evaluation and simulation of long-term evolution/4G cellular network and hot discussion about new technologies or network architecture for 5G, the appearance of simulation and evaluation guidelines for 5G is in urgent need. This paper analyzes the challenges of building a simulation platform for 5G considering the emerging new technologies and network architectures. Based on the overview of evaluation methodologies issued for 4G candidates, challenges in 5G evaluation are formulated. Additionally, a cloud-based two-level framework of system-level simulator is proposed to validate the candidate technologies and fulfill the promising technology performance identified for 5G.",
"title": ""
},
{
"docid": "026408a6ad888ea0bcf298a23ef77177",
"text": "The microwave power transmission is an approach for wireless power transmission. As an important component of a microwave wireless power transmission systems, microwave rectennas are widely studied. A rectenna based on a microstrip dipole antenna and a microwave rectifier with high conversion efficiency were designed at 2.45 GHz. The dipole antenna achieved a gain of 5.2 dBi, a return loss greater than 10 dB, and a bandwidth of 20%. The microwave to DC (MW-DC) conversion efficiency of the rectifier was measured as 83% with 20 dBm input power and 600 Ω load. There are 72 rectennas to form an array with an area of 50 cm by 50 cm. The measured results show that the arrangement of the rectenna connection is an effective way to improve the total conversion efficiency, when the microwave power distribution is not uniform on rectenna array. The experimental results show that the highest microwave power transmission efficiency reaches 67.6%.",
"title": ""
},
{
"docid": "1a65b9d35bce45abeefe66882dcf4448",
"text": "Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.",
"title": ""
},
{
"docid": "5d15ba47aaa29f388328824fa592addc",
"text": "Breast cancer continues to be a significant public health problem in the world. The diagnosing mammography method is the most effective technology for early detection of the breast cancer. However, in some cases, it is difficult for radiologists to detect the typical diagnostic signs, such as masses and microcalcifications on the mammograms. This paper describes a new method for mammographic image enhancement and denoising based on wavelet transform and homomorphic filtering. The mammograms are acquired from the Faculty of Medicine of the University of Akdeniz and the University of Istanbul in Turkey. Firstly wavelet transform of the mammograms is obtained and the approximation coefficients are filtered by homomorphic filter. Then the detail coefficients of the wavelet associated with noise and edges are modeled by Gaussian and Laplacian variables, respectively. The considered coefficients are compressed and enhanced using these variables with a shrinkage function. Finally using a proposed adaptive thresholding the fine details of the mammograms are retained and the noise is suppressed. The preliminary results of our work indicate that this method provides much more visibility for the suspicious regions.",
"title": ""
},
{
"docid": "102e1718e03b3a4e96ee8c2212738792",
"text": "This paper introduces a new method for the rapid development of complex rule bases involving cue phrases for the purpose of classifying text segments. The method is based on Ripple-Down Rules, a knowledge acquisition method that proved very successful in practice for building medical expert systems and does not require a knowledge engineer. We implemented our system KAFTAN and demonstrate the applicability of our method to the task of classifying scientific citations. Building cue phrase rules in KAFTAN is easy and efficient. We demonstrate the effectiveness of our approach by presenting experimental results where our resulting classifier clearly outperforms previously built classifiers in the recent literature.",
"title": ""
},
{
"docid": "423f246065662358b1590e8f59a2cc55",
"text": "Caused by the rising interest in traffic surveillance for simulations and decision management many publications concentrate on automatic vehicle detection or tracking. Quantities and velocities of different car classes form the data basis for almost every traffic model. Especially during mass events or disasters a wide-area traffic monitoring on demand is needed which can only be provided by airborne systems. This means a massive amount of image information to be handled. In this paper we present a combination of vehicle detection and tracking which is adapted to the special restrictions given on image size and flow but nevertheless yields reliable information about the traffic situation. Combining a set of modified edge filters it is possible to detect cars of different sizes and orientations with minimum computing effort, if some a priori information about the street network is used. The found vehicles are tracked between two consecutive images by an algorithm using Singular Value Decomposition. Concerning their distance and correlation the features are assigned pairwise with respect to their global positioning among each other. Choosing only the best correlating assignments it is possible to compute reliable values for the average velocities.",
"title": ""
},
{
"docid": "84b9601738c4df376b42d6f0f6190f53",
"text": "Cloud Computing is one of the most important trend and newest area in the field of information technology in which resources (e.g. CPU and storage) can be leased and released by customers through the Internet in an on-demand basis. The adoption of Cloud Computing in Education and developing countries is real an opportunity. Although Cloud computing has gained popularity in Pakistan especially in education and industry, but its impact in Pakistan is still unexplored especially in Higher Education Department. Already published work investigated in respect of factors influencing on adoption of cloud computing but very few investigated said analysis in developing countries. The Higher Education Institutions (HEIs) of Punjab, Pakistan are still not focused to discover cloud adoption factors. In this study, we prepared cloud adoption model for Higher Education Institutions (HEIs) of Punjab, a survey was carried out from 900 students all over Punjab. The survey was designed based upon literature and after discussion and opinions of academicians. In this paper, 34 hypothesis were developed that affect the cloud computing adoption in HEIs and tested by using powerful statistical analysis tools i.e. SPSS and SmartPLS. Statistical findings shows that 84.44% of students voted in the favor of cloud computing adoption in their colleges, while 99% supported Reduce Cost as most important factor in cloud adoption.",
"title": ""
},
{
"docid": "f24c9f07945572ed467f397e4274060e",
"text": "Scholarly digital libraries have become an important source of bibliographic records for scientific communities. Author name search is one of the most common query exercised in digital libraries. The name ambiguity problem in the context of author search in digital libraries, arising from multiple authors sharing the same name, poses many challenges. A number of name disambiguation methods have been proposed in the literature so far. A variety of bibliographic attributes have been considered in these methods. However, hardly any effort has been made to assess the potential contribution of these attributes. We, for the first time, evaluate the potential strength and/or weaknesses of these attributes by a rigorous course of experiments on a large data set. We also explore the potential utility of some attributes from different perspective. A close look reveals that most of the earlier work require one or more attributes which are difficult to obtain in practical applications. Based on this empirical study, we identify three very common and easy to access attributes and propose a two-step hierarchical clustering technique to solve name ambiguity using these attributes only. Experimental results on data set extracted from a popular digital library show that the proposed method achieves significantly high level of accuracy (> 90%) for most of the instances.",
"title": ""
},
{
"docid": "279302300cbdca5f8d7470532928f9bd",
"text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) of fer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially take s into account the existing bounds on the generalization erro r for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing crossvalidation and to other existing algorithms for feature selection.",
"title": ""
},
{
"docid": "0ce556418f6557d86c59f178a206cd11",
"text": "The efficiency of decision processes which can be divided into two stages has been measured for the whole process as well as for each stage independently by using the conventional data envelopment analysis (DEA) methodology in order to identify the causes of inefficiency. This paper modifies the conventional DEA model by taking into account the series relationship of the two sub-processes within the whole process. Under this framework, the efficiency of the whole process can be decomposed into the product of the efficiencies of the two sub-processes. In addition to this sound mathematical property, the case of Taiwanese non-life insurance companies shows that some unusual results which have appeared in the independent model do not exist in the relational model. In other words, the relational model developed in this paper is more reliable in measuring the efficiencies and consequently is capable of identifying the causes of inefficiency more accurately. Based on the structure of the model, the idea of efficiency decomposition can be extended to systems composed of multiple stages connected in series. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "67ba6914f8d1a50b7da5024567bc5936",
"text": "Abstract—Braille alphabet is an important tool that enables visually impaired individuals to have a comfortable life like those who have normal vision. For this reason, new applications related to the Braille alphabet are being developed. In this study, a new Refreshable Braille Display was developed to help visually impaired individuals learn the Braille alphabet easier. By means of this system, any text downloaded on a computer can be read by the visually impaired individual at that moment by feeling it by his/her hands. Through this electronic device, it was aimed to make learning the Braille alphabet easier for visually impaired individuals with whom the necessary tests were conducted.",
"title": ""
},
{
"docid": "55370f9487be43f2fbd320c903005185",
"text": "Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size and which are perceptually equivalent to the sample. The two main approaches are statisticsbased methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; then, a random sampling conditioned to this signature produces genuinely different texture images. The second class boils down to a clever “copy-paste” procedure, which stitches together large regions of the sample. Hybrid methods try to combines ideas from both approaches to avoid their hurdles. Current methods, including the recent CNN approaches, are able to produce impressive synthesis on various kinds of textures. Nevertheless, most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures the results of state-of-the-art methods degrade rapidly.",
"title": ""
},
{
"docid": "5e5ffa7890dd2e16cff9dbc9592f162e",
"text": "Spin-transfer torque magnetic memory (STT-MRAM) is currently under intense academic and industrial development, since it features non-volatility, high write and read speed and high endurance. In this work, we show that when used in a non-conventional regime, it can additionally act as a stochastic memristive device, appropriate to implement a “synaptic” function. We introduce basic concepts relating to spin-transfer torque magnetic tunnel junction (STT-MTJ, the STT-MRAM cell) behavior and its possible use to implement learning-capable synapses. Three programming regimes (low, intermediate and high current) are identified and compared. System-level simulations on a task of vehicle counting highlight the potential of the technology for learning systems. Monte Carlo simulations show its robustness to device variations. The simulations also allow comparing system operation when the different programming regimes of STT-MTJs are used. In comparison to the high and low current regimes, the intermediate current regime allows minimization of energy consumption, while retaining a high robustness to device variations. These results open the way for unexplored applications of STT-MTJs in robust, low power, cognitive-type systems.",
"title": ""
},
{
"docid": "134d2671fa44793c8969acb50c71c5c0",
"text": "OBJECTIVES\nTransferrin is a glycosylated protein responsible for transporting iron, an essential metal responsible for proper fetal development. Tobacco is a heavily used xenobiotic having a negative impact on the human body and pregnancy outcomes. Aims of this study was to examine the influence of tobacco smoking on transferrin sialic acid residues and their connection with fetal biometric parameters in women with iron-deficiency.\n\n\nMETHODS\nThe study involved 173 samples from pregnant women, smokers and non-smokers, iron deficient and not. Transferrin sialylation was determined by capillary electrophoresis. The cadmium (Cd) level was measured by atomic absorption and the sialic acid concentration by the resorcinol method.\n\n\nRESULTS\nWomen with iron deficiencies who smoked gave birth earlier than non-smoking, non-iron-deficient women. The Cd level, but not the cotinine level, was positively correlated with transferrin sialylation in the blood of iron-deficient women who smoked; 3-, 4-, 5- and 6-sialoTf correlated negatively with fetal biometric parameters in the same group.\n\n\nCONCLUSION\nIt has been shown the relationship between Cd from tobacco smoking and fetal biometric parameters observed only in the iron deficient group suggests an additive effect of these two factors, and indicate that mothers with anemia may be more susceptible to Cd toxicity and disturbed fetal development.",
"title": ""
},
{
"docid": "ab0c80a10d26607134828c6b350089aa",
"text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.",
"title": ""
}
] |
scidocsrr
|
a5f8eb914b8230b0374a716ebe7c939c
|
Artificial Intelligence – Consumers and Industry Impact
|
[
{
"docid": "2fc294f2ab50b917f36155c0b9e1847d",
"text": "Social and cultural conventions are an often-neglected aspect of intelligent-machine development.",
"title": ""
}
] |
[
{
"docid": "f8209a4b6cb84b63b1f034ec274fe280",
"text": "A major challenge in topic classification (TC) is the high dimensionality of the feature space. Therefore, feature extraction (FE) plays a vital role in topic classification in particular and text mining in general. FE based on cosine similarity score is commonly used to reduce the dimensionality of datasets with tens or hundreds of thousands of features, which can be impossible to process further. In this study, TF-IDF term weighting is used to extract features. Selecting relevant features and determining how to encode them for a learning machine method have a vast impact on the learning machine methods ability to extract a good model. Two different weighting methods (TF-IDF and TF-IDF Global) were used and tested on the Reuters-21578 text categorization test collection. The obtained results emerged a good candidate for enhancing the performance of English topics FE. Simulation results the Reuters-21578 text categorization show the superiority of the proposed algorithm.",
"title": ""
},
{
"docid": "8d9fbeda9f6a77e927ac14b0d426d1d3",
"text": "This paper describes a new detector for finding perspective rectangle structural features that runs in real-time. Given the vanishing points within an image, the algorithm recovers the edge points that are aligned along the vanishing lines. We then efficiently recover the intersections of pairs of lines corresponding to different vanishing points. The detector has been designed for robot visual mapping, and we present the application of this detector to real-time stereo matching and reconstruction over a corridor sequence for this goal.",
"title": ""
},
{
"docid": "e8ebec3b64e05cad3ab4c9b3d2bfa191",
"text": "Multidimensional databases have recently gained widespread acceptance in the commercial world for supporting on-line analytical processing (OLAP) applications. We propose a hypercube-based data model and a few algebraic operations that provide semantic foundation to multidimensional databases and extend their current functionality. The distinguishing feature of the proposed model is the symmetric treatment not only of all dimensions but also measures. The model also is very exible in that it provides support for multiple hierarchies along each dimension and support for adhoc aggregates. The proposed operators are composable, reorderable, and closed in application. These operators are also minimal in the sense that none can be expressed in terms of others nor can any one be dropped without sacri cing functionality. They make possible the declarative speci cation and optimization of multidimensional database queries that are currently speci ed operationally. The operators have been designed to be translated to SQL and can be implemented either on top of a relational database system or within a special purpose multidimensional database engine. In e ect, they provide an algebraic application programming interface (API) that allows the separation of the frontend from the backend. Finally, the proposed model provides a framework in which to study multidimensional databases and opens several new research problems. Current Address: Oracle Corporation, Redwood City, California. Current Address: University of California, Berkeley, California.",
"title": ""
},
{
"docid": "afffadc35ac735d11e1a415c93d1c39f",
"text": "We examine self-control problems — modeled as time-inconsistent, presentbiased preferences—in a model where a person must do an activity exactly once. We emphasize two distinctions: Do activities involve immediate costs or immediate rewards, and are people sophisticated or naive about future self-control problems? Naive people procrastinate immediate-cost activities and preproperate—do too soon—immediate-reward activities. Sophistication mitigates procrastination, but exacerbates preproperation. Moreover, with immediate costs, a small present bias can severely harm only naive people, whereas with immediate rewards it can severely harm only sophisticated people. Lessons for savings, addiction, and elsewhere are discussed. (JEL A12, B49, C70, D11, D60, D74, D91, E21)",
"title": ""
},
{
"docid": "bbf5561f88f31794ca95dd991c074b98",
"text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.",
"title": ""
},
{
"docid": "c158fbbcf592ff372d0d317494f79537",
"text": "The concept of no- or minimal-preparation veneers is more than 25 years old, yet there is no classification system categorizing the extent of preparation for different veneer treatments. The lack of veneer preparation classifications creates misunderstanding and miscommunication with patients and within the dental profession. Such a system could be indicated in various clinical scenarios and would benefit dentists and patients, providing a guide for conservatively preparing and placing veneers. A classification system is proposed to divide preparation and veneering into reduction--referred to as space requirement, working thickness, or material room--volume of enamel remaining, and percentage of dentin exposed. Using this type of metric provides an accurate measurement system to quantify tooth structure removal, with preferably no reduction, on a case-by-case basis, dissolve uncertainty, and aid with multiple aspects of treatment planning and communication.",
"title": ""
},
{
"docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "08c6752ef763f74eb63b2546889f0860",
"text": "Subspace clustering refers to the problem of grouping data points that lie in a union of low-dimensional subspaces. One successful approach for solving this problem is sparse subspace clustering, which is based on a sparse representation of the data. In this paper, we extend SSC to non-linear manifolds by using the kernel trick. We show that the alternating direction method of multipliers can be used to efficiently find kernel sparse representations. Various experiments on synthetic as well real datasets show that non-linear mappings lead to sparse representation that give better clustering results than state-of-the-art methods.",
"title": ""
},
{
"docid": "db433a01dd2a2fd80580ffac05601f70",
"text": "While depth tends to improve network performances, it also m akes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed a t obtaining small and fast-to-execute models, and it has shown that a student netw ork could imitate the soft output of a larger teacher network or ensemble of networ ks. In this paper, we extend this idea to allow the training of a student that is d eeper and thinner than the teacher, using not only the outputs but also the inte rmediate representations learned by the teacher as hints to improve the traini ng process and final performance of the student. Because the student intermedia te hidden layer will generally be smaller than the teacher’s intermediate hidde n layer, additional parameters are introduced to map the student hidden layer to th e prediction of the teacher hidden layer. This allows one to train deeper studen s that can generalize better or run faster, a trade-off that is controlled by the ch osen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teache r network.",
"title": ""
},
{
"docid": "557b718f65e68f3571302e955ddb74d7",
"text": "Synthetic aperture radar (SAR) has been an unparalleled tool in cloudy and rainy regions as it allows observations throughout the year because of its all-weather, all-day operation capability. In this paper, the influence of Wenchuan Earthquake on the Sichuan Giant Panda habitats was evaluated for the first time using SAR interferometry and combining data from C-band Envisat ASAR and L-band ALOS PALSAR data. Coherence analysis based on the zero-point shifting indicated that the deforestation process was significant, particularly in habitats along the Min River approaching the epicenter after the natural disaster, and as interpreted by the vegetation deterioration from landslides, avalanches and debris flows. Experiments demonstrated that C-band Envisat ASAR data were sensitive to vegetation, resulting in an underestimation of deforestation; in contrast, L-band PALSAR data were capable of evaluating the deforestation process owing to a better penetration and the significant coherence gain on damaged forest areas. The percentage of damaged forest estimated by PALSAR decreased from 20.66% to 17.34% during 2009–2010, implying an approximate 3% recovery rate of forests in the earthquake OPEN ACCESS Remote Sens. 2014, 6 6284 impacted areas. This study proves that long-wavelength SAR interferometry is promising for rapid assessment of disaster-induced deforestation, particularly in regions where the optical acquisition is constrained.",
"title": ""
},
{
"docid": "a1ccca52f1563a2e208afcaa37e209d1",
"text": "BACKGROUND\nImplicit biases involve associations outside conscious awareness that lead to a negative evaluation of a person on the basis of irrelevant characteristics such as race or gender. This review examines the evidence that healthcare professionals display implicit biases towards patients.\n\n\nMETHODS\nPubMed, PsychINFO, PsychARTICLE and CINAHL were searched for peer-reviewed articles published between 1st March 2003 and 31st March 2013. Two reviewers assessed the eligibility of the identified papers based on precise content and quality criteria. The references of eligible papers were examined to identify further eligible studies.\n\n\nRESULTS\nForty two articles were identified as eligible. Seventeen used an implicit measure (Implicit Association Test in fifteen and subliminal priming in two), to test the biases of healthcare professionals. Twenty five articles employed a between-subjects design, using vignettes to examine the influence of patient characteristics on healthcare professionals' attitudes, diagnoses, and treatment decisions. The second method was included although it does not isolate implicit attitudes because it is recognised by psychologists who specialise in implicit cognition as a way of detecting the possible presence of implicit bias. Twenty seven studies examined racial/ethnic biases; ten other biases were investigated, including gender, age and weight. Thirty five articles found evidence of implicit bias in healthcare professionals; all the studies that investigated correlations found a significant positive relationship between level of implicit bias and lower quality of care.\n\n\nDISCUSSION\nThe evidence indicates that healthcare professionals exhibit the same levels of implicit bias as the wider population. The interactions between multiple patient characteristics and between healthcare professional and patient characteristics reveal the complexity of the phenomenon of implicit bias and its influence on clinician-patient interaction. The most convincing studies from our review are those that combine the IAT and a method measuring the quality of treatment in the actual world. Correlational evidence indicates that biases are likely to influence diagnosis and treatment decisions and levels of care in some circumstances and need to be further investigated. Our review also indicates that there may sometimes be a gap between the norm of impartiality and the extent to which it is embraced by healthcare professionals for some of the tested characteristics.\n\n\nCONCLUSIONS\nOur findings highlight the need for the healthcare profession to address the role of implicit biases in disparities in healthcare. More research in actual care settings and a greater homogeneity in methods employed to test implicit biases in healthcare is needed.",
"title": ""
},
{
"docid": "9cb832657be4d4d80682c1a49249a319",
"text": "0377-2217/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.ejor.2010.08.023 ⇑ Corresponding author. Tel.: +47 73593602; fax: + E-mail address: [email protected] This paper considers a maritime inventory routing problem faced by a major cement producer. A heterogeneous fleet of bulk ships transport multiple non-mixable cement products from producing factories to regional silo stations along the coast of Norway. Inventory constraints are present both at the factories and the silos, and there are upper and lower limits for all inventories. The ship fleet capacity is limited, and in peak periods the demand for cement products at the silos exceeds the fleet capacity. In addition, constraints regarding the capacity of the ships’ cargo holds, the depth of the ports and the fact that different cement products cannot be mixed must be taken into consideration. A construction heuristic embedded in a genetic algorithmic framework is developed. The approach adopted is used to solve real instances of the problem within reasonable solution time and with good quality solutions. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7334904bb8b95fbf9668c388d30d4d72",
"text": "Write-optimized data structures like Log-Structured Merge-tree (LSM-tree) and its variants are widely used in key-value storage systems like Big Table and Cassandra. Due to deferral and batching, the LSM-tree based storage systems need background compactions to merge key-value entries and keep them sorted for future queries and scans. Background compactions play a key role on the performance of the LSM-tree based storage systems. Existing studies about the background compaction focus on decreasing the compaction frequency, reducing I/Os or confining compactions on hot data key-ranges. They do not pay much attention to the computation time in background compactions. However, the computation time is no longer negligible, and even the computation takes more than 60% of the total compaction time in storage systems using flash based SSDs. Therefore, an alternative method to speedup the compaction is to make good use of the parallelism of underlying hardware including CPUs and I/O devices. In this paper, we analyze the compaction procedure, recognize the performance bottleneck, and propose the Pipelined Compaction Procedure (PCP) to better utilize the parallelism of CPUs and I/O devices. Theoretical analysis proves that PCP can improve the compaction bandwidth. Furthermore, we implement PCP in real system and conduct extensive experiments. The experimental results show that the pipelined compaction procedure can increase the compaction bandwidth and storage system throughput by 77% and 62% respectively.",
"title": ""
},
{
"docid": "87bd2fc53cbe92823af786e60e82f250",
"text": "Cyc is a bold attempt to assemble a massive knowledge base (on the order of 108 axioms) spanning human consensus knowledge. This article examines the need for such an undertaking and reviews the authos' efforts over the past five years to begin its construction. The methodology and history of the project are briefly discussed, followed by a more developed treatment of the current state of the representation language used (epistemological level), techniques for efficient inferencing and default reasoning (heuristic level), and the content and organization of the knowledge base.",
"title": ""
},
{
"docid": "6960f780dfc491c6cdcbb6c53fd32363",
"text": "We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"title": ""
},
{
"docid": "baafff8270bf3d33d70544130968f6d3",
"text": "The authors present a new algorithm for identifying the distribution of different material types in volumetric datasets such as those produced with magnetic resonance imaging (MRI) or computed tomography (CT). Because the authors allow for mixtures of materials and treat voxels as regions, their technique reduces errors that other classification techniques can create along boundaries between materials and is particularly useful for creating accurate geometric models and renderings from volume data. It also has the potential to make volume measurements more accurately and classifies noisy, low-resolution data well. There are two unusual aspects to the authors' approach. First, they assume that, due to partial-volume effects, or blurring, voxels can contain more than one material, e.g., both muscle and fat; the authors compute the relative proportion of each material in the voxels. Second, they incorporate information from neighboring voxels into the classification process by reconstructing a continuous function, /spl rho/(x), from the samples and then looking at the distribution of values that /spl rho/(x) takes on within the region of a voxel. This distribution of values is represented by a histogram taken over the region of the voxel; the mixture of materials that those values measure is identified within the voxel using a probabilistic Bayesian approach that matches the histogram by finding the mixture of materials within each voxel most likely to have created the histogram. The size of regions that the authors classify is chosen to match the sparing of the samples because the spacing is intrinsically related to the minimum feature size that the reconstructed continuous function can represent.",
"title": ""
},
{
"docid": "f4da31cf831dd3db5f3063c5ea1fca62",
"text": "SUMMARY Backtrack algorithms are applicable to a wide variety of problems. An efficient but readable version of such an algorithm is presented and its use in the problem of finding the maximal common subgraph of two graphs is described. Techniques available in this application area for ordering and pruning the backtrack search are discussed. This algorithm has been used successfully as a component of a program for analysing chemical reactions and enumerating the bond changes which have taken place.",
"title": ""
},
{
"docid": "32fad05dacb750e5539c66bb222b0e09",
"text": "Radio Frequency Identification (RFID) technology has received considerable attention from practitioners, driven by mandates from major retailers and the United States Department of Defense. RFID technology promises numerous benefits in the supply chain, such as increased visibility, security and efficiency. Despite such attentions and the anticipated benefits, RFID is not well-understood and many problems exist in the adoption and implementation of RFID. The purpose of this paper is to introduce RFID technology to practitioners and academicians by systematically reviewing the relevant literature, discussing how RFID systems work, their advantages, supply chain impacts, and the implementation challenges and the corresponding strategies, in the hope of providing guidance for practitioners in the implementation of RFID technology and offering a springboard for academicians to conduct future research in this area.",
"title": ""
},
{
"docid": "b0a24593396ef5f8029c560f87a07c45",
"text": "BACKGROUND\nYouth with disabilities are at risk of poor health outcomes as they transition to adult healthcare. Although space and place play an important role in accessing healthcare little is known about the spatial aspects of youth's transition from pediatric to adult healthcare.\n\n\nOBJECTIVE\nTo understand the spaces of well-being as youth with physical disabilities transition from pediatric to adult healthcare.\n\n\nMETHODS\nThis study draws on a qualitative design involving 63 in-depth interviews with young adults (n = 22), parents (n = 17), and clinicians (n = 24) involved in preparing young adults for transition. All participants were recruited from a pediatric rehabilitation hospital within a metropolitan area of Ontario, Canada. Data were analyzed using an inductive content analysis approach that was informed by the spaces of well-being framework.\n\n\nRESULTS\nThe results highlight that within the 'spaces of capability' those with more disability-related complications and/or those using a mobility device encountered challenges in their transition to adult care. The 'spaces of security' influencing youth's well-being during their transition included: temporary (in)security while they were away at college, and health (in)security. Most of the focus on youth's transition included 'integrative spaces', which can enhance or hinder their well-being. Such spaces included: spatial (dis)connections (distance to access care), embeddedness (family and community), physical access, and distance. Meanwhile, therapeutic spaces involved having spaces that youth were satisfied with and enhanced their well-being as they transitioned to adult care.\n\n\nCONCLUSIONS\nIn applying the spaces of well-being framework, the findings showed that youth had varied experiences regarding spaces of capability, security, integrative, and therapeutic spaces.",
"title": ""
},
{
"docid": "de84b1b739da8e272f8bf88889b1c4ad",
"text": "Stock market is the most popular investment scheme promising high returns albeit some risks. An intelligent stock prediction model would thus be desirable. So, this paper aims at surveying recent literature in the area of Neural Network, Hidden Markov Model and Support Vector Machine used to predict the stock market fluctuation. Neural networks and SVM are identified to be the leading machine learning techniques in stock market prediction area. Also, a model for predicting stock market using HMM is presented. Traditional techniques lack in covering stock price fluctuations and so new approaches have been developed for analysis of stock price variations. Markov Model is one such recent approach promising better results. In this paper a predicting method using Hidden Markov Model is proposed to provide better accuracy and a comparison of the existing techniques is also done.",
"title": ""
}
] |
scidocsrr
|
99ec25d15b4010422aae1ab34bb01b55
|
Towards an Engine for Lifelong Interactive Knowledge Learning in Human-Machine Conversations
|
[
{
"docid": "ffa5989436b8783314d60f7fb47c447a",
"text": "A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or sentence level (question answering, machine translation). This kind of supervision is not realistic of how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of [30] and large-scale question answering from [4]. We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher’s response. In particular, a surprising result is that it can learn to answer questions correctly without any reward-based supervision at all.",
"title": ""
},
{
"docid": "75e14669377727660391ab3870d1627e",
"text": "Knowledge base (KB) completion aims to infer missing facts from existing ones in a KB. Among various approaches, path ranking (PR) algorithms have received increasing attention in recent years. PR algorithms enumerate paths between entitypairs in a KB and use those paths as features to train a model for missing fact prediction. Due to their good performances and high model interpretability, several methods have been proposed. However, most existing methods suffer from scalability (high RAM consumption) and feature explosion (trains on an exponentially large number of features) problems. This paper proposes a Context-aware Path Ranking (C-PR) algorithm to solve these problems by introducing a selective path exploration strategy. C-PR learns global semantics of entities in the KB using word embedding and leverages the knowledge of entity semantics to enumerate contextually relevant paths using bidirectional random walk. Experimental results on three large KBs show that the path features (fewer in number) discovered by C-PR not only improve predictive performance but also are more interpretable than existing baselines.",
"title": ""
}
] |
[
{
"docid": "7dbb7d378eae5c4b77076aa9504ba871",
"text": "The authors present a Markov random field model which allows realistic edge modeling while providing stable maximum a posterior (MAP) solutions. The model, referred to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for map estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.",
"title": ""
},
{
"docid": "cc291cfa92227d97784702bd108edae1",
"text": "Graphene's optical properties in the infrared and terahertz can be tailored and enhanced by patterning graphene into periodic metamaterials with sub-wavelength feature sizes. Here we demonstrate polarization-sensitive and gate-tunable photodetection in graphene nanoribbon arrays. The long-lived hybrid plasmon-phonon modes utilized are coupled excitations of electron density oscillations and substrate (SiO2) surface polar phonons. Their excitation by s-polarization leads to an in-resonance photocurrent, an order of magnitude larger than the photocurrent observed for p-polarization, which excites electron-hole pairs. The plasmonic detectors exhibit photo-induced temperature increases up to four times as large as comparable two-dimensional graphene detectors. Moreover, the photocurrent sign becomes polarization sensitive in the narrowest nanoribbon arrays owing to differences in decay channels for photoexcited hybrid plasmon-phonons and electrons. Our work provides a path to light-sensitive and frequency-selective photodetectors based on graphene's plasmonic excitations.",
"title": ""
},
{
"docid": "61b6021f99649010437096abc13119ed",
"text": "Given electroencephalogram (EEG) data measured from several subjects under the same conditions, our goal is to estimate common task-related bases in a linear model that capture intra-subject variations as well as inter-subject variations. Such bases capture the common phenomenon in group data, which is a core of group analysis. In this paper we present a method of nonnegative matrix factorization (NMF) that is well suited to analyzing EEG data of multiple subjects. The method is referred to as group nonnegative matrix factorization (GNMF) where we seek task-related common bases reflecting both intra-subject and inter-subject variations, as well as bases involving individual characteristics. We compare GNMF with NMF and some modified NMFs, in the task of learning spectral features from EEG data. Experiments on brain computer interface (BCI) competition data indicate that GNMF improves the EEG classification performance. In addition, we also show that GNMF is useful in the task of subject-tosubject transfer where the prediction for an unseen subject is performed based on a linear model learned from different subjects in the same group.",
"title": ""
},
{
"docid": "2ab6bc212e45c3d5775e760e5a01c0ef",
"text": "The face recognition systems are used to recognize the person by using merely a person’s image. The face detection scheme is the primary method which is used to extract the region of interest (ROI). The ROI is further processed under the face recognition scheme. In the proposed model, we are going to use the cross-correlation algorithm along with the viola jones for the purpose of face recognition to recognize the person. The proposed model is proposed using the Cross-correlation algorithm along with cross correlation scheme in order to recognize the person by evaluating the facial features.",
"title": ""
},
{
"docid": "8b971925c3a9a70b6c3eaffedf5a3985",
"text": "We consider the NP-complete problem of finding an enclosing rectangle of minimum area that will contain a given a set of rectangles. We present two different constraintsatisfaction formulations of this problem. The first searches a space of absolute placements of rectangles in the enclosing rectangle, while the other searches a space of relative placements between pairs of rectangles. Both approaches dramatically outperform previous approaches to optimal rectangle packing. For problems where the rectangle dimensions have low precision, such as small integers, absolute placement is generally more efficient, whereas for rectangles with high-precision dimensions, relative placement will be more effective. In two sets of experiments, we find both the smallest rectangles and squares that can contain the set of squares of size 1 × 1, 2 × 2, . . . ,N × N , for N up to 27. In addition, we solve an open problem dating to 1966, concerning packing the set of consecutive squares up to 24 × 24 in a square of size 70 × 70. Finally, we find the smallest enclosing rectangles that can contain a set of unoriented rectangles of size 1 × 2, 2 × 3, 3 × 4, . . . ,N × (N + 1), for N up to 25.",
"title": ""
},
{
"docid": "59a98a769d8aa5565f522369e65f02fc",
"text": "Common nonlinear activation functions used in neural networks can cause training difficulties due to the saturation behavior of the activation function, which may hide dependencies that are not visible to vanilla-SGD (using first order gradients only). Gating mechanisms that use softly saturating activation functions to emulate the discrete switching of digital logic circuits are good examples of this. We propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradient. Large noise will dominate the noise-free gradient and allow stochastic gradient descent to explore more. By adding noise only to the problematic parts of the activation function, we allow the optimization procedure to explore the boundary between the degenerate (saturating) and the well-behaved parts of the activation function. We also establish connections to simulated annealing, when the amount of noise is annealed down, making it easier to optimize hard objective functions. We find experimentally that replacing such saturating activation functions by noisy variants helps training in many contexts, yielding state-of-the-art or competitive results on different datasets and task, especially when training seems to be the most difficult, e.g., when curriculum learning is necessary to obtain good results.",
"title": ""
},
{
"docid": "20edbb4e0d7ba85da7427b4f6b8c28d9",
"text": "The use of visual models such as pictures, diagrams and animations in science education is increasing. This is because of the complex nature associated with the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties associated with various concepts especially those that exist at a microscopic level, such as DNA, the gene and meiosis as well as those that exist in relatively large time scales such as evolution. However the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization answering the question \"how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching and studying of visual literacy in science education?\" Based on various theories on cognitive processes during learning for science and general education the author argues that the theoretical process of visualization consists of three stages, namely, Internalization of Visual Models, Conceptualization of Visual Models and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and the stages of visualization in science education are discussed.",
"title": ""
},
{
"docid": "bd100b77d129163277b9ea6225fd3af3",
"text": "Spatial interactions (or flows), such as population migration and disease spread, naturally form a weighted location-to-location network (graph). Such geographically embedded networks (graphs) are usually very large. For example, the county-to-county migration data in the U.S. has thousands of counties and about a million migration paths. Moreover, many variables are associated with each flow, such as the number of migrants for different age groups, income levels, and occupations. It is a challenging task to visualize such data and discover network structures, multivariate relations, and their geographic patterns simultaneously. This paper addresses these challenges by developing an integrated interactive visualization framework that consists three coupled components: (1) a spatially constrained graph partitioning method that can construct a hierarchy of geographical regions (communities), where there are more flows or connections within regions than across regions; (2) a multivariate clustering and visualization method to detect and present multivariate patterns in the aggregated region-to-region flows; and (3) a highly interactive flow mapping component to map both flow and multivariate patterns in the geographic space, at different hierarchical levels. The proposed approach can process relatively large data sets and effectively discover and visualize major flow structures and multivariate relations at the same time. User interactions are supported to facilitate the understanding of both an overview and detailed patterns.",
"title": ""
},
{
"docid": "3b88cd186023cc5d4a44314cdb521d0e",
"text": "RATIONALE, AIMS AND OBJECTIVES\nThis article aims to provide evidence to guide multidisciplinary clinical practitioners towards successful initiation and long-term maintenance of oral feeding in preterm infants, directed by the individual infant maturity.\n\n\nMETHOD\nA comprehensive review of primary research, explorative work, existing guidelines, and evidence-based opinions regarding the transition to oral feeding in preterm infants was studied to compile this document.\n\n\nRESULTS\nCurrent clinical hospital practices are described and challenged and the principles of cue-based feeding are explored. \"Traditional\" feeding regimes use criteria, such as the infant's weight, gestational age and being free of illness, and even caregiver intuition to initiate or delay oral feeding. However, these criteria could compromise the infant and increase anxiety levels and frustration for parents and caregivers. Cue-based feeding, opposed to volume-driven feeding, lead to improved feeding success, including increased weight gain, shorter hospital stay, fewer adverse events, without increasing staff workload while simultaneously improving parents' skills regarding infant feeding. Although research is available on cue-based feeding, an easy-to-use clinical guide for practitioners could not be found. A cue-based infant feeding regime, for clinical decision making on providing opportunities to support feeding success in preterm infants, is provided in this article as a framework for clinical reasoning.\n\n\nCONCLUSIONS\nCue-based feeding of preterm infants requires care providers who are trained in and sensitive to infant cues, to ensure optimal feeding success. An easy-to-use clinical guideline is presented for implementation by multidisciplinary team members. This evidence-based guideline aims to improve feeding outcomes for the newborn infant and to facilitate the tasks of nurses and caregivers.",
"title": ""
},
{
"docid": "13503c2cb633e162f094727df62092d3",
"text": "In this article, we investigate word sense distributions in noun compounds (NCs). Our primary goal is to disambiguate the word sense of component words in NCs, based on investigation of “semantic collocation” between them. We use sense collocation and lexical substitution to build supervised and unsupervised word sense disambiguation (WSD) classifiers, and show our unsupervised learner to be superior to a benchmark WSD system. Further, we develop a word sense-based approach to interpreting the semantic relations in NCs.",
"title": ""
},
{
"docid": "427796f5c37e41363c1664b47596eacf",
"text": "A trading and portfolio management system called QSR is proposed. It uses Q-learning and Sharpe ratio maximization algorithm. We use absolute proot and relative risk-adjusted proot as performance function to train the system respectively, and employ a committee of two networks to do the testing. The new proposed algorithm makes use of the advantages of both parts and can be used in a more general case. We demonstrate with experimental results that the proposed approach generates appreciable proots from trading in the foreign exchange markets.",
"title": ""
},
{
"docid": "638e0059bf390b81de2202c22427b937",
"text": "Oral and gastrointestinal mucositis is a toxicity of many forms of radiotherapy and chemotherapy. It has a significant impact on health, quality of life and economic outcomes that are associated with treatment. It also indirectly affects the success of antineoplastic therapy by limiting the ability of patients to tolerate optimal tumoricidal treatment. The complex pathogenesis of mucositis has only recently been appreciated and reflects the dynamic interactions of all of the cell and tissue types that comprise the epithelium and submucosa. The identification of the molecular events that lead to treatment-induced mucosal injury has provided targets for mechanistically based interventions to prevent and treat mucositis.",
"title": ""
},
{
"docid": "9117bb0ed6ab5fb573f16b5a09798711",
"text": "When does knowledge transfer benefit performance? Combining field data from a global consulting firm with an agent-based model, we examine how efforts to supplement one’s knowledge from coworkers interact with individual, organizational, and environmental characteristics to impact organizational performance. We find that once cost and interpersonal exchange are included in the analysis, the impact of knowledge transfer is highly contingent. Depending on specific characteristics and circumstances, knowledge transfer can better, matter little to, or even harm performance. Three illustrative studies clarify puzzling past results and offer specific boundary conditions: (1) At the individual level, better organizational support for employee learning diminishes the benefit of knowledge transfer for organizational performance. (2) At the organization level, broader access to organizational memory makes global knowledge transfer less beneficial to performance. (3) When the organizational environment becomes more turbulent, the organizational performance benefits of knowledge transfer decrease. The findings imply that organizations may forgo investments in both organizational memory and knowledge exchange, that wide-ranging knowledge exchange may be unimportant or even harmful for performance, and that organizations operating in turbulent environments may find that investment in knowledge exchange undermines performance rather than enhances it. At a time when practitioners are urged to make investments in facilitating knowledge transfer and collaboration, appreciation of the complex relationship between knowledge transfer and performance will help in reaping benefits while avoiding liabilities.",
"title": ""
},
{
"docid": "4e54ca27e8f28deefac8219cb8d02d16",
"text": "The design, simulation studies, and experimental verification of an electrically small, low-profile, broadside-radiating Huygens circularly polarized (HCP) antenna are reported. To realize its unique circular polarization cardioid-shaped radiation characteristics in a compact structure, two pairs of the metamaterial-inspired near-field resonant parasitic elements, the Egyptian axe dipole (EAD) and the capacitively loaded loop (CLL), are integrated into a crossed-dipole configuration. The EAD (CLL) elements act as the orthogonal electric dipole (magnetic dipole) radiators. Balanced broadside-radiated electric and magnetic field amplitudes with the requisite 90° phase difference between them are realized by exciting these two pairs of electric and magnetic dipoles with a specially designed, unbalanced crossed-dipole structure. The electrically small (ka = 0.73) design operates at 1575 MHz. It is low profile $0.04\\lambda _{\\mathbf {0}}$ , and its entire volume is only $0.0018\\lambda _{\\mathbf {0}}^{\\mathbf {3}}$ . A prototype of this optimized HCP antenna system was fabricated, assembled, and tested. The measured results are in good agreement with their simulated values. They demonstrate that the prototype HCP antenna resonates at 1584 MHz with a 0.6 dB axial ratio, and produces the predicted Huygens cardioid-shaped radiation patterns. The measured peak realized LHCP gain was 2.7 dBic, and the associated front-to-back ratio was 17.7 dB.",
"title": ""
},
{
"docid": "29dcdc7c19515caad04c6fb58e7de4ea",
"text": "The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geographical Service. A step-by-step process for downloading and creating terrain from realworld USGS elevation data is described, and an implementation in C++ is given.",
"title": ""
},
{
"docid": "7b7e7db68753dc40fce611ce06dc7c74",
"text": "Ontology learning is the process of acquiring (constructing or integrating) an ontology (semi-) automatically. Being a knowledge acquisition task, it is a complex activity, which becomes even more complex in the context of the BOEMIE project, due to the management of multimedia resources and the multi-modal semantic interpretation that they require. The purpose of this chapter is to present a survey of the most relevant methods, techniques and tools used for the task of ontology learning. Adopting a practical perspective, an overview of the main activities involved in ontology learning is presented. This breakdown of the learning process is used as a basis for the comparative analysis of existing tools and approaches. The comparison is done along dimensions that emphasize the particular interests of the BOEMIE project. In this context, ontology learning in BOEMIE is treated and compared to the state of the art, explaining how BOEMIE addresses problems observed in existing systems and contributes to issues that are not frequently considered by existing approaches.",
"title": ""
},
{
"docid": "17ae550374220164f05c3421b6ff7cd1",
"text": "Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclicmeaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational autoencoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (73.6% on LDC2016E25).",
"title": ""
},
{
"docid": "b70716877c23701d0897ab4a42a5beba",
"text": "We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image. Limited by the nature of deep neural network, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various of mesh related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh model with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art.",
"title": ""
},
{
"docid": "dc4e9b951f83843b17c620a4b766282d",
"text": "Security threats have been a major concern as a result of emergence of technology in every aspect including internet market, computational and communication technologies. To solve this issue effective mechanism of “cryptography” is used to ensure integrity, privacy, availability, authentication, computability, identification and accuracy. Cryptology techniques like PKC and SKC are used of data recovery. In current work, we describe exploration of efficient approach of private key architecture on the basis of attributes: effectiveness, scalability, flexibility, reliability and degree of security issues essential for safe wired and wireless communication. The work explores efficient private key algorithm based on security of individual system and scalability under criteria of memory-cpu utilization together with encryption performance. The exploration results in AES as superior over other algorithm. The work opens a new direction over cloud security and internet of things.",
"title": ""
},
{
"docid": "dfcf58ee43773271d01cd5121c60fde0",
"text": "Semantic segmentation as a pixel-wise segmentation task provides rich object information, and it has been widely applied in many fields ranging from autonomous driving to medical image analysis. There are two main challenges on existing approaches: the first one is the obfuscation between objects resulted from the prediction of the network and the second one is the lack of localization accuracy. Hence, to tackle these challenges, we proposed global encoding module (GEModule) and dilated decoder module (DDModule). Specifically, the GEModule that integrated traditional dictionary learning and global semantic context information is to select discriminative features and improve performance. DDModule that combined dilated convolution and dense connection is used to decoder module and to refine the prediction results. We evaluated our proposed architecture on two public benchmarks, Cityscapes and CamVid data set. We conducted a series of ablation studies to exploit the effectiveness of each module, and our approach has achieved an intersection-over-union scores of 71.3% on the Cityscapes data set and 60.4% on the CamVid data set.",
"title": ""
}
] |
scidocsrr
|
c560534d1277a7f650d71830605b38be
|
Skin picking and trichotillomania in adults with obsessive-compulsive disorder.
|
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
}
] |
[
{
"docid": "dbf694e11b78835dbc31ef4249bfff73",
"text": "Insider attacks are a well-known problem acknowledged as a threat as early as 1980s. The threat is attributed to legitimate users who abuse their privileges, and given their familiarity and proximity to the computational environment, can easily cause significant damage or losses. Due to the lack of tools and techniques, security analysts do not correctly perceive the threat, and hence consider the attacks as unpreventable. In this paper, we present a theory of insider threat assessment. First, we describe a modeling methodology which captures several aspects of insider threat, and subsequently, show threat assessment methodologies to reveal possible attack strategies of an insider.",
"title": ""
},
{
"docid": "233c63982527a264b91dfb885361b657",
"text": "One unfortunate consequence of the success story of wireless sensor networks (WSNs) in separate research communities is an evergrowing gap between theory and practice. Even though there is a increasing number of algorithmic methods for WSNs, the vast majority has never been tried in practice; conversely, many practical challenges are still awaiting efficient algorithmic solutions. The main cause for this discrepancy is the fact that programming sensor nodes still happens at a very technical level. We remedy the situation by introducing Wiselib, our algorithm library that allows for simple implementations of algorithms onto a large variety of hardware and software. This is achieved by employing advanced C++ techniques such as templates and inline functions, allowing to write generic code that is resolved and bound at compile time, resulting in virtually no memory or computation overhead at run time. The Wiselib runs on different host operating systems, such as Contiki, iSense OS, and ScatterWeb. Furthermore, it runs on virtual nodes simulated by Shawn. For any algorithm, the Wiselib provides data structures that suit the specific properties of the target platform. Algorithm code does not contain any platform-specific specializations, allowing a single implementation to run natively on heterogeneous networks. In this paper, we describe the building blocks of the Wiselib, and analyze the overhead. We demonstrate the effectiveness of our approach by showing how routing algorithms can be implemented. We also report on results from experiments with real sensor-node hardware.",
"title": ""
},
{
"docid": "4650411615ad68be9596e5de3c0613f1",
"text": "Based on the limitations of traditional English class, an English listening class was designed by Edmodo platform through making use of the advantages of flipped classroom. On this class, students will carry out online autonomous learning before class, teacher will guide students learning collaboratively in class, as well as after-school reflection and summary will be realized. By analyzing teaching effect on flipped classroom, it can provide reference and teaching model for English listening classes in local universities.",
"title": ""
},
{
"docid": "107d6605a6159d5a278b49b8c020cdd9",
"text": "Internet applications increasingly rely on scalable data structures that must support high throughput and store huge amounts of data. These data structures can be hard to implement efficiently. Recent proposals have overcome this problem by giving up on generality and implementing specialized interfaces and functionality (e.g., Dynamo [4]). We present the design of a more general and flexible solution: a fault-tolerant and scalable distributed B-tree. In addition to the usual B-tree operations, our B-tree provides some important practical features: transactions for atomically executing several operations in one or more B-trees, online migration of B-tree nodes between servers for load-balancing, and dynamic addition and removal of servers for supporting incremental growth of the system. Our design is conceptually simple. Rather than using complex concurrency and locking protocols, we use distributed transactions to make changes to B-tree nodes. We show how to extend the B-tree and keep additional information so that these transactions execute quickly and efficiently. Our design relies on an underlying distributed data sharing service, Sinfonia [1], which provides fault tolerance and a light-weight distributed atomic primitive. We use this primitive to commit our transactions. We implemented our B-tree and show that it performs comparably to an existing open-source B-tree and that it scales to hundreds of machines. We believe that our approach is general and can be used to implement other distributed data structures easily.",
"title": ""
},
{
"docid": "58710f81203e204bf0fcbd19bc57b921",
"text": "In this demo, we demonstrate a functional prototype of an air quality monitoring box (AQBox) built from cheap/commodity off- the-shelf (COTS) sensors. We use a set of MQ gas sensors, a temperature and humidity sensor, a dust sensor and a GPS. We instrument the box, powered by an on-board battery, with a 3G cellular connection to upload sensed data to the cloud. The box is suitable for deploying in developing countries where other means to monitor air quality, such as large expensive environmental sensors affixed to certain locations (such as at weather stations) and use of satellite, is not available or not viable. We shall demonstrate the construction and function of the box as well as the collection and analysis of captured data (both in real-time and offline). Built and deployed in large numbers, we believe, these boxes can be a cheap solution to perpetual air quality monitoring for modern cities.",
"title": ""
},
{
"docid": "c56c392e1a7d58912eeeb1718379fa37",
"text": "The changing face of technology has played an integral role in the development of the hotel and restaurant industry. The manuscript investigated the impact that technology has had on the hotel and restaurant industry. A detailed review of the literature regarding the growth of technology in the industry was linked to the development of strategic direction. The manuscript also looked at the strategic analysis methodology for evaluating and taking advantage of current and future technological innovations for the hospitality industry. Identification and implementation of these technologies can help in building a sustainable competitive advantage for hotels and restaurants.",
"title": ""
},
{
"docid": "19e070089a8495a437e81da50f3eb21c",
"text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.",
"title": ""
},
{
"docid": "6d925c32d3900512e0fd0ed36b683c69",
"text": "This paper presents a detailed design process of an ultra-high speed, switched reluctance machine for micro machining. The performance goal of the machine is to reach a maximum rotation speed of 750,000 rpm with an output power of 100 W. The design of the rotor involves reducing aerodynamic drag, avoiding mechanical resonance, and mitigating excessive stress. The design of the stator focuses on meeting the torque requirement while minimizing core loss and copper loss. The performance of the machine and the strength of the rotor structure are both verified through finite-element simulations The final design is a 6/4 switched reluctance machine with a 6mm diameter rotor that is wrapped in a carbon fiber sleeve and exhibits 13.6 W of viscous loss. The stator has shoeless poles and exhibits 19.1 W of electromagnetic loss.",
"title": ""
},
{
"docid": "837c34e3999714c0aa0dcf901aa278cf",
"text": "A novel high temperature superconducting interdigital bandpass filter is proposed by using coplanar waveguide quarter-wavelength resonators. The CPW resonators are arranged in parallel, and consequently the filter becomes very compact. The filter is a 5-pole Chebyshev BPF with a midband frequency of 5.0GHz and an equal-ripple fractional bandwidth of 3.2%. It is fabricated using a YBCO film deposited on an MgO substrate. The measured filtering characteristics agree well with EM simulations and show a low insertion loss in spite of the small size of the filter.",
"title": ""
},
{
"docid": "a4dd8ab8b45a8478ca4ac7e19debf777",
"text": "Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.",
"title": ""
},
{
"docid": "db534e232e485f83d9808cde9052cdb0",
"text": "Due to conformal capability, research on transmission lines has received much attention lately. Many studies have been reported in the last decade in which transmission lines have been analyzed extensively using various techniques. It is well known that transmission lines are used for transmission of information, but in this case the main aim is to deliver information from generator to receiver with low attenuation. To achieve this, the load should be matched to the characteristic impedance of the line, meaning that the wave coefficient should be near 1 (one). One of the most important methods for line matching is through quarter-wavelength line (quarter-wave transformer). Analysis of transmission lines using numerical methods is difficult because of any possible error that can occur. Therefore, the best solution in this case would be the use of any software package which is designed for analysis of transmission lines. In this paper we will use Sonet software which is generally used for the analysis of planar lines.",
"title": ""
},
{
"docid": "410aa6bb03299e5fda9c28f77e37bc5b",
"text": "Spamming has been a widespread problem for social networks. In recent years there is an increasing interest in the analysis of anti-spamming for microblogs, such as Twitter. In this paper we present a systematic research on the analysis of spamming in Sina Weibo platform, which is currently a dominant microblogging service provider in China. Our research objectives are to understand the specific spamming behaviors in Sina Weibo and find approaches to identify and block spammers in Sina Weibo based on spamming behavior classifiers. To start with the analysis of spamming behaviors we devise several effective methods to collect a large set of spammer samples, including uses of proactive honeypots and crawlers, keywords based searching and buying spammer samples directly from online merchants. We processed the database associated with these spammer samples and interestingly we found three representative spamming behaviors: aggressive advertising, repeated duplicate reposting and aggressive following. We extract various features and compare the behaviors of spammers and legitimate users with regard to these features. It is found that spamming behaviors and normal behaviors have distinct characteristics. Based on these findings we design an automatic online spammer identification system. Through tests with real data it is demonstrated that the system can effectively detect the spamming behaviors and identify spammers in Sina Weibo.",
"title": ""
},
{
"docid": "fe4046a3cf32de51c9ff75be49b34648",
"text": "A method of preventing the degradation in the isolation between the orthogonal polarization ports caused by beamforming network routing in combined edge/aperture fed dual-polarized microstrip-patch planar array antennas is described. The simulated and measured performance of such planar arrays is demonstrated. Measured port isolations of 50 dB at center frequency, and more than 40 dB over a 4% bandwidth, are achieved. In addition, insight into the physical reasons for the improved port-to-port isolation levels, of the proposed element geometry and beamforming network layout, is obtained through prudent use of the electromagnetic modelling.",
"title": ""
},
{
"docid": "aeadbf476331a67bec51d5d6fb6cc80b",
"text": "Gamification, an emerging idea for using game-design elements and principles to make everyday tasks more engaging, is permeating many different types of information systems. Excitement surrounding gamification results from its many potential organizational benefits. However, little research and design guidelines exist regarding gamified information systems. We therefore write this commentary to call upon information systems scholars to investigate the design and use of gamified information systems from a variety of disciplinary perspectives and theories, including behavioral economics, psychology, social psychology, information systems, etc. We first explicate the idea of gamified information systems, provide real-world examples of successful and unsuccessful systems, and based on a synthesis of the available literature, present a taxonomy of gamification design elements. We then develop a framework for research and design: its main theme is to create meaningful engagement for users, that is, gamified information systems should be designed to address the dual goals of instrumental and experiential outcomes. Using this framework, we develop a set of design principles and research questions, using a running case to illustrate some of our ideas. We conclude with a summary of opportunities for IS researchers to extend our knowledge of gamified information systems, and at the same time, advance",
"title": ""
},
{
"docid": "a5a0e1b984eac30c225190c0cba63ab4",
"text": "The traditional intrusion detection system is not flexible in providing security in cloud computing because of the distributed structure of cloud computing. This paper surveys the intrusion detection and prevention techniques and possible solutions in Host Based and Network Based Intrusion Detection System. It discusses DDoS attacks in Cloud environment. Different Intrusion Detection techniques are also discussed namely anomaly based techniques and signature based techniques. It also surveys different approaches of Intrusion Prevention System.",
"title": ""
},
{
"docid": "298c3b480f44be031c0c4262816298c1",
"text": "Information extraction (IE) - the problem of extracting structured information from unstructured text - has become an increasingly important topic in recent years. A SIGMOD 2006 tutorial [3] outlined challenges and opportunities for the database community to advance the state of the art in information extraction, and posed the following grand challenge: \"Can we build a System R for information extraction?\n Our tutorial gives an overview of progress the database community has made towards meeting this challenge. In particular, we start by discussing design requirements in building an enterprise IE system. We then survey recent technological advances towards addressing these requirements, broadly categorized as: (1) Languages for specifying extraction programs in a declarative way, thus allowing database-style performance optimizations; (2) Infrastructure needed to ensure scalability, and (3) Development support for enterprise IE systems. Finally, we outline several open challenges and opportunities for the database community to further advance the state of the art in enterprise IE systems. The tutorial is intended for students and researchers interested in information extraction and its applications, and assumes no prior knowledge of the area.",
"title": ""
},
{
"docid": "3f49f74eabc407b1b5b5899badefce3d",
"text": "The purpose of this study is to determine restaurant service quality. The aims are to: (a) assess customers’ expectations and perceptions, (b) establish the significance of difference between perceived and expected service quality, (c) identify the number of dimensions for expectations and perceptions scales of modified DINESERV model, (d) test the reliability of the applied DINESERV model. The empirical research was conducted using primary data. The questionnaire is based on Stevens et al. (1995) and Andaleeb and Conway’s (2006) research. In order to meet survey goals, descriptive, bivariate and multivariate (exploratory factor analysis and reliability analysis) statistical analyses were conducted. The empirical results show that expectations scores are higher than perceptions scores, which indicate low level of service quality. Furthermore, this study identified seven factors that best explain customers’ expectations and two factors that best explain customers’ perceptions regarding restaurant service. The results of this study would help management identify the strengths and weaknesses of service quality and implement an effective strategy to meet the customers’ expectations.",
"title": ""
},
{
"docid": "4c00cf339ccc28708c19cf8feec767ec",
"text": "This paper presents vCorfu, a strongly consistent cloudscale object store built over a shared log. vCorfu augments the traditional replication scheme of a shared log to provide fast reads and leverages a new technique, composable state machine replication, to compose large state machines from smaller ones, enabling the use of state machine replication to be used to efficiently in huge data stores. We show that vCorfu outperforms Cassandra, a popular state-of-the art NOSQL stores while providing strong consistency (opacity, read-own-writes), efficient transactions, and global snapshots at cloud scale.",
"title": ""
},
{
"docid": "376646286bea50e173cc3c928d3f96a3",
"text": "We formulate an integer program to solve a highly constrained academic timetabling problem at the United States Merchant Marine Academy. The IP instance that results from our real case study has approximately both 170,000 rows and columns and solves to optimality in 4–24 hours using a commercial solver on a portable computer (near optimal feasible solutions were often found in 4–12 hours). Our model is applicable to both high schools and small colleges who wish to deviate from group scheduling. We also solve a necessary preprocessing student subgrouping problem, which breaks up big groups of students into small groups so they can optimally fit into small capacity classes.",
"title": ""
}
] |
scidocsrr
|
6b49d02c6be3abe3fe2462fdb907c502
|
Auto-patching DOM-based XSS at scale
|
[
{
"docid": "dde76ca0ed14039e77f09a9238d5e4a2",
"text": "JavaScript is widely used for writing client-side web applications and is getting increasingly popular for writing mobile applications. However, unlike C, C++, and Java, there are not that many tools available for analysis and testing of JavaScript applications. In this paper, we present a simple yet powerful framework, called Jalangi, for writing heavy-weight dynamic analyses. Our framework incorporates two key techniques: 1) selective record-replay, a technique which enables to record and to faithfully replay a user-selected part of the program, and 2) shadow values and shadow execution, which enables easy implementation of heavy-weight dynamic analyses. Our implementation makes no special assumption about JavaScript, which makes it applicable to real-world JavaScript programs running on multiple platforms. We have implemented concolic testing, an analysis to track origins of nulls and undefined, a simple form of taint analysis, an analysis to detect likely type inconsistencies, and an object allocation profiler in Jalangi. Our evaluation of Jalangi on the SunSpider benchmark suite and on five web applications shows that Jalangi has an average slowdown of 26X during recording and 30X slowdown during replay and analysis. The slowdowns are comparable with slowdowns reported for similar tools, such as PIN and Valgrind for x86 binaries. We believe that the techniques proposed in this paper are applicable to other dynamic languages.",
"title": ""
}
] |
[
{
"docid": "ca7e7fa988bf2ed1635e957ea6cd810d",
"text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.",
"title": ""
},
{
"docid": "86aa31d70e44137ff16e81f79e1dac74",
"text": "The bee genus Lasioglossum Curtis is a model taxon for studying the evolutionary origins of and reversals in eusociality. This paper presents a phylogenetic analysis of Lasioglossum species and subgenera based on a data set consisting of 1240 bp of the mitochondrial cytochrome oxidase I (COI) gene for seventy-seven taxa (sixty-six ingroup and eleven outgroup taxa). Maximum parsimony was used to analyse the data set (using PAUP*4.0) by a variety of weighting methods, including equal weights, a priori weighting and a posteriori weighting. All methods yielded roughly congruent results. Michener's Hemihalictus series was found to be monophyletic in all analyses but one, while his Lasioglossum series formed a basal, paraphyletic assemblage in all analyses but one. Chilalictus was consistently found to be a basal taxon of Lasioglossum sensu lato and Lasioglossum sensu stricto was found to be monophyletic. Within the Hemihalictus series, major lineages included Dialictus + Paralictus, the acarinate Evylaeus + Hemihalictus + Sudila and the carinate Evylaeus + Sphecodogastra. Relationships within the Hemihalictus series were highly stable to altered weighting schemes, while relationships among the basal subgenera in the Lasioglossum series (Lasioglossum s.s., Chilalictus, Parasphecodes and Ctenonomia) were unclear. The social parasite of Dialictus, Paralictus, is consistently and unambiguously placed well within Dialictus, thus rendering Dialictus paraphyletic. The implications of this for understanding the origins of social parasitism are discussed.",
"title": ""
},
{
"docid": "f6a1d7b206ca2796d4e91f3e8aceeed8",
"text": "Objective To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next ten years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: 1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; 2) the use of theKα operator in the inference process and 3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents as well as the best value for the parameter α of theKα operator in each rule. Results Correspondingauthor. Tel:+34-948166048. Fax:+34-948168924 Email addresses: [email protected] (Jośe Antonio Sanz ), [email protected] (Mikel Galar),[email protected] (Aranzazu Jurio), [email protected] (Antonio Brugos), [email protected] (Miguel Pagola),[email protected] (Humberto Bustince) Preprint submitted to Elsevier November 13, 2013 © 2013. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/",
"title": ""
},
{
"docid": "b82facfc85ef2ae55f03beef7d1bb968",
"text": "Stock movements are essentially driven by new information. Market data, financial news, and social sentiment are believed to have impacts on stock markets. To study the correlation between information and stock movements, previous works typically concatenate the features of different information sources into one super feature vector. However, such concatenated vector approaches treat each information source separately and ignore their interactions. In this article, we model the multi-faceted investors’ information and their intrinsic links with tensors. To identify the nonlinear patterns between stock movements and new information, we propose a supervised tensor regression learning approach to investigate the joint impact of different information sources on stock markets. Experiments on CSI 100 stocks in the year 2011 show that our approach outperforms the state-of-the-art trading strategies.",
"title": ""
},
{
"docid": "43850ef433d1419ed37b7b12f3ff5921",
"text": "We have seen ten years of the application of AI planning to the problem of narrative generation in Interactive Storytelling (IS). In that time planning has emerged as the dominant technology and has featured in a number of prototype systems. Nevertheless key issues remain, such as how best to control the shape of the narrative that is generated (e.g., by using narrative control knowledge, i.e., knowledge about narrative features that enhance user experience) and also how best to provide support for real-time interactive performance in order to scale up to more realistic sized systems. Recent progress in planning technology has opened up new avenues for IS and we have developed a novel approach to narrative generation that builds on this. Our approach is to specify narrative control knowledge for a given story world using state trajectory constraints and then to treat these state constraints as landmarks and to use them to decompose narrative generation in order to address scalability issues and the goal of real-time performance in larger story domains. This approach to narrative generation is fully implemented in an interactive narrative based on the “Merchant of Venice.” The contribution of the work lies both in our novel use of state constraints to specify narrative control knowledge for interactive storytelling and also our development of an approach to narrative generation that exploits such constraints. In the article we show how the use of state constraints can provide a unified perspective on important problems faced in IS.",
"title": ""
},
{
"docid": "22658b675b501059ec5a7905f6b766ef",
"text": "The purpose of this study was to compare the physiological results of 2 incremental graded exercise tests (GXTs) and correlate these results with a short-distance laboratory cycle time trial (TT). Eleven men (age 25 +/- 5 years, Vo(2)max 62 +/- 8 ml.kg(-1).min(-1)) randomly underwent 3 laboratory tests performed on a cycle ergometer. The first 2 tests consisted of a GXT consisting of either 3-minute (GXT(3-min)) or 5-minute (GXT(5-min)) workload increments. The third test involved 1 laboratory 30-minute TT. The peak power output, lactate threshold, onset of blood lactate accumulation, and maximum displacement threshold (Dmax) determined from each GXT was not significantly different and in agreement when measured from the GXT(3-min) or GXT(5-min). Furthermore, similar correlation coefficients were found among the results of each GXT and average power output in the 30-minute cycling TT. Hence, the results of either GXT can be used to predict performance or for training prescription.",
"title": ""
},
{
"docid": "af2e881acf6744469389d3e81570341f",
"text": "Although smoking cessation is the primary goal for the control of cancer and other smoking-related diseases, chemoprevention provides a complementary approach applicable to high risk individuals such as current smokers and ex-smokers. The thiol N-acetylcysteine (NAC) works per se in the extracellular environment, and is a precursor of intracellular cysteine and glutathione (GSH). Almost 40 years of experience in the prophylaxis and therapy of a variety of clinical conditions, mostly involving GSH depletion and alterations of the redox status, have established the safety of this drug, even at very high doses and for long-term treatments. A number of studies performed since 1984 have indicated that NAC has the potential to prevent cancer and other mutation-related diseases. N-Acetylcysteine has an impressive array of mechanisms and protective effects towards DNA damage and carcinogenesis, which are related to its nucleophilicity, antioxidant activity, modulation of metabolism, effects in mitochondria, decrease of the biologically effective dose of carcinogens, modulation of DNA repair, inhibition of genotoxicity and cell transformation, modulation of gene expression and signal transduction pathways, regulation of cell survival and apoptosis, anti-inflammatory activity, anti-angiogenetic activity, immunological effects, inhibition of progression to malignancy, influence on cell cycle progression, inhibition of pre-neoplastic and neoplastic lesions, inhibition of invasion and metastasis, and protection towards adverse effects of other chemopreventive agents or chemotherapeutical agents. These mechanisms are herein reviewed and commented on with special reference to smoking-related end-points, as evaluated in in vitro test systems, experimental animals and clinical trials. It is important that all protective effects of NAC were observed under a range of conditions produced by a variety of treatments or imbalances of homeostasis. However, our recent data show that, at least in mouse lung, under physiological conditions NAC does not alter per se the expression of multiple genes detected by cDNA array technology. On the whole, there is overwhelming evidence that NAC has the ability to modulate a variety of DNA damage- and cancer-related end-points.",
"title": ""
},
{
"docid": "77281793a88329ca2cf9fd8eeaf01524",
"text": "This paper describes a new circuit integrated on silicon, which generates temperature-independent bias currents. Such a circuit is firstly employed to obtain a current reference with first-order temperature compensation, then it is modified to obtain second-order temperature compensation. The operation principle of the new circuits is described and the relationships between design and technology process parameters are derived. These circuits have been designed by a 0.35 /spl mu/m BiCMOS technology process and the thermal drift of the reference current has been evaluated by computer simulations. They show good thermal performance and in particular, the new second-order temperature-compensated current reference has a mean temperature drift of only 28 ppm//spl deg/C in the temperature range between -30/spl deg/C and 100/spl deg/C.",
"title": ""
},
{
"docid": "17bd8497b30045267f77572c9bddb64f",
"text": "0007-6813/$ see front matter D 200 doi:10.1016/j.bushor.2004.11.006 * Corresponding author. E-mail addresses: [email protected] [email protected] (J. Mair).",
"title": ""
},
{
"docid": "786f6c09777788c3456e6613729c0292",
"text": "An experimental approach to studying the properties of word embeddings is proposed. Controlled experiments, achieved through modifications of the training corpus, permit the demonstration of direct relations between word properties and word vector direction and length. The approach is demonstrated using the word2vec CBOW model with experiments that independently vary word frequency and word co-occurrence noise. The experiments reveal that word vector length depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. The coefficients of linearity depend upon the word. The special point in feature space, defined by the (artificial) word with pure noise in its co-occurrence distribution, is found to be small but non-zero.",
"title": ""
},
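The abstract above reports that word vector length varies roughly linearly with word frequency and with co-occurrence noise. As a rough illustration (not the paper's experimental code; the embedding and count dictionaries are hypothetical inputs), one could check the frequency–norm relationship of a trained CBOW model like this:

```python
import numpy as np

def norm_vs_frequency(embeddings, counts):
    """Relate word vector length to corpus frequency.

    embeddings: dict mapping word -> 1-D numpy array (e.g. from a CBOW model)
    counts:     dict mapping word -> corpus frequency
    Returns the Pearson correlation between log-frequency and vector norm.
    """
    words = [w for w in embeddings if w in counts]
    norms = np.array([np.linalg.norm(embeddings[w]) for w in words])
    log_freq = np.log(np.array([counts[w] for w in words], dtype=float))
    # Pearson correlation coefficient between the two quantities
    return float(np.corrcoef(log_freq, norms)[0, 1])
```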
{
"docid": "d114f37ccb079106a728ad8fe1461919",
"text": "This paper describes a stochastic hill climbing algorithm named SHCLVND to optimize arbitrary vectorial < n ! < functions. It needs less parameters. It uses normal (Gaussian) distributions to represent probabilities which are used for generating more and more better argument vectors. The-parameters of the normal distributions are changed by a kind of Hebbian learning. Kvasnicka et al. KPP95] used algorithm Stochastic Hill Climbing with Learning (HCwL) to optimize a highly multimodal vectorial function on real numbers. We have tested proposed algorithm by optimizations of the same and a similar function and show the results in comparison to HCwL. In opposite to it algorithm SHCLVND desribed here works directly on vectors of numbers instead their bit-vector representations and uses normal distributions instead of numbers to represent probabilities. 1 Overview In Section 2 we give an introduction with the way to the algorithm. Then we describe it exactly in Section 3. There is also given a compact notation in pseudo PASCAL-code, see Section 3.4. After that we give an example: we optimize highly multimodal functions with the proposed algorithm and give some visualisations of the progress in Section 4. In Section 5 there are a short summary and some ideas for future works. At last in Section 6 we give some hints for practical use of the algorithm. 2 Introduction This paper describes a hill climbing algorithm to optimize vectorial functions on real numbers. 2.1 Motivation Flexible algorithms for optimizing any vectorial function are interesting if there is no or only a very diicult mathematical solution known, e.g. parameter adjustments to optimize with respect to some relevant property the recalling behavior of a (trained) neuronal net HKP91, Roj93], or the resulting image of some image-processing lter.",
"title": ""
},
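The SHCLVND abstract above describes sampling argument vectors from per-dimension normal distributions and adapting the distribution parameters with a Hebbian-like rule. The following is a minimal sketch of that general idea, assuming a simple "move the means toward the best sample and shrink sigma" update; it is illustrative only and not the authors' exact algorithm:

```python
import numpy as np

def gaussian_hill_climb(f, dim, iters=1000, pop=20, sigma=1.0, lr=0.1, decay=0.999):
    """Minimise f: R^dim -> R by sampling candidates from per-dimension normal
    distributions and nudging their means toward the best sample (a rough
    SHCLVND-style sketch with arbitrary illustrative hyperparameters)."""
    rng = np.random.default_rng(0)
    mu = np.zeros(dim)                  # means of the normal distributions
    best_x, best_val = mu.copy(), f(mu)
    for _ in range(iters):
        candidates = rng.normal(mu, sigma, size=(pop, dim))
        vals = np.array([f(x) for x in candidates])
        winner = candidates[vals.argmin()]
        mu += lr * (winner - mu)        # Hebbian-like drift of the means toward the winner
        sigma *= decay                  # slowly narrow the search distribution
        if vals.min() < best_val:
            best_val, best_x = vals.min(), winner.copy()
    return best_x, best_val

# Example on a multimodal test function:
# x_opt, v_opt = gaussian_hill_climb(lambda x: np.sum(x**2 + 2*np.sin(5*x)), dim=5)
```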
{
"docid": "7f2dff96e9c1742842fea6a43d17f93e",
"text": "We study shock-based methods for credible causal inference in corporate finance research. We focus on corporate governance research, survey 13,461 papers published between 2001 and 2011 in 22 major accounting, economics, finance, law, and management journals; and identify 863 empirical studies in which corporate governance is associated with firm value or other characteristics. We classify the methods used in these studies and assess whether they support a causal link between corporate governance and firm value or another outcome. Only a stall minority of studies have convincing causal inference strategies. The convincing strategies largely rely on external shocks – usually from legal rules – often called “natural experiments”. We examine the 74 shock-based papers and provide a guide to shock-based research design, which stresses the common features across different designs and the value of using combined designs.",
"title": ""
},
{
"docid": "24f68da70b879cc74b00e2bc9cae6f96",
"text": "This paper presents the power management scheme for a power electronics based low voltage microgrid in islanding operation. The proposed real and reactive power control is based on the virtual frequency and voltage frame, which can effectively decouple the real and reactive power flows and improve the system transient and stability performance. Detailed analysis of the virtual frame operation range is presented, and a control strategy to guarantee that the microgrid can be operated within the predetermined voltage and frequency variation limits is also proposed. Moreover, a reactive power control with adaptive voltage droop method is proposed, which automatically updates the maximum reactive power limit of a DG unit based on its current rating and actual real power output and features enlarged power output range and further improved system stability. Both simulation and experimental results are provided in this paper.",
"title": ""
},
{
"docid": "a5d568b4a86dcbda2c09894c778527ea",
"text": "INTRODUCTION\nHypoglycemia (Hypo) is the most common side effect of insulin therapy in people with type 1 diabetes (T1D). Over time, patients with T1D become unaware of signs and symptoms of Hypo. Hypo unawareness leads to morbidity and mortality. Diabetes alert dogs (DADs) represent a unique way to help patients with Hypo unawareness. Our group has previously presented data in abstract form which demonstrates the sensitivity and specificity of DADS. The purpose of our current study is to expand evaluation of DAD sensitivity and specificity using a method that reduces the possibility of trainer bias.\n\n\nMETHODS\nWe evaluated 6 dogs aging 1-10 years old who had received an average of 6 months of training for Hypo alert using positive training methods. Perspiration samples were collected from patients during Hypo (BG 46-65 mg/dL) and normoglycemia (BG 85-136 mg/dl) and were used in training. These samples were placed in glass vials which were then placed into 7 steel cans (1 Hypo, 2 normal, 4 blank) randomly placed by roll of a dice. The dogs alerted by either sitting in front of, or pushing, the can containing the Hypo sample. Dogs were rewarded for appropriate recognition of the Hypo samples using a food treat via a remote control dispenser. The results were videotaped and statistically evaluated for sensitivity (proportion of lows correctly alerted, \"true positive rate\") and specificity (proportion of blanks + normal samples not alerted, \"true negative rate\") calculated after pooling data across all trials for all dogs.\n\n\nRESULTS\nAll DADs displayed statistically significant (p value <0.05) greater sensitivity (min 50.0%-max 87.5%) to detect the Hypo sample than the expected random correct alert of 14%. Specificity ranged from a min of 89.6% to a max of 97.9% (expected rate is not defined in this scenario).\n\n\nCONCLUSIONS\nOur results suggest that properly trained DADs can successfully recognize and alert to Hypo in an in vitro setting using smell alone.",
"title": ""
},
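The study above pools alerts across trials and reports sensitivity (true positive rate) and specificity (true negative rate). A minimal sketch of those two formulas, with hypothetical counts that are not the study's data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical pooled counts for one dog (illustrative only):
# sens, spec = sensitivity_specificity(tp=7, fn=1, tn=47, fp=1)  # -> 0.875, 0.979
```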
{
"docid": "48a8cfc2ac8c8c63bbd15aba5a830ef9",
"text": "We extend prior research on masquerade detection using UNIX commands issued by users as the audit source. Previous studies using multi-class training requires gathering data from multiple users to train specific profiles of self and non-self for each user. Oneclass training uses data representative of only one user. We apply one-class Naïve Bayes using both the multivariate Bernoulli model and the Multinomial model, and the one-class SVM algorithm. The result shows that oneclass training for this task works as well as multi-class training, with the great practical advantages of collecting much less data and more efficient training. One-class SVM using binary features performs best among the oneclass training algorithms.",
"title": ""
},
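The masquerade-detection abstract above applies one-class Naive Bayes with a multivariate Bernoulli model to binary command features. A small sketch of that idea follows, assuming training data from the "self" user only and a log-likelihood threshold chosen on held-out self data; class and parameter names are illustrative, not taken from the paper:

```python
import numpy as np

class OneClassBernoulliNB:
    """One-class Naive Bayes with a multivariate Bernoulli model.

    Trained on binary feature vectors of a single user ("self") only; a test
    vector is flagged as a masquerade when its log-likelihood under the self
    model falls below a threshold.
    """
    def __init__(self, alpha=1.0):
        self.alpha = alpha                      # Laplace smoothing

    def fit(self, X):
        X = np.asarray(X, dtype=float)          # shape (n_samples, n_features), entries 0/1
        n = X.shape[0]
        self.p = (X.sum(axis=0) + self.alpha) / (n + 2 * self.alpha)
        return self

    def log_likelihood(self, X):
        X = np.asarray(X, dtype=float)
        return X @ np.log(self.p) + (1 - X) @ np.log(1 - self.p)

    def predict(self, X, threshold):
        return self.log_likelihood(X) < threshold   # True = suspected masquerade
```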
{
"docid": "7dde662184f9dc0363df5cfeffc4724e",
"text": "WordNet is a lexical reference system, developed by the university of Princeton. This paper gives a detailed documentation of the Prolog database of WordNet and predicates to interface it. 1",
"title": ""
},
{
"docid": "d126bfbab45f7e942947b30806045123",
"text": "Despite increasing amounts of data and ever improving natural language generation techniques, work on automated journalism is still relatively scarce. In this paper, we explore the field and challenges associated with building a journalistic natural language generation system. We present a set of requirements that should guide system design, including transparency, accuracy, modifiability and transferability. Guided by the requirements, we present a data-driven architecture for automated journalism that is largely domain and language independent. We illustrate its practical application in the production of news articles upon a user request about the 2017 Finnish municipal elections in three languages, demonstrating the successfulness of the data-driven, modular approach of the design. We then draw some lessons for future automated journalism.",
"title": ""
},
{
"docid": "83ac82ef100fdf648a5214a50d163fe3",
"text": "We consider the problem of multi-robot taskallocation when robots have to deal with uncertain utility estimates. Typically an allocation is performed to maximize expected utility; we consider a means for measuring the robustness of a given optimal allocation when robots have some measure of the uncertainty (e.g., a probability distribution, or moments of such distributions). We introduce a new O(n) algorithm, the Interval Hungarian algorithm, that extends the classic KuhnMunkres Hungarian algorithm to compute the maximum interval of deviation (for each entry in the assignment matrix) which will retain the same optimal assignment. This provides an efficient measurement of the tolerance of the allocation to the uncertainties, for both a specific interval and a set of interrelated intervals. We conduct experiments both in simulation and with physical robots to validate the approach and to gain insight into the effect of location uncertainty on allocations for multi-robot multi-target navigation tasks.",
"title": ""
},
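The Interval Hungarian abstract above computes, for each entry of the assignment matrix, the maximum deviation that preserves the optimal assignment. The sketch below conveys the same notion by brute force using SciPy's Hungarian solver; it is not the paper's efficient interval algorithm, and the step/bound parameters are arbitrary illustration values:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment(utility):
    rows, cols = linear_sum_assignment(-utility)      # maximise total utility
    return set(zip(rows.tolist(), cols.tolist()))

def tolerance_interval(utility, i, j, step=0.01, max_dev=10.0):
    """Brute-force search (not the paper's interval method) for how far entry
    (i, j) may deviate in either direction before the optimum changes."""
    base = assignment(utility)
    lo = hi = max_dev
    for sign in (-1, +1):
        dev = 0.0
        while dev < max_dev:
            dev += step
            perturbed = utility.copy()
            perturbed[i, j] += sign * dev
            if assignment(perturbed) != base:
                break
        if sign < 0:
            lo = dev
        else:
            hi = dev
    return lo, hi
```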
{
"docid": "385789e37297644dc79ce9e39ee0f7cd",
"text": "A key issue in Low Voltage (LV) distribution systems is to identify strategies for the optimal management and control in the presence of Distributed Energy Resources (DERs). To reduce the number of variables to be monitored and controlled, virtual levels of aggregation, called Virtual Microgrids (VMs), are introduced and identified by using new models of the distribution system. To this aim, this paper, revisiting and improving the approach outlined in a conference paper, presents a sensitivity-based model of an LV distribution system, supplied by an Medium/Low Voltage (MV/LV) substation and composed by several feeders, which is suitable for the optimal management and control of the grid and for VM definition. The main features of the proposed method are: it evaluates the sensitivity coefficients in a closed form; it provides an overview of the sensitivity of the network to the variations of each DER connected to the grid; and it presents a limited computational burden. A comparison of the proposed method with both the exact load flow solutions and a perturb-and-observe method is discussed in a case study. Finally, the method is used to evaluate the impact of the DERs on the nodal voltages of the network.",
"title": ""
},
{
"docid": "e964a46706179a92b775307166a64c8a",
"text": "I general, perceptions of information systems (IS) success have been investigated within two primary research streams—the user satisfaction literature and the technology acceptance literature. These two approaches have been developed in parallel and have not been reconciled or integrated. This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature. The model is then tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software. The proposed model was supported, providing preliminary evidence that the two perspectives can and should be integrated. The integrated model helps build the bridge from design and implementation decisions to system characteristics (a core strength of the user satisfaction literature) to the prediction of usage (a core strength of the technology acceptance literature).",
"title": ""
}
] |
scidocsrr
|
2fc805c64562df9daf1344e2c4a8883d
|
In the Eye of the Beholder: A Survey of Models for Eyes and Gaze
|
[
{
"docid": "12d625fe60790761ff604ab8aa70c790",
"text": "We describe a system designed to monitor the gaze of a user working naturally at a computer workstation. The system consists of three cameras situated between the keyboard and the monitor. Free head movements are allowed within a three-dimensional volume approximately 40 centimeters in diameter. Two fixed, wide-field \"face\" cameras equipped with active-illumination systems enable rapid localization of the subject's pupils. A third steerable \"eye\" camera has a relatively narrow field of view, and acquires the images of the eyes which are used for gaze estimation. Unlike previous approaches which construct an explicit three-dimensional representation of the subject's head and eye, we derive mappings for steering control and gaze estimation using a procedure we call implicit calibration. Implicit calibration is performed by collecting a \"training set\" of parameters and associated measurements, and solving for a set of coefficients relating the measurements back to the parameters of interest. Preliminary data on three subjects indicate an median gaze estimation error of ap-proximately 0.8 degree.",
"title": ""
},
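The passage above describes "implicit calibration": collect a training set of known gaze parameters and associated measurements, then solve for coefficients mapping measurements back to parameters. A hedged least-squares sketch of that idea follows; the quadratic feature map and variable names are assumptions for illustration, not the paper's exact mapping:

```python
import numpy as np

def fit_implicit_calibration(measurements, targets):
    """Least-squares fit of coefficients mapping eye-image measurements
    (e.g. pupil/corneal-reflection offsets) to known gaze targets.

    measurements: (n, 2) array of raw measurements per calibration sample
    targets:      (n, 2) array of known screen coordinates
    Uses a simple quadratic feature map for illustration.
    """
    x, y = measurements[:, 0], measurements[:, 1]
    phi = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(phi, targets, rcond=None)
    return coeffs                              # shape (6, 2)

def predict_gaze(coeffs, measurement):
    x, y = measurement
    phi = np.array([1.0, x, y, x * y, x**2, y**2])
    return phi @ coeffs                        # estimated (screen_x, screen_y)
```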
{
"docid": "953f2efa434f29ceecc191201ebd77d7",
"text": "This paper presents a novel design for a non-contact eye detection and gaze tracking device. It uses two cameras to maintain real-time tracking of a person s eye in the presence of head motion. Image analysis techniques are used to obtain accurate locations of the pupil and corneal reflections. All the computations are performed in software and the device only requires simple, compact optics and electronics attached to the user s computer. Three methods of estimating the user s point of gaze on a computer monitor are evaluated. The camera motion system is capable of tracking the user s eye in real-time (9 fps) in the presence of natural head movements as fast as 100 /s horizontally and 77 /s vertically. Experiments using synthetic images have shown its ability to track the location of the eye in an image to within 0.758 pixels horizontally and 0.492 pixels vertically. The system has also been tested with users with different eye colors and shapes, different ambient lighting conditions and the use of eyeglasses. A gaze accuracy of 2.9 was observed. 2004 Elsevier Inc. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "0666baa7be39ef1887c7f8ce04aaa957",
"text": "BACKGROUND\nEnsuring health worker job satisfaction and motivation are important if health workers are to be retained and effectively deliver health services in many developing countries, whether they work in the public or private sector. The objectives of the paper are to identify important aspects of health worker satisfaction and motivation in two Indian states working in public and private sectors.\n\n\nMETHODS\nCross-sectional surveys of 1916 public and private sector health workers in Andhra Pradesh and Uttar Pradesh, India, were conducted using a standardized instrument to identify health workers' satisfaction with key work factors related to motivation. Ratings were compared with how important health workers consider these factors.\n\n\nRESULTS\nThere was high variability in the ratings for areas of satisfaction and motivation across the different practice settings, but there were also commonalities. Four groups of factors were identified, with those relating to job content and work environment viewed as the most important characteristics of the ideal job, and rated higher than a good income. In both states, public sector health workers rated \"good employment benefits\" as significantly more important than private sector workers, as well as a \"superior who recognizes work\". There were large differences in whether these factors were considered present on the job, particularly between public and private sector health workers in Uttar Pradesh, where the public sector fared consistently lower (P < 0.01). Discordance between what motivational factors health workers considered important and their perceptions of actual presence of these factors were also highest in Uttar Pradesh in the public sector, where all 17 items had greater discordance for public sector workers than for workers in the private sector (P < 0.001).\n\n\nCONCLUSION\nThere are common areas of health worker motivation that should be considered by managers and policy makers, particularly the importance of non-financial motivators such as working environment and skill development opportunities. But managers also need to focus on the importance of locally assessing conditions and managing incentives to ensure health workers are motivated in their work.",
"title": ""
},
{
"docid": "54776bdc9f7a9b18289d4901a8db5d7a",
"text": "The goal of this research was to determine the effect of different doses of galactooligosaccharide (GOS) on the fecal microbiota of healthy adults, with a focus on bifidobacteria. The study was designed as a single-blinded study, with eighteen subjects consuming GOS-containing chocolate chews at four increasing dosage levels; 0, 2.5, 5.0, and 10.0g. Subjects consumed each dose for 3 weeks, with a two-week baseline period preceding the study and a two-week washout period at the end. Fecal samples were collected weekly and analyzed by cultural and molecular methods. Cultural methods were used for bifidobacteria, Bacteroides, enterobacteria, enterococci, lactobacilli, and total anaerobes; culture-independent methods included denaturing gradient gel electrophoresis (DGGE) and quantitative real-time PCR (qRT-PCR) using Bifidobacterium-specific primers. All three methods revealed an increase in bifidobacteria populations, as the GOS dosage increased to 5 or 10g. Enumeration of bifidobacteria by qRT-PCR showed a high inter-subject variation in bifidogenic effect and indicated a subset of 9 GOS responders among the eighteen subjects. There were no differences, however, in the initial levels of bifidobacteria between the responding individuals and the non-responding individuals. Collectively, this study showed that a high purity GOS, administered in a confection product at doses of 5g or higher, was bifidogenic, while a dose of 2.5g showed no significant effect. However, the results also showed that even when GOS was administered for many weeks and at high doses, there were still some individuals for which a bifidogenic response did not occur.",
"title": ""
},
{
"docid": "9e9dd203746a1bd4024632abeb80fb0a",
"text": "Translating data from linked data sources to the vocabulary that is expected by a linked data application requires a large number of mappings and can require a lot of structural transformations as well as complex property value transformations. The R2R mapping language is a language based on SPARQL for publishing expressive mappings on the web. However, the specification of R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. In this paper, we first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping",
"title": ""
},
{
"docid": "a97838c0a9290bb3bf6fbbbac0a25f5e",
"text": "The collaborative filtering (CF) using known user ratings of items has proved to be effective for predicting user preferences in item selection. This thrivi ng subfield of machine learning became popular in the late 1990s with the spread of online services t hat use recommender systems, such as Amazon, Yahoo! Music, and Netflix. CF approaches are usually designed to work on very large data sets. Therefore the scalability of the methods is cruci al. In this work, we propose various scalable solutions that are validated against the Netflix Pr ize data set, currently the largest publicly available collection. First, we propose various matrix fac torization (MF) based techniques. Second, a neighbor correction method for MF is outlined, which alloy s the global perspective of MF and the localized property of neighbor based approaches efficie ntly. In the experimentation section, we first report on some implementation issues, and we suggest on how parameter optimization can be performed efficiently for MFs. We then show that the propos ed calable approaches compare favorably with existing ones in terms of prediction accurac y nd/or required training time. Finally, we report on some experiments performed on MovieLens and Jes ter data sets.",
"title": ""
},
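The abstract above builds on matrix factorization (MF) trained on known user–item ratings. A minimal SGD-trained MF sketch is shown below for orientation; it omits biases, neighbor correction and the paper's tuning, and all hyperparameters are illustrative:

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=20, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Basic matrix factorisation for rating prediction, trained by SGD.

    ratings: list of (user_id, item_id, rating) triples.
    Returns user and item factor matrices P (n_users x k) and Q (n_items x k).
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                   # prediction error
            P[u] += lr * (err * Q[i] - reg * P[u])  # regularised SGD updates
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# A rating is then predicted as the dot product of the two factor vectors:
# predicted_rating = P[user] @ Q[item]
```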
{
"docid": "e1da6ca2b27ef6dfcdad1db9def49ce2",
"text": "The first stage of every knowledge base question answering approach is to link entities in the input question. We investigate entity linking in the context of a question answering task and present a jointly optimized neural architecture for entity mention detection and entity disambiguation that models the surrounding context on different levels of granularity. We use the Wikidata knowledge base and available question answering datasets to create benchmarks for entity linking on question answering data. Our approach outperforms the previous state-of-the-art system on this data, resulting in an average 8% improvement of the final score. We further demonstrate that our model delivers a strong performance across different entity categories.",
"title": ""
},
{
"docid": "9581c692787cfef1ce2916100add4c1e",
"text": "Diabetes related eye disease is growing as a major health concern worldwide. Diabetic retinopathy is an infirmity due to higher level of glucose in the retinal capillaries, resulting in cloudy vision and blindness eventually. With regular screening, pathology can be detected in the instigating stage and if intervened with in time medication could prevent further deterioration. This paper develops an automated diagnosis system to recognize retinal blood vessels, and pathologies, such as exudates and microaneurysms together with certain texture properties using image processing techniques. These anatomical and texture features are then fed into a multiclass support vector machine (SVM) for classifying it into normal, mild, moderate, severe and proliferative categories. Advantages include, it processes quickly a large collection of fundus images obtained from mass screening which lessens cost and increases efficiency for ophthalmologists. Our method was evaluated on two publicly available databases and got encouraging results with a state of the art in this area.",
"title": ""
},
{
"docid": "872946be0c4897dc33bc1276593ee7a4",
"text": "BACKGROUND\nMusic therapy is a therapeutic method that uses musical interaction as a means of communication and expression. The aim of the therapy is to help people with serious mental disorders to develop relationships and to address issues they may not be able to using words alone.\n\n\nOBJECTIVES\nTo review the effects of music therapy, or music therapy added to standard care, compared with 'placebo' therapy, standard care or no treatment for people with serious mental disorders such as schizophrenia.\n\n\nSEARCH METHODS\nWe searched the Cochrane Schizophrenia Group Trials Register (December 2010) and supplemented this by contacting relevant study authors, handsearching of music therapy journals and manual searches of reference lists.\n\n\nSELECTION CRITERIA\nAll randomised controlled trials (RCTs) that compared music therapy with standard care, placebo therapy, or no treatment.\n\n\nDATA COLLECTION AND ANALYSIS\nStudies were reliably selected, quality assessed and data extracted. We excluded data where more than 30% of participants in any group were lost to follow-up. We synthesised non-skewed continuous endpoint data from valid scales using a standardised mean difference (SMD). If statistical heterogeneity was found, we examined treatment 'dosage' and treatment approach as possible sources of heterogeneity.\n\n\nMAIN RESULTS\nWe included eight studies (total 483 participants). These examined effects of music therapy over the short- to medium-term (one to four months), with treatment 'dosage' varying from seven to 78 sessions. Music therapy added to standard care was superior to standard care for global state (medium-term, 1 RCT, n = 72, RR 0.10 95% CI 0.03 to 0.31, NNT 2 95% CI 1.2 to 2.2). Continuous data identified good effects on negative symptoms (4 RCTs, n = 240, SMD average endpoint Scale for the Assessment of Negative Symptoms (SANS) -0.74 95% CI -1.00 to -0.47); general mental state (1 RCT, n = 69, SMD average endpoint Positive and Negative Symptoms Scale (PANSS) -0.36 95% CI -0.85 to 0.12; 2 RCTs, n=100, SMD average endpoint Brief Psychiatric Rating Scale (BPRS) -0.73 95% CI -1.16 to -0.31); depression (2 RCTs, n = 90, SMD average endpoint Self-Rating Depression Scale (SDS) -0.63 95% CI -1.06 to -0.21; 1 RCT, n = 30, SMD average endpoint Hamilton Depression Scale (Ham-D) -0.52 95% CI -1.25 to -0.21 ); and anxiety (1 RCT, n = 60, SMD average endpoint SAS -0.61 95% CI -1.13 to -0.09). Positive effects were also found for social functioning (1 RCT, n = 70, SMD average endpoint Social Disability Schedule for Inpatients (SDSI) score -0.78 95% CI -1.27 to -0.28). Furthermore, some aspects of cognitive functioning and behaviour seem to develop positively through music therapy. Effects, however, were inconsistent across studies and depended on the number of music therapy sessions as well as the quality of the music therapy provided.\n\n\nAUTHORS' CONCLUSIONS\nMusic therapy as an addition to standard care helps people with schizophrenia to improve their global state, mental state (including negative symptoms) and social functioning if a sufficient number of music therapy sessions are provided by qualified music therapists. Further research should especially address the long-term effects of music therapy, dose-response relationships, as well as the relevance of outcomes measures in relation to music therapy.",
"title": ""
},
{
"docid": "8bda505118b1731e778b41203520b3b8",
"text": "Image search and retrieval systems depend heavily on availability of descriptive textual annotations with images, to match them with textual queries of users. In most cases, such systems have to rely on users to provide tags or keywords with images. Users may add insufficient or noisy tags. A system to automatically generate descriptive tags for images can be extremely helpful for search and retrieval systems. Automatic image annotation has been explored widely in both image and text processing research communities. In this paper, we present a novel approach to tackle this problem by incorporating contextual information provided by scene analysis of image. Image can be represented by features which indicate type of scene shown in the image, instead of representing individual objects or local characteristics of that image. We have used such features to provide context in the process of predicting tags for images.",
"title": ""
},
{
"docid": "576819d44c53e29e495fe594ce624f17",
"text": "This paper proposes a new off line error compensation model by taking into accounting of geometric and cutting force induced errors in a 3-axis CNC milling machine. Geometric error of a 3-axis milling machine composes of 21 components, which can be measured by laser interferometer within the working volume. Geometric error estimation determined by back-propagation neural network is proposed and used separately in the geometric error compensation model. Likewise, cutting force induced error estimation by back-propagation neural network determined based on a flat end mill behavior observation is proposed and used separately in the cutting force induced error compensation model. Various experiments over a wide range of cutting conditions are carried out to investigate cutting force and machine error relation. Finally, the combination of geometric and cutting force induced errors is modeled by the combined back-propagation neural network. This unique model is used to compensate both geometric and cutting force induced errors simultaneously by a single model. Experimental tests have been carried out in order to validate the performance of geometric and cutting force induced errors compensation model. # 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "225e7b608d06d218144853b900d40fd1",
"text": "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% - 10% using a single model. Codes and models are available at https://github.com/ZYYSzj/Selective-Joint-Fine-tuning.",
"title": ""
},
{
"docid": "e30db40102a2d84a150c220250fa4d36",
"text": "A voltage reference circuit operating with all transistors biased in weak inversion, providing a mean reference voltage of 257.5 mV, has been fabricated in 0.18 m CMOS technology. The reference voltage can be approximated by the difference of transistor threshold voltages at room temperature. Accurate subthreshold design allows the circuit to work at room temperature with supply voltages down to 0.45 V and an average current consumption of 5.8 nA. Measurements performed over a set of 40 samples showed an average temperature coefficient of 165 ppm/ C with a standard deviation of 100 ppm/ C, in a temperature range from 0 to 125°C. The mean line sensitivity is ≈0.44%/V, for supply voltages ranging from 0.45 to 1.8 V. The power supply rejection ratio measured at 30 Hz and simulated at 10 MHz is lower than -40 dB and -12 dB, respectively. The active area of the circuit is ≈0.043mm2.",
"title": ""
},
{
"docid": "114880188f559f42f818ddfc0753c169",
"text": "Geometric active contours have many advantages over parametric active contours, such as computational simplicity and the ability to change the curve topology during deformation. While many of the capabilities of the older parametric active contours have been reproduced in geometric active contours, the relationship between the two has not always been clear. We develop a precise relationship between the two which includes spatially-varying coefficients, both tension and rigidity, and non-conservative external forces. The result is a very general geometric active contour formulation for which the intuitive design principles of parametric active contours can be applied. We demonstrate several novel applications in a series of simulations.",
"title": ""
},
{
"docid": "a7bc0af9b764021d1f325b1edfbfd700",
"text": "BACKGROUND\nIn the treatment of schizophrenia, changing antipsychotics is common when one treatment is suboptimally effective, but the relative effectiveness of drugs used in this strategy is unknown. This randomized, double-blind study compared olanzapine, quetiapine, risperidone, and ziprasidone in patients who had just discontinued a different atypical antipsychotic.\n\n\nMETHOD\nSubjects with schizophrenia (N=444) who had discontinued the atypical antipsychotic randomly assigned during phase 1 of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) investigation were randomly reassigned to double-blind treatment with a different antipsychotic (olanzapine, 7.5-30 mg/day [N=66]; quetiapine, 200-800 mg/day [N=63]; risperidone, 1.5-6.0 mg/day [N=69]; or ziprasidone, 40-160 mg/day [N=135]). The primary aim was to determine if there were differences between these four treatments in effectiveness measured by time until discontinuation for any reason.\n\n\nRESULTS\nThe time to treatment discontinuation was longer for patients treated with risperidone (median: 7.0 months) and olanzapine (6.3 months) than with quetiapine (4.0 months) and ziprasidone (2.8 months). Among patients who discontinued their previous antipsychotic because of inefficacy (N=184), olanzapine was more effective than quetiapine and ziprasidone, and risperidone was more effective than quetiapine. There were no significant differences between antipsychotics among those who discontinued their previous treatment because of intolerability (N=168).\n\n\nCONCLUSIONS\nAmong this group of patients with chronic schizophrenia who had just discontinued treatment with an atypical antipsychotic, risperidone and olanzapine were more effective than quetiapine and ziprasidone as reflected by longer time until discontinuation for any reason.",
"title": ""
},
{
"docid": "1dde34893bbfb2c08e2dd59f98836a2b",
"text": "Standards such as OIF CEI-25G, CEI-28G and 32G-FC require transceivers operating at high data rates over imperfect channels. Equalizers are used to cancel the inter-symbol interference (ISI) caused by frequency-dependent channel losses such as skin effect and dielectric loss. The primary objective of an equalizer is to compensate for high-frequency loss, which often exceeds 30dB at fs/2. However, due to the skin effect in a PCB stripline, which starts at 10MHz or lower, we also need to compensate for a small amount of loss at low frequency (e.g., 500MHz). Figure 2.1.1 shows simulated responses of a backplane channel (42.6dB loss at fs/2 for 32Gb/s) with conventional high-frequency equalizers only (4-tap feed-forward equalizer (FFE), 1st-order continuous-time linear equalizer (CTLE) with a dominant pole at fs/4, and 1-tap DFE) and with additional low-frequency equalization. Conventional equalizers cannot compensate for the small amount of low-frequency loss because the slope of the low-frequency loss is too gentle (<;3dB/dec). The FFE and CTLE do not have a pole in the low frequency region and hence have only a steep slope of 20dB/dec above their zero. The DFE cancels only short-term ISI. Effects of such low-frequency loss have often been overlooked or neglected, because 1) the loss is small (2 to 3dB), 2) when plotted using the linear frequency axis which is commonly used to show frequency dependence of skin effect and dielectric loss, the low-frequency loss is degenerated at DC and hardly visible (Fig. 2.1.1a), and 3) the long ISI tail of the channel pulse response seems well cancelled at first glance by conventional equalizers only (Fig. 2.1.1b). However, the uncompensated low-frequency loss causes non-negligible long-term residual ISI, because the integral of the residual ISI magnitude keeps going up for several hundred UI. As shown by the eye diagrams in the inset of Fig. 2.1.1(b), the residual long-term ISI results in 0.42UI data-dependent Jitter (DDJ) that is difficult to reduce further by enhancing FFE/CTLE/DFE, but can be reduced to 0.21UI by adding a low-frequency equalizer (LFEQ). Savoj et al. also recently reported long-tail cancellation [2].",
"title": ""
},
{
"docid": "a962df86c47b97280a272fb4a62c4f47",
"text": "Following an approach introduced by Lagnado and Osher (1997), we study Tikhonov regularization applied to an inverse problem important in mathematical finance, that of calibrating, in a generalized Black–Scholes model, a local volatility function from observed vanilla option prices. We first establish W 1,2 p estimates for the Black–Scholes and Dupire equations with measurable ingredients. Applying general results available in the theory of Tikhonov regularization for ill-posed nonlinear inverse problems, we then prove the stability of this approach, its convergence towards a minimum norm solution of the calibration problem (which we assume to exist), and discuss convergence rates issues.",
"title": ""
},
{
"docid": "e44636035306e122bf50115552516f53",
"text": "Texts and dialogues often express information indirectly. For instance, speakers’ answers to yes/no questions do not always straightforwardly convey a ‘yes’ or ‘no’ answer. The intended reply is clear in some cases (Was it good? It was great!) but uncertain in others (Was it acceptable? It was unprecedented.). In this paper, we present methods for interpreting the answers to questions like these which involve scalar modifiers. We show how to ground scalar modifier meaning based on data collected from the Web. We learn scales between modifiers and infer the extent to which a given answer conveys ‘yes’ or ‘no’. To evaluate the methods, we collected examples of question–answer pairs involving scalar modifiers from CNN transcripts and the Dialog Act corpus and use response distributions from Mechanical Turk workers to assess the degree to which each answer conveys ‘yes’ or ‘no’. Our experimental results closely match the Turkers’ response data, demonstrating that meanings can be learned from Web data and that such meanings can drive pragmatic inference.",
"title": ""
},
{
"docid": "1272563e64ca327aba1be96f2e045c30",
"text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.",
"title": ""
},
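The personalized-search abstract above maps a user query into a set of categories by combining a learned user profile with a general profile. The sketch below illustrates one simple way such a combination could be scored (a weighted term-overlap, which is an assumption for illustration, not the paper's learning or mapping algorithms):

```python
from collections import defaultdict

def score_categories(query_terms, user_profile, general_profile, alpha=0.7):
    """Score candidate categories for a query by combining a user profile and a
    general (category-hierarchy) profile.

    user_profile / general_profile: dict category -> dict term -> weight
    Returns categories sorted by combined score.
    """
    scores = defaultdict(float)
    for profile, weight in ((user_profile, alpha), (general_profile, 1.0 - alpha)):
        for category, term_weights in profile.items():
            overlap = sum(term_weights.get(t, 0.0) for t in query_terms)
            scores[category] += weight * overlap
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example (hypothetical profiles):
# top_categories = score_categories(["java", "swing"], user_profile, general_profile)[:3]
```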
{
"docid": "494d013d52282b9c6667024188c38542",
"text": "Digital Image processing( DIP ) is a theme of awesome significance basically for any task, either for essential varieties of photograph indicators or complex mechanical frameworks utilizing assumed vision. In this paperbasics of the image processing in LabVIEW have been described in brief. It involves capturing the image of an object that is to be analysed and compares it with the reference image template of the object by pattern matching algorithm. The co-ordinates of the image is also be identified by tracking of object on the screen. A basic pattern matching algorithm is modified to snap and track the image on real-time basis. Keywords— LabVIEW, IMAQ, Pattern matching, Realtime tracking, .",
"title": ""
},
{
"docid": "df609125f353505fed31eee302ac1742",
"text": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].",
"title": ""
},
{
"docid": "cd48c6b722f8e88f0dc514fcb6a0d890",
"text": "Multi-tier data-intensive applications are widely deployed in virtualized data centers for high scalability and reliability. As the response time is vital for user satisfaction, this requires achieving good performance at each tier of the applications in order to minimize the overall latency. However, in such virtualized environments, each tier (e.g., application, database, web) is likely to be hosted by different virtual machines (VMs) on multiple physical servers, where a guest VM is unaware of changes outside its domain, and the hypervisor also does not know the configuration and runtime status of a guest VM. As a result, isolated virtualization domains lend themselves to performance unpredictability and variance. In this paper, we propose IOrchestra, a holistic collaborative virtualization framework, which bridges the semantic gaps of I/O stacks and system information across multiple VMs, improves virtual I/O performance through collaboration from guest domains, and increases resource utilization in data centers. We present several case studies to demonstrate that IOrchestra is able to address numerous drawbacks of the current practice and improve the I/O latency of various distributed cloud applications by up to 31%.",
"title": ""
}
] |
scidocsrr
|
5f73633df9472d368dcdb17566f3c935
|
Research through design as a method for interaction design research in HCI
|
[
{
"docid": "ed4dcf690914d0a16d2017409713ea5f",
"text": "We argue that HCI has emerged as a design-oriented field of research, directed at large towards innovation, design, and construction of new kinds of information and interaction technology. But the understanding of such an attitude to research in terms of philosophical, theoretical, and methodological underpinnings seems however relatively poor within the field. This paper intends to specifically address what design 'is' and how it is related to HCI. First, three candidate accounts from design theory of what design 'is' are introduced; the conservative, the romantic, and the pragmatic. By examining the role of sketching in design, it is found that the designer becomes involved in a necessary dialogue, from which the design problem and its solution are worked out simultaneously as a closely coupled pair. In conclusion, it is proposed that we need to acknowledge, first, the role of design in HCI conduct, and second, the difference between the knowledge-generating Design-oriented Research and the artifact-generating conduct of Research-oriented Design.",
"title": ""
}
] |
[
{
"docid": "eb06c0af1ea9de72f27f995d54590443",
"text": "Random acceleration vibration specifications for subsystems, i.e. instruments, equipment, are most times based on measurement during acoustic noise tests on system level, i.e. a spacecraft and measured by accelerometers, placed in the neighborhood of the interface between spacecraft and subsystem. Tuned finite element models can be used to predict the random acceleration power spectral densities at other locations than available via the power spectral density measurements of the acceleration. The measured and predicted power spectral densities do represent the modal response characteristics of the system and show many peaks and valleys. The equivalent random acceleration vibration test specification is a smoothed, enveloped, peak-clipped version of the measured and predicted power spectral densities of the acceleration spectrum. The original acceleration vibration spectrum can be characterized by a different number response spectra: Shock Response Spectrum (SRS) , Extreme Response Spectrum (ERS), Vibration Response Spectrum (VRS), and Fatigue Damage Spectrum (FDS). An additional method of non-stationary random vibrations is based on the Rayleigh distribution of peaks. The response spectra represent the responses of series of SDOF systems excited at the base by random acceleration, both in time and frequency domain. The synthesis of equivalent random acceleration vibration specifications can be done in a very structured manner and are more suitable than equivalent random acceleration vibration specifications obtained by simple enveloping. In the synthesis process Miles’ equation plays a dominant role to invert the response spectra into equivalent random acceleration vibration spectra. A procedure is proposed to reduce the number of data point in the response spectra curve by dividing the curve in a numbers of fields. The synthesis to an equivalent random acceleration J.J. Wijker, M.H.M. Ellenbroek, and A. de Boer spectrum is performed on a reduced selected set of data points. The recalculated response spectra curve envelops the original response spectra curves. A real life measured random acceleration spectrum (PSD) with quite a number of peaks and valleys is taken to generate, applying response spectra SRS, ERS, VRS, FDS and the Rayleigh distribution of peaks, equivalent random acceleration vibration specifications. Computations are performed both in time and frequency domain. J.J. Wijker, M.H.M. Ellenbroek, and A. de Boer",
"title": ""
},
{
"docid": "5a8d4bfb89468d432b7482062a0cbf2d",
"text": "While “no one size fits all” is a sound philosophy for system designers to follow, it poses multiple challenges for application developers and system administrators. It can be hard for an application developer to pick one system when the needs of her application match the features of multiple “one size” systems. The choice becomes considerably harder when different components of an application fit the features of different “one size” systems. Considerable manual effort goes into creating and tuning such multi-system applications. An application’s data and workload properties may change over time, often in unpredictable and bursty ways. Consequently, the “one size” system that is best for an application can change over time. Adapting to change can be hard when application development is coupled tightly with any individual “one size” system. In this paper, we make the case for developing a new breed of Database Management Systems that we term DBMS. A DBMS contains multiple “one size” systems internally. An application specifies its execution requirements on aspects like performance, availability, consistency, change, and cost to the DBMS declaratively. For all requests (e.g., queries) made by the application, the DBMS will select the execution plan that meets the application’s requirements best. A unique aspect of the execution plan in a DBMS is that the plan includes the selection of one or more “one size” systems. The plan is then deployed and managed automatically on the selected system(s). If application requirements change beyond what was planned for originally by the DBMS, then the application can be reoptimized and redeployed; usually with no additional effort required from the application developer. The DBMS approach has the potential to address the challenges that application developers and system administrators face from the vast and growing number of “one size” systems today. However, this approach poses many research challenges that we discuss in this paper. We are taking the DBMS approach in a platform, called Cyclops, that we are building for continuous query execution. We will use Cyclops throughout the paper to give concrete illustrations of the benefits and challenges of the DBMS approach. This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2013. 6 Biennial Conference on Innovative Data Systems Research (CIDR ’13) January 6-9, 2013, Asilomar, California, USA.",
"title": ""
},
{
"docid": "d27735fc52e407e4b5e1b3fd7296ff8e",
"text": "The ACL Anthology Network (AAN)1 is a comprehensive manually curated networked database of citations and collaborations in the field of Computational Linguistics. Each citation edge in AAN is associated with one or more citing sentences. A citing sentence is one that appears in a scientific article and contains an explicit reference to another article. In this paper, we shed the light on the usefulness of AAN citing sentences for understanding research trends and summarizing previous discoveries and contributions. We also propose and motivate several different uses and applications of citing sentences.",
"title": ""
},
{
"docid": "fb655a622c2e299b8d7f8b85769575b4",
"text": "With the substantial development of digital technologies in multimedia, network communication and user interfaces, we are seeing an increasing number of applications of these technologies, in particular in the entertainment domain. They include computer gaming, elearning, high-definition and interactive TVs, and virtual environments. The development of these applications typically involves the integration of existing technologies as well as the development of new technologies. This Introduction summarizes latest interactive entertainment technologies and applications, and briefly highlights some potential research directions. It also introduces the seven papers that are accepted to the special issue. Hopefully, this will provide the readers some insights into future research topics in interactive entertainment technologies and applications.",
"title": ""
},
{
"docid": "126b52ab2e2585eabf3345ef7fb39c51",
"text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.",
"title": ""
},
{
"docid": "065c24bc712f7740b95e0d1a994bfe19",
"text": "David Haussler Computer and Information Sciences University of California Santa Cruz Santa Cruz , CA 95064 We study a particular type of Boltzmann machine with a bipartite graph structure called a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors . We analyze the class of probability distributions that can be modeled by such machines. showing that for each n ~ 1 this class includes arbitrarily good appwximations to any distribution on the set of all n-vectors of binary inputs. We then present two learning algorithms for these machines .. The first learning algorithm is the standard gradient ascent heuristic for computing maximum likelihood estimates for the parameters (i.e. weights and thresholds) of the modeL Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine . The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of the standard method for projection pursuit density estimation . We give experimental results for these learning methods on synthetic data and natural data from the domain of handwritten digits.",
"title": ""
},
{
"docid": "3e80b90205de0033a3e22f7914f7fed9",
"text": "-------------------------------------------------------------------ABSTRACT---------------------------------------------------------------------Financial losses due to financial statement frauds (FSF) are increasing day by day in the world. The industry recognizes the problem and is just now starting to act. Although prevention is the best way to reduce frauds, fraudsters are adaptive and will usually find ways to circumvent such measures. Detecting fraud is essential once prevention mechanism has failed. Several data mining algorithms have been developed that allow one to extract relevant knowledge from a large amount of data like fraudulent financial statements to detect FSF. It is an attempt to detect FSF ; We present a generic framework to do our analysis.",
"title": ""
},
{
"docid": "7fed1248efb156c8b2585147e2791ed7",
"text": "In [1], we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend [1] in three ways: 1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. 2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. 3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of [1]. Unlike the branch-and-cut algorithm used there, this efficient algorithm used here is applicable to long videos and many detections. Together with the novel feature, it eliminates the need for the intermediate tracklet representation of [1]. We demonstrate the effectiveness of our overall approach on the MOT16 benchmark [2], achieving state-of-art performance.",
"title": ""
},
{
"docid": "3cbb932e65cf2150cb32aaf930b45492",
"text": "In software industries, various open source projects utilize the services of Bug Tracking Systems that let users submit software issues or bugs and allow developers to respond to and fix them. The users label the reports as bugs or any other relevant class. This classification helps to decide which team or personnel would be responsible for dealing with an issue. A major problem here is that users tend to wrongly classify the issues, because of which a middleman called a bug triager is required to resolve any misclassifications. This ensures no time is wasted at the developer end. This approach is very time consuming and therefore it has been of great interest to automate the classification process, not only to speed things up, but to lower the amount of errors as well. In the literature, several approaches including machine learning techniques have been proposed to automate text classification. However, there has not been an extensive comparison on the performance of different natural language classifiers in this field. In this paper we compare general natural language data classifying techniques using five different machine learning algorithms: Naive Bayes, kNN, Pegasos, Rocchio and Perceptron. The performance comparison of these algorithms was done on the basis of their apparent error rates. The data-set involved four different projects, Httpclient, Jackrabbit, Lucene and Tomcat5, that used two different Bug Tracking Systems - Bugzilla and Jira. An experimental comparison of pre-processing techniques was also performed.",
"title": ""
},
{
"docid": "f785636331f737d8dc14b6958277553f",
"text": "This paper focuses on subword-based Neural Machine Translation (NMT). We hypothesize that in the NMT model, the appropriate subword units for the following three modules (layers) can differ: (1) the encoder embedding layer, (2) the decoder embedding layer, and (3) the decoder output layer. We find the subword based on Sennrich et al. (2016) has a feature that a large vocabulary is a superset of a small vocabulary and modify the NMT model enables the incorporation of several different subword units in a single embedding layer. We refer these small subword features as hierarchical subword features. To empirically investigate our assumption, we compare the performance of several different subword units and hierarchical subword features for both the encoder and decoder embedding layers. We confirmed that incorporating hierarchical subword features in the encoder consistently improves BLEU scores on the IWSLT evaluation datasets. Title and Abstract in Japanese 階層的部分単語特徴を用いたニューラル機械翻訳 本稿では、部分単語 (subword) を用いたニューラル機械翻訳 (Neural Machine Translation, NMT)に着目する。NMTモデルでは、エンコーダの埋め込み層、デコーダの埋め込み層お よびデコーダの出力層の 3箇所で部分単語が用いられるが、それぞれの層で適切な部分単 語単位は異なるという仮説を立てた。我々は、Sennrich et al. (2016)に基づく部分単語は、 大きな語彙集合が小さい語彙集合を必ず包含するという特徴を利用して、複数の異なる部 分単語列を同時に一つの埋め込み層として扱えるよう NMTモデルを改良する。以降、こ の小さな語彙集合特徴を階層的部分単語特徴と呼ぶ。本仮説を検証するために、様々な部 分単語単位や階層的部分単語特徴をエンコーダ・デコーダの埋め込み層に適用して、その 精度の変化を確認する。IWSLT評価セットを用いた実験により、エンコーダ側で階層的な 部分単語を用いたモデルは BLEUスコアが一貫して向上することが確認できた。",
"title": ""
},
{
"docid": "ab0541d9ec1ea0cf7ad85d685267c142",
"text": "Umbilical catheters have been used in NICUs for drawing blood samples, measuring blood pressure, and administering fluid and medications for more than 25 years. Complications associated with umbilical catheters include thrombosis; embolism; vasospasm; vessel perforation; hemorrhage; infection; gastrointestinal, renal, and limb tissue damage; hepatic necrosis; hydrothorax; cardiac arrhythmias; pericardial effusion and tamponade; and erosion of the atrium and ventricle. A review of the literature provides conflicting accounts of the superiority of high versus low placement of umbilical arterial catheters. This article reviews the current literature regarding use of umbilical catheters in neonates. It also highlights the policy developed for the authors' NICU, a 34-bed tertiary care unit of a children's hospital, and analyzes complications associated with umbilical catheter use for 1 year in that unit.",
"title": ""
},
{
"docid": "c35a4278aa4a084d119238fdd68d9eb6",
"text": "ARM TrustZone, which provides a Trusted Execution Environment (TEE), normally plays a role in keeping security-sensitive resources safe. However, to properly control access to the resources, it is not enough to just isolate them from the Rich Execution Environment (REE). In addition to the isolation, secure communication should be guaranteed between security-critical resources in the TEE and legitimate REE processes that are permitted to use them. Even though there is a TEE security solution — namely, a kernel-integrity monitor — it aims to protect the REE kernel’s static regions, not to secure communication between the REE and TEE. We propose SeCReT to ameliorate this problem. SeCReT is a framework that builds a secure channel between the REE and TEE by enabling REE processes to use session keys in the REE that is regarded as unsafe region. SeCReT provides the session key to a requestor process only when the requestor’s code and control flow integrity are verified. To prevent the key from being exposed to an attacker who already compromised the REE kernel, SeCReT flushes the key from the memory every time the processor switches into kernel mode. In this paper, we present the design and implementation of SeCReT to show how it protects the key in the REE. Our prototype is implemented on Arndale board, which offers a Cortex-A15 dual-core processor with TrustZone as its security extension. We performed a security analysis by using a kernel rootkit and also ran LMBench microbenchmark to evaluate the performance overhead imposed by SeCReT.",
"title": ""
},
{
"docid": "9533193407869250854157e89d2815eb",
"text": "Life events are often described as major forces that are going to shape tomorrow's consumer need, behavior and mood. Thus, the prediction of life events is highly relevant in marketing and sociology. In this paper, we propose a data-driven, real-time method to predict individual life events, using readily available data from smartphones. Our large-scale user study with more than 2000 users shows that our method is able to predict life events with 64.5% higher accuracy, 183.1% better precision and 88.0% higher specificity than a random model on average.",
"title": ""
},
{
"docid": "ba696260b6b5ae71f4558e4c1addeebd",
"text": "Over the last 100 years, many studies have been performed to determine the biochemical and histopathological phenomena that mark the origin of neoplasms. At the end of the last century, the leading paradigm, which is currently well rooted, considered the origin of neoplasms to be a set of genetic and/or epigenetic mutations, stochastic and independent in a single cell, or rather, a stochastic monoclonal pattern. However, in the last 20 years, two important areas of research have underlined numerous limitations and incongruities of this pattern, the hypothesis of the so-called cancer stem cell theory and a revaluation of several alterations in metabolic networks that are typical of the neoplastic cell, the so-called Warburg effect. Even if this specific \"metabolic sign\" has been known for more than 85 years, only in the last few years has it been given more attention; therefore, the so-called Warburg hypothesis has been used in multiple and independent surveys. Based on an accurate analysis of a series of considerations and of biophysical thermodynamic events in the literature, we will demonstrate a homogeneous pattern of the cancer stem cell theory, of the Warburg hypothesis and of the stochastic monoclonal pattern; this pattern could contribute considerably as the first basis of the development of a new uniform theory on the origin of neoplasms. Thus, a new possible epistemological paradigm is represented; this paradigm considers the Warburg effect as a specific \"metabolic sign\" reflecting the stem origin of the neoplastic cell, where, in this specific metabolic order, an essential reason for the genetic instability that is intrinsic to the neoplastic cell is defined.",
"title": ""
},
{
"docid": "fd3faa049df1d2a0b2fe9af6cf0f3e06",
"text": "Wireless Mesh Networks improve their capacities by equipping mesh nodes with multi-radios tuned to non-overlapping channels. Hence the data forwarding between two nodes has multiple selections of links and the bandwidth between the pair of nodes varies dynamically. Under this condition, a mesh node adopts machine learning mechanisms to choose the possible best next hop which has maximum bandwidth when it intends to forward data. In this paper, we present a machine learning based forwarding algorithm to let a forwarding node dynamically select the next hop with highest potential bandwidth capacity to resume communication based on learning algorithm. Key to this strategy is that a node only maintains three past status, and then it is able to learn and predict the potential bandwidth capacities of its links. Then, the node selects the next hop with potential maximal link bandwidth. Moreover, a geometrical based algorithm is developed to let the source node figure out the forwarding region in order to avoid flooding. Simulations demonstrate that our approach significantly speeds up the transmission and outperforms other peer algorithms.",
"title": ""
},
{
"docid": "0a1aeee5bf33abd61665c72d1c0b911b",
"text": "Kampo herbal remedies are reported to have a wide range of indications and have attracted attention due to reports suggesting that these remedies are effective when used in disease treatment while maintaining a favourable quality of life. Yokukansan, also known as TJ-54, is composed of seven herbs; Angelica acutiloba, Atractylodes lancea, Bupleurum falcatum, Poria cocos, Glycyrrhiza uralensis, Cnidium officinale and Uncaria rhynchophylla. Yokukansan is used to treat insomnia and irritability as well as screaming attacks, sleep tremors and hypnic myoclonia, and neurological disorders which include dementia and Alzheimer's disease - the focus of this article. It is concluded that Yokukansan is a versatile herbal remedy with a variety of effects on various neurological states, without reported adverse effects. Traditional herbal medicines consist of a combination of constituents which account for the clinical effect seen. Likewise, the benefits of Yokukansan are probably attributable to the preparation as a whole, rather than to individual compounds.",
"title": ""
},
{
"docid": "ecd7da1f742b4c92f3c748fd19098159",
"text": "Abstract. Today, a paradigm shift is being observed in science, where the focus is gradually shifting toward the cloud environments to obtain appropriate, robust and affordable services to deal with Big Data challenges (Sharma et al. 2014, 2015a, 2015b). Cloud computing avoids any need to locally maintain the overly scaled computing infrastructure that include not only dedicated space, but the expensive hardware and software also. In this paper, we study the evolution of as-a-Service modalities, stimulated by cloud computing, and explore the most complete inventory of new members beyond traditional cloud computing stack.",
"title": ""
},
{
"docid": "2e42e1f9478fb2548e39a92c5bacbaab",
"text": "In this paper, we consider a fully automatic makeup recommendation system and propose a novel examples-rules guided deep neural network approach. The framework consists of three stages. First, makeup-related facial traits are classified into structured coding. Second, these facial traits are fed into examples-rules guided deep neural recommendation model which makes use of the pairwise of Before-After images and the makeup artist knowledge jointly. Finally, to visualize the recommended makeup style, an automatic makeup synthesis system is developed as well. To this end, a new Before-After facial makeup database is collected and labeled manually, and the knowledge of makeup artist is modeled by knowledge base system. The performance of this framework is evaluated through extensive experimental analyses. The experiments validate the automatic facial traits classification, the recommendation effectiveness in statistical and perceptual ways and the makeup synthesis accuracy which outperforms the state of the art methods by large margin. It is also worthy to note that the proposed framework is a pioneering fully automatic makeup recommendation systems to our best knowledge.",
"title": ""
},
{
"docid": "33390e96d05644da201db3edb3ad7338",
"text": "This paper addresses the difficult problem of finding an optimal neural architecture design for a given image classification task. We propose a method that aggregates two main results of the previous state-of-the-art in neural architecture search. These are, appealing to the strong sampling efficiency of a search scheme based on sequential modelbased optimization (SMBO) [15], and increasing training efficiency by sharing weights among sampled architectures [18]. Sequential search has previously demonstrated its capabilities to find state-of-the-art neural architectures for image classification. However, its computational cost remains high, even unreachable under modest computational settings. Affording SMBO with weight-sharing alleviates this problem. On the other hand, progressive search with SMBO is inherently greedy, as it leverages a learned surrogate function to predict the validation error of neural architectures. This prediction is directly used to rank the sampled neural architectures. We propose to attenuate the greediness of the original SMBO method by relaxing the role of the surrogate function so it predicts architecture sampling probability instead. We demonstrate with experiments on the CIFAR-10 dataset that our method, denominated Efficient progressive neural architecture search (EPNAS), leads to increased search efficiency, while retaining competitiveness of found architectures.",
"title": ""
},
{
"docid": "ff5f7772a0a578cfe1dd08816af8e2e7",
"text": "Moisture-associated skin damage (MASD) occurs when there is prolonged exposure of the skin to excessive amounts of moisture from incontinence, wound exudate or perspiration. Incontinenceassociated dermatitis (IAD) relates specifically to skin breakdown from faecal and/or urinary incontinence (Beeckman et al, 2009), and has been defined as erythema and oedema of the skin surface, which may be accompanied by bullae with serous exudate, erosion or secondary cutaneous infection (Gray et al, 2012). IAD may also be referred to as a moisture lesion, moisture ulcer, perineal dermatitis or diaper dermatitis (Ousey, 2012). The effects of ageing on the skin are known to affect skin integrity, as is the underdeveloped nature of very young skin; as such, elderly patients and neonates are particularly vulnerable to damage from moisture (Voegeli, 2007). The increase in moisture resulting from episodes of incontinence is exacerbated due to bacterial and enzymatic activity associated with urine and faeces, particularly when both are present, which leads to an increase in skin pH alongside over-hydration of the skin surface. This damages the natural protection of the acid mantle, the skin’s naturally acidic pH, which is an important defence mechanism against external irritants and microorganisms. This damage leads to the breakdown of vulnerable skin and increased susceptibility to secondary infection (Beeckman et al, 2009). It has become well recognised that presence of IAD greatly increases the likelihood of pressure ulcer development, since over-hydrated skin is much more susceptible to damage by extrinsic factors such as pressure, friction and shear as compared with normal skin (Clarke et al, 2010). While it is important to firstly understand that pressure and moisture damage are separate aetiologies and, secondly, be able to recognise the clinical differences in presentation, one of the factors to consider for prevention of pressure ulcers is minimising exposure to moisture/ incontinence. Another important consideration with IAD is the effect on the patient. IAD can be painful and debilitating, and has been associated with reduced quality of life. It can also be time-consuming and expensive to treat, which has an impact on clinical resources and financial implications (Doughty et al, 2012). IAD is known to impact on direct Incontinence-associated dermatitis (IAD) relates to skin breakdown from exposure to urine or faeces, and its management involves implementation of structured skin care regimens that incorporate use of appropriate skin barrier products to protect the skin from exposure to moisture and irritants. Medi Derma-Pro Foam & Spray Cleanser and Medi Derma-Pro Skin Protectant Ointment are recent additions to the Total Barrier ProtectionTM (Medicareplus International) range indicated for management of moderateto-severe IAD and other moisture-associated skin damage. This article discusses a series of case studies and product evaluations performed to determine clinical outcomes and clinician feedback based on use of the Medi Derma-Pro skin barrier products to manage IAD. Results showed improvements to patients’ skin condition following use of Medi Derma-Pro, and the cleanser and skin protectant ointment were considered better than or the same as the most equivalent products on the market.",
"title": ""
}
] |
scidocsrr
|
6e6180c0e068f9ced017825428f8456a
|
Optimization of the settings of multiphase induction heating system
|
[
{
"docid": "8d1465aadbce57275d29d572d7dd6e52",
"text": "This paper presents a multiphase induction system modeling for a metal disc heating and further industrial applications such as hot strip mill. An original architecture, with three concentric inductors supplied by three resonant current inverters, leads to a reduced element system, without any coupling transformers, phase loop, mobile screens, or mobile magnetic cores as it could be found in classical solutions. A simulation model is built, based on simplified equivalent models of electric and thermal phenomena. It takes into account the data extracted from Flux2D finite-element software, concerning the energy transfer between the inductor currents and the piece to be heated. It is implemented in a versatile software PSIM, initially dedicated to power electronics. An optimization procedure calculates the optimal supply currents in the inverters in order to obtain a desired power density profile in the work piece. This paper deals with the simulated and experimental results which are compared in open loop and closed loop. This paper ends with a current control method which sets rms inductor currents in continuous and digital conditions.",
"title": ""
}
] |
[
{
"docid": "a1b91f78786d44cdadc6da0c2ecc2d1f",
"text": "Availability of an explainable deep learning model that can be applied to practical real world scenarios and in turn, can consistently, rapidly and accurately identify specific and minute traits in applicable fields of biological sciences, is scarce. Here we consider one such real world example viz., accurate identification, classification and quantification of biotic and abiotic stresses in crop research and production. Up until now, this has been predominantly done manually by visual inspection and require specialized training. However, such techniques are hindered by subjectivity resulting from interand intra-rater cognitive variability. Here, we demonstrate the ability of a machine learning framework to identify and classify a diverse set of foliar stresses in the soybean plant with remarkable accuracy. We also present an explanation mechanism using gradientweighted class activation mapping that isolates the visual symptoms used by the model to make predictions. This unsupervised identification of unique visual symptoms for each stress provides a quantitative measure of stress severity, allowing for identification, classification and quantification in one framework. The learnt model appears to be agnostic to species and make good predictions for other (non-soybean) species, demonstrating an ability of transfer learning. Disciplines Agriculture | Agronomy and Crop Sciences | Computer-Aided Engineering and Design Comments This is a pre-print made available through arxiv: https://arxiv.org/abs/1710.08619. Authors Sambuddha Ghosal, David Blystone, Asheesh K. Singh, Baskar Ganapathysubramanian, Arti Singh, and Soumik Sarkar This article is available at Iowa State University Digital Repository: https://lib.dr.iastate.edu/agron_pubs/540 Interpretable Deep Learning applied to Plant Stress Phenotyping Sambuddha Ghosal Department of Mechanical Engineering Iowa State University [email protected] David Blystone Department of Agronomy Iowa State University [email protected] Asheesh K. Singh Department of Agronomy Iowa State University [email protected] Baskar Ganapathysubramanian Department of Mechanical Engineering Iowa State University [email protected] Arti Singh Department of Agronomy Iowa State University [email protected] Soumik Sarkar Department of Mechanical Engineering Iowa State University [email protected]",
"title": ""
},
{
"docid": "8ace8a84496060999001bc8daab1b01f",
"text": "As the field of HRI evolves, it is important to understand how users interact with robots over long periods. This paper reviews the current research on long-term interaction between users and social robots. We describe the main features of these robots and highlight the main findings of the existing long-term studies. We also present a set of directions for future research and discuss some open issues that should be addressed in this field.",
"title": ""
},
{
"docid": "3442a266eaaf878a507f58124e15fee3",
"text": "The application of kernel-based learning algorithms has, so far, largely been confined to realvalued data and a few special data types, such as strings. In this paper we propose a general method of constructing natural families of kernels over discrete structures, based on the matrix exponentiation idea. In particular, we focus on generating kernels on graphs, for which we propose a special class of exponential kernels called diffusion kernels, which are based on the heat equation and can be regarded as the discretization of the familiar Gaussian kernel of Euclidean space.",
"title": ""
},
{
"docid": "bbfcce9ec7294cb542195cca1dfbcc6c",
"text": "We propose a new algorithm, DASSO, for fitting the entire coef fici nt path of the Dantzig selector with a similar computational cost to the LA RS algorithm that is used to compute the Lasso. DASSO efficiently constructs a piecewi s linear path through a sequential simplex-like algorithm, which is remarkably si milar to LARS. Comparison of the two algorithms sheds new light on the question of how th e Lasso and Dantzig selector are related. In addition, we provide theoretical c onditions on the design matrix, X, under which the Lasso and Dantzig selector coefficient esti mates will be identical for certain tuning parameters. As a consequence, in many instances, we are able to extend the powerful non-asymptotic bounds that have been de veloped for the Dantzig selector to the Lasso. Finally, through empirical studies o f imulated and real world data sets we show that in practice, when the bounds hold for th e Dantzig selector, they almost always also hold for the Lasso. Some key words : Dantzig selector; LARS; Lasso; DASSO",
"title": ""
},
{
"docid": "a24f958c480812feb338b651849037b2",
"text": "This paper investigates the detection and classification of fighting and pre and post fighting events when viewed from a video camera. Specifically we investigate normal, pre, post and actual fighting sequences and classify them. A hierarchical AdaBoost classifier is described and results using this approach are presented. We show it is possible to classify pre-fighting situations using such an approach and demonstrate how it can be used in the general case of continuous sequences.",
"title": ""
},
{
"docid": "cac8aa7cfd50da05a6f973b019e8c4f5",
"text": "Deep learning has led to remarkable advances when applied to problems where the data distribution does not change over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, and solve a diversity of tasks simultaneously. Furthermore, synapses in biological neurons are not simply real-valued scalars, but possess complex molecular machinery enabling non-trivial learning dynamics. In this study, we take a first step toward bringing this biological complexity into artificial neural networks. We introduce a model of intelligent synapses that accumulate task relevant information over time, and exploit this information to efficiently consolidate memories of old tasks to protect them from being overwritten as new tasks are learned. We apply our framework to learning sequences of related classification problems, and show that it dramatically reduces catastrophic forgetting while maintaining computational efficiency.",
"title": ""
},
{
"docid": "7931fa9541efa9a006a030655c59c5f4",
"text": "Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.",
"title": ""
},
{
"docid": "73adcdf18b86ab3598731d75ac655f2c",
"text": "Many individuals exhibit unconscious body movements called mannerisms while speaking. These repeated changes often distract the audience when not relevant to the verbal context. We present an intelligent interface that can automatically extract human gestures using Microsoft Kinect to make speakers aware of their mannerisms. We use a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract the patterns of body movements. These patterns are displayed in an interface with subtle question and answer-based feedback scheme that draws attention to the speaker's body language. Our formal evaluation with 27 participants shows that the users became aware of their body language after using the system. In addition, when independent observers annotated the accuracy of the algorithm for every extracted pattern, we find that the patterns extracted by our algorithm is significantly (p<0.001) more accurate than just random selection. This represents a strong evidence that the algorithm is able to extract human-interpretable body movement patterns. An interactive demo of AutoManner is available at http://tinyurl.com/AutoManner.",
"title": ""
},
{
"docid": "bfba2d1f26b3ac66630d81ab5bf10347",
"text": "Authcoin is an alternative approach to the commonly used public key infrastructures such as central authorities and the PGP web of trust. It combines a challenge response-based validation and authentication process for domains, certificates, email accounts and public keys with the advantages of a block chain-based storage system. As a result, Authcoin does not suffer from the downsides of existing solutions and is much more resilient to sybil attacks.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "89513d2cf137e60bf7f341362de2ba84",
"text": "In this paper, we present a visual analytics approach that provides decision makers with a proactive and predictive environment in order to assist them in making effective resource allocation and deployment decisions. The challenges involved with such predictive analytics processes include end-users' understanding, and the application of the underlying statistical algorithms at the right spatiotemporal granularity levels so that good prediction estimates can be established. In our approach, we provide analysts with a suite of natural scale templates and methods that enable them to focus and drill down to appropriate geospatial and temporal resolution levels. Our forecasting technique is based on the Seasonal Trend decomposition based on Loess (STL) method, which we apply in a spatiotemporal visual analytics context to provide analysts with predicted levels of future activity. We also present a novel kernel density estimation technique we have developed, in which the prediction process is influenced by the spatial correlation of recent incidents at nearby locations. We demonstrate our techniques by applying our methodology to Criminal, Traffic and Civil (CTC) incident datasets.",
"title": ""
},
{
"docid": "89a11e5525d086b6b480fba368fb7924",
"text": "OBJECTIVE\nMost BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently zero-training methods have become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language model and (d) dynamic stopping.\n\n\nAPPROACH\nA simulation study compares the proposed probabilistic zero framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influence of the involved components (a)-(d) are investigated.\n\n\nMAIN RESULTS\nWithout any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance--competitive to a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation.\n\n\nSIGNIFICANCE\nA high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP-applications of BCI.",
"title": ""
},
{
"docid": "6b18d45d56b9e3f34ce9b983bb8d30a9",
"text": "1. The size of the Mexican overwintering population of monarch butterflies has decreased over the last decade. Approximately half of these butterflies come from the U.S. Midwest where larvae feed on common milkweed. There has been a large decline in milkweed in agricultural fields in the Midwest over the last decade. This loss is coincident with the increased use of glyphosate herbicide in conjunction with increased planting of genetically modified (GM) glyphosate-tolerant corn (maize) and soybeans (soya). 2. We investigate whether the decline in the size of the overwintering population can be attributed to a decline in monarch production owing to a loss of milkweeds in agricultural fields in the Midwest. We estimate Midwest annual monarch production using data on the number of monarch eggs per milkweed plant for milkweeds in different habitats, the density of milkweeds in different habitats, and the area occupied by those habitats on the landscape. 3. We estimate that there has been a 58% decline in milkweeds on the Midwest landscape and an 81% decline in monarch production in the Midwest from 1999 to 2010. Monarch production in the Midwest each year was positively correlated with the size of the subsequent overwintering population in Mexico. Taken together, these results strongly suggest that a loss of agricultural milkweeds is a major contributor to the decline in the monarch population. 4. The smaller monarch population size that has become the norm will make the species more vulnerable to other conservation threats.",
"title": ""
},
{
"docid": "36b0ace93b5a902966e96e4649d83b98",
"text": "We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images and was inspired by deep convolutional approaches. The proposed matching algorithm can handle non-rigid deformations and repetitive textures and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al. A naturalistic open source movie for optical flow evaluation, 2012) and the Kitti (Geiger et al. Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to the computation of optical flow, called DeepFlow, by integrating it in the large displacement optical flow (LDOF) approach of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Additional robustness to large displacements and complex motion is obtained thanks to our matching approach. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.",
"title": ""
},
{
"docid": "6778931314fbaa831264c91250614a0c",
"text": "We present a real-time indoor visible light positioning system based on the optical camera communication, where the coordinate data in the ON–OFF keying format is transmitted via light-emitting diode-based lights and captured using a smartphone camera. The position of the camera is estimated using a novel perspective-<inline-formula> <tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula>-point problem algorithm, which determines the position of a calibrated camera from <inline-formula> <tex-math notation=\"LaTeX\">$n~3\\text{D}$ </tex-math></inline-formula>-to-2D point correspondences. The experimental results show that the proposed system offers mean position errors of 4.81 and 6.58 cm for the heights of 50 and 80 cm, respectively.",
"title": ""
},
{
"docid": "8d13a4f52c9a72a2f53b6633f7fb4053",
"text": "The hippocampal-entorhinal system encodes a map of space that guides spatial navigation. Goal-directed behaviour outside of spatial navigation similarly requires a representation of abstract forms of relational knowledge. This information relies on the same neural system, but it is not known whether the organisational principles governing continuous maps may extend to the implicit encoding of discrete, non-spatial graphs. Here, we show that the human hippocampal-entorhinal system can represent relationships between objects using a metric that depends on associative strength. We reconstruct a map-like knowledge structure directly from a hippocampal-entorhinal functional magnetic resonance imaging adaptation signal in a situation where relationships are non-spatial rather than spatial, discrete rather than continuous, and unavailable to conscious awareness. Notably, the measure that best predicted a behavioural signature of implicit knowledge and blood oxygen level-dependent adaptation was a weighted sum of future states, akin to the successor representation that has been proposed to account for place and grid-cell firing patterns.",
"title": ""
},
{
"docid": "16c205cd85d33eed145724bc6b015ba1",
"text": "Telematics data is becoming increasingly available due to the ubiquity of devices that collect data during drives, for different purposes, such as usage based insurance (UBI), fleet management, navigation of connected vehicles, etc. Consequently, a variety of data-analytic applications have become feasible that extract valuable insights from the data. In this paper, we address the especially challenging problem of discovering behavior-based driving patterns from only externally observable phenomena (e.g. vehicle's speed). We present a trajectory segmentation approach capable of discovering driving patterns as separate segments, based on the behavior of drivers. This segmentation approach includes a novel transformation of trajectories along with a dynamic programming approach for segmentation. We apply the segmentation approach on a real-word, rich dataset of personal car trajectories provided by a major insurance company based in Columbus, Ohio. Analysis and preliminary results show the applicability of approach for finding significant driving patterns.",
"title": ""
},
{
"docid": "788bf97b435dfbe9d31373e21bc76716",
"text": "In this paper, we study the design and workspace of a 6–6 cable-suspended parallel robot. The workspace volume is characterized as the set of points where the centroid of the moving platform can reach with tensions in all suspension cables at a constant orientation. This paper attempts to tackle some aspects of optimal design of a 6DOF cable robot by addressing the variations of the workspace volume and the accuracy of the robot using different geometric configurations, different sizes and orientations of the moving platform. The global condition index is used as a performance index of a robot with respect to the force and velocity transmission over the whole workspace. The results are used for design analysis of the cable-robot for a specific motion of the moving platform. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "67417a87eff4ad3b1d2a906a1f17abd2",
"text": "Epitaxial growth of A-A and A-B stacking MoS2 on WS2 via a two-step chemical vapor deposition method is reported. These epitaxial heterostructures show an atomic clean interface and a strong interlayer coupling, as evidenced by systematic characterization. Low-frequency Raman breathing and shear modes are observed in commensurate stacking bilayers for the first time; these can serve as persuasive fingerprints for interfacial quality and stacking configurations.",
"title": ""
},
{
"docid": "4cb7a805f490c7f3624eb04e109d8349",
"text": "Femtocell is a promising solution for enhancing the indoor coverage and capacity in wireless networks. However, for the small size of femtocell and potentially frequent power on/off, existing handover schemes may not be reliable enough for femtocell networks. Moreover, improper handover parameters settings may lead to handover failures and unnecessary handovers, which make it necessary to enhance the mobility robustness for femtocells. In this article, we propose a gradient method and cost function-based mobility robustness optimization scheme for long term evolution (LTE) femtocell self-organizing networks. Moreover, signalling overhead of the scheme is analyzed. Simulation results show that the proposed scheme has a better performance than the fixed parameters method in terms of reduced the number of handover failures and unnecessary handovers with limited signalling modifications.",
"title": ""
}
] |
scidocsrr
|
6e254f1a3e0039abac80b9b06f4b8a6f
|
Using Proactive Fault-Tolerance Approach to Enhance Cloud Service Reliability
|
[
{
"docid": "1b04911f677767284063133908ab4bb1",
"text": "An increasing number of companies are beginning to deploy services/applications in the cloud computing environment. Enhancing the reliability of cloud service has become a critical and challenging research problem. In the cloud computing environment, all resources are commercialized. Therefore, a reliability enhancement approach should not consume too much resource. However, existing approaches cannot achieve the optimal effect because of checkpoint image-sharing neglect, and checkpoint image inaccessibility caused by node crashing. To address this problem, we propose a cloud service reliability enhancement approach for minimizing network and storage resource usage in a cloud data center. In our proposed approach, the identical parts of all virtual machines that provide the same service are checkpointed once as the service checkpoint image, which can be shared by those virtual machines to reduce the storage resource consumption. Then, the remaining checkpoint images only save the modified page. To persistently store the checkpoint image, the checkpoint image storage problem is modeled as an optimization problem. Finally, we present an efficient heuristic algorithm to solve the problem. The algorithm exploits the data center network architecture characteristics and the node failure predicator to minimize network resource usage. To verify the effectiveness of the proposed approach, we extend the renowned cloud simulator Cloudsim and conduct experiments on it. Experimental results based on the extended Cloudsim show that the proposed approach not only guarantees cloud service reliability, but also consumes fewer network and storage resources than other approaches.",
"title": ""
},
{
"docid": "acaaa0a6316bffb3ed618da7ec4d8d80",
"text": "The rapid growth in demand for computational power driven by modern service applications combined with the shift to the Cloud computing model have led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to the sleep mode allow Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers leads to the necessity in dealing with the energy-performance trade-off, as aggressive consolidation may lead to performance degradation. Due to the variability of workloads experienced by modern applications, the VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, we conduct competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, we propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data from the resource usage by VMs. The proposed algorithms significantly reduce energy consumption, while ensuring a high level of adherence to the Service Level Agreements (SLA). We validate the high efficiency of the proposed algorithms by extensive simulations using real-world workload traces from more than a thousand PlanetLab VMs. Copyright c © 2012 John Wiley & Sons, Ltd.",
"title": ""
}
] |
[
{
"docid": "4f973dfbea2cd0273d060f6917eac0af",
"text": "For an understanding of the aberrant biology seen in mouse mutations and identification of more subtle phenotype variation, there is a need for a full clinical and pathological characterization of the animals. Although there has been some use of sophisticated techniques, the majority of behavioral and functional analyses in mice have been qualitative rather than quantitative in nature. There is, however, no comprehensive routine screening and testing protocol designed to identify and characterize phenotype variation or disorders associated with the mouse genome. We have developed the SHIRPA procedure to characterize the phenotype of mice in three stages. The primary screen utilizes standard methods to provide a behavioral and functional profile by observational assessment. The secondary screen involves a comprehensive behavioral assessment battery and pathological analysis. These protocols provide the framework for a general phenotype assessment that is suitable for a wide range of applications, including the characterization of spontaneous and induced mutants, the analysis of transgenic and gene-targeted phenotypes, and the definition of variation between strains. The tertiary screening stage described is tailored to the assessment of existing or potential models of neurological disease, as well as the assessment of phenotypic variability that may be the result of unknown genetic influences. SHIRPA utilizes standardized protocols for behavioral and functional assessment that provide a sensitive measure for quantifying phenotype expression in the mouse. These paradigms can be refined to test the function of specific neural pathways, which will, in turn, contribute to a greater understanding of neurological disorders.",
"title": ""
},
{
"docid": "02a130ee46349366f2df347119831e5c",
"text": "Low power ad hoc wireless networks operate in conditions where channels are subject to fading. Cooperative diversity mitigates fading in these networks by establishing virtual antenna arrays through clustering the nodes. A cluster in a cooperative diversity network is a collection of nodes that cooperatively transmits a single packet. There are two types of clustering schemes: static and dynamic. In static clustering all nodes start and stop transmission simultaneously, and nodes do not join or leave the cluster while the packet is being transmitted. Dynamic clustering allows a node to join an ongoing cooperative transmission of a packet as soon as the packet is received. In this paper we take a broad view of the cooperative network by examining packet flows, while still faithfully implementing the physical layer at the bit level. We evaluate both clustering schemes using simulations on large multi-flow networks. We demonstrate that dynamically-clustered cooperative networks substantially outperform both statically-clustered cooperative networks and classical point-to-point networks.",
"title": ""
},
{
"docid": "6dbf49c714f6e176273317d4274b93de",
"text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.",
"title": ""
},
{
"docid": "1557392e8482bafe53eb50fccfd60157",
"text": "A common practice among servers in restaurants is to give their dining parties an unexpected gift in the form of candy when delivering the check. Two studies were conducted to evaluate the impact of this gesture on the tip percentages received by servers. Study 1 found that customers who received a small piece of chocolate along with the check tipped more than did customers who received no candy. Study 2 found that tips varied with the amount of the candy given to the customers as well as with the manner in which it was offered. It is argued that reciprocity is a stronger explanation for these findings than either impression management or the good mood effect.",
"title": ""
},
{
"docid": "7535a7351849c5a6dd65611037d06678",
"text": "In this paper, we present an optimistic concurrency control solution. The proposed solution represents an excellent blossom in the concurrency control field. It deals with the concurrency control anomalies, and, simultaneously, assures the reliability of the data before read-write transactions and after successfully committed. It can be used within the distributed database to track data logs and roll back processes to overcome distributed database anomalies. The method is based on commit timestamps for validation and an integer flag that is incremented each time a successful update on the record is committed.",
"title": ""
},
{
"docid": "e6e78cf1e5dc6332e872bad7321f9c16",
"text": "Structural analysis and design is often conducted under the assumption of rigid base boundary conditions, particularly if the foundation system extends to bedrock, though the extent to which the actual flexibility of the soil-foundation system affects the predicted periods of vibration depends on the application. While soil-structure interaction has mostly received attention in seismic applications, lateral flexibility below the ground surface may in some cases influence the dynamic properties of tall, flexible structures, generally greater than 50 stories and dominated by wind loads. This study will explore this issue and develop a hybrid framework within which these effects can be captured and eventually be applied to existing finite element models of two tall buildings in the Chicago Full-Scale Monitoring Program. It is hypothesized that the extent to which the rigid base condition assumption applies in these buildings depends on the relative role of cantilever and frame actions in their structural systems. In this hybrid approach, the lateral and axial flexibility of the foundation systems are first determined in isolation and then introduced to the existing finite element models of the buildings as springs, replacing the rigid boundary conditions assumed by designers in the original finite element model development. The evaluation of the periods predicted by this hybrid framework, validated against companion studies and full-scale data, are used to quantify the sensitivity of foundation modeling to the super-structural system primary deformation mechanisms and soil type. Not only will this study demonstrate the viability of this hybrid approach, but also illustrate situations under which foundation flexibility in various degrees of freedom should be considered in the modeling process.",
"title": ""
},
{
"docid": "26b38a6dc48011af80547171a9f3ecbd",
"text": "This work addresses two classification problems that fall under the heading of domain adaptation, wherein the distributions of training and testing examples differ. The first problem studied is that of class proportion estimation, which is the problem of estimating the class proportions in an unlabeled testing data set given labeled examples of each class. Compared to previous work on this problem, our approach has the novel feature that it does not require labeled training data from one of the classes. This property allows us to address the second domain adaptation problem, namely, multiclass anomaly rejection. Here, the goal is to design a classifier that has the option of assigning a “reject” label, indicating that the instance did not arise from a class present in the training data. We establish consistent learning strategies for both of these domain adaptation problems, which to our knowledge are the first of their kind. We also implement the class proportion estimation technique and demonstrate its performance on several benchmark data sets.",
"title": ""
},
{
"docid": "b50498964a73a59f54b3a213f2626935",
"text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.",
"title": ""
},
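The key mechanism in the abstract above is propagating importance scores backward from the final response layer. The sketch below is a simplified NumPy illustration, assuming fully connected layers and that a neuron's importance is the magnitude-weighted sum of the importances it feeds into; the layer sizes, random weights, and keep ratio are invented for the example and do not reproduce the paper's formulation or its closed-form solution.

```python
import numpy as np

def propagate_importance(weights, final_scores):
    """Propagate importance scores from the final response layer back through
    earlier fully connected layers (simplified NISP-style rule: a neuron is
    important if it feeds, with large weight magnitude, into important
    neurons of the next layer)."""
    scores = [final_scores]
    for W in reversed(weights):          # W has shape (n_out, n_in)
        prev = np.abs(W).T @ scores[0]   # importance of the earlier layer
        prev /= prev.max() + 1e-12       # normalise for readability
        scores.insert(0, prev)
    return scores

# Illustrative three-layer network: 8 -> 6 -> 4 neurons
rng = np.random.default_rng(0)
weights = [rng.normal(size=(6, 8)), rng.normal(size=(4, 6))]
final_scores = rng.random(4)             # e.g. from a feature-ranking step

all_scores = propagate_importance(weights, final_scores)
keep_ratio = 0.5                         # prune half of the neurons per layer
for layer, s in enumerate(all_scores[:-1]):
    k = max(1, int(keep_ratio * s.size))
    kept = np.argsort(s)[-k:]            # indices of neurons to keep
    print(f"layer {layer}: keep neurons {sorted(kept.tolist())}")
```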
{
"docid": "85fe68b957a8daa69235ef65d92b1990",
"text": "Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like the inadequate translation. We attribute this to that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to its several limitations. In this work, we propose an adequacyoriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level CHRF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "c0ef15616ba357cb522b828e03a5298c",
"text": "This paper introduces the compact genetic algorithm (cGA) which represents the population as a probability distribution over the set of solutions and is operationally equivalent to the order-one behavior of the simple GA with uniform crossover. It processes each gene independently and requires less memory than the simple GA. The development of the compact GA is guided by a proper understanding of the role of the GA’s parameters and operators. The paper clearly illustrates the mapping of the simple GA’s parameters into those of an equivalent compact GA. Computer simulations compare both algorithms in terms of solution quality and speed. Finally, this work raises important questions about the use of information in a genetic algorithm, and its ramifications show us a direction that can lead to the design of more efficient GA’s.",
"title": ""
},
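The compact GA described above can be written in a few lines: the population is replaced by a probability vector that is shifted toward the winner of each pairwise tournament by 1/n, where n plays the role of the population size. Below is a minimal Python sketch on a toy OneMax objective; the parameter values are illustrative and not taken from the paper.

```python
import random

def compact_ga(fitness, n_bits, pop_size=50, max_iters=20000):
    """Compact genetic algorithm: the population is a probability vector,
    updated from pairwise tournaments between two sampled individuals."""
    p = [0.5] * n_bits
    for _ in range(max_iters):
        a = [1 if random.random() < pi else 0 for pi in p]
        b = [1 if random.random() < pi else 0 for pi in p]
        winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
        for i in range(n_bits):
            if winner[i] != loser[i]:          # shift the model toward the winner
                p[i] += (1.0 / pop_size) if winner[i] == 1 else -(1.0 / pop_size)
                p[i] = min(1.0, max(0.0, p[i]))
        if all(pi in (0.0, 1.0) for pi in p):  # model has converged
            break
    return [int(round(pi)) for pi in p]

# Toy OneMax problem: maximise the number of ones
solution = compact_ga(fitness=sum, n_bits=32)
print(sum(solution), "ones out of", len(solution))
```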
{
"docid": "936353c90f0e0ce7946a11b4a60d494c",
"text": "This paper deals with multi-class classification problems. Many methods extend binary classifiers to operate a multi-class task, with strategies such as the one-vs-one and the one-vs-all schemes. However, the computational cost of such techniques is highly dependent on the number of available classes. We present a method for multi-class classification, with a computational complexity essentially independent of the number of classes. To this end, we exploit recent developments in multifunctional optimization in machine learning. We show that in the proposed algorithm, labels only appear in terms of inner products, in the same way as input data emerge as inner products in kernel machines via the so-called the kernel trick. Experimental results on real data show that the proposed method reduces efficiently the computational time of the classification task without sacrificing its generalization ability.",
"title": ""
},
{
"docid": "79b3dc474bc2a75185c6cb7486ad7dde",
"text": "BACKGROUND\nCanine rabies causes many thousands of human deaths every year in Africa, and continues to increase throughout much of the continent.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nThis paper identifies four common reasons given for the lack of effective canine rabies control in Africa: (a) a low priority given for disease control as a result of lack of awareness of the rabies burden; (b) epidemiological constraints such as uncertainties about the required levels of vaccination coverage and the possibility of sustained cycles of infection in wildlife; (c) operational constraints including accessibility of dogs for vaccination and insufficient knowledge of dog population sizes for planning of vaccination campaigns; and (d) limited resources for implementation of rabies surveillance and control. We address each of these issues in turn, presenting data from field studies and modelling approaches used in Tanzania, including burden of disease evaluations, detailed epidemiological studies, operational data from vaccination campaigns in different demographic and ecological settings, and economic analyses of the cost-effectiveness of dog vaccination for human rabies prevention.\n\n\nCONCLUSIONS/SIGNIFICANCE\nWe conclude that there are no insurmountable problems to canine rabies control in most of Africa; that elimination of canine rabies is epidemiologically and practically feasible through mass vaccination of domestic dogs; and that domestic dog vaccination provides a cost-effective approach to the prevention and elimination of human rabies deaths.",
"title": ""
},
{
"docid": "021c7631ac1ac3c47029468563f8d310",
"text": "It is widely accepted that variable names in computer programs should be meaningful, and that this aids program comprehension. \"Meaningful\" is commonly interpreted as favoring long descriptive names. However, there is at least some use of short and even single-letter names: using 'i' in loops is very common, and we show (by extracting variable names from 1000 popular github projects in 5 languages) that some other letters are also widely used. In addition, controlled experiments with different versions of the same functions (specifically, different variable names) failed to show significant differences in ability to modify the code. Finally, an online survey showed that certain letters are strongly associated with certain types and meanings. This implies that a single letter can in fact convey meaning. The conclusion from all this is that single letter variables can indeed be used beneficially in certain cases, leading to more concise code.",
"title": ""
},
{
"docid": "9cae19b4d3b4a8258b1013a9895a6c91",
"text": "Research has mainly neglected to examine if the possible antagonism of play/games and seriousness affects the educational potential of serious gaming. This article follows a microsociological approach and treats play and seriousness as different social frames, with each being indicated by significant symbols and containing unique social rules, adequate behavior and typical consequences of action. It is assumed that due to the specific qualities of these frames, serious frames are perceived as more credible but less entertaining than playful frames – regardless of subject matter. Two empirical studies were conducted to test these hypotheses. Results partially confirm expectations, but effects are not as strong as assumed and sometimes seem to be moderated by further variables, such as gender and attitudes. Overall, this article demonstrates that the educational potential of serious gaming depends not only on media design, but also on social context and personal variables.",
"title": ""
},
{
"docid": "7df6898369d5e307610f43c59ff048ea",
"text": "In the industrial fields, Mecanum robots have been widely used. The Mecanum Wheel can do omnidirectional movements by electric machinery drive. It's more flexible than ordinary robots. It has massive potential in some situation which has small space. The robots with control system can complete the function of location and the calculation of optimal route. The Astar algorithm is most common mothed. However, Due to the orthogonal turning point, this algorithm takes a lot of Adjusting time. The Improved algorithm raised in this paper can reduce the occurrence of orthogonal turning point. It can generate a new smooth path automatically. This method can greatly reduce the time of the motion of the path. At the same time, it is difficult to obtain satisfactory performance by using the traditional control algorithm because of the complicated road conditions and the difficulty of establishing the model of the robot, so we use fuzzy algorithm to control robots. In fuzzy algorithm, the use of static membership function will affect the control effect, therefore, for complex control environment, using PSO algorithm to dynamically determine the membership function. It can effectively improve the motion performance and improve the dynamic characteristics and the adjustment time of the robot.",
"title": ""
},
{
"docid": "f28170dcc3c4949c27ee609604c53bc2",
"text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.",
"title": ""
},
{
"docid": "d056e5ea017eb3e5609dcc978e589158",
"text": "In this paper we study and evaluate rumor-like methods for combating the spread of rumors on a social network. We model rumor spread as a diffusion process on a network and suggest the use of an \"anti-rumor\" process similar to the rumor process. We study two natural models by which these anti-rumors may arise. The main metrics we study are the belief time, i.e., the duration for which a person believes the rumor to be true and point of decline, i.e., point after which anti-rumor process dominates the rumor process. We evaluate our methods by simulating rumor spread and anti-rumor spread on a data set derived from the social networking site Twitter and on a synthetic network generated according to the Watts and Strogatz model. We find that the lifetime of a rumor increases if the delay in detecting it increases, and the relationship is at least linear. Further our findings show that coupling the detection and anti-rumor strategy by embedding agents in the network, we call them beacons, is an effective means of fighting the spread of rumor, even if these beacons do not share information.",
"title": ""
},
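To make the diffusion setup above concrete, here is a toy Python simulation (using networkx) of a rumor and an anti-rumor spreading on a Watts-Strogatz network, with beacon nodes that launch the anti-rumor after a detection delay. The spread probabilities, network size, and delay values are invented for illustration and are not the paper's experimental settings.

```python
import random
import networkx as nx

def simulate(n=300, detection_delay=3, n_beacons=5, steps=40, seed=1):
    """Toy rumor vs. anti-rumor diffusion on a Watts-Strogatz network.
    A node that adopts the anti-rumor stops believing and spreading the rumor."""
    random.seed(seed)
    G = nx.watts_strogatz_graph(n, k=6, p=0.1, seed=seed)
    rumor = {random.randrange(n)}                      # initial rumor spreader
    anti = set()
    beacons = set(random.sample(range(n), n_beacons))  # embedded monitoring agents
    believers_per_step = []
    for t in range(steps):
        if t == detection_delay:                       # beacons react after a delay
            anti |= beacons
        new_rumor = {v for u in rumor for v in G.neighbors(u)
                     if v not in anti and random.random() < 0.3}
        new_anti = {v for u in anti for v in G.neighbors(u)
                    if random.random() < 0.3}
        rumor |= new_rumor
        anti |= new_anti
        rumor -= anti                                   # anti-rumor overrides belief
        believers_per_step.append(len(rumor))
    return believers_per_step

print(simulate(detection_delay=3))
print(simulate(detection_delay=10))   # a longer delay prolongs the rumor's life
```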
{
"docid": "a457545baa59e39e6ef6d7e0d2f9c02e",
"text": "The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions. We study assumptions about the relationship between the two distributions that one needed for domain adaptation learning to succeed. We analyze the assumptions in an agnostic PAC-style learning model for a the setting in which the learner can access a labeled training data sample and an unlabeled sample generated by the test data distribution. We focus on three assumptions: (i) similarity between the unlabeled distributions, (ii) existence of a classifier in the hypothesis class with low error on both training and testing distributions, and (iii) the covariate shift assumption. I.e., the assumption that the conditioned label distribution (for each data point) is the same for both the training and test distributions. We show that without either assumption (i) or (ii), the combination of the remaining assumptions is not sufficient to guarantee successful learning. Our negative results hold with respect to any domain adaptation learning algorithm, as long as it does not have access to target labeled examples. In particular, we provide formal proofs that the popular covariate shift assumption is rather weak and does not relieve the necessity of the other assumptions. We also discuss the intuitively appealing Appearing in Proceedings of the 13 International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: W&CP 9. Copyright 2010 by the authors. paradigm of re-weighting the labeled training sample according to the target unlabeled distribution and show that, somewhat counter intuitively, we show that paradigm cannot be trusted in the following sense. There are DA tasks that are indistinguishable as far as the training data goes but in which re-weighting leads to significant improvement in one task while causing dramatic deterioration of the learning success in the other.",
"title": ""
},
{
"docid": "49e1dc71e71b45984009f4ee20740763",
"text": "The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find \"vulnerable\" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hour and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.",
"title": ""
},
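The scalability argument above rests on two ideas: function-level fingerprints and filtering candidate comparisons by the length of the abstracted function body. The Python sketch below illustrates only that indexing idea with a crude normalization step; VUDDY's actual security-aware abstraction (renaming parameters, types, and so on) is considerably richer and is not reproduced here.

```python
import hashlib
import re
from collections import defaultdict

def normalize(func_body):
    """Very rough abstraction: drop comments and whitespace, lowercase.
    (The real abstraction also renames parameters, data types, etc.)"""
    body = re.sub(r"//.*?$|/\*.*?\*/", "", func_body, flags=re.S | re.M)
    return re.sub(r"\s+", "", body).lower()

class CloneIndex:
    """Function-level fingerprint index with length filtering."""
    def __init__(self):
        self.index = defaultdict(set)            # normalized length -> set of hashes

    def add(self, func_body):
        norm = normalize(func_body)
        self.index[len(norm)].add(hashlib.md5(norm.encode()).hexdigest())

    def is_clone(self, func_body):
        norm = normalize(func_body)
        # only fingerprints of identical normalized length need to be compared
        return hashlib.md5(norm.encode()).hexdigest() in self.index[len(norm)]

vulnerable = "int f(int a) { return a / 0; /* known bug */ }"
candidate  = "int f(int a) {return a/0;}"
idx = CloneIndex()
idx.add(vulnerable)
print(idx.is_clone(candidate))   # True: same function after abstraction
```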
{
"docid": "aa7d94bebbd988af48bc7cb9f5e35a39",
"text": "Over the recent years, embedding methods have attracted increasing focus as a means for knowledge graph completion. Similarly, rule-based systems have been studied for this task in the past. What is missing so far is a common evaluation that includes more than one type of method. We close this gap by comparing representatives of both types of systems in a frequently used evaluation protocol. Leveraging the explanatory qualities of rule-based systems, we present a fine-grained evaluation that gives insight into characteristics of the most popular datasets and points out the different strengths and shortcomings of the examined approaches. Our results show that models such as TransE, RESCAL or HolE have problems in solving certain types of completion tasks that can be solved by a rulebased approach with high precision. At the same time, there are other completion tasks that are difficult for rule-based systems. Motivated by these insights, we combine both families of approaches via ensemble learning. The results support our assumption that the two methods complement each other in a beneficial way.",
"title": ""
}
] |
scidocsrr
|
2e7d78ea417684563f9c27165e3cbcd8
|
Generating Diverse Numbers of Diverse Keyphrases
|
[
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
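As a concrete illustration of the imperative, define-by-run differentiation described above, the short example below uses PyTorch's public autograd API to compute gradients of a scalar function; the function itself is arbitrary and chosen only so the gradients are easy to verify by hand.

```python
import torch

# Tensors that require gradients; operations on them are recorded eagerly
# (imperative style), not compiled into a static symbolic graph first.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

y = (w * x).sum() + x.pow(2).sum()   # an arbitrary scalar function of x and w
y.backward()                          # reverse-mode automatic differentiation

print(x.grad)   # dy/dx = w + 2x  -> tensor([2.5, 3.0, 8.0])
print(w.grad)   # dy/dw = x       -> tensor([1., 2., 3.])
```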
{
"docid": "ec37e61fcac2639fa6e605b362f2a08d",
"text": "Keyphrases that efficiently summarize a document’s content are used in various document processing and retrieval tasks. Current state-of-the-art techniques for keyphrase extraction operate at a phrase-level and involve scoring candidate phrases based on features of their component words. In this paper, we learn keyphrase taggers for research papers using token-based features incorporating linguistic, surfaceform, and document-structure information through sequence labeling. We experimentally illustrate that using withindocument features alone, our tagger trained with Conditional Random Fields performs on-par with existing state-of-the-art systems that rely on information from Wikipedia and citation networks. In addition, we are also able to harness recent work on feature labeling to seamlessly incorporate expert knowledge and predictions from existing systems to enhance the extraction performance further. We highlight the modeling advantages of our keyphrase taggers and show significant performance improvements on two recently-compiled datasets of keyphrases from Computer Science research papers.",
"title": ""
},
{
"docid": "97838cc3eb7b31d49db6134f8fc81c84",
"text": "We study the problem of semi-supervised question answering—-utilizing unlabeled text to boost the performance of question answering models. We propose a novel training framework, the Generative Domain-Adaptive Nets. In this framework, we train a generative model to generate questions based on the unlabeled text, and combine model-generated questions with human-generated questions for training question answering models. We develop novel domain adaptation algorithms, based on reinforcement learning, to alleviate the discrepancy between the modelgenerated data distribution and the humangenerated data distribution. Experiments show that our proposed framework obtains substantial improvement from unlabeled text.",
"title": ""
},
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
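A rough Python sketch of the idea above: the target document is expanded with a few neighbour documents, a word co-occurrence graph is built over the expanded set, and PageRank (via networkx) supplies the rankings from which keywords of the target document are read off. The tokenization, window size, and toy documents are placeholders rather than the paper's configuration, and real keyphrase extraction would also assemble multi-word phrases.

```python
import networkx as nx

def keyword_scores(documents, window=3):
    """Build a word co-occurrence graph over a small expanded document set
    (the target document plus a few textually similar neighbours) and rank
    words with PageRank, as in graph-based keyphrase extraction."""
    G = nx.Graph()
    for doc in documents:
        words = [w.lower() for w in doc.split() if w.isalpha() and len(w) > 3]
        for i, w in enumerate(words):
            for u in words[i + 1:i + window]:
                if u != w:
                    prev = G.get_edge_data(w, u, {"weight": 0})["weight"]
                    G.add_edge(w, u, weight=prev + 1)
    return nx.pagerank(G, weight="weight")

target = "graph based ranking methods extract keyphrases from a single document"
neighbours = [
    "neighbour documents provide global ranking evidence for keyphrase extraction",
    "keyphrases summarise document content for retrieval and ranking",
]
scores = keyword_scores([target] + neighbours)
top = sorted((w for w in target.lower().split() if w in scores),
             key=lambda w: scores[w], reverse=True)[:5]
print(top)
```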
{
"docid": "73bf620a97b2eadeb2398dd718b85fe8",
"text": "The Semeval task 5 was an opportunity for experimenting with the key term extraction module of GROBID, a system for extracting and generating bibliographical information from technical and scientific documents. The tool first uses GROBID’s facilities for analyzing the structure of scientific articles, resulting in a first set of structural features. A second set of features captures content properties based on phraseness, informativeness and keywordness measures. Two knowledge bases, GRISP and Wikipedia, are then exploited for producing a last set of lexical/semantic features. Bagged decision trees appeared to be the most efficient machine learning algorithm for generating a list of ranked key term candidates. Finally a post ranking was realized based on statistics of cousage of keywords in HAL, a large Open Access publication repository.",
"title": ""
},
{
"docid": "1593fd6f9492adc851c709e3dd9b3c5f",
"text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.",
"title": ""
}
] |
[
{
"docid": "dcd9a430a69fc3a938ea1068273627ff",
"text": "Background Nursing theory should provide the principles that underpin practice and help to generate further nursing knowledge. However, a lack of agreement in the professional literature on nursing theory confuses nurses and has caused many to dismiss nursing theory as irrelevant to practice. This article aims to identify why nursing theory is important in practice. Conclusion By giving nurses a sense of identity, nursing theory can help patients, managers and other healthcare professionals to recognise the unique contribution that nurses make to the healthcare service ( Draper 1990 ). Providing a definition of nursing theory also helps nurses to understand their purpose and role in the healthcare setting.",
"title": ""
},
{
"docid": "9636c75bdbbd7527abdd8fbac1466d55",
"text": "Predicting the occurrence of a particular event of interest at future time points is the primary goal of survival analysis. The presence of incomplete observations due to time limitations or loss of data traces is known as censoring which brings unique challenges in this domain and differentiates survival analysis from other standard regression methods. The popularly used survival analysis methods such as Cox proportional hazard model and parametric survival regression suffer from some strict assumptions and hypotheses that are not realistic in most of the real-world applications. To overcome the weaknesses of these two types of methods, in this paper, we reformulate the survival analysis problem as a multi-task learning problem and propose a new multi-task learning based formulation to predict the survival time by estimating the survival status at each time interval during the study duration. We propose an indicator matrix to enable the multi-task learning algorithm to handle censored instances and incorporate some of the important characteristics of survival problems such as non-negative non-increasing list structure into our model through max-heap projection. We employ the L2,1-norm penalty which enables the model to learn a shared representation across related tasks and hence select important features and alleviate over-fitting in high-dimensional feature spaces; thus, reducing the prediction error of each task. To efficiently handle the two non-smooth constraints, in this paper, we propose an optimization method which employs Alternating Direction Method of Multipliers (ADMM) algorithm to solve the proposed multi-task learning problem. We demonstrate the performance of the proposed method using real-world microarray gene expression high-dimensional benchmark datasets and show that our method outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "cbdace4636017f925b89ecf266fde019",
"text": "It is traditionally known that wideband apertures lose bandwidth when placed over a ground plane. To overcome this issue, this paper introduces a new non-symmetric tightly coupled dipole element for wideband phased arrays. The proposed array antenna incorporates additional degrees of freedom to control capacitance and cancel the ground plane inductance. Specifically, each arm on the dipole is different than the other (or non-symmetric). The arms are identical near the center feed section but dissimilar towards the ends, forming a ball-and-cup. It is demonstrated that the non-symmetric qualities achieve wideband performance. Concurrently, a design example for planar installation with balun and matching network is presented to cover X-band. The balun avoids extraneous radiation, maintains the array's low-profile height and is printed on top of the ground plane connecting to the array aperture with 180° out of phase vertical twin-wire transmission lines. To demonstrate the concept, a 64-element array with integrated feed and matching network is designed, fabricated and verified experimentally. The array aperture is placed λ/7 (at 8 GHz) above the ground plane and shown to maintain a active VSWR less than 2 from 8-12.5 GHz while scanning up to 70° and 60° in E- and H-plane, respectively. The array's simulated diagonal plane cross-polarization is approximately 10 dB below the co-polarized component during 60° diagonal scan and follows the theoretical limit for an infinite current sheet.",
"title": ""
},
{
"docid": "eec15a5d14082d625824452bd070ec38",
"text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.",
"title": ""
},
{
"docid": "6d2adebf7fbdf67b778b60ac69ea5cd3",
"text": "In this paper, we propose Zero-Suppressed BDDs (0-Sup-BDDs), which are BDDs based on a new reduction rule. This data structure brings unique and compact representation of sets which appear in many combinatorial problems. Using 0-Sup-BDDs, we can manipulate such sets more simply and efficiently than using original BDDs. We show the properties of 0-Sup-BDDs, their manipulation algorithms, and good applications for LSI CAD systems.",
"title": ""
},
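The new reduction rule mentioned above is easy to state in code: a node whose 1-edge points to the 0 terminal is eliminated and replaced by its 0-child, while identical nodes are shared. The following minimal Python sketch shows just that rule and a set-counting traversal; it omits the family-algebra operations (union, intersection, product) a real 0-Sup-BDD package would provide.

```python
class ZDD:
    """Minimal zero-suppressed decision diagram: the node-elimination rule
    drops any node whose 1-edge (element present) points to the 0 terminal."""
    def __init__(self):
        self.table = {}                   # (var, lo, hi) -> node id, for sharing
        self.nodes = {0: None, 1: None}   # 0 = empty family, 1 = {{}}

    def node(self, var, lo, hi):
        if hi == 0:                       # zero-suppression rule
            return lo
        key = (var, lo, hi)
        if key not in self.table:         # sharing rule (hash consing)
            nid = len(self.nodes)
            self.nodes[nid] = key
            self.table[key] = nid
        return self.table[key]

    def count(self, nid):
        """Number of sets in the represented family."""
        if nid in (0, 1):
            return nid
        _, lo, hi = self.nodes[nid]
        return self.count(lo) + self.count(hi)

z = ZDD()
# Family {{a}, {a, b}} built top-down with variable order a < b
ab = z.node(var="b", lo=1, hi=1)               # below 'a': {} and {b}
root = z.node(var="a", lo=0, hi=ab)            # 'a' must be present
print(z.count(root))                           # 2 sets
print(z.node(var="c", lo=root, hi=0) == root)  # True: eliminated by the rule
```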
{
"docid": "d0c85b824d7d3491f019f47951d1badd",
"text": "A nine-year-old female Rottweiler with a history of repeated gastrointestinal ulcerations and three previous surgical interventions related to gastrointestinal ulceration presented with symptoms of anorexia and intermittent vomiting. Benign gastric outflow obstruction was diagnosed in the proximal duodenal area. The initial surgical plan was to perform a pylorectomy with gastroduodenostomy (Billroth I procedure), but owing to substantial scar tissue and adhesions in the area a palliative gastrojejunostomy was performed. This procedure provided a bypass for the gastric contents into the proximal jejunum via the new stoma, yet still allowed bile and pancreatic secretions to flow normally via the patent duodenum. The gastrojejunostomy technique was successful in the surgical management of this case, which involved proximal duodenal stricture in the absence of neoplasia. Regular telephonic followup over the next 12 months confirmed that the patient was doing well.",
"title": ""
},
{
"docid": "13748d365584ef2e680affb67cfcc882",
"text": "In this paper, we discuss the development of cost effective, wireless, and wearable vibrotactile haptic device for stiffness perception during an interaction with virtual objects. Our experimental setup consists of haptic device with five vibrotactile actuators, virtual reality environment tailored in Unity 3D integrating the Oculus Rift Head Mounted Display (HMD) and the Leap Motion controller. The virtual environment is able to capture touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup stimulating fingertips with ERM vibrotactile actuators. Amplitude and frequency of vibrations are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study is done to compare the discrimination of stiffness on virtual linear spring in three sensory modalities: visual only feedback, tactile only feedback, and their combination. A common psychophysics method called the Two Alternative Forced Choice (2AFC) approach is used for quantitative analysis using Just Noticeable Difference (JND) and Weber Fractions (WF). According to the psychometric experiment result, average Weber fraction values of 0.39 for visual only feedback was improved to 0.25 by adding the tactile feedback.",
"title": ""
},
{
"docid": "23641b410a3d1ae3f270bb19988ad4f5",
"text": "Brain Computer Interface systems rely on lengthy training phases that can last up to months due to the inherent variability in brainwave activity between users. We propose a BCI architecture based on the co-learning between the user and the system through different feedback strategies. Thus, we achieve an operational BCI within minutes. We apply our system to the piloting of an AR.Drone 2.0 quadricopter. We show that our architecture provides better task performance than traditional BCI paradigms within a shorter time frame. We further demonstrate the enthusiasm of users towards our BCI-based interaction modality and how they find it much more enjoyable than traditional interaction modalities.",
"title": ""
},
{
"docid": "4adfc2bf6907305fc4da20a5b753c2b1",
"text": "Book recommendation systems can benefit commercial websites, social media sites, and digital libraries, to name a few, by alleviating the knowledge acquisition process of users who look for books that are appealing to them. Even though existing book recommenders, which are based on either collaborative filtering, text content, or the hybrid approach, aid users in locating books (among the millions available), their recommendations are not personalized enough to meet users’ expectations due to their collective assumption on group preference and/or exact content matching, which is a failure. To address this problem, we have developed PBRecS, a book recommendation system that is based on social interactions and personal interests to suggest books appealing to users. PBRecS relies on the friendships established on a social networking site, such as LibraryThing, to generate more personalized suggestions by including in the recommendations solely books that belong to a user’s friends who share common interests with the user, in addition to applying word-correlation factors for partially matching book tags to disclose books similar in contents. The conducted empirical study on data extracted from LibraryThing has verified (i) the effectiveness of PBRecS using social-media data to improve the quality of book recommendations and (ii) that PBRecS outperforms the recommenders employed by Amazon and LibraryThing.",
"title": ""
},
{
"docid": "f095118c63d1531ebdbaec3565b0d91f",
"text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.",
"title": ""
},
{
"docid": "71fa9602c24916b8c868c24ba50a74e8",
"text": "In this paper, we review the research on virtual teams in an effort to assess the state of the literature. We start with an examination of the definitions of virtual teams used and propose an integrative definition that suggests that all teams may be defined in terms of their extent of virtualness. Next, we review findings related to team inputs, processes, and outcomes, and identify areas of agreement and inconsistency in the literature on virtual teams. Based on this review, we suggest avenues for future research, including methodological and theoretical considerations that are important to advancing our understanding of virtual teams. © 2004 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "64122833d6fa0347f71a9abff385d569",
"text": "We present a brief history and overview of statistical methods in frame-semantic parsing – the automatic analysis of text using the theory of frame semantics. We discuss how the FrameNet lexicon and frameannotated datasets have been used by statistical NLP researchers to build usable, state-of-the-art systems. We also focus on future directions in frame-semantic parsing research, and discuss NLP applications that could benefit from this line of work. 1 Frame-Semantic Parsing Frame-semantic parsing has been considered as the task of automatically finding semantically salient targets in text, disambiguating their semantic frame representing an event and scenario in discourse, and annotating arguments consisting of words or phrases in text with various frame elements (or roles). The FrameNet lexicon (Baker et al., 1998), an ontology inspired by the theory of frame semantics (Fillmore, 1982), serves as a repository of semantic frames and their roles. Figure 1 depicts a sentence with three evoked frames for the targets “million”, “created” and “pushed” with FrameNet frames and roles. Automatic analysis of text using framesemantic structures can be traced back to the pioneering work of Gildea and Jurafsky (2002). Although their experimental setup relied on a primitive version of FrameNet and only made use of “exemplars” or example usages of semantic frames (containing one target per sentence) as opposed to a “corpus” of sentences, it resulted in a flurry of work in the area of automatic semantic role labeling (Màrquez et al., 2008). However, the focus of semantic role labeling (SRL) research has mostly been on PropBank (Palmer et al., 2005) conventions, where verbal targets could evoke a “sense” frame, which is not shared across targets, making the frame disambiguation setup different from the representation in FrameNet. Furthermore, it is fair to say that early research on PropBank focused primarily on argument structure prediction, and the interaction between frame and argument structure analysis has mostly been unaddressed (Màrquez et al., 2008). There are exceptions, where the verb frame has been taken into account during SRL (Meza-Ruiz and Riedel, 2009; Watanabe et al., 2010). Moreoever, the CoNLL 2008 and 2009 shared tasks also include the verb and noun frame identification task in their evaluations, although the overall goal was to predict semantic dependencies based on PropBank, and not full argument spans (Surdeanu et al., 2008; Hajič",
"title": ""
},
{
"docid": "265bf26646113a56101c594f563cb6dc",
"text": "A system, particularly a decision-making concept, that facilitates highly automated driving on freeways in real traffic is presented. The system is capable of conducting fully automated lane change (LC) maneuvers with no need for driver approval. Due to the application in real traffic, a robust functionality and the general safety of all traffic participants are among the main requirements. Regarding these requirements, the consideration of measurement uncertainties demonstrates a major challenge. For this reason, a fully integrated probabilistic concept is developed. By means of this approach, uncertainties are regarded in the entire process of determining driving maneuvers. While this also includes perception tasks, this contribution puts a focus on the driving strategy and the decision-making process for the execution of driving maneuvers. With this approach, the BMW Group Research and Technology managed to drive 100% automated in real traffic on the freeway A9 from Munich to Ingolstadt, showing a robust, comfortable, and safe driving behavior, even during multiple automated LC maneuvers.",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "3848b727cfda3031742cec04abd74608",
"text": "This paper presents SemFrame, a system that induces frame semantic verb classes from WordNet and LDOCE. Semantic frames are thought to have significant potential in resolving the paraphrase problem challenging many languagebased applications. When compared to the handcrafted FrameNet, SemFrame achieves its best recall-precision balance with 83.2% recall (based on SemFrame's coverage of FrameNet frames) and 73.8% precision (based on SemFrame verbs’ semantic relatedness to frame-evoking verbs). The next best performing semantic verb classes achieve 56.9% recall and 55.0% precision.",
"title": ""
},
{
"docid": "3a549571e281b9b381a347fb49953d2c",
"text": "Social media has been gaining popularity among university students who use social media at higher rates than the general population. Students consequently spend a significant amount of time on social media, which may inevitably have an effect on their academic engagement. Subsequently, scholars have been intrigued to examine the impact of social media on students' academic engagement. Research that has directly explored the use of social media and its impact on students in tertiary institutions has revealed limited and mixed findings, particularly within a South African context; thus leaving a window of opportunity to further investigate the impact that social media has on students' academic engagement. This study therefore aims to investigate the use of social media in tertiary institutions, the impact that the use thereof has on students' academic engagement and to suggest effective ways of using social media in tertiary institutions to improve students' academic engagement from students' perspectives. This study used an interpretivist (inductive) approach in order to determine and comprehend student's perspectives and experiences towards the use of social media and the effects thereof on their academic engagement. A single case study design at Rhodes University was used to determine students' perceptions and data was collected using an online survey. The findings reveal that students use social media for both social and academic purposes. Students further perceived that social media has a positive impact on their academic engagement and suggest that using social media at tertiary level could be advantageous and could enhance students' academic engagement.",
"title": ""
},
{
"docid": "c4d816303790125c790a3a09edcf499b",
"text": "Predictive modeling techniques are increasingly being used by data scientists to understand the probability of predicted outcomes. However, for data that is high-dimensional, a critical step in predictive modeling is determining which features should be included in the models. Feature selection algorithms are often used to remove non-informative features from models. However, there are many different classes of feature selection algorithms. Deciding which one to use is problematic as the algorithmic output is often not amenable to user interpretation. This limits the ability for users to utilize their domain expertise during the modeling process. To improve on this limitation, we developed INFUSE, a novel visual analytics system designed to help analysts understand how predictive features are being ranked across feature selection algorithms, cross-validation folds, and classifiers. We demonstrate how our system can lead to important insights in a case study involving clinical researchers predicting patient outcomes from electronic medical records.",
"title": ""
},
{
"docid": "f9afcc134abda1c919cf528cbc975b46",
"text": "Multimodal question answering in the cultural heritage domain allows visitors to museums, landmarks or other sites to ask questions in a more natural way. This in turn provides better user experiences. In this paper, we propose the construction of a golden standard dataset dedicated to aiding research into multimodal question answering in the cultural heritage domain. The dataset, soon to be released to the public, contains multimodal content about the fascinating old-Egyptian Amarna period, including images of typical artworks, documents about these artworks (containing images) and over 800 multimodal queries integrating visual and textual questions. The multimodal questions and related documents are all in English. The multimodal questions are linked to relevant paragraphs in the related documents that contain the answer to the multimodal query.",
"title": ""
},
{
"docid": "6bdcd13e63a4f24561f575efcd232dad",
"text": "Men have called me mad,” wrote Edgar Allan Poe, “but the question is not yet settled, whether madness is or is not the loftiest intelligence— whether much that is glorious—whether all that is profound—does not spring from disease of thought—from moods of mind exalted at the expense of the general intellect.” Many people have long shared Poe’s suspicion that genius and insanity are entwined. Indeed, history holds countless examples of “that fine madness.” Scores of influential 18thand 19th-century poets, notably William Blake, Lord Byron and Alfred, Lord Tennyson, wrote about the extreme mood swings they endured. Modern American poets John Berryman, Randall Jarrell, Robert Lowell, Sylvia Plath, Theodore Roethke, Delmore Schwartz and Anne Sexton were all hospitalized for either mania or depression during their lives. And many painters and composers, among them Vincent van Gogh, Georgia O’Keeffe, Charles Mingus and Robert Schumann, have been similarly afflicted. Judging by current diagnostic criteria, it seems that most of these artists—and many others besides—suffered from one of the major mood disorders, namely, manic-depressive illness or major depression. Both are fairly common, very treatable and yet frequently lethal diseases. Major depression induces intense melancholic spells, whereas manic-depression, Manic-Depressive Illness and Creativity",
"title": ""
},
{
"docid": "f7deaa9b65be6b8de9f45fb0dec3879d",
"text": "This paper reports the first 8kV+ ESD-protected SP10T transmit/receive (T/R) antenna switch for quad-band (0.85/0.9/1.8/1.9-GHz) GSM and multiple W-CDMA smartphones fabricated in an 180-nm SOI CMOS. A novel physics-based switch-ESD co-design methodology is applied to ensure full-chip optimization for a SP10T test chip and its ESD protection circuit simultaneously.",
"title": ""
}
] |
scidocsrr
|
ec17753411d281b2fe7eae0bb0198bf0
|
Evaluating Shallow and Deep Neural Networks for Network Intrusion Detection Systems in Cyber Security
|
[
{
"docid": "1c6078d68891b6600727a82841812666",
"text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. This can serve as a proactive approach for network management and planning tasks. The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.",
"title": ""
},
{
"docid": "61c268616851d28855ed8fe14a6de205",
"text": "Ransomware is one type of malware that covertly installs and executes a cryptovirology attack on a victims computer to demand a ransom payment for restoration of the infected resources. This kind of malware has been growing largely in recent days and causes tens of millions of dollars losses to consumers. In this paper, we evaluate shallow and deep networks for the detection and classification of ransomware. To characterize and distinguish ransomware over benign and various other families of ransomwares, we leverage the dominance of application programming interface (API) invocations. To select a best architecture for the multi-layer perceptron (MLP), we done various experiments related to network parameters and structures. All the experiments are run up to 500 epochs with a learning rate in the range [0.01-0.5]. Result obtained on our data set is more promising to distinguish ransomware not only from benign from its families too. On distinguishing the .EXE as either benign or ransomware, MLP has attained highest accuracy 1.0 and classifying the ransomware to their categories obtained highest accuracy 0.98. Moreover, MLP has performed well in detecting and classifying ransomwares in comparison to the other classical machine learning classifiers.",
"title": ""
}
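To illustrate the kind of classifier the abstract describes, the sketch below trains a small multi-layer perceptron (scikit-learn) on synthetic API-call count vectors. The data, feature set, and network sizes are invented stand-ins; the paper's own experiments use real execution traces and report far stronger results.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Illustrative only: each sample is a vector of API-call invocation counts and
# the label is 0 = benign, 1 = ransomware. Real feature extraction would come
# from sandboxed execution traces, which are not reproduced here.
rng = np.random.default_rng(42)
n_apis = 50
benign = rng.poisson(2.0, size=(500, n_apis))
ransom = rng.poisson(2.0, size=(500, n_apis))
ransom[:, :5] += rng.poisson(8.0, size=(500, 5))   # crypto/file APIs dominate

X = np.vstack([benign, ransom]).astype(float)
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), learning_rate_init=0.01,
                    max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```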
] |
[
{
"docid": "028eb67d71987c33c4a331cf02c6ff00",
"text": "We explore the feasibility of using crowd workers from Amazon Mechanical Turk to identify and rank sidewalk accessibility issues from a manually curated database of 100 Google Street View images. We examine the effect of three different interactive labeling interfaces (Point, Rectangle, and Outline) on task accuracy and duration. We close the paper by discussing limitations and opportunities for future work.",
"title": ""
},
{
"docid": "54df0e1a435d673053f9264a4c58e602",
"text": "Next location prediction anticipates a person’s movement based on the history of previous sojourns. It is useful for proactive actions taken to assist the person in an ubiquitous environment. This paper evaluates next location prediction methods: dynamic Bayesian network, multi-layer perceptron, Elman net, Markov predictor, and state predictor. For the Markov and state predictor we use additionally an optimization, the confidence counter. The criterions for the comparison are the prediction accuracy, the quantity of useful predictions, the stability, the learning, the relearning, the memory and computing costs, the modelling costs, the expandability, and the ability to predict the time of entering the next location. For evaluation we use the same benchmarks containing movement sequences of real persons within an office building.",
"title": ""
},
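Among the predictors compared above, the order-1 Markov predictor is the simplest to state: count transitions between consecutively visited locations and predict the most frequent successor of the current one. A minimal Python sketch with an invented movement history follows; confidence counters and higher-order variants are left out.

```python
from collections import defaultdict, Counter

class MarkovPredictor:
    """Order-1 Markov next-location predictor: counts transitions between
    visited locations and predicts the most frequent successor."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, sequence):
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        if not self.transitions[current]:
            return None                      # no prediction possible yet
        return self.transitions[current].most_common(1)[0][0]

# Illustrative movement history of one person inside an office building
history = ["office", "printer", "office", "coffee", "office", "printer",
           "office", "coffee", "meeting", "office", "printer", "office"]
p = MarkovPredictor()
p.train(history)
print(p.predict("office"))   # 'printer' (the most frequent successor)
```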
{
"docid": "1ec8f7bb8de36b625cb8fee335557acf",
"text": "Airborne laser scanner technique is broadly the most appropriate way to acquire rapidly and with high density 3D data over a city. Once the 3D Lidar data are available, the next task is the automatic data processing, with major aim to construct 3D building models. Among the numerous automatic reconstruction methods, the techniques allowing the detection of 3D building roof planes are of crucial importance. Three main methods arise from the literature: region growing, Hough-transform and Random Sample Consensus (RANSAC) paradigm. Since region growing algorithms are sometimes not very transparent and not homogenously applied, this paper focuses only on the Hough-transform and the RANSAC algorithm. Their principles, their pseudocode rarely detailed in the related literature as well as their complete analyses are presented in this paper. An analytic comparison of both algorithms, in terms of processing time and sensitivity to cloud characteristics, shows that despite the limitation encountered in both methods, RANSAC algorithm is still more efficient than the first one. Under other advantages, its processing time is negligible even when the input data size is very large. On the other hand, Hough-transform is very sensitive to the segmentation parameters values. Therefore, RANSAC algorithm has been chosen and extended to exceed its limitations. Its major limitation is that it searches to detect the best mathematical plane among 3D building point cloud even if this plane does not always represent a roof plane. So the proposed extension allows harmonizing the mathematical aspect of the algorithm with the geometry of a roof. At last, it is shown that the extended approach provides very satisfying results, even in the case of very weak point density and for different levels of building complexity. Therefore, once the roof planes are successfully detected, the automatic building modelling can be carried out.",
"title": ""
},
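For reference, the core RANSAC loop discussed above can be sketched in a few lines of NumPy: sample three points, fit a plane, count inliers within a distance threshold, and keep the best-supported plane. The synthetic roof points, threshold, and iteration count below are illustrative only, and the sketch deliberately omits the roof-geometry extension the paper proposes.

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.05, seed=0):
    """Basic RANSAC plane detection: repeatedly fit a plane to 3 random
    points and keep the plane supported by the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = np.array([], dtype=int), None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(normal) < 1e-9:          # degenerate (collinear) sample
            continue
        normal = normal / np.linalg.norm(normal)
        d = -normal @ p1
        dist = np.abs(points @ normal + d)         # point-to-plane distances
        inliers = np.flatnonzero(dist < threshold)
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Synthetic roof-like data: a noisy plane z = 0.2x + 0.1y + 1 plus clutter
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(300, 2))
roof = np.column_stack([xy, 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 1
                        + rng.normal(0, 0.02, 300)])
clutter = rng.uniform(0, 10, size=(60, 3))
plane, inliers = ransac_plane(np.vstack([roof, clutter]))
print(len(inliers), "inlier points")               # most of the 300 roof points
```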
{
"docid": "4b5ff1f0ef9e668f5e76a69b0c77c1e8",
"text": "This investigation was concerned with providing a rationale for the understanding and measurement of quality of life. The investigation proposes a modified version of Veenhoven’s Four-Qualities-of-Life Framework. Its main purpose is to bring order to the vast literature on measuring quality of life; another purpose is to provide a richer framework to guide public policy in the procurement of a better society. The framework is used to assess quality of life in Latin America; the purpose of this exercise is to illustrate the utility of the framework and to show that importance of conceptualizing what quality of life is before any attempt to measure it is undertaken.",
"title": ""
},
{
"docid": "d6d275b719451982fa67d442c55c186c",
"text": "Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.",
"title": ""
},
{
"docid": "c55e7c3825980d0be4546c7fadc812fe",
"text": "Individual graphene oxide sheets subjected to chemical reduction were electrically characterized as a function of temperature and external electric fields. The fully reduced monolayers exhibited conductivities ranging between 0.05 and 2 S/cm and field effect mobilities of 2-200 cm2/Vs at room temperature. Temperature-dependent electrical measurements and Raman spectroscopic investigations suggest that charge transport occurs via variable range hopping between intact graphene islands with sizes on the order of several nanometers. Furthermore, the comparative study of multilayered sheets revealed that the conductivity of the undermost layer is reduced by a factor of more than 2 as a consequence of the interaction with the Si/SiO2 substrate.",
"title": ""
},
{
"docid": "b5927458f6d34f2ff326f0f631a0e450",
"text": "Bipolar disorder (BD) is a common and disabling psychiatric condition with a severe socioeconomic impact. BD is treated with mood stabilizers, among which lithium represents the first-line treatment. Lithium alone or in combination is effective in 60% of chronically treated patients, but response remains heterogenous and a large number of patients require a change in therapy after several weeks or months. Many studies have so far tried to identify molecular and genetic markers that could help us to predict response to mood stabilizers or the risk for adverse drug reactions. Pharmacogenetic studies in BD have been for the most part focused on lithium, but the complexity and variability of the response phenotype, together with the unclear mechanism of action of lithium, limited the power of these studies to identify robust biomarkers. Recent pharmacogenomic studies on lithium response have provided promising findings, suggesting that the integration of genome-wide investigations with deep phenotyping, in silico analyses and machine learning could lead us closer to personalized treatments for BD. Nevertheless, to date none of the genes suggested by pharmacogenetic studies on mood stabilizers have been included in any of the genetic tests approved by the Food and Drug Administration (FDA) for drug efficacy. On the other hand, genetic information has been included in drug labels to test for the safety of carbamazepine and valproate. In this review, we will outline available studies investigating the pharmacogenetics and pharmacogenomics of lithium and other mood stabilizers, with a specific focus on the limitations of these studies and potential strategies to overcome them. We will also discuss FDA-approved pharmacogenetic tests for treatments commonly used in the management of BD.",
"title": ""
},
{
"docid": "45c9ecc06dca6e18aae89ebf509d31d2",
"text": "For estimating causal effects of treatments, randomized experiments are generally considered the gold standard. Nevertheless, they are often infeasible to conduct for a variety of reasons, such as ethical concerns, excessive expense, or timeliness. Consequently, much of our knowledge of causal effects must come from non-randomized observational studies. This article will advocate the position that observational studies can and should be designed to approximate randomized experiments as closely as possible. In particular, observational studies should be designed using only background information to create subgroups of similar treated and control units, where 'similar' here refers to their distributions of background variables. Of great importance, this activity should be conducted without any access to any outcome data, thereby assuring the objectivity of the design. In many situations, this objective creation of subgroups of similar treated and control units, which are balanced with respect to covariates, can be accomplished using propensity score methods. The theoretical perspective underlying this position will be presented followed by a particular application in the context of the US tobacco litigation. This application uses propensity score methods to create subgroups of treated units (male current smokers) and control units (male never smokers) who are at least as similar with respect to their distributions of observed background characteristics as if they had been randomized. The collection of these subgroups then 'approximate' a randomized block experiment with respect to the observed covariates.",
"title": ""
},
{
"docid": "af9b81a034c76a7706d362105beff3cf",
"text": "A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previouslyseen tasks to substantially improve their own learning efficiency.",
"title": ""
},
{
"docid": "fdc4d23fa336ca122fdfb12818901180",
"text": "Concept of communication systems, which use smart antennas is based on digital signal processing algorithms. Thus, the smart antennas system becomes capable to locate and track signals by the both: users and interferers and dynamically adapts the antenna pattern to enhance the reception in Signal-Of-Interest direction and minimizing interference in Signal-Of-Not-Interest direction. Hence, Space Division Multiple Access system, which uses smart antennas, is being used more often in wireless communications, because it shows improvement in channel capacity and co-channel interference. However, performance of smart antenna system greatly depends on efficiency of digital signal processing algorithms. The algorithm uses the Direction of Arrival (DOA) algorithms to estimate the number of incidents plane waves on the antenna array and their angle of incidence. This paper investigates performance of the DOA algorithms like MUSIC, ESPRIT and ROOT MUSIC on the uniform linear array in the presence of white noise. The simulation results show that MUSIC algorithm is the best. The resolution of the DOA techniques improves as number of snapshots, number of array elements and signalto-noise ratio increases.",
"title": ""
},
{
"docid": "6d07571fa4a7027a260bd6586d59e2bd",
"text": "As there is a need for innovative and new medical technologies in the healthcare, we identified Thalmic's “MYO Armband”, which is used for gaming systems and controlling applications in mobiles and computers. We can exploit this development in the field of medicine and healthcare to improve public health care system. So, we spotted “MYO diagnostics”, a computer-based application developed by Thalmic labs to understand Electromyography (EMG) lines (graphs), bits of vector data, and electrical signals of our complicated biology inside our arm. The human gestures will allow to gather huge amount of data and series of EMG lines which can be analysed to detect medical abnormalities and hand movements. This application has powerful algorithms which are translated into commands to recognise human hand gestures. The effect of doctors experience on user satisfaction metrics in using MYO armband can be measured in terms of effectiveness, efficiency and satisfaction which are based on the metrics-task completion, error counts, task times and satisfaction scores. In this paper, we considered only satisfaction metrics using a widely used System Usability Scale (SUS) questionnaire model to study the usability on the twenty-four medical students of the Brighton and Sussex Medical School. This helps in providing guidelines about the use of MYO armband for physiotherapy analysis by the doctors and patients. Another questionnaire with a focus on ergonomic (human factors) issues related to the use of the device such as social acceptability, ease of use and ease of learning, comfort and stress, attempted to discover characteristics of hand gestures using MYO. The results of this study can be used in a way to support the development of interactive physiotherapy analysis by individuals using MYO and hand gesture applications at their home for self-examination. Also, the relationship and correlation between the signals received will lead to a better understanding of the whole myocardium system and assist doctors in early diagnosis.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "fa4480bbc460658bd1ea5804fdebc5ed",
"text": "This paper examines the problem of how to teach multiple tasks to a Reinforcement Learning (RL) agent. To this end, we use Linear Temporal Logic (LTL) as a language for specifying multiple tasks in a manner that supports the composition of learned skills. We also propose a novel algorithm that exploits LTL progression and off-policy RL to speed up learning without compromising convergence guarantees, and show that our method outperforms the state-of-the-art approach on randomly generated Minecraft-like grids.",
"title": ""
},
{
"docid": "98ca1c0100115646bb14a00f19c611a5",
"text": "The interconnected nature of graphs often results in difficult to interpret clutter. Typically techniques focus on either decluttering by clustering nodes with similar properties or grouping edges with similar relationship. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on a given data by utilizing a scalar function defined on every point in the data and a cover for scalar function codomain. The output of mapper is a graph that summarize the shape of the space. In this paper, we outline how to use this mapper construction on an input graphs, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. To validate our approach, we conduct several case studies on synthetic and real world data sets and demonstrate how our method can give meaningful summaries for graphs with various",
"title": ""
},
{
"docid": "d719fb1fe0faf76c14d24f7587c5345f",
"text": "This paper describes a framework for the estimation of shape from sparse or incomplete range data. It uses a shape representation called blending, which allows for the geometric combination of shapes into a unified model— selected regions of the component shapes are cut-out and glued together. Estimation of shape using this representation is realized using a physics-based framework, and also includes a process for deciding how to adapt the structure and topology of the model to improve the fit. The blending representation helps avoid abrupt changes in model geometry during fitting by allowing the smooth evolution of the shape, which improves the robustness of the technique. We demonstrate this framework with a series of experiments showing its ability to automatically extract structured representations from range data given both structurally and topologically complex objects. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-97-12. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/47 (appeared inIEEE Transactions on Pattern Analysis and Machine Intelligence , Vol. 20, No. 11, pp. 1186-1205, November 1998) Shape Evolution with Structural and Topological Changes using Blending Douglas DeCarlo and Dimitris Metaxas †",
"title": ""
},
{
"docid": "054c2e8fa9421c77939091e5adfc07e5",
"text": "Visualization is a powerful paradigm for exploratory data analysis. Visualizing large graphs, however, often results in excessive edges crossings and overlapping nodes. We propose a new scalable approach called FACETS that helps users adaptively explore large million-node graphs from a local perspective, guiding them to focus on nodes and neighborhoods that are most subjectively interesting to users. We contribute novel ideas to measure this interestingness in terms of how surprising a neighborhood is given the background distribution, as well as how well it matches what the user has chosen to explore. FACETS uses Jensen-Shannon divergence over information-theoretically optimized histograms to calculate the subjective user interest and surprise scores. Participants in a user study found FACETS easy to use, easy to learn, and exciting to use. Empirical runtime analyses demonstrated FACETS’s practical scalability on large real-world graphs with up to 5 million edges, returning results in fewer than 1.5 seconds.",
"title": ""
},
{
"docid": "dc81f63623020220eba19f4f6ae545e0",
"text": "In this paper, a new technique for human identification task based on heart sound signals has been proposed. It utilizes a feature level fusion technique based on canonical correlation analysis. For this purpose a robust pre-processing scheme based on the wavelet analysis of the heart sounds is introduced. Then, three feature vectors are extracted depending on the cepstral coefficients of different frequency scale representation of the heart sound namely; the mel, bark, and linear scales. Among the investigated feature extraction methods, experimental results show that the mel-scale is the best with 94.4% correct identification rate. Using a hybrid technique combining MFCC and DWT, a new feature vector is extracted improving the system's performance up to 95.12%. Finally, canonical correlation analysis is applied for feature fusion. This improves the performance of the proposed system up to 99.5%. The experimental results show significant improvements in the performance of the proposed system over methods adopting single feature extraction.",
"title": ""
},
{
"docid": "87bded10bc1a29a3c0dead2958defc2e",
"text": "Context: Web applications are trusted by billions of users for performing day-to-day activities. Accessibility, availability and omnipresence of web applications have made them a prime target for attackers. A simple implementation flaw in the application could allow an attacker to steal sensitive information and perform adversary actions, and hence it is important to secure web applications from attacks. Defensive mechanisms for securing web applications from the flaws have received attention from both academia and industry. Objective: The objective of this literature review is to summarize the current state of the art for securing web applications from major flaws such as injection and logic flaws. Though different kinds of injection flaws exist, the scope is restricted to SQL Injection (SQLI) and Cross-site scripting (XSS), since they are rated as the top most threats by different security consortiums. Method: The relevant articles recently published are identified from well-known digital libraries, and a total of 86 primary studies are considered. A total of 17 articles related to SQLI, 35 related to XSS and 34 related to logic flaws are discussed. Results: The articles are categorized based on the phase of software development life cycle where the defense mechanism is put into place. Most of the articles focus on detecting the flaws and preventing attacks against web applications. Conclusion: Even though various approaches are available for securing web applications from SQLI and XSS, they are still prevalent due to their impact and severity. Logic flaws are gaining attention of the researchers since they violate the business specifications of applications. There is no single solution to mitigate all the flaws. More research is needed in the area of fixing flaws in the source code of applications.",
"title": ""
},
{
"docid": "c8b9bba65b8561b48abe68a72c02f054",
"text": "The Bitcoin backbone protocol [Eurocrypt 2015] extracts basic properties of Bitcoin's underlying blockchain data structure, such as common pre x and chain quality, and shows how fundamental applications including consensus and a robust public transaction ledger can be built on top of them. The underlying assumptions are proofs of work (POWs), adversarial hashing power strictly less than 1/2 and no adversarial pre-computation or, alternatively, the existence of an unpredictable genesis block. In this paper we show how to remove the latter assumption, presenting a bootstrapped Bitcoin-like blockchain protocol relying on POWs that builds genesis blocks from scratch in the presence of adversarial pre-computation. The only known previous result in the same setting (unauthenticated parties, no trusted setup) [Crypto 2015] is indirect in the sense of creating a PKI rst and then employing conventional PKI-based authenticated communication. With our construction we establish that consensus can be solved directly by a blockchain protocol without trusted setup assuming an honest majority (in terms of computational power). We also formalize miner unlinkability, a privacy property for blockchain protocols, and demonstrate that our protocol retains the same level of miner unlinkability as Bitcoin itself.",
"title": ""
}
] |
scidocsrr
|
dc5485657eed24774b979e7a98eb620f
|
Ch2R: A Chinese Chatter Robot for Online Shopping Guide
|
[
{
"docid": "16ccacd0f59bd5e307efccb9f15ac678",
"text": "This document presents the results from Inst. of Computing Tech., CAS in the ACLSIGHAN-sponsored First International Chinese Word Segmentation Bakeoff. The authors introduce the unified HHMM-based frame of our Chinese lexical analyzer ICTCLAS and explain the operation of the six tracks. Then provide the evaluation results and give more analysis. Evaluation on ICTCLAS shows that its performance is competitive. Compared with other system, ICTCLAS has ranked top both in CTB and PK closed track. In PK open track, it ranks second position. ICTCLAS BIG5 version was transformed from GB version only in two days; however, it achieved well in two BIG5 closed tracks. Through the first bakeoff, we could learn more about the development in Chinese word segmentation and become more confident on our HHMM-based approach. At the same time, we really find our problems during the evaluation. The bakeoff is interesting and helpful.",
"title": ""
}
] |
[
{
"docid": "89297a4aef0d3251e8d947ccc2acacc7",
"text": "We propose a novel probabilistic framework for learning visual models of 3D object categories by combining appearance information and geometric constraints. Objects are represented as a coherent ensemble of parts that are consistent under 3D viewpoint transformations. Each part is a collection of salient image features. A generative framework is used for learning a model that captures the relative position of parts within each of the discretized viewpoints. Contrary to most of the existing mixture of viewpoints models, our model establishes explicit correspondences of parts across different viewpoints of the object class. Given a new image, detection and classification are achieved by determining the position and viewpoint of the model that maximize recognition scores of the candidate objects. Our approach is among the first to propose a generative probabilistic framework for 3D object categorization. We test our algorithm on the detection task and the viewpoint classification task by using “car” category from both the Savarese et al. 2007 and PASCAL VOC 2006 datasets. We show promising results in both the detection and viewpoint classification tasks on these two challenging datasets.",
"title": ""
},
{
"docid": "4a89f20c4b892203be71e3534b32449c",
"text": "This paper draws together knowledge from a variety of fields to propose that innovation management can be viewed as a form of organisational capability. Excellent companies invest and nurture this capability, from which they execute effective innovation processes, leading to innovations in new product, services and processes, and superior business performance results. An extensive review of the literature on innovation management, along with a case study of Cisco Systems, develops a conceptual model of the firm as an innovation engine. This new operating model sees substantial investment in innovation capability as the primary engine for wealth creation, rather than the possession of physical assets. Building on the dynamic capabilities literature, an “innovation capability” construct is proposed with seven elements. These are vision and strategy, harnessing the competence base, organisational intelligence, creativity and idea management, organisational structures and systems, culture and climate, and management of technology.",
"title": ""
},
{
"docid": "3f83d41f66b2c3b6b62afb3d3a3d8562",
"text": "Many recommendation algorithms suffer from popularity bias in their output: popular items are recommended frequently and less popular ones rarely, if at all. However, less popular, long-tail items are precisely those that are often desirable recommendations. In this paper, we introduce a flexible regularization-based framework to enhance the long-tail coverage of recommendation lists in a learning-to-rank algorithm. We show that regularization provides a tunable mechanism for controlling the trade-off between accuracy and coverage. Moreover, the experimental results using two data sets show that it is possible to improve coverage of long tail items without substantial loss of ranking performance.",
"title": ""
},
{
"docid": "11538da6cfda3a81a7ddec0891aae1d9",
"text": "This work presents a dataset and annotation scheme for the new task of identifying “good” conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations. We develop a taxonomy to reflect features of entire threads and individual comments which we believe contribute to identifying ERICs; code a novel dataset of Yahoo News comment threads (2.4k threads and 10k comments) and 1k threads from the Internet Argument Corpus; and analyze the features characteristic of ERICs. This is one of the largest annotated corpora of online human dialogues, with the most detailed set of annotations. It will be valuable for identifying ERICs and other aspects of argumentation, dialogue, and discourse.",
"title": ""
},
{
"docid": "6a993cdfbb701b43bb1cf287380e5b2e",
"text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have allowed to replace of traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed for use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables to use a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is learned with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can be also thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.",
"title": ""
},
{
"docid": "b4ecf497c8240a48a6e60aef400d0e1e",
"text": "Skin color diversity is the most variable and noticeable phenotypic trait in humans resulting from constitutive pigmentation variability. This paper will review the characterization of skin pigmentation diversity with a focus on the most recent data on the genetic basis of skin pigmentation, and the various methodologies for skin color assessment. Then, melanocyte activity and amount, type and distribution of melanins, which are the main drivers for skin pigmentation, are described. Paracrine regulators of melanocyte microenvironment are also discussed. Skin response to sun exposure is also highly dependent on color diversity. Thus, sensitivity to solar wavelengths is examined in terms of acute effects such as sunburn/erythema or induced-pigmentation but also long-term consequences such as skin cancers, photoageing and pigmentary disorders. More pronounced sun-sensitivity in lighter or darker skin types depending on the detrimental effects and involved wavelengths is reviewed.",
"title": ""
},
{
"docid": "6c87cff16fb85eaa02c377fa047346bb",
"text": "BACKGROUND\n: Arterial and venous thoracic outlet syndrome (TOS) were recognized in the late 1800s and neurogenic TOS in the early 1900s. Diagnosis and treatment of the 2 vascular forms of TOS are generally accepted in all medical circles. On the other hand, neurogenic TOS is more difficult to diagnose because there is no standard objective test to confirm clinical impressions.\n\n\nREVIEW SUMMARY\n: The clinical features of arterial, venous, and neurogenic TOS are described. Because neurogenic TOS is by far the most common type, the pathology, pathophysiology, diagnostic tests, differential and associate diagnoses, and treatment are detailed and discussed. The controversial area of objective and subjective diagnostic criteria is addressed.\n\n\nCONCLUSION\n: Arterial and venous TOS are usually not difficult to recognize and the diagnosis can be confirmed by angiography. The diagnosis of neurogenic TOS is more challenging because its symptoms of nerve compression are not unique. The clinical diagnosis relies on documenting several positive findings on physical examination. To date there is still no reliable objective test to confirm the diagnosis, but measurements of the medial antebrachial cutaneous nerve appear promising.",
"title": ""
},
{
"docid": "15c715c3da3883e363aa8e442e903269",
"text": "A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm on a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hairtrigger situation.",
"title": ""
},
{
"docid": "5c05b2d2086125bc8c6364b58c37971a",
"text": "In this exploratory field-study, we examined how normative messages (i.e., activating an injunctive norm, personal norm, or both) could encourage shoppers to use fewer free plastic bags for their shopping in addition to the supermarket‘s standard environmental message aimed at reducing plastic bags. In a one-way subjects-design (N = 200) at a local supermarket, we showed that shoppers used significantly fewer free plastic bags in the injunctive, personal and combined normative message condition than in the condition where only an environmental message was present. The combined normative message did result in the smallest uptake of free plastic bags compared to the injunctive and personal normative-only message, although these differences were not significant. Our findings imply that re-wording the supermarket‘s environmental message by including normative information could be a promising way to reduce the use of free plastic bags, which will ultimately benefit the environment.",
"title": ""
},
{
"docid": "7aff3e7bac49208478f2979ca591e059",
"text": "The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms. The interpretation of independence and the way it is utilized, however, varies across these methods. Our aim in this paper is to propose a group theoretic framework for ICM to unify and generalize these approaches. In our setting, the cause-mechanism relationship is assessed by perturbing it with random group transformations. We show that the group theoretic view encompasses previous ICM approaches and provides a very general tool to study the structure of data generating mechanisms with direct applications to machine learning.",
"title": ""
},
{
"docid": "8f1cb692121899bb63e98f9a6ab3000e",
"text": "Magnet material prices has become an uncertain factor for electric machine development. Most of all, the output of ironless axial flux motors equipped with Halbach magnet arrays depend on the elaborated magnetic flux. Therefore, possibilities to reduce the manufacturing cost without negatively affecting the performance are studied in this paper. Both magnetostatic and transient 3D finite element analyses are applied to compare flux density distribution, elaborated output torque and induced back EMF. It is shown, that the proposed magnet shapes and magnetization pattern meet the requirements. Together with the assembly and measurements of functional linear Halbach magnet arrays, the prerequisite for the manufacturing of axial magnet arrays for an ironless in-wheel hub motor are given.",
"title": ""
},
{
"docid": "112ec676f74c22393d06bc23eaae50d8",
"text": "Multi-user multiple-input multiple-output (MU-MIMO) is the latest communication technology that promises to linearly increase the wireless capacity by deploying more antennas on access points (APs). However, the large number of MIMO antennas will generate a huge amount of digital signal samples in real time. This imposes a grand challenge on the AP design by multiplying the computation and the I/O requirements to process the digital samples. This paper presents BigStation, a scalable architecture that enables realtime signal processing in large-scale MIMO systems which may have tens or hundreds of antennas. Our strategy to scale is to extensively parallelize the MU-MIMO processing on many simple and low-cost commodity computing devices. Our design can incrementally support more antennas by proportionally adding more computing devices. To reduce the overall processing latency, which is a critical constraint for wireless communication, we parallelize the MU-MIMO processing with a distributed pipeline based on its computation and communication patterns. At each stage of the pipeline, we further use data partitioning and computation partitioning to increase the processing speed. As a proof of concept, we have built a BigStation prototype based on commodity PC servers and standard Ethernet switches. Our prototype employs 15 PC servers and can support real-time processing of 12 software radio antennas. Our results show that the BigStation architecture is able to scale to tens to hundreds of antennas. With 12 antennas, our BigStation prototype can increase wireless capacity by 6.8x with a low mean processing delay of 860μs. While this latency is not yet low enough for the 802.11 MAC, it already satisfies the real-time requirements of many existing wireless standards, e.g., LTE and WCDMA.",
"title": ""
},
{
"docid": "7ebd960866db666093fd61e22be6fe7b",
"text": "The elucidation of molecular targets of bioactive small organic molecules remains a significant challenge in modern biomedical research and drug discovery. This tutorial review summarizes strategies for the derivatization of bioactive small molecules and their use as affinity probes to identify cellular binding partners. Special emphasis is placed on logistical concerns as well as common problems encountered during such target identification experiments. The roadmap provided is a guide through the process of affinity probe selection, target identification, and downstream target validation.",
"title": ""
},
{
"docid": "92e50fc2351b4a05d573590f3ed05e81",
"text": "OBJECTIVE\nWe examined the effects of sensory-enhanced hatha yoga on symptoms of combat stress in deployed military personnel, compared their anxiety and sensory processing with that of stateside civilians, and identified any correlations between the State-Trait Anxiety Inventory scales and the Adolescent/Adult Sensory Profile quadrants.\n\n\nMETHOD\nSeventy military personnel who were deployed to Iraq participated in a randomized controlled trial. Thirty-five received 3 wk (≥9 sessions) of sensory-enhanced hatha yoga, and 35 did not receive any form of yoga.\n\n\nRESULTS\nSensory-enhanced hatha yoga was effective in reducing state and trait anxiety, despite normal pretest scores. Treatment participants showed significantly greater improvement than control participants on 16 of 18 mental health and quality-of-life factors. We found positive correlations between all test measures except sensory seeking. Sensory seeking was negatively correlated with all measures except low registration, which was insignificant.\n\n\nCONCLUSION\nThe results support using sensory-enhanced hatha yoga for proactive combat stress management.",
"title": ""
},
{
"docid": "4c4c25aba1600869f7899e20446fd75f",
"text": "This paper presents GRAPE, a parallel system for graph computations. GRAPE differs from prior systems in its ability to parallelize existing sequential graph algorithms as a whole. Underlying GRAPE are a simple programming model and a principled approach, based on partial evaluation and incremental computation. We show that sequential graph algorithms can be \"plugged into\" GRAPE with minor changes, and get parallelized. As long as the sequential algorithms are correct, their GRAPE parallelization guarantees to terminate with correct answers under a monotonic condition. Moreover, we show that algorithms in MapReduce, BSP and PRAM can be optimally simulated on GRAPE. In addition to the ease of programming, we experimentally verify that GRAPE achieves comparable performance to the state-of-the-art graph systems, using real-life and synthetic graphs.",
"title": ""
},
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "03826954a304a4d6bdb2c1f55bbe8001",
"text": "This paper gives an overview of the channel access methods of three wireless technologies that are likely to be used in the environment of vehicle networks: IEEE 802.15.4, IEEE 802.11 and Bluetooth. Researching the coexistence of IEEE 802.15.4 with IEEE 802.11 and Bluetooth, results of experiments conducted in a radio frequency anechoic chamber are presented. The power densities of the technologies on a single IEEE 802.15.4 channel are compared. It is shown that the pure existence of an IEEE 802.11 access point leads to collisions due to different timing scales. Furthermore, the packet drop rate caused by Bluetooth is analyzed and an estimation formula for it is given.",
"title": ""
},
{
"docid": "d2fb10bdbe745ace3a2512ccfa414d4c",
"text": "In cloud computing environment, especially in big data era, adversary may use data deduplication service supported by the cloud service provider as a side channel to eavesdrop users' privacy or sensitive information. In order to tackle this serious issue, in this paper, we propose a secure data deduplication scheme based on differential privacy. The highlights of the proposed scheme lie in constructing a hybrid cloud framework, using convergent encryption algorithm to encrypt original files, and introducing differential privacy mechanism to resist against the side channel attack. Performance evaluation shows that our scheme is able to effectively save network bandwidth and disk storage space during the processes of data deduplication. Meanwhile, security analysis indicates that our scheme can resist against the side channel attack and related files attack, and prevent the disclosure of privacy information.",
"title": ""
},
{
"docid": "e88def1e0d709047f910b7d5d2319508",
"text": "This paper presents an asymmetrical control with phase lock loop for series resonant inverters. This control strategy is used in full-bridge topologies for induction cookers. The operating frequency is automatically tracked to maintain a small constant lagging phase angle when load parameters change. The switching loss is minimized by operating the IGBT in the zero voltage resonance modes. The output power can be adjusted by using asymmetrical voltage cancellation control which is regulated with a PWM duty cycle control strategy.",
"title": ""
},
{
"docid": "e0fe5ab372bd6d4e39dfc6974832da34",
"text": "Purpose – The purpose of this paper is to determine the factors that influence the intention to use and actual usage of a G2B system such as electronic procurement system (EPS) by various ministries in the Government of Malaysia. Design/methodology/approach – The research uses an extension of DeLone and McLean’s model of IS success by including trust, facilitating conditions, and web design quality. The model is tested using an empirical approach. A questionnaire was designed and responses from 358 users from various ministries were collected and analyzed using structural equation modeling (SEM). Findings – The findings of the study indicate that: perceived usefulness, perceived ease of use, assurance of service by service providers, responsiveness of service providers, facilitating conditions, web design (service quality) are strongly linked to intention to use EPS; and intention to use is strongly linked to actual usage behavior. Practical implications – Typically, governments of developing countries spend millions of dollars to implement e-government systems. The investments can be considered useful only if the usage rate is high. The study can help ICT decision makers in government to recognize the critical factors that are responsible for the success of a G2B system like EPS. Originality/value – The model used in the study is one of the few models designed to determine factors influencing intention to use and actual usage behavior in a G2B system in a fast-developing country like Malaysia.",
"title": ""
}
] |
scidocsrr
|
136a7ade2c802609e5a827cc95f83190
|
A Novel Continuum Trunk Robot Based on Contractor Muscles
|
[
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
}
] |
[
{
"docid": "467d953d489ca8f7d75c798d6e948a86",
"text": "The ability to detect recent natural selection in the human population would have profound implications for the study of human history and for medicine. Here, we introduce a framework for detecting the genetic imprint of recent positive selection by analysing long-range haplotypes in human populations. We first identify haplotypes at a locus of interest (core haplotypes). We then assess the age of each core haplotype by the decay of its association to alleles at various distances from the locus, as measured by extended haplotype homozygosity (EHH). Core haplotypes that have unusually high EHH and a high population frequency indicate the presence of a mutation that rose to prominence in the human gene pool faster than expected under neutral evolution. We applied this approach to investigate selection at two genes carrying common variants implicated in resistance to malaria: G6PD and CD40 ligand. At both loci, the core haplotypes carrying the proposed protective mutation stand out and show significant evidence of selection. More generally, the method could be used to scan the entire genome for evidence of recent positive selection.",
"title": ""
},
{
"docid": "ec7931f1a56bf7d4dd6cc1a5cb2d0625",
"text": "Modern life is intimately linked to the availability of fossil fuels, which continue to meet the world's growing energy needs even though their use drives climate change, exhausts finite reserves and contributes to global political strife. Biofuels made from renewable resources could be a more sustainable alternative, particularly if sourced from organisms, such as algae, that can be farmed without using valuable arable land. Strain development and process engineering are needed to make algal biofuels practical and economically viable.",
"title": ""
},
{
"docid": "d01692a4ee83531badacea6658b74d8f",
"text": "Question Answering (QA) research for factoid questions has recently achieved great success. Presently, QA systems developed for European, Middle Eastern and Asian languages are capable of providing answers with reasonable accuracy. However, Bengali being among themost spoken languages in theworld, no factoid question answering system is available for Bengali till date. This paper describes the first attempt on building a factoid question answering system for Bengali language. The challenges in developing a question answering system for Bengali have been discussed. Extraction and ranking of relevant sentences have also been proposed. Also extraction strategy of the ranked answers from the relevant sentences are suggested for Bengali question answering system.",
"title": ""
},
{
"docid": "e8fee9f93106ce292c89c26be373030f",
"text": "As a non-invasive imaging modality, optical coherence tomography (OCT) can provide micrometer-resolution 3D images of retinal structures. Therefore it is commonly used in the diagnosis of retinal diseases associated with edema in and under the retinal layers. In this paper, a new framework is proposed for the task of fluid segmentation and detection in retinal OCT images. Based on the raw images and layers segmented by a graph-cut algorithm, a fully convolutional neural network was trained to recognize and label the fluid pixels. Random forest classification was performed on the segmented fluid regions to detect and reject the falsely labeled fluid regions. The leave-one-out cross validation experiments on the RETOUCH database show that our method performs well in both segmentation (mean Dice: 0.7317) and detection (mean AUC: 0.985) tasks.",
"title": ""
},
{
"docid": "04756d4dfc34215c8acb895ecfcfb406",
"text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.",
"title": ""
},
{
"docid": "1e82e123cacca01a84a8ea2fef641d98",
"text": "We propose a new class of convex penalty functions, called variational Gram functions (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applications. We study necessary and sufficient conditions under which a VGF is convex, and give a characterization of its subdifferential. We show how to compute its proximal operator, and discuss efficient optimization algorithms for regularized loss minimization problems where the loss admits a simple variational representation and the regularizer is a VGF. We also establish a general representer theorem for such learning problems. Lastly, numerical experiments on a hierarchical classification problem are presented to demonstrate the effectiveness of VGFs and the associated optimization algorithms.",
"title": ""
},
{
"docid": "9083b448b8bd82705db99c2e0104f9a7",
"text": "In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds, which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octtree scanning. Results show that the proposed solution performs comparably with the current state-of-the-art, in many occasions outperforming it, while being much more computationally efficient. We believe this paper represents the state of the art in intra-frame compression of point clouds for real-time 3D video.",
"title": ""
},
{
"docid": "4a94fb7432d172d5c1ce1e5429cc38b3",
"text": "OBJECTIVE\nAssociations between eminent creativity and bipolar disorders have been reported, but there are few data relating non-eminent creativity to bipolar disorders in clinical samples. We assessed non-eminent creativity in euthymic bipolar (BP) and unipolar major depressive disorder (MDD) patients, creative discipline controls (CC), and healthy controls (HC).\n\n\nMETHODS\n49 BP, 25 MDD, 32 CC, and 47 HC (all euthymic) completed four creativity measures yielding six parameters: the Barron-Welsh Art Scale (BWAS-Total, and two subscales, BWAS-Dislike and BWAS-Like), the Adjective Check List Creative Personality Scale (ACL-CPS), and the Torrance Tests of Creative Thinking--Figural (TTCT-F) and Verbal (TTCT-V) versions. Mean scores on these instruments were compared across groups.\n\n\nRESULTS\nBP and CC (but not MDD) compared to HC scored significantly higher on BWAS-Total (45% and 48% higher, respectively) and BWAS-Dislike (90% and 88% higher, respectively), but not on BWAS-Like. CC compared to MDD scored significantly higher (12% higher) on TTCT-F. For all other comparisons, creativity scores did not differ significantly between groups.\n\n\nCONCLUSIONS\nWe found BP and CC (but not MDD) had similarly enhanced creativity on the BWAS-Total (driven by an increase on the BWAS-Dislike) compared to HC. Further studies are needed to determine the mechanisms of enhanced creativity and how it relates to clinical (e.g. temperament, mood, and medication status) and preclinical (e.g. visual and affective processing substrates) parameters.",
"title": ""
},
{
"docid": "7b7924ccd60d01468f6651b9226cbed0",
"text": "Leucine-rich repeat kinase 2 (LRRK2) mutations have been implicated in autosomal dominant parkinsonism, consistent with typical levodopa-responsive Parkinson's disease. The gene maps to chromosome 12q12 and encodes a large, multifunctional protein. To identify novel LRRK2 mutations, we have sequenced 100 affected probands with family history of parkinsonism. Semiquantitative analysis was also performed in all probands to identify LRRK2 genomic multiplication or deletion. In these kindreds, referred from movement disorder clinics in many parts of Europe, Asia, and North America, parkinsonism segregates as an autosomal dominant trait. All 51 exons of the LRRK2 gene were analyzed and the frequency of all novel sequence variants was assessed within controls. The segregation of mutations with disease has been examined in larger, multiplex families. Our study identified 26 coding variants, including 15 nonsynonymous amino acid substitutions of which three affect the same codon (R1441C, R1441G, and R1441H). Seven of these coding changes seem to be pathogenic, as they segregate with disease and were not identified within controls. No multiplications or deletions were identified.",
"title": ""
},
{
"docid": "65ac52564041b0c2e173560d49ec762f",
"text": "Constructionism can be a powerful framework for teaching complex content to novices. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn this content in contextualized, personally-meaningful ways. In this paper, we investigate the relevance of a set of approaches broadly called “educational data mining” or “learning analytics” (henceforth, EDM) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. We suggest that EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition but also to wider communities. Finally, we explore potential collaborations between researchers in the EDM and constructionist traditions; such collaborations have the potential to enhance the ability of constructionist researchers to make rich inference about learning and learners, while providing EDM researchers with many interesting new research questions and challenges. In recent years, project-based, student-centered approaches to education have gained prominence, due in part to an increased demand for higher-level skills in the job market (Levi and Murname, 2004), positive research findings on the effectiveness of such approaches (Barron, Pearson, et al., 2008), and a broader acceptance in public policy circles, as shown, for example, by the Next Generation Science Standards (NGSS Lead States, 2013). While several approaches for this type of learning exist, Constructionism is one of the most popular and well-developed ones (Papert, 1980). In this paper, we investigate the relevance of a set of approaches called “educational data mining” or “learning analytics” (henceforth abbreviated as ‘EDM’) (R. Baker & Yacef, 2009; Romero & Ventura, 2010a; R. Baker & Siemens, in press) to help provide a basis for quantitative research on constructionist learning which does not abandon the richness seen as essential by many researchers in that paradigm. As such, EDM may have the potential to support research that is meaningful and useful both to researchers working actively in the constructionist tradition and to the wider community of learning scientists and policymakers. EDM, broadly, is a set of methods that apply data mining and machine learning techniques such as prediction, classification, and discovery of latent structural regularities to rich, voluminous, and idiosyncratic educational data, potentially similar to those data generated by many constructionist learning environments which allows students to explore and build their own artifacts, computer programs, and media pieces. As such, we identify four axes in which EDM methods may be helpful for constructionist research: 1. EDM methods do not require constructionists to abandon deep qualitative analysis for simplistic summative or confirmatory quantitative analysis; 2. EDM methods can generate different and complementary new analyses to support qualitative research; 3. By enabling precise formative assessments of complex constructs, EDM methods can support an increase in methodological rigor and replicability; 4. EDM can be used to present comprehensible and actionable data to learners and teachers in situ. 
In order to investigate those axes, we start by describing our perspective on compatibilities and incompatibilities between constructionism and EDM. At the core of constructionism is the suggestion that by enabling learners to build creative artifacts that require complex content to function, those learners will have opportunities to learn that complex content in connected, meaningful ways. Constructionist projects often emphasize making those artifacts (and often data) public, socially relevant, and personally meaningful to learners, and encourage working in social spaces such that learners engage each other to accelerate the learning process. diSessa and Cobb (2004) argue that constructionism serves as a framework for action, as it describes its own praxis (i.e., how it matches theory to practice). The learning theory supporting constructionism is classically constructivist, combining concepts from Piaget and Vygotsky (Fosnot, 2005). As constructionism matures as a constructivist framework for action and expands in scale, constructionist projects are becoming both more complex (Reynolds & Caperton, 2011), more scalable (Resnick, Maloney, et al., 2009), and more affordable for schools following significant development in low-cost “construction” technologies such as robotics and 3D printers. As such, there have been increasing opportunities to learn more about how students learn in constructionist contexts, advancing the science of learning. These discoveries will have the potential to improve the quality of all constructivist learning experiences. For example, Wilensky and Reisman (2006) have shown how constructionist modeling and simulation can make science learning more accessible; Resnick (1998) has shown how constructionism can reframe programming as art at scale; Buechley & Eisenberg (2008) have used e-textiles to engage female students in robotics; and Eisenberg (2011) and Blikstein (2013, 2014) use constructionist digital fabrication to successfully teach programming, engineering, and electronics in a novel, integrated way. The findings of these research and design projects have the potential to be useful to a wide external community of teachers, researchers, practitioners, and other stakeholders. However, connecting findings from the constructionist tradition to the goals of policymakers can be challenging, due to the historical differences in methodology and values between these communities. The resources needed to study such interventions at scale are considerable, given the need to carefully document, code, and analyze each student’s work processes and artifacts. The designs of constructionist research often result in findings that do not map to what researchers, outside interests, and policymakers are expecting, in contrast to conventional controlled studies, which are designed to (more conclusively) answer a limited set of sharply targeted research questions. Due to the lack of a common ground to discuss benefits and scalability of constructionist and project-based designs, these designs have been too frequently sidelined to niche institutions such as private schools, museums, or atypical public schools. To understand what role EDM methods can play in constructionist research, we must frame what we mean by constructionist research more precisely. We follow Papert and Harel (1991) in their situating of constructionism, but they do not constrain the term to one formal definition.
The definition is further complicated by the fact that constructionism has many overlaps with other research and design traditions, such as constructivism and socio-constructivism themselves, as well as project-based pedagogies and inquiry-based designs. However, we believe that it is possible to define the subset of constructionism amenable to EDM, a focus we adopt in this article for brevity. In this paper, we focus on the constructionist literature dealing with students learning to construct understandings by constructing (physical or virtual) artifacts, where the students' learning environments are designed and constrained such that building artifacts in/with that environment is designed to help students construct their own understandings. In other words, we are focusing on creative work done in computational environments designed to foster creative and transformational learning, such as NetLogo (Wilensky, 1999), Scratch (Resnick, Maloney, et al., 2009), or LEGO Mindstorms. This sub-category of constructionism can and does generate considerable formative and summative data. It also has the benefit of having a history of success in the classroom. From Papert’s seminal (1972) work through today, constructionist learning has been shown to promote the development of deep understanding of relatively complex content, with many examples ranging from mathematics (Harel, 1990; Wilensky, 1996) to history (Zahn, Krauskopf, Hesse, & Pea, 2010). However, constructionist learning environments, ideas, and findings have yet to reach the majority of classrooms and have had incomplete influence in the broader education research community. There are several potential reasons for this. One of them may be a lack of demonstration that findings are generalizable across populations and across specific content. Another reason is that constructionist activities are seen to be time-consuming for teachers (Warschauer & Matuchniak, 2010), though, in practice, it has been shown that supporting understanding through project-based work could actually save time (Fosnot, 2005) and enable classroom dynamics that may streamline class preparation (e.g., peer teaching or peer feedback). A last reason is that constructionists almost universally value deep understanding of scientific principles more than facts or procedural skills even in contexts (e.g., many classrooms) in which memorization of facts and procedural skills is the target to be evaluated (Abelson & diSessa, 1986; Papert & Harel, 1991). Therefore, much of what is learned in constructionist environments does not directly translate to test scores or other established metrics. Constructionist research can be useful and convincing to audiences that do not yet take full advantage of the scientific findings of this community, but it requires careful consideration of framing and evidence to reach them. Educational data mining methods have the potential to both enhance constructionist research, and to support constructionist researchers in communicating their findings in a fashion that other researchers consider valid. Blikstein (2011, p. 110) made ",
"title": ""
},
{
"docid": "e7b9c3ef571770788cd557f8c4843bcf",
"text": "Different efforts have been done to address the problem of information overload on the Internet. Recommender systems aim at directing users through this information space, toward the resources that best meet their needs and interests by extracting knowledge from the previous users’ interactions. In this paper, we propose an algorithm to solve the web page recommendation problem. In our algorithm, we use distributed learning automata to learn the behavior of previous users’ and recommend pages to the current user based on learned pattern. Our experiments on real data set show that the proposed algorithm performs better than the other algorithms that we compared to and, at the same time, it is less complex than other algorithms with respect to memory usage and computational cost too.",
"title": ""
},
{
"docid": "a24eddbadb54b6012d243c3fd624d5aa",
"text": "A simple algorithm for computing the three-dimensional structure of a scene from a correlated pair of perspective projections is described here, when the spatial relationship between the two projections is unknown. This problem is relevant not only to photographic surveying1 but also to binocular vision2, where the non-visual information available to the observer about the orientation and focal length of each eye is much less accurate than the optical information supplied by the retinal images themselves. The problem also arises in monocular perception of motion3, where the two projections represent views which are separated in time as well as space. As Marr and Poggio4 have noted, the fusing of two images to produce a three-dimensional percept involves two distinct processes: the establishment of a 1:1 correspondence between image points in the two views—the ‘correspondence problem’—and the use of the associated disparities for determining the distances of visible elements in the scene. I shall assume that the correspondence problem has been solved; the problem of reconstructing the scene then reduces to that of finding the relative orientation of the two viewpoints.",
"title": ""
},
{
"docid": "28beae47973ec8dbf1b487daa389f37e",
"text": "Although cloud computing has the advantages of cost-saving, efficiency and scalability, it also brings about many security issues. Because almost all software, hardware, and application data are deployed and stored in the cloud platforms, there is often the distrust between users and cloud suppliers. To resolve the problem, this paper proposes a risk management framework on the basis of the previous work. The framework consists of five components: user requirement self-assessment, cloud service providers desktop assessment, risk assessment, third-party agencies review, and continuous monitoring. By means of the framework, the cloud service suppliers can better understand the user's requirements, and the trust between the users and the suppliers is more easily acquired.",
"title": ""
},
{
"docid": "2503784af4149b3d5bd61c458b6df2bf",
"text": "In this paper, our proposed method has two contributions to demosaicking: first, different from conventional interpolation methods based on two directions or four directions, the proposed method exploits to a greater degree correlations among neighboring pixels along eight directions to improve the interpolation performance. Second, we propose an efficient post-processing method to reduce interpolation artifacts based on the color difference planes. As compared with the latest demosaicking algorithms, experiments show that the proposed algorithm provides superior performance in terms of both objective and subjective image qualities.",
"title": ""
},
{
"docid": "79414d5ba6a202bf52d26a74caff4784",
"text": "The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach.",
"title": ""
},
{
"docid": "325b97e73ea0a50d2413757e95628163",
"text": "Due to the recent advancement in procedural generation techniques, games are presenting players with ever growing cities and terrains to explore. However most sandbox-style games situated in cities, do not allow players to wander into buildings. In past research, space planning techniques have already been utilized to generate suitable layouts for both building floor plans and room layouts. We introduce a novel rule-based layout solving approach, especially suited for use in conjunction with procedural generation methods. We show how this solving approach can be used for procedural generation by providing the solver with a userdefined plan. In this plan, users can specify objects to be placed as instances of classes, which in turn contain rules about how instances should be placed. This approach gives us the opportunity to use our generic solver in different procedural generation scenarios. In this paper, we will illustrate mainly with interior generation examples.",
"title": ""
},
{
"docid": "bcb10716690875ec0e397eec4ba3ea2e",
"text": "Shamos [1] recently showed that the diameter of a convex n-sided polygon could be computed in O(n) time using a very elegant and simple procedure which resembles rotating a set of calipers around the polygon once. In this paper we show that this simple idea can be generalized in two ways: several sets of calipers can be used simultaneously on one convex polygon, or one set of calipers can be used on several convex polygons simultaneously. We then show that these generalizations allow us to obtain simple O(n) algorithms for solving a variety of problems defined on convex polygons. Such problems include (1) finding the minimum-area rectangle enclosing a polygon, (2) computing the maximum distance between two polygons, (3) performing the vector-sum of two polygons, (4) merging polygons in a convex hull finding algorithms, and (5) finding the critical support lines between two polygons. Finding the critical support lines, in turn, leads to obtaining solutions to several additional problems concerned with visibility, collision, avoidance, range fitting, linear separability, and computing the Grenander distance between sets.",
"title": ""
},
{
"docid": "d094b75f0a1b7f40b39f02bb74397d71",
"text": "We propose a theory that relates difficulty of learning in deep architectures to culture and language. It is articulated around the following hypotheses: (1) learning in an individual human brain is hampered by the presence of effective local minima; (2) this optimization difficulty is particularly important when it comes to learning higher-level abstractions, i.e., concepts that cover a vast and highly-nonlinear span of sensory configurations; (3) such high-level abstractions are best represented in brains by the composition of many levels of representation, i.e., by deep architectures; (4) a human brain can learn such high-level abstractions if guided by the signals produced by other humans, which act as hints or indirect supervision for these high-level abstractions; and (5), language and the recombination and optimization of mental concepts provide an efficient evolutionary recombination operator, and this gives rise to rapid search in the space of communicable ideas that help humans build up better high-level internal representations of their world. These hypotheses put together imply that human culture and the evolution of ideas have been crucial to counter an optimization difficulty: this optimization difficulty would otherwise make it very difficult for human brains to capture high-level knowledge of the world. The theory is grounded in experimental observations of the difficulties of training deep artificial neural networks. Plausible consequences of this theory for the efficiency of cultural evolution are sketched.",
"title": ""
},
{
"docid": "74adf22dff08c0d914197d71fabe4938",
"text": "Modeling contact in multibody simulation is a difficult problem frequently characterized by numerically brittle algorithms, long running times, and inaccurate (with respect to theory) models. We present a comprehensive evaluation of four methods for contact modeling on seven benchmark scenarios in order to quantify the performance of these methods with respect to robustness and speed. We also assess the accuracy of these methods where possible. We conclude the paper with a prescriptive description in order to guide the user of multibody simulation.",
"title": ""
},
{
"docid": "d4269f7b6f2ace3b459668f4d6cb6861",
"text": "The ability to rise above the present environment and reflect upon the past, the future, and the minds of others is a fundamentally defining human feature. It has been proposed that these three self-referential processes involve a highly interconnected core set of brain structures known as the default mode network (DMN). The DMN appears to be active when individuals are engaged in stimulus-independent thought. This network is a likely candidate for supporting multiple processes, but this idea has not been tested directly. We used fMRI to examine brain activity during autobiographical remembering, prospection, and theory-of-mind reasoning. Using multivariate analyses, we found a common pattern of neural activation underlying all three processes in the DMN. In addition, autobiographical remembering and prospection engaged midline DMN structures to a greater degree and theory-of-mind reasoning engaged lateral DMN areas. A functional connectivity analysis revealed that activity of a critical node in the DMN, medial prefrontal cortex, was correlated with activity in other regions in the DMN during all three tasks. We conclude that the DMN supports common aspects of these cognitive behaviors involved in simulating an internalized experience.",
"title": ""
}
] |
scidocsrr
|
af58162c117a6972bbfda4da439f4f19
|
A large scale exploratory analysis of software vulnerability life cycles
|
[
{
"docid": "811c430ff9efd0f8a61ff40753f083d4",
"text": "The Waikato Environment for Knowledge Analysis (Weka) is a comprehensive suite of Java class libraries that implement many state-of-the-art machine learning and data mining algorithms. Weka is freely available on the World-Wide Web and accompanies a new text on data mining [1] which documents and fully explains all the algorithms it contains. Applications written using the Weka class libraries can be run on any computer with a Web browsing capability; this allows users to apply machine learning techniques to their own data regardless of computer platform.",
"title": ""
}
] |
[
{
"docid": "6970acb72318375a5af6aa03ad634f7e",
"text": "BACKGROUND\nMyopia is an important public health problem because it is common and is associated with increased risk for chorioretinal degeneration, retinal detachment, and other vision- threatening abnormalities. In animals, ocular elongation and myopia progression can be lessened with atropine treatment. This study provides information about progression of myopia and atropine therapy for myopia in humans.\n\n\nMETHODS\nA total of 214 residents of Olmsted County, Minnesota (118 girls and 96 boys, median age, 11 years; range 6 to 15 years) received atropine for myopia from 1967 through 1974. Control subjects were matched by age, sex, refractive error, and date of baseline examination to 194 of those receiving atropine. Duration of treatment with atropine ranged from 18 weeks to 11.5 years (median 3.5 years).\n\n\nRESULTS\nMedian followup from initial to last refraction in the atropine group (11.7 years) was similar to that in the control group (12.4 years). Photophobia and blurred vision were frequently reported, but no serious adverse effects were associated with atropine therapy. Mean myopia progression during atropine treatment adjusted for age and refractive error (0.05 diopters per year) was significantly less than that among control subjects (0.36 diopters per year)(P<.001). Final refractions standardized to the age of 20 years showed a greater mean level of myopia in the control group (3.78 diopters) than in the atropine group (2.79 diopters) (P<.001).\n\n\nCONCLUSIONS\nThe data support the view that atropine therapy is associated with decreased progression of myopia and that beneficial effects remain after treatment has been discontinued.",
"title": ""
},
{
"docid": "26d0e97bbb14bc52b8dbb3c03522ac38",
"text": "Intraocular injections of rhodamine and horseradish peroxidase in chameleon, labelled retrogradely neurons in the ventromedial tegmental region of the mesencephalon and the ventrolateral thalamus of the diencephalon. In both areas, staining was observed contralaterally to the injected eye. Labelling was occasionally observed in some rhombencephalic motor nuclei. These results indicate that chameleons, unlike other reptilian species, have two retinopetal nuclei.",
"title": ""
},
{
"docid": "dddec8d72a4ed68ee47c0cc7f4f31dbd",
"text": "Probabilistic topic modeling of text collections is a powerful tool for statistical text analysis. In this tutorial we introduce a novel non-Bayesian approach, called Additive Regularization of Topic Models. ARTM is free of redundant probabilistic assumptions and provides a simple inference for many combined and multi-objective topic models.",
"title": ""
},
{
"docid": "b7dcd24f098965ff757b7ce5f183662b",
"text": "We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.",
"title": ""
},
{
"docid": "18ada6a64572d11cf186e4497fd81f43",
"text": "The task of ranking is crucial in information retrieval. With the advent of the Big Data age, new challenges have arisen for the field. Deep neural architectures are capable of learning complex functions, and capture the underlying representation of the data more effectively. In this work, ranking is reduced to a classification problem and deep neural architectures are used for this task. A dynamic, pointwise approach is used to learn a ranking function, which outperforms the existing ranking algorithms. We introduce three architectures for the task, our primary objective being to identify architectures which produce good results, and to provide intuitions behind their usefulness. The inputs to the models are hand-crafted features provided in the datasets. The outputs are relevance levels. Further, we also explore the idea as to whether the semantic grouping of handcrafted features aids deep learning models in our task.",
"title": ""
},
{
"docid": "55749da1639911c33ba86a2d7ddae0d2",
"text": "Artificial intelligence (AI) tools, such as expert system, fuzzy logic, and neural network are expected to usher a new era in power electronics and motion control in the coming decades. Although these technologies have advanced significantly in recent years and have found wide applications, they have hardly touched the power electronics and mackine drives area. The paper describes these Ai tools and their application in the area of power electronics and motion control. The body of the paper is subdivided into three sections which describe, respectively, the principles and applications of expert system, fuzzy logic, and neural network. The theoretical portion of each topic is of direct relevance to the application of power electronics. The example applications in the paper are taken from the published literature. Hopefully, the readers will be able to formulate new applications from these examples.",
"title": ""
},
{
"docid": "0b70a4a44a26ff9218224727fbba823c",
"text": "Recently, DNN model compression based on network architecture design, e.g., SqueezeNet, attracted a lot attention. No accuracy drop on image classification is observed on these extremely compact networks, compared to well-known models. An emerging question, however, is whether these model compression techniques hurt DNNs learning ability other than classifying images on a single dataset. Our preliminary experiment shows that these compression methods could degrade domain adaptation (DA) ability, though the classification performance is preserved. Therefore, we propose a new compact network architecture and unsupervised DA method in this paper. The DNN is built on a new basic module Conv-M which provides more diverse feature extractors without significantly increasing parameters. The unified framework of our DA method will simultaneously learn invariance across domains, reduce divergence of feature representations, and adapt label prediction. Our DNN has 4.1M parameters, which is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN obtains GoogLeNet-level accuracy both on classification and DA, and our DA method slightly outperforms previous competitive ones. Put all together, our DA strategy based on our DNN achieves state-of-the-art on sixteen of total eighteen DA tasks on popular Office-31 and Office-Caltech datasets.",
"title": ""
},
{
"docid": "9ba3c67136d573c4a10b133a2391d8bc",
"text": "Modern text collections often contain large documents that span several subject areas. Such documents are problematic for relevance feedback since inappropriate terms can easi 1y be chosen. This study explores the highly effective approach of feeding back passages of large documents. A less-expensive method that discards long documents is also reviewed and found to be effective if there are enough relevant documents. A hybrid approach that feeds back short documents and passages of long documents may be the best compromise.",
"title": ""
},
{
"docid": "fd2abd6749eb7a85f3480ae9b4cbefa6",
"text": "We examine the current performance and future demands of interconnects to and on silicon chips. We compare electrical and optical interconnects and project the requirements for optoelectronic and optical devices if optics is to solve the major problems of interconnects for future high-performance silicon chips. Optics has potential benefits in interconnect density, energy, and timing. The necessity of low interconnect energy imposes low limits especially on the energy of the optical output devices, with a ~ 10 fJ/bit device energy target emerging. Some optical modulators and radical laser approaches may meet this requirement. Low (e.g., a few femtofarads or less) photodetector capacitance is important. Very compact wavelength splitters are essential for connecting the information to fibers. Dense waveguides are necessary on-chip or on boards for guided wave optical approaches, especially if very high clock rates or dense wavelength-division multiplexing (WDM) is to be avoided. Free-space optics potentially can handle the necessary bandwidths even without fast clocks or WDM. With such technology, however, optics may enable the continued scaling of interconnect capacity required by future chips.",
"title": ""
},
{
"docid": "545509f9e3aa65921a7d6faa41247ae6",
"text": "BACKGROUND\nPenicillins inhibit cell wall synthesis; therefore, Helicobacter pylori must be dividing for this class of antibiotics to be effective in eradication therapy. Identifying growth responses to varying medium pH may allow design of more effective treatment regimens.\n\n\nAIM\nTo determine the effects of acidity on bacterial growth and the bactericidal efficacy of ampicillin.\n\n\nMETHODS\nH. pylori were incubated in dialysis chambers suspended in 1.5-L of media at various pHs with 5 mM urea, with or without ampicillin, for 4, 8 or 16 h, thus mimicking unbuffered gastric juice. Changes in gene expression, viability and survival were determined.\n\n\nRESULTS\nAt pH 3.0, but not at pH 4.5 or 7.4, there was decreased expression of ~400 genes, including many cell envelope biosynthesis, cell division and penicillin-binding protein genes. Ampicillin was bactericidal at pH 4.5 and 7.4, but not at pH 3.0.\n\n\nCONCLUSIONS\nAmpicillin is bactericidal at pH 4.5 and 7.4, but not at pH 3.0, due to decreased expression of cell envelope and division genes with loss of cell division at pH 3.0. Therefore, at pH 3.0, the likely pH at the gastric surface, the bacteria are nondividing and persist with ampicillin treatment. A more effective inhibitor of acid secretion that maintains gastric pH near neutrality for 24 h/day should enhance the efficacy of amoxicillin, improving triple therapy and likely even allowing dual amoxicillin-based therapy for H. pylori eradication.",
"title": ""
},
{
"docid": "e2b8dd31dad42e82509a8df6cf21df11",
"text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.",
"title": ""
},
{
"docid": "c19863ef5fa4979f288763837e887a7c",
"text": "Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced it a top priority to alternative paradigms. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where roughly speaking, the idea is to enforce “one vote per unit of stake”. Although the community have rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensible ingredient of a distributed consensus protocol, particularly one that is to underly a high-value cryptocurrency system. In this work, we seek to address the following basic questions: • What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application? • Can we design a provably secure protocol that satisfies these requirements? To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.",
"title": ""
},
{
"docid": "1f7454de77b2f3f489c12a8e836ceb43",
"text": "Pornography use among emerging adults in the USA has increased in recent decades, as has the acceptance of such consumption. While previous research has linked pornography use to both positive and negative outcomes in emerging adult populations, few studies have investigated how attitudes toward pornography may alter these associations, or how examining pornography use together with other sexual behaviours may offer unique insights into the outcomes associated with pornography use. Using a sample of 792 emerging adults, the present study explored how the combined examination of pornography use, acceptance, and sexual behaviour within a relationship might offer insight into emerging adults' development. Results suggested clear gender differences in both pornography use and acceptance patterns. High male pornography use tended to be associated with high engagement in sex within a relationship and was associated with elevated risk-taking behaviours. High female pornography use was not associated with engagement in sexual behaviours within a relationship and was general associated with negative mental health outcomes.",
"title": ""
},
{
"docid": "f3e63f3fb0ce0e74697e0a74867d9671",
"text": "Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with CNN. Although single task learning on steering angles has reported good performances, the steering angle alone is not sufficient for vehicle control. In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.",
"title": ""
},
{
"docid": "5b2fbfe1e9ceb9cb9e969df992ea1271",
"text": "Distributed denial of service (DDoS) attacks continues to grow as a threat to organizations worldwide. From the first known attack in 1999 to the highly publicized Operation Ababil, the DDoS attacks have a history of flooding the victim network with an enormous number of packets, hence exhausting the resources and preventing the legitimate users to access them. After having standard DDoS defense mechanism, still attackers are able to launch an attack. These inadequate defense mechanisms need to be improved and integrated with other solutions. The purpose of this paper is to study the characteristics of DDoS attacks, various models involved in attacks and to provide a timeline of defense mechanism with their improvements to combat DDoS attacks. In addition to this, a novel scheme is proposed to detect DDoS attack efficiently by using MapReduce programming model.",
"title": ""
},
{
"docid": "912a05d1ee733d85d3dbe6b63c986a44",
"text": "Keyphrases efficiently summarize a document’s content and are used in various document processing and retrieval tasks. Several unsupervised techniques and classifiers exist for extracting keyphrases from text documents. Most of these methods operate at a phrase-level and rely on part-of-speech (POS) filters for candidate phrase generation. In addition, they do not directly handle keyphrases of varying lengths. We overcome these modeling shortcomings by addressing keyphrase extraction as asequential labelingtask in this paper. We explore a basic set of features commonly used in NLP tasks as well as predictions from various unsupervised methods to train our taggers. In addition to a more natural modeling for the keyphrase extraction problem, we show that tagging models yield significant performance benefits over existing stateof-the-art extraction methods.",
"title": ""
},
{
"docid": "db7426a1896920e0d2e3342d2df96401",
"text": "Nasal obstruction due to weakening of the nasal sidewall is a very common patient complaint. The conchal cartilage butterfly graft is a proven technique for the correction of nasal valve collapse. It allows for excellent functional results, and with experience and attention to technical detail, it may also provide excellent cosmetic results. While this procedure is most useful for restoring form and function in cases of secondary rhinoplasty following the reduction of nasal support structures, we have found it to be a very powerful and satisfying technique in primary rhinoplasty as well. This article aims to describe the butterfly graft, discuss its history, and detail the technical considerations which we have found useful.",
"title": ""
},
{
"docid": "c8a6f20bf8daded62ee23ea2615c8dc0",
"text": "In developing countries, fruit and vegetable juices sold by street vendors are widely consumed by millions of people. These juices provide a source of readily available and affordable source of nutrients to many sectors of the population, including the urban poor. Unpasteurized juices are preferred by the consumers because of the “fresh flavor” attributes and hence, in recent times, their demand has increased. They are simply prepared by extracting, usually by mechanical means, the liquid and pulp of mature fruit and vegetables. The final product is an unfermented, clouded, untreated juice, ready for consumption. Pathogenic organisms can enter fruits and vegetables through damaged surfaces, such as punctures, wounds, cuts and splits that occur during growing or harvesting. Contamination from raw materials and equipments, additional processing conditions, improper handling, prevalence of unhygienic conditions contribute substantially to the entry of bacterial pathogens in juices prepared from these fruits or vegetables (Victorian Government Department of Human Services 2005; Oliveira et al., 2006; Nicolas et al., 2007). In countries, where street food vending is prevalent, there is commonly a lack of information on the incidence of food borne diseases related to the street vended foods. However, microbial studies on such foods in American, Asian and African countries have revealed increased bacterial pathogens in the food. There have been documented outbreaks of illnesses in humans associated with the consumption of unpasteurized fruit and vegetable juices and fresh produce. A report published by Victorian Government Department of Abstract: Fresh squeezed juices of sugarcane, lime and carrot sold by street vendors in Mumbai city were analyzed for their microbial contents during the months of June 2007 to September 2007. The total viable counts of all 30 samples were approximately log 6.5 cfu/100ml with significant load of coliforms, faecal coliforms, Vibrio and Staphylococcal counts. Qualitative counts showed the presence of coagulase positive S.aureus in 5 samples of sugarcane and 2 samples of carrot juice. Almost 70% of the ice samples collected from street vendors showed high microbial load ranging from log 58.5. Our results demonstrate the non hygienic quality of three most popular types of street vended fruit juices and ice used for cooling of juices suggesting the urgent need for government participation in developing suitable intervention measures to improve microbial quality of juices.",
"title": ""
},
{
"docid": "9cc8d5f395a11ceaabdf9b2e57aa2bc9",
"text": "This paper proposes a Model Predictive Control methodology for a non-inverting Buck-Boost DC-DC converter for its efficient control. PID and MPC control strategies are simulated for the control of Buck-Boost converter and its performance is compared using MATLAB Simulink model. MPC shows better performance compared to PID controller. Output follows reference voltage more accurately showing that MPC can handle the dynamics of the system efficiently. The proposed methodology can be used for constant voltage applications. The control strategy can be implemented using a Field Programmable Gate Array (FPGA).",
"title": ""
}
] |
scidocsrr
|
ad3672d06889da394801de19e8b1a963
|
Ant Colony Optimization: A Swarm Intelligence based Technique
|
[
{
"docid": "0e4e965354dfd8d588bd38a2e1bb5569",
"text": "The growing complexity of real-world problems has motivated computer scientists to search for efficient problem-solving methods. Evolutionary computation and swarm intelligence meta-heuristics are outstanding examples that nature has been an unending source of inspiration. The behaviour of bees, bacteria, glow-worms, fireflies, slime moulds, cockroaches, mosquitoes and other organisms have inspired swarm intelligence researchers to devise new optimisation algorithms. This tutorial highlights the most recent nature-based inspirations as metaphors for swarm intelligence meta-heuristics. We describe the biological behaviours from which a number of computational algorithms were developed. Also, the most recent and important applications and the main features of such meta-heuristics are reported.",
"title": ""
},
{
"docid": "d90b68b84294d0a56d71b3c5b1a5eeb7",
"text": "Nature-inspired algorithms are among the most powerful algorithms for optimization. This paper intends to provide a detailed description of a new Firefly Algorithm (FA) for multimodal optimization applications. We will compare the proposed firefly algorithm with other metaheuristic algorithms such as particle swarm optimization (PSO). Simulations and results indicate that the proposed firefly algorithm is superior to existing metaheuristic algorithms. Finally we will discuss its applications and implications for further research.",
"title": ""
}
] |
[
{
"docid": "2f7a0ab1c7a3ae17ef27d2aa639c39b4",
"text": "Evolutionary algorithms are commonly used to create high-performing strategies or agents for computer games. In this paper, we instead choose to evolve the racing tracks in a car racing game. An evolvable track representation is devised, and a multiobjective evolutionary algorithm maximises the entertainment value of the track relative to a particular human player. This requires a way to create accurate models of players' driving styles, as well as a tentative definition of when a racing track is fun, both of which are provided. We believe this approach opens up interesting new research questions and is potentially applicable to commercial racing games.",
"title": ""
},
{
"docid": "c52d522451b4ebd1228c9c704ecf1ae9",
"text": "This paper describes an algorithm for Successive Approximation Register (SAR) ADCs with overlapping steps that allow comparison decision errors (due to, such as DAC incomplete settling) to be digitally corrected. We generalize this non-binary search algorithm, and clarify which decision errors it can digitally correct. This algorithm requires more SAR ADC conversion steps than a binary search algorithm, but we show that the sampling speed of an SAR ADC using this algorithm can be faster than that of a conventional binary-search SAR ADC — because the latter must wait for the settling time of the DAC inside the SAR ADC. key words: SAR ADC, digital error correction, non-binary, redundancy",
"title": ""
},
{
"docid": "5dac4a5d6adcb75742344268bb717e11",
"text": "System logs are widely used in various tasks of software system management. It is crucial to avoid logging too little or too much. To achieve so, developers need to make informed decisions on where to log and what to log in their logging practices during development. However, there exists no work on studying such logging practices in industry or helping developers make informed decisions. To fill this significant gap, in this paper, we systematically study the logging practices of developers in industry, with focus on where developers log. We obtain six valuable findings by conducting source code analysis on two large industrial systems (2.5M and 10.4M LOC, respectively) at Microsoft. We further validate these findings via a questionnaire survey with 54 experienced developers in Microsoft. In addition, our study demonstrates the high accuracy of up to 90% F-Score in predicting where to log.",
"title": ""
},
{
"docid": "d8190669434b167500312091d1a4bf30",
"text": "Path analysis was used to test the predictive and mediational role of self-efficacy beliefs in mathematical problem solving. Results revealed that math self-efficacy was more predictive of problem solving than was math self-concept, perceived usefulness of mathematics, prior experience with mathematics, or gender (N = 350). Self-efficacy also mediated the effect of gender and prior experience on self-concept, perceived usefulness, and problem solving. Gender and prior experience influenced self-concept, perceived usefulness, and problem solving largely through the mediational role of self-efficacy. Men had higher performance, self-efficacy, and self-concept and lower anxiety, but these differences were due largely to the influence of self-efficacy, for gender had a direct effect only on self-efficacy and a prior experience variable. Results support the hypothesized role of self-efficacy in A. Bandura's (1986) social cognitive theory.",
"title": ""
},
{
"docid": "6224f4f3541e9cd340498e92a380ad3f",
"text": "A personal story: From philosophy to software.",
"title": ""
},
{
"docid": "57bc24056a4eb170ea4db546d5cdaaab",
"text": "In this paper, we propose a novel approach to incorporate structure knowledge into Convolutional Neural Networks (CNNs) for articulated human pose estimation from a single still image. Recent research on pose estimation adopt CNNs as base blocks to combine with other graphical models. Different from existing methods using features from CNNs to model the tree structure, we directly use the structure pose prior to guide the learning of CNN. First, we introduce a deep CNN with effective receptive fields which capture the holistic context of the whole image. Second, limb loss is used as intermediate supervision of CNN to learn the correlations of joints. Both parts and joints features are extracted in the middle of neural network and then are used to guide the following network learning. The proposed framework can exploit an implicit structure model of human body. Only using one stage and without any complex post processing, our method achieves state-of-art results on both FLIC and LSP benchmarks.",
"title": ""
},
{
"docid": "e7473169711de31dc063ace07ec799f9",
"text": "Two major tasks in spoken language understanding (SLU) are intent determination (ID) and slot filling (SF). Recurrent neural networks (RNNs) have been proved effective in SF, while there is no prior work using RNNs in ID. Based on the idea that the intent and semantic slots of a sentence are correlative, we propose a joint model for both tasks. Gated recurrent unit (GRU) is used to learn the representation of each time step, by which the label of each slot is predicted. Meanwhile, a max-pooling layer is employed to capture global features of a sentence for intent classification. The representations are shared by two tasks and the model is trained by a united loss function. We conduct experiments on two datasets, and the experimental results demonstrate that our model outperforms the state-of-theart approaches on both tasks.",
"title": ""
},
{
"docid": "8d9246e7780770b5f7de9ef0adbab3e6",
"text": "This paper proposes a self-adaption Kalman observer (SAKO) used in a permanent-magnet synchronous motor (PMSM) servo system. The proposed SAKO can make up measurement noise of the absolute encoder with limited resolution ratio and avoid differentiating process and filter delay of the traditional speed measuring methods. To be different from the traditional Kalman observer, the proposed observer updates the gain matrix by calculating the measurement noise at the current time. The variable gain matrix is used to estimate and correct the observed position, speed, and load torque to solve the problem that the motor speed calculated by the traditional methods is prone to large speed error and time delay when PMSM runs at low speeds. The state variables observed by the proposed observer are used as the speed feedback signals and compensation signal of the load torque disturbance in PMSM servo system. The simulations and experiments prove that the SAKO can observe speed and load torque precisely and timely and that the feedforward and feedback control system of PMSM can improve the speed tracking ability.",
"title": ""
},
{
"docid": "7b6d2d261675aa83f53c4e3c5523a81b",
"text": "(IV) Intravenous therapy is one of the most commonly performed procedures in hospitalized patients yet phlebitis affects 27% to 70% of all patients receiving IV therapy. The incidence of phlebitis has proved to be a menace in effective care of surgical patients, delaying their recovery and increasing duration of hospital stay and cost. The recommendations for reducing its incidence and severity have been varied and of questionable efficacy. The current study was undertaken to evaluate whether elective change of IV cannula at fixed intervals can have any impact on incidence or severity of phlebitis in surgical patients. All patients admitted to the Department of Surgery, SMIMS undergoing IV cannula insertion, fulfilling the selection criteria and willing to participate in the study, were segregated into two random groups prospectively: Group A wherein cannula was changed electively after 24 hours into a fresh vein preferably on the other upper limb and Group B wherein IV cannula was changed only on development of phlebitis or leak i.e. need-based change. The material/brand and protocol for insertion of IV cannula were standardised for all patients, including skin preparation, insertion, fixation and removal. After cannulation, assessment was made after 6 hours, 12 hours and every 24 hours thereafter at all venepuncture sites. VIP and VAS scales were used to record phlebitis and pain respectively. Upon analysis, though there was a lower VIP score in group A compared to group B (0.89 vs. 1.32), this difference was not statistically significant (p-value = 0.277). Furthermore, the differences in pain, as assessed by VAS, at the site of puncture and along the vein were statistically insignificant (p-value > 0.05). Our results are in contradiction to few other studies which recommend a policy of routine change of cannula. Further we advocate a close and thorough monitoring of the venepuncture site and the length of vein immediately distal to the puncture site, as well as a meticulous standardized protocol for IV access.",
"title": ""
},
{
"docid": "2582b0fffad677d3f0ecf11b92d9702d",
"text": "This study explores teenage girls' narrations of the relationship between self-presentation and peer comparison on social media in the context of beauty. Social media provide new platforms that manifest media and peer influences on teenage girls' understanding of beauty towards an idealized notion. Through 24 in-depth interviews, this study examines secondary school girls' self-presentation and peer comparison behaviors on social network sites where the girls posted self-portrait photographs or “selfies” and collected peer feedback in the forms of “likes,” “followers,” and comments. Results of thematic analysis reveal a gap between teenage girls' self-beliefs and perceived peer standards of beauty. Feelings of low self-esteem and insecurity underpinned their efforts in edited self-presentation and quest for peer recognition. Peers played multiple roles that included imaginary audiences, judges, vicarious learning sources, and comparison targets in shaping teenage girls' perceptions and presentation of beauty. Findings from this study reveal the struggles that teenage girls face today and provide insights for future investigations and interventions pertinent to teenage girls’ presentation and evaluation of self on",
"title": ""
},
{
"docid": "fbb5a86992438d630585462f8626e13f",
"text": "As a basic task in computer vision, semantic segmentation can provide fundamental information for object detection and instance segmentation to help the artificial intelligence better understand real world. Since the proposal of fully convolutional neural network (FCNN), it has been widely used in semantic segmentation because of its high accuracy of pixel-wise classification as well as high precision of localization. In this paper, we apply several famous FCNN to brain tumor segmentation, making comparisons and adjusting network architectures to achieve better performance measured by metrics such as precision, recall, mean of intersection of union (mIoU) and dice score coefficient (DSC). The adjustments to the classic FCNN include adding more connections between convolutional layers, enlarging decoders after up sample layers and changing the way shallower layers’ information is reused. Besides the structure modification, we also propose a new classifier with a hierarchical dice loss. Inspired by the containing relationship between classes, the loss function converts multiple classification to multiple binary classification in order to counteract the negative effect caused by imbalance data set. Massive experiments have been done on the training set and testing set in order to assess our refined fully convolutional neural networks and new types of loss function. Competitive figures prove they are more effective than their predecessors.",
"title": ""
},
{
"docid": "2e13b95d6892f6ce00c464e456a6e6a6",
"text": "The development of such system that automatically recognizes the input speech and translates in another language like Sanskrit is a challenging task. Sanskrit language is much more conjured language. The purpose of this paper is to explain a system which convert the English Speech into English text and then translate that English text into Sanskrit text and again convert that into speech. This system falls into the category of Speech-to-Speech translation. It unifies the isolated words class under the Speech Recognition type, traditional dictionary rule based machine translation approach and text to speech synthesizer. So basically it is classifies into three areas: Speech Recognition, Machine Translation and Speech Synthesis. This system matches tokens [1] from database to differentiate Subject, Object, Verb, Preposition, Adjective, and Adverb. This paper presents approach for translating well-structured English sentences into Sanskrit sentences. Since Sanskrit is free ordering language (or syntax free language) or we can say its meaning won't be change even if the order of words changes.",
"title": ""
},
{
"docid": "4e75d06e1e23cf8efdcafd2f59a0313f",
"text": "The International Solid-State Circuits Conference (ISSCC) is the flagship conference of the IEEE Solid-State Circuits Society. This year, for the 65th ISSCC, the theme is \"Silicon Engineering a Social World.\" Continued advances in solid-state circuits and systems have brought ever-more powerful communication and computational capabilities into mobile form factors. Such ubiquitous smart devices lie at the heart of a revolution shaping how we connect, collaborate, build relationships, and share information. These social technologies allow people to maintain connections and support networks not otherwise possible; they provide the ability to access information instantaneously and from any location, thereby helping to shape world events and culture, empowering citizens of all nations, and creating social networks that allow worldwide communities to develop and form bonds based on common interests.",
"title": ""
},
{
"docid": "b6e6963d4e7122dd2d852b2300e50687",
"text": "User analysis is a crucial aspect of user-centered systems design, yet Human-Computer Interaction (HCI) has yet to formulate reliable and valid characterizations of users beyond gross distinctions based on task and experience. Individual differences research from mainstream psychology has identified a stable set of characteristics that would appear to offer potential application in the HCI arena. Furthermore, in its evolution over the last 100 years, research on individual differences has faced many of the problems of theoretical status and applicability that is common to HCI. In the present paper the relationship between work in cognitive and differential psychology and current analyses of users in HCI is examined. It is concluded that HCI could gain significant predictive power if individual differences research was related to the analysis of users in contemporary systems design.",
"title": ""
},
{
"docid": "ad0ea5bd92d87bd055ec4321aa502987",
"text": "Context: Although metamodelling is generally accepted as important for our understanding of software and systems development, arguments about the validity and utility of ontological versus linguistic meta-",
"title": ""
},
{
"docid": "951bac2da49c520a84f2c24e0f8b01e4",
"text": "The electromagnetic field (EMF) exposure to millimeter-wave (mmWave) phased arrays in mobile devices for 5G communication is analyzed in this letter. Unlike the current cellular band, the EMF exposure in the mmWave band (10-200 GHz) is evaluated by the free-space power density instead of the specific absorption rate. However, current regulations have not been well defined for the mobile device application. In this letter, we present the power density property of phased arrays in mobile devices at 15 and 28 GHz. Uniform linear patch arrays are used, and different array configurations are compared. Suggestions for the power density evaluation are also provided.",
"title": ""
},
{
"docid": "3a1b9a47a7fe51ab19f53ae6aaa18d6d",
"text": "The overall context proposed in this paper is part of our long-standing goal to contribute to a group of community that suffers from Autism Spectrum Disorder (ASD); a lifelong developmental disability. The objective of this paper is to present the development of our pilot experiment protocol where children with ASD will be exposed to the humanoid robot NAO. This fully programmable humanoid offers an ideal research platform for human-robot interaction (HRI). This study serves as the platform for fundamental investigation to observe the initial response and behavior of the children in the said environment. The system utilizes external cameras, besides the robot's own visual system. Anticipated results are the real initial response and reaction of ASD children during the HRI with the humanoid robot. This shall leads to adaptation of new procedures in ASD therapy based on HRI, especially for a non-technical-expert person to be involved in the robotics intervention during the therapy session.",
"title": ""
},
{
"docid": "5af5936ec0d889ab19bd8c6c8e8ebc35",
"text": "Development in the wireless communication systems is the evolving field of research in today’s world. The demand of high data rate, low latency at the minimum cost by the user requires many changes in the hardware organization. The use of digital modulation techniques like OFDM assures the reliability of communication in addition to providing flexibility and robustness. Modifications in the hardware structure can be replaced by the change in software only which gives birth to Software Define Radio (SDR): a radio which is more flexible as compared to conventional radio and can perform signal processing at the minimum cost. GNU Radio with the help of Universal Software Peripheral Radio (USRP) provides flexible and the cost effective SDR platform for the purpose of real time video transmission. The results given in this paper are taken from the experiment performed on USRP-1 along with the GNU Radio version 3.2.2.",
"title": ""
},
{
"docid": "950759f015897a7e3e4948f736788c76",
"text": "The characterization of complex air traffic situations is an important issue in air traffic management (ATM). Within the current ground-based ATM system, complexity metrics have been introduced with the goal of evaluating the difficulty experienced by air traffic controllers in guaranteeing the appropriate aircraft separation in a sector. The rapid increase in air travel demand calls for new generation ATM systems that can safely and efficiently handle higher levels of traffic. To this purpose, part of the responsibility for separation maintenance will be delegated to the aircraft, and trajectory management functions will be further automated and distributed. The evolution toward an autonomous aircraft framework envisages new tasks where assessing complexity may be valuable and requires a whole new perspective in the definition of suitable complexity metrics. This paper presents a critical analysis of the existing approaches for modeling and predicting air traffic complexity, examining their portability to autonomous ATM systems. Possible applications and related requirements will be discussed.",
"title": ""
},
{
"docid": "d2a1ecb8ad28ed5ba75460827341f741",
"text": "Most word representation methods assume that each word owns a single semantic vector. This is usually problematic because lexical ambiguity is ubiquitous, which is also the problem to be resolved by word sense disambiguation. In this paper, we present a unified model for joint word sense representation and disambiguation, which will assign distinct representations for each word sense.1 The basic idea is that both word sense representation (WSR) and word sense disambiguation (WSD) will benefit from each other: (1) highquality WSR will capture rich information about words and senses, which should be helpful for WSD, and (2) high-quality WSD will provide reliable disambiguated corpora for learning better sense representations. Experimental results show that, our model improves the performance of contextual word similarity compared to existing WSR methods, outperforms stateof-the-art supervised methods on domainspecific WSD, and achieves competitive performance on coarse-grained all-words WSD.",
"title": ""
}
] |
scidocsrr
|
2962534f7f5140e539d0099ed848a6b7
|
The Effects of Objectifying Hip-Hop Lyrics on Female Listeners
|
[
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
}
] |
[
{
"docid": "bed50a61cb79e20ff13243a9ddf8151c",
"text": "Conventional copy-move forgery detection methods mostly make use of hand-crafted features to conduct feature extraction and patch matching. However, the discriminative capability and the invariance to particular transformations of hand-crafted features are not good enough, which imposes restrictions on the performance of copy-move forgery detection. To solve this problem, we propose to utilize Convolutional Kernel Network to conduct copy-move forgery detection. Convolutional Kernel Network is a kind of data-driven local descriptor with the deep convolutional architecture. It can achieve competitive performance for its excellent discriminative capability. To well adapt to the condition of copy-move forgery detection, three significant improvements are made: First of all, our Convolutional Kernel Network is reconstructed for GPU. The GPU-based reconstruction results in high efficiency and makes it possible to apply to thousands of patches matching in copy-move forgery detection. Second, a segmentation-based keypoint distribution strategy is proposed to generate homogeneous distributed keypoints. Last but not least, an adaptive oversegmentation method is adopted. Experiments on the publicly available datasets are conducted to testify the state-of-the-art performance of the proposed method.",
"title": ""
},
{
"docid": "bd8b7b892060d8099217ef8553c79b71",
"text": "Purpose: The purpose of this study is to examine the barriers that SMEs are experiencing when confronted with the need to implement e-commerce to sustain their competitiveness. E-commerce is the medium that leads to economic growth of a country. Small and Medium Enterprises (SMEs) play an important role in contributing to the Gross Domestic Product and reducing the unemployment. However, there are some specific factors that inhibit the implementation of e-commerce among SMEs. Design/methodology/approach: A questionnaire approach was employed in this study and 160 questionnaires have been distributed but only 91usable questionnaires have been collected from SMEs. Literature found that main barriers to e-commerce adoption among SMEs are organizational barriers, financial barriers, technical barriers, legal and regulatory barriers, and behavioral barriers. Findings: Of this study showed that all these barriers carried an average influence on ecommerce adoption. The most important factor barriers of e-commerce adoption are legal and regulatory barriers followed by technical barriers, whereas lack of internet security is the highest barrier factor that inhibits the implementation of e-commerce in SMEs followed by the requirement to undertake additional training and skill development. Practical implications: This paper is useful for the management of SMEs in understanding and gaining insights into the real and potential barriers to e-commerce adoption. This can help the organization to design strategy in taking up barriers tactfully to its advantage.",
"title": ""
},
{
"docid": "542d698fbc97e07809c23cbef5bcb799",
"text": "Liver fibrosis is a major cause of morbidity and mortality worldwide due to chronic viral hepatitis and, more recently, from fatty liver disease associated with obesity. Hepatic stellate cell activation represents a critical event in fibrosis because these cells become the primary source of extracellular matrix in liver upon injury. Use of cell-culture and animal models has expanded our understanding of the mechanisms underlying stellate cell activation and has shed new light on genetic regulation, the contribution of immune signaling, and the potential reversibility of the disease. As pathways of fibrogenesis are increasingly clarified, the key challenge will be translating new advances into the development of antifibrotic therapies for patients with chronic liver disease.",
"title": ""
},
{
"docid": "782341e7a40a95da2a430faae977dea0",
"text": "Current Web services standards lack the means for expressing a service's nonfunctional attributes - namely, its quality of service. QoS can be objective (encompassing reliability, availability, and request-to-response time) or subjective (focusing on user experience). QoS attributes are key to dynamically selecting the services that best meet user needs. This article addresses dynamic service selection via an agent framework coupled with a QoS ontology. With this approach, participants can collaborate to determine each other's service quality and trustworthiness.",
"title": ""
},
{
"docid": "ca9f1a955ad033e43d25533d37f50b88",
"text": "Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte collected during the Russia-Ukraine crisis in 2014 – 2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.",
"title": ""
},
{
"docid": "3e2e121e744d71818a65a35d6c3231c9",
"text": "Chlamydia trachomatis is a gram-negative bacterium that infects the columnar epithelium of the cervix, urethra, and rectum, as well as nongenital sites such as the lungs and eyes. The bacterium is the cause of the most frequently reported sexually transmitted disease in the United States, which is responsible for more than 1 million infections annually. Most persons with this infection are asymptomatic. Untreated infection can result in serious complications such as pelvic inflammatory disease, infertility, and ectopic pregnancy in women, and epididymitis and orchitis in men. Men and women can experience chlamydia-induced reactive arthritis. Treatment of uncomplicated cases should include azithromycin or doxycycline. Screening is recommended in all women younger than 25 years, in all pregnant women, and in women who are at increased risk of infection. Screening is not currently recommended in men. In neonates and infants, the bacterium can cause conjunctivitis and pneumonia. Adults may also experience conjunctivitis caused by chlamydia. Trachoma is a recurrent ocular infection caused by chlamydia and is endemic in the developing world.",
"title": ""
},
{
"docid": "8dc400d9745983da1e91f0cec70606c9",
"text": "Aspect-Oriented Programming (AOP) is intended to ease situations that involve many kinds of code tangling. This paper reports on a study to investigate AOP's ability to ease tangling related to exception detection and handling. We took an existing framework written in Java™, the JWAM framework, and partially reengineered its exception detection and handling aspects using AspectJ™, an aspect-oriented programming extension to Java.\nWe found that AspectJ supported implementations that drastically reduced the portion of the code related to exception detection and handling. In one scenario, we were able to reduce that code by a factor of 4. We also found that, with respect to the original implementation in plain Java, AspectJ provided better support for different configurations of exceptional behaviors, more tolerance for changes in the specifications of exceptional behaviors, better support for incremental development, better reuse, automatic enforcement of contracts in applications that use the framework, and cleaner program texts. We also found some weaknesses of AspectJ that should be addressed in the future.",
"title": ""
},
{
"docid": "4d8335fa722e1851536182d5657ab738",
"text": "Location-aware mobile applications have become extremely common, with a recent wave of mobile dating applications that provide relatively sparse profiles to connect nearby individuals who may not know each other for immediate social or sexual encounters. These applications have become particularly popular among men who have sex with men (MSM) and raise a range of questions about self-presentation, visibility to others, and impression formation, as traditional geographic boundaries and social circles are crossed. In this paper we address two key questions around how people manage potentially stigmatized identities in using these apps and what types of information they use to self-present in the absence of a detailed profile or rich social cues. To do so, we draw on profile data observed in twelve locations on Grindr, a location-aware social application for MSM. Results suggest clear use of language to manage stigma associated with casual sex, and that users draw regularly on location information and other descriptive language to present concisely to others nearby.",
"title": ""
},
{
"docid": "bf5535b2208be9f1cd204e1a77dec02e",
"text": "iii This work is dedicated to my beloved parents, for all the sacrifices they have made to ensure that I obtain the best education possible. Their unconditional love and words of encouragement has really been a tonic to me. Looking back to the dark days and tough times I have been through, my parents has always given me the strength to persevere. Then I dedicated to my brother and sisters. May Allah be with them every step of the way, and richly bless them in everything they do iv ACKNOWLEDGEMENTS First and foremost, I would like to give thanks to the Almighty Allah for He made my dream comes true by giving me strength and good health to complete this study. Without Him, all my efforts would have been fruitless but because He is the only one who knows our fate, He made it possible for me to pursue my studies at UTM. Special thanks go to my supervisor Dr. Jafri Bin Din, for allowing me to carry out this study under his supervision, and for his constructive criticism and support, which has enabled me to complete this study on time. During the past one year of my research under his supervision, I have known Dr. Jafri Bin Din as a sympathetic and principle-centered person. He thought me how to be a challenger, how to set my benchmark ever higher and how to look for solutions to problems rather than focus on the problems. I learned to believe in myself, my work and my future. Thank you Dr. Jafri Bin Din, for your love, emotional and intellectual support as well as your never-ending faith in me. Last but not least, I am forever indebted to all my family members for their constant support throughout the entire duration of this project. Their words of encouragement never failed to keep me going even through the hardest of times and it is here that I express my sincerest gratitude to them. ABSTRACT In parallel with terrestrial and satellite wireless networks, a new alternative based on platforms located in the stratosphere has recently introduced, known as High Altitude Platforms (HAPS). HAPS are either airships or aircraft positioned between 17 and 22.5 km above the earth surface. It has capability to deliver a wide spectrum of applications to both mobile and fixed users over a broad coverage area. Wideband code division multiple access (WCDMA) has …",
"title": ""
},
{
"docid": "7208a2b257c7ba7122fd2e278dd1bf4a",
"text": "Abstract—This paper shows in detail the mathematical model of direct and inverse kinematics for a robot manipulator (welding type) with four degrees of freedom. Using the D-H parameters, screw theory, numerical, geometric and interpolation methods, the theoretical and practical values of the position of robot were determined using an optimized algorithm for inverse kinematics obtaining the values of the particular joints in order to determine the virtual paths in a relatively short time.",
"title": ""
},
{
"docid": "483c95f5f42388409dceb8cdb3792d19",
"text": "The world of e-commerce is reshaping marketing strategies based on the analysis of e-commerce data. Huge amounts of data are being collecting and can be analyzed for some discoveries that may be used as guidance for people sharing same interests but lacking experience. Indeed, recommendation systems are becoming an essential business strategy tool from just a novelty. Many large e-commerce web sites are already encapsulating recommendation systems to provide a customer friendly environment by helping customers in their decision-making process. A recommendation system learns from a customer behavior patterns and recommend the most valuable from available alternative choices. In this paper, we developed a two-stage algorithm using self-organizing map (SOM) and fuzzy k-means with an improved distance function to classify users into clusters. This will lead to have in the same cluster users who mostly share common interests. Results from the combination of SOM and fuzzy K-means revealed better accuracy in identifying user related classes or clusters. We validated our results using various datasets to check the accuracy of the employed clustering approach. The generated groups of users form the domain for transactional datasets to find most valuable products for customers.",
"title": ""
},
{
"docid": "8898565b2a081af8374af7b5d25c52ec",
"text": "Traditionally, prejudice has been conceptualized as simple animosity. The stereotype content model (SCM) shows that some prejudice is worse. The SCM previously demonstrated separate stereotype dimensions of warmth (low-high) and competence (low-high), identifying four distinct out-group clusters. The SCM predicts that only extreme out-groups, groups that are both stereotypically hostile and stereotypically incompetent (low warmth, low competence), such as addicts and the homeless, will be dehumanized. Prior studies show that the medial prefrontal cortex (mPFC) is necessary for social cognition. Functional magnetic resonance imaging provided data for examining brain activations in 10 participants viewing 48 photographs of social groups and 12 participants viewing objects; each picture dependably represented one SCM quadrant. Analyses revealed mPFC activation to all social groups except extreme (low-low) out-groups, who especially activated insula and amygdala, a pattern consistent with disgust, the emotion predicted by the SCM. No objects, though rated with the same emotions, activated the mPFC. This neural evidence supports the prediction that extreme out-groups may be perceived as less than human, or dehumanized.",
"title": ""
},
{
"docid": "411258bffa65a0c2b398e44a20506dec",
"text": "A hybrid neural network-first principles modeling scheme is developed and used to model a fedbatch bioreactor. The hybrid model combines a partial first principles model, which incorporates the available prior knowledge about the process being modeled, with a neural network which serves as an estimator of unmeasuredprocess parameters that are difficult to model from first principles. This hybrid model has better properties than standard “black-box” neural network models in that it is able to interpolate and extrapolate much more accurately, is easier to analyze and interpret, and requires significantly fewer training examples. Two alternative state and parameter estimation strategies, extended Kalman filtering and NLP optimization, are also considered. When no a priori known model of the unobserved process parameters is available, the hybrid network model gives better estimates of the parameters, when compared to these methods. By providing a model of these unmeasured parameters, the hybrid network can also make predictions and hence can be used for process optimization. These results apply both when full and partial state measurements are available, but in the latter case a state reconstruction method must be used for the first principles component of the hybrid model.",
"title": ""
},
{
"docid": "c30ea570f744f576014aeacf545b027c",
"text": "We aimed to examine the effect of different doses of lutein supplementation on visual function in subjects with long-term computer display light exposure. Thirty-seven healthy subjects with long-term computer display light exposure ranging in age from 22 to 30 years were randomly assigned to one of three groups: Group L6 (6 mg lutein/d, n 12); Group L12 (12 mg lutein/d, n 13); and Group Placebo (maltodextrin placebo, n 12). Levels of serum lutein and visual performance indices such as visual acuity, contrast sensitivity and glare sensitivity were measured at weeks 0 and 12. After 12-week lutein supplementation, serum lutein concentrations of Groups L6 and L12 increased from 0.356 (SD 0.117) to 0.607 (SD 0.176) micromol/l, and from 0.328 (SD 0.120) to 0.733 (SD 0.354) micromol/l, respectively. No statistical changes from baseline were observed in uncorrected visual acuity and best-spectacle corrected visual acuity, whereas there was a trend toward increase in visual acuity in Group L12. Contrast sensitivity in Groups L6 and L12 increased with supplementation, and statistical significance was reached at most visual angles of Group L12. No significant change was observed in glare sensitivity over time. Visual function in healthy subjects who received the lutein supplement improved, especially in contrast sensitivity, suggesting that a higher intake of lutein may have beneficial effects on the visual performance.",
"title": ""
},
{
"docid": "e88def1e0d709047f910b7d5d2319508",
"text": "This paper presents an asymmetrical control with phase lock loop for series resonant inverters. This control strategy is used in full-bridge topologies for induction cookers. The operating frequency is automatically tracked to maintain a small constant lagging phase angle when load parameters change. The switching loss is minimized by operating the IGBT in the zero voltage resonance modes. The output power can be adjusted by using asymmetrical voltage cancellation control which is regulated with a PWM duty cycle control strategy.",
"title": ""
},
{
"docid": "4d5ba0bc7146518d5c59d7c535d0415e",
"text": "We introduce Opcodes, a Python package which presents x86 and x86-64 instruction sets as a set of high-level objects. Opcodes provides information about instruction names, implicit and explicit operands, and instruction encoding. We use the Opcodes package to auto-generate instruction classes for PeachPy, an x86-64 assembler embedded in Python, and enable new functionality.\n The new PeachPy functionality lets low-level optimization experts write high-performance assembly kernels in Python, load them as callable Python functions, test the kernels using numpy and generate object files for Windows, Linux, and Mac OS X entirely within Python. Additionally, the new PeachPy can generate and run assembly code inside Chromium-based browsers by leveraging Native Client technology. Beyond that, PeachPy gained ability to target Google Go toolchain, by generating either source listing for Go assembler, or object files that can be linked with Go toolchain.\n With backends for Windows, Linux, Mac OS X, Native Client, and Go, PeachPy is the most portable way to write high-performance kernels for x86-64 architecture.",
"title": ""
},
{
"docid": "0ef6df2d892221d43d1dbdd7f1ddd417",
"text": "This paper attempts to present a comprehensive summary of research results in the use of visual information to control robot manipulators and related mechanisms. An extensive bibliography is provided which also includes important papers from the elemental disciplines upon which visual servoing is based. The research results are discussed in terms of historical context, common-ality of function, algorithmic approach and method of implementation.",
"title": ""
},
{
"docid": "79a9208d16541c7ed4fbc9996a82ef6a",
"text": "Query processing in data integration occurs over network-bound, autonomous data sources. This requires extensions to traditional optimization and execution techniques for three reasons: there is an absence of quality statistics about the data, data transfer rates are unpredictable and bursty, and slow or unavailable data sources can often be replaced by overlapping or mirrored sources. This paper presents the Tukwila data integration system, designed to support adaptivity at its core using a two-pronged approach. Interleaved planning and execution with partial optimization allows Tukwila to quickly recover from decisions based on inaccurate estimates. During execution, Tukwila uses adaptive query operators such as the double pipelined hash join, which produces answers quickly, and the dynamic collector, which robustly and efficiently computes unions across overlapping data sources. We demonstrate that the Tukwila architecture extends previous innovations in adaptive execution (such as query scrambling, mid-execution re-optimization, and choose nodes), and we present experimental evidence that our techniques result in behavior desirable for a data integration system.",
"title": ""
},
{
"docid": "36e368c9960976a7436da8e986ed50f4",
"text": "Gediminas Adomavicius New York University [email protected] In many applications, ranging from recommender systems to one-to-one marketing to Web browsing, it is important to build personalized profiles of individual users from their transactional histories. These profiles describe individual behavior of users and can be specified with sets of rules learned from user transactional histories using various data mining techniques. Since many discovered rules can be spurious, irrelevant, or trivial, one of the main problems is how to perform post-analysis of the discovered rules, i.e., how to validate customer profiles by separating “good” rules from the “bad.” This paper presents a method for validating such rules with an explicit participation of a human expert",
"title": ""
},
{
"docid": "503101a7b0f923f8fecb6dc9bb0bde37",
"text": "In-vehicle electronic equipment aims to increase safety, by detecting risk factors and taking/suggesting corrective actions. This paper presents a knowledge-based framework for assisting a driver via her PDA. Car data extracted under On Board Diagnostics (OBD-II) protocol, data acquired from PDA embedded micro-devices and information retrieved from the Web are properly combined: a simple data fusion algorithm has been devised to collect and semantically annotate relevant safety events. Finally, a logic-based matchmaking allows to infer potential risk factors, enabling the system to issue accurate and timely warnings. The proposed approach has been implemented in a prototypical application for the Apple iPhone platform, in order to provide experimental evaluation in real-world test drives for corroborating the approach. Keywords-Semantic Web; On Board Diagnostics; Ubiquitous Computing; Data Fusion; Intelligent Transportation Systems",
"title": ""
}
] |
scidocsrr
|
f4fa5e4ee27a20315d153a7f823c2ed0
|
LABOUR TURNOVER : CAUSES , CONSEQUENCES AND PREVENTION Oladele
|
[
{
"docid": "c9972414881db682c219d69d59efa34a",
"text": "“Employee turnover” as a term is widely used in business circles. Although several studies have been conducted on this topic, most of the researchers focus on the causes of employee turnover. This research looked at extent of influence of various factors on employee turnover in urban and semi urban banks. The research was aimed at achieving the following objectives: identify the key factors of employee turnover; determine the extent to which the identified factors are influencing employees’ turnover. The study is based on the responses of the employees of leading banks. A self-developed questionnaire, measured on a Likert Scale was used to collect data from respondents. Quantitative research design was used and this design was chosen because its findings are generaliseable and data objective. The reliability of the data collected is done by split half method.. The collected data were being analyzed using a program called Statistical Package for Social Science (SPSS ver.16.0 For Windows). The data analysis is carried out by calculating mean, standard deviation and linear correlation. The difference between means of variable was estimated by using t-test. The following factors have significantly influenced employee turnover in banking sector: Work Environment, Job Stress, Compensation (Salary), Employee relationship with management, Career Growth.",
"title": ""
}
] |
[
{
"docid": "e769f52b6e10ea1cf218deb8c95f4803",
"text": "To facilitate the task of reading and searching information, it became necessary to find a way to reduce the size of documents without affecting the content. The solution is in Automatic text summarization system, it allows, from an input text to produce another smaller and more condensed without losing relevant data and meaning conveyed by the original text. The research works carried out on this area have experienced lately strong progress especially in English language. However, researches in Arabic text summarization are very few and are still in their beginning. In this paper we expose a literature review of recent techniques and works on automatic text summarization field research, and then we focus our discussion on some works concerning automatic text summarization in some languages. We will discuss also some of the main problems that affect the quality of automatic text summarization systems. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "a7ab755978c9309513ac79dbd6b09763",
"text": "In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par or exceeding the current state of the art, both visually and quantitatively.",
"title": ""
},
{
"docid": "24880289ca2b6c31810d28c8363473b3",
"text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.",
"title": ""
},
{
"docid": "615d2f03b2ff975242e90103e98d70d3",
"text": "The insurance industries consist of more than thousand companies in worldwide. And collect more than one trillions of dollars premiums in each year. When a person or entity make false insurance claims in order to obtain compensation or benefits to which they are not entitled is known as an insurance fraud. The total cost of an insurance fraud is estimated to be more than forty billions of dollars. So detection of an insurance fraud is a challenging problem for the insurance industry. The traditional approach for fraud detection is based on developing heuristics around fraud indicator. The auto\\vehicle insurance fraud is the most prominent type of insurance fraud, which can be done by fake accident claim. In this paper, focusing on detecting the auto\\vehicle fraud by using, machine learning technique. Also, the performance will be compared by calculation of confusion matrix. This can help to calculate accuracy, precision, and recall.",
"title": ""
},
{
"docid": "56674d44df277e40d8aef20d8eb7549f",
"text": "The rapid proliferation of smartphones over the last few years has come hand in hand with and impressive growth in the number and sophistication of malicious apps targetting smartphone users. The availability of reuse-oriented development methodologies and automated malware production tools makes exceedingly easy to produce new specimens. As a result, market operators and malware analysts are increasingly overwhelmed by the amount of newly discovered samples that must be analyzed. This situation has stimulated research in intelligent instruments to automate parts of the malware analysis process. In this paper, we introduce Dendroid, a system based on text mining and information retrieval techniques for this task. Our approach is motivated by a statistical analysis of the code structures found in a dataset of Android OS malware families, which reveals some parallelisms with classical problems in those domains. We then adapt the standard Vector Space Model and reformulate the modelling process followed in text mining applications. This enables us to measure similarity between malware samples, which is then used to automatically classify them into families. We also investigate the application of hierarchical clustering over the feature vectors obtained for each malware family. The resulting dendograms resemble the so-called phylogenetic trees for biological species, allowing us to conjecture about evolutionary relationships among families. Our experimental results suggest that the approach is remarkably accurate and deals efficiently with large databases of malware instances.",
"title": ""
},
{
"docid": "c3ad915ac57bf56c4adc47acee816b54",
"text": "How does the brain “produce” conscious subjective experience, an awareness of something? This question has been regarded as perhaps the most challenging one facing science. Penfield et al. [9] had produced maps of whereresponses to electrical stimulation of cerebral cortex could be obtained in human neurosurgical patients. Mapping of cerebral activations in various subjective paradigms has been greatly extended more recently by utilizing PET scan and fMRI techniques. But there were virtually no studies of what the appropriate neurons do in order to elicit a conscious experience. The opportunity for me to attempt such studies arose when my friend and neurosurgeon colleague, Bertram Feinstein, invited me to utilize the opportunity presented by access to stimulating and recording electrodes placed for therapeutic purposes intracranially in awake and responsive patients. With the availability of an excellent facility and team of co-workers, I decided to study neuronal activity requirements for eliciting a simple conscious somatosensory experience, and compare that to activity requirements forunconsciousdetection of sensory signals. We discovered that a surprising duration of appropriate neuronal activations, up to about 500 msec, was required in order to elicit a conscious sensory experience [5]. This was true not only when the initiating stimulus was in any of the cerebral somatosensory pathways; several lines of evidence indicated that even a single stimulus pulse to the skin required similar durations of activities at the cortical level. That discovery led to further studies of such a delay factor for awareness generally, and to profound inferences for the nature of conscious subjective experience. It formed the basis of that highlight in my work [1,3]. For example, a neuronal requirement of about 500 msec to produce awareness meant that we do not experience our sensory world immediately, in real time. But that would contradict our intuitive feeling of the experience in real time. We solved this paradox with a hypothesis for “backward referral” of subjective experience to the time of the first cortical response, the primary evoked potential. This was tested and confirmed experimentally [8], a thrilling result. We could now add subjective referral in time to the already known subjective referral in space. Subjective referrals have no known neural basis and appear to be purely mental phenomena! Another experimental study supported my “time-on” theory for eliciting conscious sensations as opposed to unconscious detection [7]. The time-factor appeared also in an endogenous experience, the conscious intention or will to produce a purely voluntary act [4,6]. In this, we found that cerebral activity initiates this volitional process at least 350 msec before the conscious wish (W) to act appears. However, W appears about 200 msec before the muscles are activated. That retained the possibility that the conscious will could control the outcome of the volitional process; it could veto it and block the performance of the act. These discoveries have profound implications for the nature of free will, for individual responsibility and guilt. Discovery of these time factors led to unexpected ways of viewing conscious experience and unconscious mental functions. Experience of the sensory world is delayed. It raised the possibility that all conscious mental functions are initiated unconsciouslyand become conscious only if neuronal activities persist for a sufficiently long time. 
Conscious experiences must be discontinuousif there is a delay for each; the “stream of consciousness” must be modified. Quick actions or responses, whether in reaction times, sports activities, etc., would all be initially unconscious. Unconscious mental operations, as in creative thinking, artistic impulses, production of speech, performing in music, etc., can all proceed rapidly, since only brief neural actions are sufficient. Rapid unconscious events would allow faster processing in thinking, etc. The delay for awareness provides a physiological opportunity for modulatory influences to affect the content of an experience that finally appears, as in Freudian repression of certain sensory images or thoughts [2,3]. The discovery of the neural time factor (except in conscious will) could not have been made without intracranial access to the neural pathways. They provided an experimentally based entry into how new hypotheses, of how the brain deals with conscious experience, could be directly tested. That was in contrast to the many philosophical approaches which were speculative and mostly untestable. Evidence based views could now be accepted with some confidence.",
"title": ""
},
{
"docid": "6174220696199251e774489b6fc0001f",
"text": "This paper introduces a collaborative learning game called Futura: The Sustainable Futures Game, which is implemented on a custom multi-touch digital tabletop platform. The goal of the game is to work with other players to support a growing population as time passes while minimizing negative impact on the environment. The design-oriented research goal of the project is to explore the novel design space of collaborative, multi-touch tabletop games for learning. Our focus is on identifying and understanding key design factors of importance in creating opportunities for learning. We use four theoretical perspectives as lenses through which we conceptualize our design intentions and inform our analysis. These perspectives are: experiential learning, constructivist learning, collaborative learning, and game theory. In this paper we discuss design features that enable collaborative learning, present the results from two observational studies, and compare our findings to other guidelines in order to contribute to the growing body of empirically derived design guidelines for tangible, embodied and embedded interaction.",
"title": ""
},
{
"docid": "a059b4908b2ffde33fcedfad999e9f6e",
"text": "The use of a hull-climbing robot is proposed to assist hull surveyors in their inspection tasks, reducing cost and risk to personnel. A novel multisegmented hull-climbing robot with magnetic wheels is introduced where multiple two-wheeled modular segments are adjoined by flexible linkages. Compared to traditional rigid-body tracked magnetic robots that tend to detach easily in the presence of surface discontinuities, the segmented design adapts to such discontinuities with improved adhesion to the ferrous surface. Coordinated mobility is achieved with the use of a motion-control algorithm that estimates robot pose through position sensors located in each segment and linkage in order to optimally command each of the drive motors of the system. Self-powered segments and an onboard radio allow for wireless transmission of video and control data between the robot and its operator control unit. The modular-design approach of the system is highly suited for upgrading or adding segments as needed. For example, enhancing the system with a segment that supports an ultrasonic measurement device used to measure hull-thickness of corroded sites can help minimize the number of areas that a surveyor must personally visit for further inspection and repair. Future development efforts may lead to the design of autonomy segments that accept high-level commands from the operator and automatically execute wide-area inspections. It is also foreseeable that with several multi-segmented robots, a coordinated inspection task can take place in parallel, significantly reducing inspection time and cost. *[email protected] The focus of this paper is on the development efforts of the prototype system that has taken place since 2012. Specifically, the tradeoffs of the magnetic-wheel and linkage designs are discussed and the motion-control algorithm presented. Overall system-performance results obtained from various tests and demonstrations are also reported.",
"title": ""
},
{
"docid": "43228a3436f23d786ad7faa7776f1e1b",
"text": "Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) include Wegener granulomatosis, microscopic polyangiitis, Churg–Strauss syndrome and renal-limited vasculitis. This Review highlights the progress that has been made in our understanding of AAV pathogenesis and discusses new developments in the treatment of these diseases. Evidence from clinical studies, and both in vitro and in vivo experiments, supports a pathogenic role for ANCAs in the development of AAV; evidence is stronger for myeloperoxidase-ANCAs than for proteinase-3-ANCAs. Neutrophils, complement and effector T cells are also involved in AAV pathogenesis. With respect to treatment of AAV, glucocorticoids, cyclophosphamide and other conventional therapies are commonly used to induce remission in generalized disease. Pulse intravenous cyclophosphamide is equivalent in efficacy to oral cyclophosphamide but seems to be associated with less adverse effects. Nevertheless, alternatives to cyclophosphamide therapy have been investigated, such as the use of methotrexate as a less-toxic alternative to cyclophosphamide to induce remission in non-organ-threatening or non-life-threatening AAV. Furthermore, rituximab is equally as effective as cyclophosphamide for induction of remission in AAV and might become the standard of therapy in the near future. Controlled trials in which specific immune effector cells and molecules are being therapeutically targeted have been initiated or are currently being planned.",
"title": ""
},
{
"docid": "1db450f3e28907d6940c87d828fc1566",
"text": "The task of colorizing black and white images has previously been explored for natural images. In this paper we look at the task of colorization on a different domain: webtoons. To our knowledge this type of dataset hasn't been used before. Webtoons are usually produced in color thus they make a good dataset for analyzing different colorization models. Comics like webtoons also present some additional challenges over natural images, such as occlusion by speech bubbles and text. First we look at some of the previously introduced models' performance on this task and suggest modifications to address their problems. We propose a new model composed of two networks; one network generates sparse color information and a second network uses this generated color information as input to apply color to the whole image. These two networks are trained end-to-end. Our proposed model solves some of the problems observed with other architectures, resulting in better colorizations.",
"title": ""
},
{
"docid": "571e2d2fcb55f16513a425b874102f69",
"text": "Distributed word representations have a rising interest in NLP community. Most of existing models assume only one vector for each individual word, which ignores polysemy and thus degrades their effectiveness for downstream tasks. To address this problem, some recent work adopts multiprototype models to learn multiple embeddings per word type. In this paper, we distinguish the different senses of each word by their latent topics. We present a general architecture to learn the word and topic embeddings efficiently, which is an extension to the Skip-Gram model and can model the interaction between words and topics simultaneously. The experiments on the word similarity and text classification tasks show our model outperforms state-of-the-art methods.",
"title": ""
},
{
"docid": "625f1f11e627c570e26da9f41f89a28b",
"text": "In this paper, we propose an approach to realize substrate integrated waveguide (SIW)-based leaky-wave antennas (LWAs) supporting continuous beam scanning from backward to forward above the cutoff frequency. First, through phase delay analysis, it was found that SIWs with straight transverse slots support backward and forward radiation of the -1-order mode with an open-stopband (OSB) in between. Subsequently, by introducing additional longitudinal slots as parallel components, the OSB can be suppressed, leading to continuous beam scanning at least from -40° through broadside to 35°. The proposed method only requires a planar structure and obtains less dispersive beam scanning compared with a composite right/left-handed (CRLH) LWA. Both simulations and measurements verify the intended beam scanning operation while verifying the underlying theory.",
"title": ""
},
{
"docid": "5bf330cdbaf7df4f1f585c7510a34f1f",
"text": "The availability of affordable and portable depth sensors has made scanning objects and people simpler than ever. However, dealing with occlusions and missing parts is still a significant challenge. The problem of reconstructing a (possibly non-rigidly moving) 3D object from a single or multiple partial scans has received increasing attention in recent years. In this work, we propose a novel learning-based method for the completion of partial shapes. Unlike the majority of existing approaches, our method focuses on objects that can undergo non-rigid deformations. The core of our method is a variational autoencoder with graph convolutional operations that learns a latent space for complete realistic shapes. At inference, we optimize to find the representation in this latent space that best fits the generated shape to the known partial input. The completed shape exhibits a realistic appearance on the unknown part. We show promising results towards the completion of synthetic and real scans of human body and face meshes exhibiting different styles of articulation and partiality.",
"title": ""
},
{
"docid": "a74081f7108e62fadb48446255dd246b",
"text": "Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration having lower generalization power than those of deep structures. This paper proposes a novel self-organizing deep fuzzy neural network, namely deep evolving fuzzy neural networks (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play little role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method which not only detects the covariate drift, variations of input space, but also accurately identifies the real drift, dynamic changes of both feature space and target space. DEVFNN is developed under the stacked generalization principle via the feature augmentation concept where a recently developed algorithm, namely Generic Classifier (gClass), drives the hidden layer. It is equipped by an automatic feature selection method which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of input space dimension due to the nature of feature augmentation approach in building a deep network structure. DEVFNN works in the sample-wise fashion and is compatible for data stream applications. The efficacy of DEVFNN has been thoroughly evaluated using six datasets with non-stationary properties under the prequential test-then-train protocol. It has been compared with four state-ofthe art data stream methods and its shallow counterpart where DEVFNN demonstrates improvement of classification accuracy. Moreover, it is also shown that the concept drift detection method is an effective tool to control the depth of network structure while the hidden layer merging scenario is capable of simplifying the network complexity of a deep network with negligible compromise of generalization performance.",
"title": ""
},
{
"docid": "bd4fd4d383a691106aab5d775381c388",
"text": "This paper describes a model-based pothole detection algorithm that exploits a multi-phase dynamic model. The responses of hitting potholes are empirically broken down into three phases governed by three simpler dynamic system sub-models. Each sub-model is based on a rigid-ring tire and quarter-car suspension model. The model is validated by comparing simulation results over various scenarios with FTire, a commercial simulation software for tire-road interaction. Based on the developed model, a pothole detection algorithm with Unscented Kalman Filter (UKF) and Bayesian estimation is developed and demonstrated.",
"title": ""
},
{
"docid": "34e1566235f94a265564cbe5d0bf7cc1",
"text": "Circuit techniques that overcome practical noise, reliability, and EMI limitations are reported. An auxiliary loop with ramping circuits suppresses pop-and-click noise to 1 mV for an amplifier with 4 V-achievable output voltage. Switching edge rate control enables the system to meet the EN55022 Class-B standard with a 15 dB margin. An enhanced scheme detects short-circuit conditions without relying on overlimit current events.",
"title": ""
},
{
"docid": "ae18e923e22687f66303c7ff07689f38",
"text": "Recognizing fine-grained sub-categories such as birds and dogs is extremely challenging due to the highly localized and subtle differences in some specific parts. Most previous works rely on object / part level annotations to build part-based representation, which is demanding in practical applications. This paper proposes an automatic fine-grained recognition approach which is free of any object / part annotation at both training and testing stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher Vectors. We conditionally pick deep filter responses to encode them into the final representation, which considers the importance of filter responses themselves. Integrating all these techniques produces a much more powerful framework, and experiments conducted on CUB-200-2011 and Stanford Dogs demonstrate the superiority of our proposed algorithm over the existing methods.",
"title": ""
},
{
"docid": "1590742097219610170bd62eb3799590",
"text": "In this paper, we develop a vision-based system that employs a combined RGB and depth descriptor to classify hand gestures. The method is studied for a human-machine interface application in the car. Two interconnected modules are employed: one that detects a hand in the region of interaction and performs user classification, and another that performs gesture recognition. The feasibility of the system is demonstrated using a challenging RGBD hand gesture data set collected under settings of common illumination variation and occlusion.",
"title": ""
},
{
"docid": "516bbc36588afeeba0c3045f38efadb0",
"text": "full text) and the cognitively different indexer interpretations of the",
"title": ""
},
{
"docid": "502a948fbf73036a4a1546cdd4a04833",
"text": "The literature review is an established research genre in many academic disciplines, including the IS discipline. Although many scholars agree that systematic literature reviews should be rigorous, few instructional texts for compiling a solid literature review, at least with regard to the IS discipline, exist. In response to this shortage, in this tutorial, I provide practical guidance for both students and researchers in the IS community who want to methodologically conduct qualitative literature reviews. The tutorial differs from other instructional texts in two regards. First, in contrast to most textbooks, I cover not only searching and synthesizing the literature but also the challenging tasks of framing the literature review, interpreting research findings, and proposing research paths. Second, I draw on other texts that provide guidelines for writing literature reviews in the IS discipline but use many examples of published literature reviews. I use an integrated example of a literature review, which guides the reader through the overall process of compiling a literature review.",
"title": ""
}
] |
scidocsrr
|
8af5509f3ed558520d7bea466b0dd5b3
|
RGB-D flow: Dense 3-D motion estimation using color and depth
|
[
{
"docid": "1589e72380265787a10288c5ad906670",
"text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.",
"title": ""
}
] |
[
{
"docid": "9dfda21b53ade4c92ef640162f2dd8ef",
"text": "Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on a decision boundary, which is one of the most important component of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundaries, so a good classifier bears good decision boundaries. Therefore, transferring the boundaries directly can be a good attempt for knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting the decision boundaries. Based on this idea, to transfer more accurate information about the decision boundaries, the proposed algorithm trains a student classifier based on the adversarial samples supporting the decision boundaries. Alongside, two metrics are proposed to evaluate the similarity between decision boundaries. Experiments show that the proposed method indeed improves knowledge distillation and produces much more similar decision boundaries to the teacher classifier.",
"title": ""
},
{
"docid": "e0d0a0f59f5a894c3674b903c5b7b14c",
"text": "Automated Information Systems has played a major role in the growth, advancement, and modernization of our daily work processes. The main purpose of this paper is to develop a safe and secure web based attendance monitoring system using Biometrics and Radio Frequency Identification (RFID) Technology based on multi-tier architecture, for both computers and smartphones. The system can maintain the attendance records of both students and teachers/staff members of an institution. The system can also detect the current location of the students, faculties, and other staff members anywhere within the domain of institution campus. With the help of android application one can receive live feeds of various campus activities, keep updated with the current topics in his/her enrolled courses as well as track his/her friends on a real time basis. An automated SMS service is facilitated in the system, which sends an SMS automatically to the parents in order to notify that their ward has successfully reached the college. Parents as well as student will be notified via e-mail, if the student is lagging behind in attendance. There is a functionality of automatic attendance performance graph in the system, which gives an idea of the student's consistency in attendance throughout the semester.",
"title": ""
},
{
"docid": "8e1b6eb4a939c493eff27cf78bab8d47",
"text": "Among the various natural calamities, flood is considered one of the most catastrophic natural hazards, which has a significant impact on the socio-economic lifeline of a country. The Assessment of flood risks facilitates taking appropriate measures to reduce the consequences of flooding. The flood risk assessment requires Big data which are coming from different sources, such as sensors, social media, and organizations. However, these data sources contain various types of uncertainties because of the presence of incomplete and inaccurate information. This paper presents a Belief rule-based expert system (BRBES) which is developed in Big data platform to assess flood risk in real time. The system processes extremely large dataset by integrating BRBES with Apache Spark while a web-based interface has developed allowing the visualization of flood risk in real time. Since the integrated BRBES employs knowledge driven learning mechanism, it has been compared with other data-driven learning mechanisms to determine the reliability in assessing flood risk. The integrated BRBES produces reliable results in comparison to other data-driven approaches. Data for the expert system has been collected by considering different case study areas of Bangladesh to validate the system.",
"title": ""
},
{
"docid": "c64ff373043fe7814d2acef08142e1a5",
"text": "This article deals with the identification of gene regulatory networks from experimental data using a statistical machine learning approach. A stochastic model of gene interactions capable of handling missing variables is proposed. It can be described as a dynamic Bayesian network particularly well suited to tackle the stochastic nature of gene regulation and gene expression measurement. Parameters of the model are learned through a penalized likelihood maximization implemented through an extended version of EM algorithm. Our approach is tested against experimental data relative to the S.O.S. DNA Repair network of the Escherichia coli bacterium. It appears to be able to extract the main regulations between the genes involved in this network. An added missing variable is found to model the main protein of the network. Good prediction abilities on unlearned data are observed. These first results are very promising: they show the power of the learning algorithm and the ability of the model to capture gene interactions.",
"title": ""
},
{
"docid": "ef1064ba6dcd464fd048aab9f70c4bdd",
"text": "The problem of reproducing high dynamic range images on devices with restricted dynamic range has gained a lot of interest in the computer graphics community. There exist various approaches to this issue, which span several research areas including computer graphics, image processing, color science, physiology, neurology, psychology, et c. These approaches assume a thorough knowledge of both the objective and subjective attributes of an image. However, no comprehensive overview and analysis of such attributes has been published so far. In this paper, we present an overview of image quality attributes of different tone mapping methods. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall image qua lity measure. We present results of subjective psychophysic al tests that we have performed to prove the proposed relationship scheme. We also present the evaluation of existing tone mapping methods with regard to these attributes. Our effort is not just useful to get into the tone mapping field or when implementing a tone mapping operator, but it also sets the stage for well-founded quality comparisons between tone mapping operators. By providing good definitions of the different attributes, user-driven or fully a utomatic comparisons are made possible at all.",
"title": ""
},
{
"docid": "8e7adfab46fa21202e7ff7311d11b51d",
"text": "In this paper we describe a joint effort by the City University of New York (CUNY), University of Illinois at Urbana-Champaign (UIUC) and SRI International at participating in the mono-lingual entity linking (MLEL) and cross-lingual entity linking (CLEL) tasks for the NIST Text Analysis Conference (TAC) Knowledge Base Population (KBP2011) track. The MLEL system is based on a simple combination of two published systems by CUNY (Chen and Ji, 2011) and UIUC (Ratinov et al., 2011). Therefore, we mainly focus on describing our new CLEL system. In addition to a baseline system based on name translation, machine translation and MLEL, we propose two novel approaches. One is based on a cross-lingual name similarity matrix, iteratively updated based on monolingual co-occurrence, and the other uses topic modeling to enhance performance. Our best systems placed 4th in mono-lingual track and 2nd in cross-lingual track.",
"title": ""
},
{
"docid": "134297d45c943f0751f002fa5c456940",
"text": "Widespread application of real-time, Nonlinear Model Predictive Control (NMPC) algorithms to systems of large scale or with fast dynamics is challenged by the high associated computational cost, in particular in presence of long prediction horizons. In this paper, a fast NMPC strategy to reduce the on-line computational cost is proposed. A Curvature-based Measure of Nonlinearity (CMoN) of the system is exploited to reduce the required number of sensitivity computations, which largely contribute to the overall computational cost. The proposed scheme is validated by a simulation study on the chain of masses motion control problem, a toy example that can be easily extended to an arbitrary dimension. Simulations have been run with long prediction horizons and large state dimensions. Results show that sensitivity computations are significantly reduced with respect to other sensitivity updating schemes, while preserving control performance.",
"title": ""
},
{
"docid": "49e616b9db5ba5003ae01abfb6ed3e16",
"text": "BACKGROUND\nAlthough substantial evidence suggests that stressful life events predispose to the onset of episodes of depression and anxiety, the essential features of these events that are depressogenic and anxiogenic remain uncertain.\n\n\nMETHODS\nHigh contextual threat stressful life events, assessed in 98 592 person-months from 7322 male and female adult twins ascertained from a population-based registry, were blindly rated on the dimensions of humiliation, entrapment, loss, and danger and their categories. Onsets of pure major depression (MD), pure generalized anxiety syndrome (GAS) (defined as generalized anxiety disorder with a 2-week minimum duration), and mixed MD-GAS episodes were examined using logistic regression.\n\n\nRESULTS\nOnsets of pure MD and mixed MD-GAS were predicted by higher ratings of loss and humiliation. Onsets of pure GAS were predicted by higher ratings of loss and danger. High ratings of entrapment predicted only onsets of mixed episodes. The loss categories of death and respondent-initiated separation predicted pure MD but not pure GAS episodes. Events with a combination of humiliation (especially other-initiated separation) and loss were more depressogenic than pure loss events, including death. No sex differences were seen in the prediction of episodes of illness by event categories.\n\n\nCONCLUSIONS\nIn addition to loss, humiliating events that directly devalue an individual in a core role were strongly linked to risk for depressive episodes. Event dimensions and categories that predispose to pure MD vs pure GAS episodes can be distinguished with moderate specificity. The event dimensions that preceded mixed MD-GAS episodes were largely the sum of those that preceded pure MD and pure GAS episodes.",
"title": ""
},
{
"docid": "dd975fded3a24052a31bb20587ff8566",
"text": "This paper presents a design methodology for a high power density converter, which emphasizes weight minimization. The design methodology considers various inverter topologies and semiconductor devices with application of cold plate cooling and LCL filter. Design for a high-power inverter is evaluated with demonstration of a 50 kVA 2-level 3-phase SiC inverter operating at 60 kHz switching frequency. The prototype achieves high gravimetric power density of 6.49 kW/kg.",
"title": ""
},
{
"docid": "fdd7237680ee739b598cd508c4a2ed38",
"text": "Rectovaginal Endometriosis (RVE) is a severe form of endometriosis classified by Kirtner as stage 4 [1,2]. It is less frequent than peritoneal or ovarian endometriosis affecting 3.8% to 37% of patients with endometriosis [3,4]. RVE infiltrates the rectum, vagina, and rectovaginal septum, up to obliteration of the pouch of Douglas [4]. Endometriotic nodules exceeding 30 mm in diameter have 17.9% risk of ureteral involvement [5], while 5.3% to 12% of patients have bowel endometriosis, most commonly found in the recto-sigmoid involving 74% of those patients [3,4].",
"title": ""
},
{
"docid": "cae661146bc0156af25d8014cb61ef0b",
"text": "The two critical factors distinguishing inventory management in a multifirm supply-chain context from the more traditional centrally planned perspective are incentive conflicts and information asymmetries. We study the well-known order quantity/reorder point (Q r) model in a two-player context, using a framework inspired by observations during a case study. We show how traditional allocations of decision rights to supplier and buyer lead to inefficient outcomes, and we use principal-agent models to study the effects of information asymmetries about setup cost and backorder cost, respectively. We analyze two “opposite” models of contracting on inventory policies. First, we derive the buyer’s optimal menu of contracts when the supplier has private information about setup cost, and we show how consignment stock can help reduce the impact of this information asymmetry. Next, we study consignment and assume the supplier cannot observe the buyer’s backorder cost. We derive the supplier’s optimal menu of contracts on consigned stock level and show that in this case, the supplier effectively has to overcompensate the buyer for the cost of each stockout. Our theoretical analysis and the case study suggest that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs. This framework immediately points to practical recommendations on how supply-chain incentives should be realigned to overcome existing information asymmetries.",
"title": ""
},
{
"docid": "3e83f454f66e8aba14733205c8e19753",
"text": "BACKGROUND\nNormal-weight adults gain lower-body fat via adipocyte hyperplasia and upper-body subcutaneous (UBSQ) fat via adipocyte hypertrophy.\n\n\nOBJECTIVES\nWe investigated whether regional fat loss mirrors fat gain and whether the loss of lower-body fat is attributed to decreased adipocyte number or size.\n\n\nDESIGN\nWe assessed UBSQ, lower-body, and visceral fat gains and losses in response to overfeeding and underfeeding in 23 normal-weight adults (15 men) by using dual-energy X-ray absorptiometry and abdominal computed tomography scans. Participants gained ∼5% of weight in 8 wk and lost ∼80% of gained fat in 8 wk. We measured abdominal subcutaneous and femoral adipocyte sizes and numbers after weight gain and loss.\n\n\nRESULTS\nVolunteers gained 3.1 ± 2.1 (mean ± SD) kg body fat with overfeeding and lost 2.4 ± 1.7 kg body fat with underfeeding. Although UBSQ and visceral fat gains were completely reversed after 8 wk of underfeeding, lower-body fat had not yet returned to baseline values. Abdominal and femoral adipocyte sizes, but not numbers, decreased with weight loss. Decreases in abdominal adipocyte size and UBSQ fat mass were correlated (ρ = 0.76, P = 0.001), as were decreases in femoral adipocyte size and lower-body fat (ρ = 0.49, P = 0.05).\n\n\nCONCLUSIONS\nUBSQ and visceral fat increase and decrease proportionately with a short-term weight gain and loss, whereas a gain of lower-body fat does not relate to the loss of lower-body fat. The loss of lower-body fat is attributed to a reduced fat cell size, but not number, which may result in long-term increases in fat cell numbers.",
"title": ""
},
{
"docid": "70fd543752f17237386b3f8e99954230",
"text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency",
"title": ""
},
{
"docid": "071d2d56b4516dc77fb70fcefb999fa0",
"text": "Boiling heat transfer occurs in many situations and can be used for thermal management in various engineered systems with high energy density, from power electronics to heat exchangers in power plants and nuclear reactors. Essentially, boiling is a complex physical process that involves interactions between heating surface, liquid, and vapor. For engineering applications, the boiling heat transfer is usually predicted by empirical correlations or semi-empirical models, which has relatively large uncertainty. In this paper, a data-driven approach based on deep feedforward neural networks is studied. The proposed networks use near wall local features to predict the boiling heat transfer. The inputs of networks include the local momentum and energy convective transport, pressure gradients, turbulent viscosity, and surface information. The outputs of the networks are the quantities of interest of a typical boiling system, including heat transfer components, wall superheat, and near wall void fraction. The networks are trained by the high-fidelity data processed from first principle simulation of pool boiling under varying input heat fluxes. State-of-the-art algorithms are applied to prevent the overfitting issue when training the deep networks. The trained networks are tested in interpolation cases and extrapolation cases which both demonstrate good agreement with the original high-fidelity simulation results.",
"title": ""
},
{
"docid": "fcb526dfd8f1d24b622995d4c0ff3e6c",
"text": "Scene flow is defined as the motion field in 3D space, and can be computed from a single view when using an RGBD sensor. We propose a new scene flow approach that exploits the local and piecewise rigidity of real world scenes. By modeling the motion as a field of twists, our method encourages piecewise smooth solutions of rigid body motions. We give a general formulation to solve for local and global rigid motions by jointly using intensity and depth data. In order to deal efficiently with a moving camera, we model the motion as a rigid component plus a non-rigid residual and propose an alternating solver. The evaluation demonstrates that the proposed method achieves the best results in the most commonly used scene flow benchmark. Through additional experiments we indicate the general applicability of our approach in a variety of different scenarios.",
"title": ""
},
{
"docid": "9984fc080b1f2fe2bf4910b9091591a7",
"text": "In the modern era, the vehicles are focused to be automated to give human driver relaxed driving. In the field of automobile various aspects have been considered which makes a vehicle automated. Google, the biggest network has started working on the self-driving cars since 2010 and still developing new changes to give a whole new level to the automated vehicles. In this paper we have focused on two applications of an automated car, one in which two vehicles have same destination and one knows the route, where other don't. The following vehicle will follow the target (i.e. Front) vehicle automatically. The other application is automated driving during the heavy traffic jam, hence relaxing driver from continuously pushing brake, accelerator or clutch. The idea described in this paper has been taken from the Google car, defining the one aspect here under consideration is making the destination dynamic. This can be done by a vehicle automatically following the destination of another vehicle. Since taking intelligent decisions in the traffic is also an issue for the automated vehicle so this aspect has been also under consideration in this paper.",
"title": ""
},
{
"docid": "23d560ca3bb6f2d7d9b615b5ad3224d2",
"text": "The Pebbles project is creating applications to connmt multiple Personal DigiM Assistants &DAs) to a main computer such as a PC We are cmenfly using 3Com Pd@Ilots b-use they are popdar and widespread. We created the ‘Remote Comrnandefl application to dow users to take turns sending input from their PahnPiiots to the PC as if they were using the PCS mouse and keyboard. ‘.PebblesDraw” is a shared whiteboard application we btit that allows dl of tie users to send input simtdtaneously while sharing the same PC display. We are investigating the use of these applications in various contexts, such as colocated mmtings. Keywor& Personal Digiti Assistants @DAs), PH11oc Single Display Groupware, Pebbles, AmuleL",
"title": ""
},
{
"docid": "1b581e17dad529b3452d3fbdcb1b3dd1",
"text": "Authorship attribution is the task of identifying the author of a given text. The main concern of this task is to define an appropriate characterization of documents that captures the writing style of authors. This paper proposes a new method for authorship attribution supported on the idea that a proper identification of authors must consider both stylistic and topic features of texts. This method characterizes documents by a set of word sequences that combine functional and content words. The experimental results on poem classification demonstrated that this method outperforms most current state-of-the-art approaches, and that it is appropriate to handle the attribution of short documents.",
"title": ""
},
{
"docid": "4bce6150e9bc23716a19a0d7c02640c0",
"text": "A Data Mining Framework for Constructing Features and Models for Intrusion Detection Systems",
"title": ""
},
{
"docid": "d3156f87367e8f55c3e62d376352d727",
"text": "The topic of deep-learning has recently received considerable attention in the machine learning research community, having great potential to liberate computer scientists from hand-engineering training datasets, because the method can learn the desired features automatically. This is particularly beneficial in medical research applications of machine learning, where getting good hand labelling of data is especially expensive. We propose application of a single-layer sparse-auto encoder to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for fully automatic classification of tissue types in a large unlabelled dataset with minimal human interference -- in a manner similar to data-mining. DCE-MRI analysis, looking at the change of the MR contrast-agent concentration over successively acquired images, is time-series analysis. We analyse the change of brightness (which is related to the contrast-agent concentration) of the DCE-MRI images over time to classify different tissue types in the images. Therefore our system is an application of an auto encoder to time-series analysis while the demonstrated result and further possible successive application areas are in computer vision. We discuss the important factors affecting performance of the system in applying the auto encoder to the time-series analysis of DCE-MRI medical image data.",
"title": ""
}
] |
scidocsrr
|
85f29d0e7177cf5557b50e7b64d80510
|
Decentralized Cloud-SDN Architecture in Smart Grid: A Dynamic Pricing Model
|
[
{
"docid": "adec3b3578d56cefed73fd74d270ca22",
"text": "In the framework of liberalized electricity markets, distributed generation and controllable demand have the opportunity to participate in the real-time operation of transmission and distribution networks. This may be done by using the virtual power plant (VPP) concept, which consists of aggregating the capacity of many distributed energy resources (DER) in order to make them more accessible and manageable across energy markets. This paper provides an optimization algorithm to manage a VPP composed of a large number of customers with thermostatically controlled appliances. The algorithm, based on a direct load control (DLC), determines the optimal control schedules that an aggregator should apply to the controllable devices of the VPP in order to optimize load reduction over a specified control period. The results define the load reduction bid that the aggregator can present in the electricity market, thus helping to minimize network congestion and deviations between generation and demand. The proposed model, which is valid for both transmission and distribution networks, is tested on a real power system to demonstrate its applicability.",
"title": ""
},
{
"docid": "c2606da8495680b58898c4145365888e",
"text": "This paper proposes a distributed framework for demand response and user adaptation in smart grid networks. In particular, we borrow the concept of congestion pricing in Internet traffic control and show that pricing information is very useful to regulate user demand and hence balance network load. User preference is modeled as a willingness to pay parameter which can be seen as an indicator of differential quality of service. Both analysis and simulation results are presented to demonstrate the dynamics and convergence behavior of the algorithm. Based on this algorithm, we then propose a novel charging method for plug-in hybrid electric vehicles (PHEVs) in a smart grid, where users or PHEVs can adapt their charging rates according to their preferences. Simulation results are presented to demonstrate the dynamic behavior of the charging algorithm and impact of different parameters on system performance.",
"title": ""
}
] |
[
{
"docid": "9dd83eb5760e8dbf6f3bd918eb73c79f",
"text": "Pontine tegmental cap dysplasia (PTCD) is a recently described hindbrain malformation characterized by pontine hypoplasia and ectopic dorsal transverse pontine fibers (1). To date, a total of 19 cases of PTCD have been published, all patients had sensorineural hearing loss (SNHL). We contribute 1 additional case of PTCD with SNHL with and VIIIth cranial nerve and temporal bone abnormalities using dedicated magnetic resonance (MR) and high-resolution temporal bone computed tomographic (CT) images.",
"title": ""
},
{
"docid": "2c2e0f5ddfb2e1d5121a9a58e2ee870d",
"text": "Emotional events often attain a privileged status in memory. Cognitive neuroscientists have begun to elucidate the psychological and neural mechanisms underlying emotional retention advantages in the human brain. The amygdala is a brain structure that directly mediates aspects of emotional learning and facilitates memory operations in other regions, including the hippocampus and prefrontal cortex. Emotion–memory interactions occur at various stages of information processing, from the initial encoding and consolidation of memory traces to their long-term retrieval. Recent advances are revealing new insights into the reactivation of latent emotional associations and the recollection of personal episodes from the remote past.",
"title": ""
},
{
"docid": "11a28e11ba6e7352713b8ee63291cd9c",
"text": "This review focuses on discussing the main changes on the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduced the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Recognizing this novel approach, the fourth edition of the WHO classification has abandoned the concept of \"a hormone-producing pituitary adenoma\" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require a routine ultrastructural examination of these tumors. The new definition of the Null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones Moreover, the term of atypical pituitary adenoma is no longer recommended. In addition to the accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, and other clinical parameters such as tumor invasion, is strongly recommended in individual cases for consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as \"high-risk pituitary adenomas\" due to the clinical aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). An additional novel aspect of the new WHO classification was also the definition of the spectrum of thyroid transcription factor-1 expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.",
"title": ""
},
{
"docid": "a43698feab07ba6e1ea917843cc4129a",
"text": "The nation's critical infrastructures, such as those found in Supervisory Control and Data Acquisition (SCADA) and industrial control systems (ICS), are increasingly at risk and vulnerable to internal and external threats. Security best practices on these systems come at a very opportune time. Further, the value of risk assessment of these systems is something that cannot just be relegated as irrelevant. In this paper, we present a review of security best practices and risk assessment of SCADA and ICS and report our research findings on an on-going risk modeling of a prototypical industrial control system using the CORAS framework tool.",
"title": ""
},
{
"docid": "abcbd831178e1bc5419da8274dc17bbf",
"text": "Most state-of-the-art statistical machine translation systems use log-linear models, which are defined in terms of hypothesis features and weights for those features. It is standard to tune the feature weights in order to maximize a translation quality metric, using heldout test sentences and their corresponding reference translations. However, obtaining reference translations is expensive. In our earlier work (Madnani et al., 2007), we introduced a new full-sentence paraphrase technique, based on English-to-English decoding with an MT system, and demonstrated that the resulting paraphrases can be used to cut the number of human reference translations needed in half. In this paper, we take the idea a step further, asking how far it is possible to get with just a single good reference translation for each item in the development set. Our analysis suggests that it is necessary to invest in four or more human translations in order to significantly improve on a single translation augmented by monolingual paraphrases.",
"title": ""
},
{
"docid": "1503fae33ae8609a2193e978218d1543",
"text": "The construct of resilience has captured the imagination of researchers across various disciplines over the last five decades (Ungar, 2008a). Despite a growing body of research in the area of resilience, there is little consensus among researchers about the definition and meaning of this concept. Resilience has been used to describe eight kinds of phenomena across different disciplines. These eight phenomena can be divided into two clusters based on the disciplinary origin. The first cluster mainly involves definitions of resilience derived from the discipline of psychology and covers six themes including (i) personality traits, (ii) positive outcomes/forms of adaptation despite high-risk, (iii) factors associated with positive adaptation, (iv) processes, (v) sustained competent functioning/stress resistance, and (vi) recovery from trauma or adversity. The second cluster of definitions is rooted in the discipline of sociology and encompasses two themes including (i) human agency and resistance, and (ii) survival. This paper discusses the inconsistencies in the varied definitions used within the published literature and describes the differing conceptualizations of resilience as well as their limitations. The paper concludes by offering a unifying conceptualization of resilience and by discussing implications for future research on resilience.",
"title": ""
},
{
"docid": "6fe9aaaa0033d3322e989588df3105fe",
"text": "Set-valued data, in which a set of values are associated with an individual, is common in databases ranging from market basket data, to medical databases of patients’ symptoms and behaviors, to query engine search logs. Anonymizing this data is important if we are to reconcile the conflicting demands arising from the desire to release the data for study and the desire to protect the privacy of individuals represented in the data. Unfortunately, the bulk of existing anonymization techniques, which were developed for scenarios in which each individual is associated with only one sensitive value, are not well-suited for set-valued data. In this paper we propose a top-down, partition-based approach to anonymizing set-valued data that scales linearly with the input size and scores well on an information-loss data quality metric. We further note that our technique can be applied to anonymize the infamous AOL query logs, and discuss the merits and challenges in anonymizing query logs using our approach.",
"title": ""
},
{
"docid": "f1fe8a9d2e4886f040b494d76bc4bb78",
"text": "The benefits of enhanced condition monitoring in the asset management of the electricity transmission infrastructure are increasingly being exploited by the grid operators. Adding more sensors helps to track the plant health more accurately. However, the installation or operating costs of any additional sensors could outweigh the benefits they bring due to the requirement for new cabling or battery maintenance. Energy harvesting devices are therefore being proposed to power a new generation of wireless sensors. The harvesting devices could enable the sensors to be maintenance free over their lifetime and substantially reduce the cost of installing and operating a condition monitoring system.",
"title": ""
},
{
"docid": "f86b96306f56150679eaa65330a2eb0e",
"text": "DEFINITION Visual analytics is the science of analytical reasoning supported by interactive visual interfaces according to [6]. Over the last decades data was produced at an incredible rate. However, the ability to collect and store this data is increasing at a faster rate than the ability to analyze it. While purely automatic or purely visual analysis methods were developed in the last decades, the complex nature of many problems makes it indispensable to include humans at an early stage in the data analysis process. Visual analytics methods allow decision makers to combine their flexibility, creativity, and background knowledge with the enormous storage and processing capacities of today’s computers to gain insight into complex problems. The goal of visual analytics research is thus to turn the information overload into an opportunity by enabling decision-makers to examine this massive information stream to take effective actions in real-time situations.",
"title": ""
},
{
"docid": "8c68b4a0f02b0764fc2d69a65341a4a7",
"text": "This paper presents a miniature DC-70 GHz single-pole four-throw (SP4T) built in a low-cost 0.13-µm CMOS process. The switch is based on a series-shunt design with input and output matching circuits. Deep n-well (also called triple-well) CMOS transistors are used to minimize the substrate coupling. Also, deep trench isolation is used between the different ports to minimize the port-to-port coupling. The SP4T results in a measured insertion loss of less than 3.5 dB up to 67 GHz with an isolation of greater than 25 dB. The measured port-to-port coupling is less than 28 dB up to 67 GHz. The measured P1dB and IIP3 are independent of frequency and are 9–10 dBm and 20–21 dBm, respectively. The active chip area is 0.24×0.23 mm2. To our knowledge, this work represents the widest bandwidth SP4T switch in any CMOS technology to-date.",
"title": ""
},
{
"docid": "766bc5cee369a729dc310c7134edc36e",
"text": "Spatial multiple access holds the promise to boost the capacity of wireless networks when an access point has multiple antennas. Due to the asynchronous and uncontrolled nature of wireless LANs, conventional MIMO technology does not work efficiently when concurrent transmissions from multiple stations are uncoordinated. In this paper, we present the design and implementation of a crosslayer system, called SAM, that addresses the challenges of enabling spatial multiple access for multiple devices in a random access network like WLAN. SAM uses a chain-decoding technique to reliably recover the channel parameters for each device, and iteratively decode concurrent frames with misaligned symbol timings and frequency offsets. We propose a new MAC protocol, called CCMA, to enable concurrent transmissions by different mobile stations while remaining backward compatible with 802.11. Finally, we implement the PHY and MAC layer of SAM using the Sora high-performance software radio platform. Our evaluation results under real wireless conditions show that SAM can improve network uplink throughput by 70% with two antennas over 802.11.",
"title": ""
},
{
"docid": "b24a0f878f50d5b92d268e183fe62dde",
"text": "Management is the process of setting and achieving organizational goals through its functions: forecasting, organization, coordination, training and monitoring-evaluation.Leadership is: the ability to influence, to make others follow you, the ability to guide, the human side of business for \"teacher\". Interest in leadership increased during the early part of the twentieth century. Early leadership theories focused on what qualities distinguished between leaders and followers, while subsequent theories looked at other variables such as situational factors and skill levels. Other considerations emphasize aspects that separate management of leadership, calling them two completely different processes.The words manager and lider are very often used to designate the same person who leads, however, they represent different realities and the main difference arises form the way in which people around are motivated. The difference between being a manager and being a leader is simple. Management is a career. Leadership is a calling. A leader is someone who people naturally follow through their own choice, whereas a manager must be obeyed. A manager may only have obtained his position of authority through time and loyalty given to the company, not as a result of his leadership qualities. A leader may have no organisational skills, but his vision unites people behind him. Leadership and management are two notions that are often used interchangeably. However, these words actually describe two different concepts. Leadership is the main component of change, providing vision, and dedication necessary for its realization. Leadership is a skill that is formed by education, experiences, interaction with people and inspiring, of course, practice. Effective leadership depends largely on how their leaders define, follow and share the vision to followers. Leadership is just one important component of the directing function. A manager cannot just be a leader, he also needs formal authority to be effective.",
"title": ""
},
{
"docid": "b68a728f4e737f293dca0901970b41fe",
"text": "With maturity of advanced technologies and urgent requirement for maintaining a healthy environment with reasonable price, China is moving toward a trend of generating electricity from renewable wind resources. How to select a suitable wind farm becomes an important focus for stakeholders. This paper first briefly introduces wind farm and then develops its critical success criteria. A new multi-criteria decision-making (MCDM) model, based on the analytic hierarchy process (AHP) associated with benefits, opportunities, costs and risks (BOCR), is proposed to help select a suitable wind farm project. Multiple factors that affect the success of wind farm operations are analyzed by taking into account experts’ opinions, and a performance ranking of the wind farms is generated. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c5b39921ebebb8bbb20fdef471e9d275",
"text": "One popular justification for punishment is the just deserts rationale: A person deserves punishment proportionate to the moral wrong committed. A competing justification is the deterrence rationale: Punishing an offender reduces the frequency and likelihood of future offenses. The authors examined the motivation underlying laypeople's use of punishment for prototypical wrongs. Study 1 (N = 336) revealed high sensitivity to factors uniquely associated with the just deserts perspective (e.g., offense seriousness, moral trespass) and insensitivity to factors associated with deterrence (e.g., likelihood of detection, offense frequency). Study 2 (N = 329) confirmed the proposed model through structural equation modeling (SEM). Study 3 (N = 351) revealed that despite strongly stated preferences for deterrence theory, individual sentencing decisions seemed driven exclusively by just deserts concerns.",
"title": ""
},
{
"docid": "b34db00c8a84eab1c7b1a6458fc6cd97",
"text": "The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient “purposive” approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of humancomputer interaction. Index Terms —Vision-based gesture recognition, gesture analysis, hand tracking, nonrigid motion analysis, human-computer",
"title": ""
},
{
"docid": "86e5c9defae0135db8466df0bdbe5aef",
"text": "Autonomous Underwater Vehicles (AUVs) are robots able to perform tasks without human intervention (remote operators). Research and development of this class of vehicles has growing, due to the excellent characteristics of the AUVs to operate in different situations. Therefore, this study aims to analyze turbulent single fluid flow over different geometric configurations of an AUV hull, in order to obtain test geometry that generates lower drag force, which reduces the energy consumption of the vehicle, thereby increasing their autonomy during operation. In the numerical analysis was used ANSYS-CFX® 11.0 software, which is a powerful tool for solving problems involving fluid mechanics. Results of the velocity (vectors and streamlines), pressure distribution and drag coefficient are showed and analyzed. Optimum hull geometry was found. Lastly, a relationship between the geometric parameters analyzed and the drag coefficient was obtained.",
"title": ""
},
{
"docid": "9d1c0462c27516974a2b4e520916201e",
"text": "The current method of grading prostate cancer on histology uses the Gleason system, which describes five increasingly malignant stages of cancer according to qualitative analysis of tissue architecture. The Gleason grading system has been shown to suffer from inter- and intra-observer variability. In this paper we present a new method for automated and quantitative grading of prostate biopsy specimens. A total of 102 graph-based, morphological, and textural features are extracted from each tissue patch in order to quantify the arrangement of nuclei and glandular structures within digitized images of histological prostate tissue specimens. A support vector machine (SVM) is used to classify the digitized histology slides into one of four different tissue classes: benign epithelium, benign stroma, Gleason grade 3 adenocarcinoma, and Gleason grade 4 adenocarcinoma. The SVM classifier was able to distinguish between all four types of tissue patterns, achieving an accuracy of 92.8% when distinguishing between Gleason grade 3 and stroma, 92.4% between epithelium and stroma, and 76.9% between Gleason grades 3 and 4. Both textural and graph-based features were found to be important in discriminating between different tissue classes. This work suggests that the current Gleason grading scheme can be improved by utilizing quantitative image analysis to aid pathologists in producing an accurate and reproducible diagnosis",
"title": ""
},
{
"docid": "12b205881ead4d31ae668d52f4ba52c7",
"text": "The general theory of side-looking synthetic aperture radar systems is developed. A simple circuit-theory model is developed; the geometry of the system determines the nature of the prefilter and the receiver (or processor) is the postfilter. The complex distributed reflectivity density appears as the input, and receiver noise is first considered as the interference which limits performance. Analysis and optimization are carried out for three performance criteria (resolution, signal-to-noise ratio, and least squares estimation of the target field). The optimum synthetic aperture length is derived in terms of the noise level and average transmitted power. Range-Doppler ambiguity limitations and optical processing are discussed briefly. The synthetic aperture concept for rotating target fields is described. It is observed that, for a physical aperture, a side-looking radar, and a rotating target field, the azimuth resolution is λ/α where α is the change in aspect angle over which the target field is viewed, The effects of phase errors on azimuth resolution are derived in terms of the power density spectrum of the derivative of the phase errors and the performance in the absence of phase errors.",
"title": ""
},
{
"docid": "c433b602177782e814848a26c711361a",
"text": "Running is a complex dynamical task which places strict design requirements on both the physical components and software control systems of a robot. This paper explores some of those requirements and illustrates how a variable compliance actuation system can satisfy them. We present the design, analysis, simulation, and benchtop experimental validation of such an actuator system. We demonstrate, through simulation, the application of our prototype actuator to the problem of biped running.",
"title": ""
},
{
"docid": "30e798ef3668df14f1625d40c53011a0",
"text": "Classification with big data has become one of the latest trends when talking about learning from the available information. The data growth in the last years has rocketed the interest in effectively acquiring knowledge to analyze and predict trends. The variety and veracity that are related to big data introduce a degree of uncertainty that has to be handled in addition to the volume and velocity requirements. This data usually also presents what is known as the problem of classification with imbalanced datasets, a class distribution where the most important concepts to be learned are presented by a negligible number of examples in relation to the number of examples from the other classes. In order to adequately deal with imbalanced big data we propose the Chi-FRBCS-BigDataCS algorithm, a fuzzy rule based classification system that is able to deal with the uncertainly that is introduced in large volumes of data without disregarding the learning in the underrepresented class. The method uses the MapReduce framework to distribute the computational operations of the fuzzy model while it includes cost-sensitive learning techniques in its design to address the imbalance that is present in the data. The good performance of this approach is supported by the experimental analysis that is carried out over twenty-four imbalanced big data cases of study. The results obtained show that the proposal is able to handle these problems obtaining competitive results both in the classification performance of the model and the time needed for the computation. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
af1a6f7baa4b0a78c2d2adebfa845712
|
BROCCOLI: Software for fast fMRI analysis on many-core CPUs and GPUs
|
[
{
"docid": "d7142245920a5c1f51c716a549a0ee8e",
"text": "Finding objective and effective thresholds for voxelwise statistics derived from neuroimaging data has been a long-standing problem. With at least one test performed for every voxel in an image, some correction of the thresholds is needed to control the error rates, but standard procedures for multiple hypothesis testing (e.g., Bonferroni) tend to not be sensitive enough to be useful in this context. This paper introduces to the neuroscience literature statistical procedures for controlling the false discovery rate (FDR). Recent theoretical work in statistics suggests that FDR-controlling procedures will be effective for the analysis of neuroimaging data. These procedures operate simultaneously on all voxelwise test statistics to determine which tests should be considered statistically significant. The innovation of the procedures is that they control the expected proportion of the rejected hypotheses that are falsely rejected. We demonstrate this approach using both simulations and functional magnetic resonance imaging data from two simple experiments.",
"title": ""
}
] |
[
{
"docid": "6d149a530769b61a34bcd5b8d900dbcd",
"text": "Click here and insert your abstract text. The Web accessibility issue has been subject of study for a wide number of organizations all around the World. The current paper describes an accessibility evaluation that aimed to test the Portuguese enterprises websites. Has the presented results state, the evaluated websites accessibility levels are significantly bad, but the majority of the detected errors are not very complex from a technological point-of-view. With this is mind, our research team, in collaboration with a Portuguese enterprise named ANO and the support of its UTAD-ANOgov/PEPPOL research project, elaborated an improvement proposal, directed to the Web content developers, which aimed on helping these specialists to better understand and implement Web accessibility features. © 2013 The Authors. Published by Elsevier B.V. Selection and peer-review under responsibility of the Scientific Programme Committee of the 5th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2013).",
"title": ""
},
{
"docid": "3575842a3306a11bfcc5b370c6d67daf",
"text": "BACKGROUND AND PURPOSE\nMental practice (MP) of a particular motor skill has repeatedly been shown to activate the same musculature and neural areas as physical practice of the skill. Pilot study results suggest that a rehabilitation program incorporating MP of valued motor skills in chronic stroke patients provides sufficient repetitive practice to increase affected arm use and function. This Phase 2 study compared efficacy of a rehabilitation program incorporating MP of specific arm movements to a placebo condition using randomized controlled methods and an appropriate sample size. Method- Thirty-two chronic stroke patients (mean=3.6 years) with moderate motor deficits received 30-minute therapy sessions occurring 2 days/week for 6 weeks, and emphasizing activities of daily living. Subjects randomly assigned to the experimental condition also received 30-minute MP sessions provided directly after therapy requiring daily MP of the activities of daily living; subjects assigned to the control group received the same amount of therapist interaction as the experimental group, and a sham intervention directly after therapy, consisting of relaxation. Outcomes were evaluated by a blinded rater using the Action Research Arm test and the upper extremity section of the Fugl-Meyer Assessment.\n\n\nRESULTS\nNo pre-existing group differences were found on any demographic variable or movement scale. Subjects receiving MP showed significant reductions in affected arm impairment and significant increases in daily arm function (both at the P<0.0001 level). Only patients in the group receiving MP exhibited new ability to perform valued activities.\n\n\nCONCLUSIONS\nThe results support the efficacy of programs incorporating mental practice for rehabilitating affected arm motor function in patients with chronic stroke. These changes are clinically significant.",
"title": ""
},
{
"docid": "cc6458464cd8bb152683fde0af1e3d23",
"text": "While the application of IoT in smart technologies becomes more and more proliferated, the pandemonium of its protocols becomes increasingly confusing. More seriously, severe security deficiencies of these protocols become evident, as time-to-market is a key factor, which satisfaction comes at the price of a less thorough security design and testing. This applies especially to the smart home domain, where the consumer-driven market demands quick and cheap solutions. This paper presents an overview of IoT application domains and discusses the most important wireless IoT protocols for smart home, which are KNX-RF, EnOcean, Zigbee, Z-Wave and Thread. Finally, it describes the security features of said protocols and compares them with each other, giving advice on whose protocols are more suitable for a secure smart home.",
"title": ""
},
{
"docid": "3cc97542631d734d8014abfbef652c79",
"text": "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.",
"title": ""
},
{
"docid": "54eea56f03b9b9f5983857550b83a5da",
"text": "This paper summarizes opportunities for silicon process technologies at mmwave and terahertz frequencies and demonstrates key building blocks for 94-GHz and 600-GHz imaging arrays. It reviews potential applications and summarizes state-of-the-art terahertz technologies. Terahertz focal-plane arrays (FPAs) for video-rate imaging applications have been fabricated in commercially available CMOS and SiGe process technologies respectively. The 3times5 arrays achieve a responsivity of up to 50 kV/W with a minimum NEP of 400 pW/radicHz at 600 GHz. Images of postal envelopes are presented which demonstrate the potential of silicon integrate 600-GHz terahertz FPAs for future low-cost terahertz camera systems.",
"title": ""
},
{
"docid": "e1e878c5df90a96811f885935ac13888",
"text": "Multiple-input-multiple-output (MIMO) wireless systems use multiple antenna elements at transmit and receive to offer improved capacity over single antenna topologies in multipath channels. In such systems, the antenna properties as well as the multipath channel characteristics play a key role in determining communication performance. This paper reviews recent research findings concerning antennas and propagation in MIMO systems. Issues considered include channel capacity computation, channel measurement and modeling approaches, and the impact of antenna element properties and array configuration on system performance. Throughout the discussion, outstanding research questions in these areas are highlighted.",
"title": ""
},
{
"docid": "81919bc432dd70ed3e48a0122d91b9e4",
"text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.",
"title": ""
},
{
"docid": "810158f2907eec894e54a57dabb2b9c4",
"text": "Dependability properties of bi-directional and braided rings are well recognized in improving communication availability. However, current ring-based topologies have no mechanisms for extreme integrity and have not been considered for emerging high-dependability markets where cost is a significant driver, such as the automotive \"by-wire\" applications. This paper introduces a braided-ring architecture with superior guardian functionality and complete Byzantine fault tolerance while simultaneously reducing cost. This paper reviews anticipated requirements for high-dependability low-cost applications and emphasizes the need for regular safe testing of core coverage functions. The paper describes the ring's main mechanisms for achieving integrity and availability levels similar to SAFEbus/spl reg/ but at low automotive costs. The paper also presents a mechanism to achieve self-stabilizing TDMA-based communication and design methods for fault-tolerant protocols on a network of simplex nodes. The paper also introduces a new self-checking pair concept that leverages braided-ring properties. This novel message-based self-checking-pair concept allows high-integrity source data at extremely low cost.",
"title": ""
},
{
"docid": "1203822bf82dcd890e7a7a60fb282ce5",
"text": "Individuals with psychosocial problems such as social phobia or feelings of loneliness might be vulnerable to excessive use of cyber-technological devices, such as smartphones. We aimed to determine the relationship of smartphone addiction with social phobia and loneliness in a sample of university students in Istanbul, Turkey. Three hundred and sixty-seven students who owned smartphones were given the Smartphone Addiction Scale (SAS), UCLA Loneliness Scale (UCLA-LS), and Brief Social Phobia Scale (BSPS). A significant difference was found in the mean SAS scores (p < .001) between users who declared that their main purpose for smartphone use was to access social networking sites. The BSPS scores showed positive correlations with all six subscales and with the total SAS scores. The total UCLA-LS scores were positively correlated with daily life disturbance, positive anticipation, cyber-oriented relationship, and total scores on the SAS. In regression analyses, total BSPS scores were significant predictors for SAS total scores (β = 0.313, t = 5.992, p < .001). In addition, BSPS scores were significant predictors for all six SAS subscales, whereas UCLA-LS scores were significant predictors for only cyber-oriented relationship subscale scores on the SAS (β = 0.130, t = 2.416, p < .05). The results of this study indicate that social phobia was associated with the risk for smartphone addiction in young people. Younger individuals who primarily use their smartphones to access social networking sites also have an excessive pattern of smartphone use. ARTICLE HISTORY Received 12 January 2016 Accepted 19 February 2016",
"title": ""
},
{
"docid": "fb116c7cd3ab8bd88fb7817284980d4a",
"text": "Sentence-level sentiment classification is important to understand users' fine-grained opinions. Existing methods for sentence-level sentiment classification are mainly based on supervised learning. However, it is difficult to obtain sentiment labels of sentences since manual annotation is expensive and time-consuming. In this paper, we propose an approach for sentence-level sentiment classification without the need of sentence labels. More specifically, we propose a unified framework to incorporate two types of weak supervision, i.e., document-level and word-level sentiment labels, to learn the sentence-level sentiment classifier. In addition, the contextual information of sentences and words extracted from unlabeled sentences is incorporated into our approach to enhance the learning of sentiment classifier. Experiments on benchmark datasets show that our approach can effectively improve the performance of sentence-level sentiment classification.",
"title": ""
},
{
"docid": "3301a0cf26af8d4d8c7b2b9d56cec292",
"text": "Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
"title": ""
},
{
"docid": "0f173a3486bf09ced9d221019241c7c4",
"text": "In millimeter-wave (mmWave) systems, antenna architecture limitations make it difficult to apply conventional fully digital precoding techniques but call for low-cost analog radio frequency (RF) and digital baseband hybrid precoding methods. This paper investigates joint RF-baseband hybrid precoding for the downlink of multiuser multiantenna mmWave systems with a limited number of RF chains. Two performance measures, maximizing the spectral efficiency and the energy efficiency of the system, are considered. We propose a codebook-based RF precoding design and obtain the channel state information via a beam sweep procedure. Via the codebook-based design, the original system is transformed into a virtual multiuser downlink system with the RF chain constraint. Consequently, we are able to simplify the complicated hybrid precoding optimization problems to joint codeword selection and precoder design (JWSPD) problems. Then, we propose efficient methods to address the JWSPD problems and jointly optimize the RF and baseband precoders under the two performance measures. Finally, extensive numerical results are provided to validate the effectiveness of the proposed hybrid precoders.",
"title": ""
},
{
"docid": "c26c5691c34a26f7710448765521b6d5",
"text": "Text messages sent via the Short Message Service (SMS) have revolutionized interpersonal communication. Recent years have also seen this service become a critical component of the security infrastructure, assisting with tasks including identity verification and second-factor authentication. At the same time, this messaging infrastructure has become dramatically more open and connected to public networks than ever before. However, the implications of this openness, the security practices of benign services, and the malicious misuse of this ecosystem are not well understood. In this paper, we provide the first longitudinal study to answer these questions, analyzing nearly 400,000 text messages sent to public online SMS gateways over the course of 14 months. From this data, we are able to identify not only a range of services sending extremely sensitive plaintext data and implementing low entropy solutions for one-use codes, but also offer insights into the prevalence of SMS spam and behaviors indicating that public gateways are primarily used for evading account creation policies that require verified phone numbers. This latter finding has significant implications for research combatting phone-verified account fraud and demonstrates that such evasion will continue to be difficult to detect and prevent.",
"title": ""
},
{
"docid": "dfae67d62731a9307a10de7b11d6d117",
"text": "A 16 Gb 4-state MLC NAND flash memory augments the sustained program throughput to 34 MB/s by fully exercising all the available cells along a selected word line and by using additional performance enhancement modes. The same chip operating as an 8 Gb SLC device guarantees over 60 MB/s programming throughput. The newly introduced all bit line (ABL) architecture has multiple advantages when higher performance is targeted and it was made possible by adopting the ldquocurrent sensingrdquo (as opposed to the mainstream ldquovoltage sensingrdquo) technique. The general chip architecture is presented in contrast to a state of the art conventional circuit and a double size data buffer is found to be necessary for the maximum parallelism attained. Further conceptual changes designed to counterbalance the area increase are presented, hierarchical column architecture being of foremost importance. Optimization of other circuits, such as the charge pump, is another example. Fast data access rate is essential, and ways of boosting it are described, including a new redundancy scheme. ABL contribution to energy saving is also acknowledged.",
"title": ""
},
{
"docid": "eec7a9a6859e641c3cc0ade73583ef5c",
"text": "We propose an Apache Spark-based scale-up server architecture using Docker container-based partitioning method to improve performance scalability. The performance scalability problem of Apache Spark-based scale-up servers is due to garbage collection(GC) and remote memory access overheads when the servers are equipped with significant number of cores and Non-Uniform Memory Access(NUMA). The proposed method minimizes the problems using Docker container-based architecture effectively partitioning the original scale-up server into small logical servers. Our evaluation study based on benchmark programs revealed that the partitioning method showed performance improvement by ranging from 1.1x through 1.7x on a 120 core scale-up system. Our proof-of-concept scale-up server architecture provides the basis towards complete and practical design of partitioning-based scale-up servers showing performance scalability.",
"title": ""
},
{
"docid": "9ce08ed9e7e34ef1f5f12bfbe54e50ea",
"text": "GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally-intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to \"fill\" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other. The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime.\n In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%.",
"title": ""
},
{
"docid": "d60e344c8bfb4422c947ddf22e9837b5",
"text": "INTRODUCTION\nPrevious studies evaluated the perception of laypersons to symmetric alteration of anterior dental esthetics. However, no studies have evaluated the perception of asymmetric esthetic alterations. This investigation will determine whether asymmetric and symmetric anterior dental discrepancies are detectable by dental professionals and laypersons.\n\n\nMETHODS\nSeven images of women's smiles were intentionally altered with a software-imaging program. The alterations involved crown length, crown width, midline diastema, papilla height, and gingiva-to-lip relationship of the maxillary anterior teeth. These altered images were rated by groups of general dentists, orthodontists, and laypersons using a visual analog scale. Statistical analysis of the responses resulted in the establishment of threshold levels of attractiveness for each group.\n\n\nRESULTS\nOrthodontists were more critical than dentists and laypeople when evaluating asymmetric crown length discrepancies. All 3 groups could identify a unilateral crown width discrepancy of 2.0 mm. A small midline diastema was not rated as unattractive by any group. Unilateral reduction of papillary height was generally rated less attractive than bilateral alteration. Orthodontists and laypeople rated a 3-mm distance from gingiva to lip as unattractive.\n\n\nCONCLUSIONS\nAsymmetric alterations make teeth more unattractive to not only dental professionals but also the lay public.",
"title": ""
},
{
"docid": "881cd0e0807d28cddcf8e999913c872b",
"text": "We examine the relationship between quality-based manufacturing strategy and the use of different types of performance measures, as well as their separate and joint effects on performance. A key part of our investigation is the distinction between financial and both objective and subjective nonfinancial measures. Our results support the view that performance measurement diversity benefits performance as we find that, regardless of strategy, firms with more extensive performance measurement systems—especially those that include objective and subjective nonfinancial measures—have higher performance. But our findings also partly support the view that the strategy-measurement ‘‘fit’’ affects performance. We find that firms that emphasize quality in manufacturing use more of both objective and subjective nonfinancial measures. However, there is only a positive effect on performance from pairing a qualitybased manufacturing strategy with extensive use of subjective measures, but not with objective nonfinancial measures. INTRODUCTION Performance measures play a key role in translating an organization’s strategy into desired behaviors and results (Campbell et al. 2004; Chenhall and Langfield-Smith 1998; Kaplan and Norton 2001; Lillis 2002). They also help to communicate expectations, monitor progress, provide feedback, and motivate employees through performancebased rewards (Banker et al. 2000; Chenhall 2003; Ittner and Larcker 1998b; Ittner et al. 1997; Ittner, Larcker, and Randall 2003). Traditionally, firms have primarily used financial measures for these purposes (Balkcom et al. 1997; Kaplan and Norton 1992). But with the ‘‘new’’ competitive realities of increased customization, flexibility, and responsiveness, and associated advances in manufacturing practices, both academics and practitioners have argued that traditional financial performance measures are no longer adequate for these functions (Dixon et al. 1990; Fisher 1992; Ittner and Larcker 1998a; Neely 1999). Indeed, many We acknowledge the helpful suggestions by Tom Groot, Jim Hesford, Ranjani Krishnan, Fred Lindahl, Helene Loning, Michal Matejka, Ken Merchant, Frank Moers, Mark Peecher, Mike Shields, Sally Widener, workshop participants at the University of Illinois, the 2002 AAA Management Accounting Meeting in Austin, the 2002 World Congress of Accounting Educators in Hong Kong, and the 2003 AAA Annual Meeting in Honolulu. An earlier version of this paper won the best paper award at the 9th World Congress of Accounting Educators in Hong Kong (2002). 186 Van der Stede, Chow, and Lin Behavioral Research in Accounting, 2006 accounting researchers have identified the continued reliance on traditional management accounting systems as a major reason why many new manufacturing initiatives perform poorly (Banker et al. 1993; Ittner and Larcker 1995). In light of this development in theory and practice, the current study seeks to advance understanding of the role that performance measurement plays in executing strategy and enhancing organizational performance. It proposes and empirically tests three hypotheses about the performance effects of performance measurement diversity; the relation between quality-based manufacturing strategy and firms’ use of different types of performance measures; and the joint effects of strategy and performance measurement on organizational performance. The distinction between objective and subjective performance measures is a pivotal part of our investigation. 
Prior empirical research has typically only differentiated between financial and nonfinancial performance measures. We go beyond this dichotomy to further distinguish between nonfinancial measures that are quantitative and objectively derived (e.g., defect rates), and those that are qualitative and subjectively determined (e.g., an assessment of the degree of cooperation or knowledge sharing across departmental borders). Making this finer distinction between types of nonfinancial performance measures contributes to recent work in accounting that has begun to focus on the use of subjectivity in performance measurement, evaluation, and incentives (e.g., Bushman et al. 1996; Gibbs et al. 2004; Ittner, Larcker, and Meyer 2003; MacLeod and Parent 1999; Moers 2005; Murphy and Oyer 2004). Using survey data from 128 manufacturing firms, we find that firms with more extensive performance measurement systems, especially ones that include objective and subjective nonfinancial measures, have higher performance. This result holds regardless of the firm’s manufacturing strategy. As such, our finding supports the view that performance measurement diversity, per se, is beneficial. But we also find evidence that firms adjust their use of performance measures to strategy. Firms that emphasize quality in manufacturing tend to use more of both objective and subjective nonfinancial measures, but without reducing the number of financial measures. Interestingly, however, combining quality-based strategies with extensive use of objective nonfinancial measures is not associated with higher performance. This set of results is consistent with Ittner and Larcker (1995) who found that quality programs are associated with greater use of nontraditional (i.e., nonfinancial) measures and reward systems, but combining nontraditional measures with extensive quality programs does not improve performance. However, by differentiating between objective and subjective nonfinancial measures—thereby going beyond Ittner and Larcker (1995) and much of the extant accounting literature—we find that performance is higher when the performance measures used in conjunction with a quality-based manufacturing strategy are of the subjective type. Finally, we find that among firms with similar quality-based strategies, those with less extensive performance measurement systems have lower performance, whereas those with more extensive performance measurement systems do not. In the case of subjective performance measures, firms that use them more extensively than firms with similar qualitybased strategies actually have significantly higher performance. Thus, a ‘‘mismatch’’ between performance measurement and strategy is associated with lower performance only when firms use fewer measures than firms with similar quality-based strategies, but not when they use more. The paper proceeds as follows. The next section builds on the extant literature to formulate three hypotheses. The third section discusses the method, sample, and measures. Strategy, Choice of Performance Measures, and Performance 187 Behavioral Research in Accounting, 2006 The fourth section presents the results. The fifth section provides a summary, discusses the study’s limitations, and suggests possible directions for future research. HYPOTHESES Although there is widespread agreement on the need to expand performance measurement, two different views exist on the nature of the desirable change (Ittner, Larcker, and Randall 2003; Ruddle and Feeny 2000). 
In this section, we engage the relevant literatures to develop three hypotheses. Collectively, the hypotheses provide the basis for comparing the two prevailing schools of thought on how performance measurement should be improved; that of performance measurement diversity regardless of strategy versus that of performance measurement alignment with strategy (Ittner, Larcker, and Randall 2003). The Performance Measurement Diversity View A number of authors have argued that broadening the set of performance measures, per se, enhances organizational performance (e.g., Edvinsson and Malone 1997; Lingle and Schiemann 1996). The premise is that managers have an incentive to concentrate on those activities for which their performance is measured, often at the expense of other relevant but non-measured activities (Hopwood 1974), and greater measurement diversity can reduce such dysfunctional effects (Lillis 2002). Support for this view is available from economicsbased agency studies. Datar et al. (2001), Feltham and Xie (1994), Hemmer (1996), Holmstrom (1979), and Lambert (2001), for example, have demonstrated that in the absence of measurement costs, introducing incentives based on nonfinancial measures can improve contracting by incorporating information on managerial actions that are not fully captured by financial measures. Analytical studies have further identified potential benefits from using performance measures that are subjectively derived. For example, Baiman and Rajan (1995) and Baker et al. (1994) have shown that subjective measures can help to mitigate distortions in managerial effort by ‘‘backing out’’ dysfunctional behavior induced by incomplete objective performance measures, as well as reduce noise in the overall performance evaluation. However, the literature also has noted potential drawbacks from measurement diversity. It increases system complexity, thus taxing managers’ cognitive abilities (Ghosh and Lusch 2000; Lipe and Salterio 2000, 2002). It also increases the burden of determining relative weights for different measures (Ittner and Larcker 1998a; Moers 2005). Finally, multiple measures are also potentially conflicting (e.g., manufacturing efficiency and customer responsiveness), leading to incongruence of goals, at least in the short run (Baker 1992; Holmstrom and Milgrom 1991), and organizational friction (Lillis 2002). Despite these potential drawbacks, there is considerable empirical support for increased measurement diversity. For example, in a study of time-series data in 18 hotels, Banker et al. (2000) found that when nonfinancial measures are included in the compensation contract, managers more closely aligned their efforts to those measures, resulting in increased performance. Hoque and James (2000) and Scott and Tiessen (1999) also have found positive relations between firm performance and increased use of different types of performance measures (e.g., financial and nonfinancial). These resul",
"title": ""
},
{
"docid": "c51acd24cb864b050432a055fef2de9a",
"text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.",
"title": ""
},
{
"docid": "9f2db5cf1ee0cfd0250e68bdbc78b434",
"text": "A novel transverse equivalent network is developed in this letter to efficiently analyze a recently proposed leaky-wave antenna in substrate integrated waveguide (SIW) technology. For this purpose, precise modeling of the SIW posts for any distance between vias is essential to obtain accurate results. A detailed parametric study is performed resulting in leaky-mode dispersion curves as a function of the main geometrical dimensions of the antenna. Finally, design curves that directly provide the requested dimensions to synthesize the desired scanning response and leakage rate are reported and validated with experiments.",
"title": ""
}
] |
scidocsrr
|
c661449ef79514f7401a52066f48e29b
|
Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering
|
[
{
"docid": "5d1b66986357f2566ac503727a80bb87",
"text": "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus. It’s noteworthy that DIIN achieve a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.",
"title": ""
},
{
"docid": "5664ca8d7f0f2f069d5483d4a334c670",
"text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.",
"title": ""
},
{
"docid": "87f0a390580c452d77fcfc7040352832",
"text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:",
"title": ""
},
{
"docid": "de721f4b839b0816f551fa8f8ee2065e",
"text": "This paper presents a syntax-driven approach to question answering, specifically the answer-sentence selection problem for short-answer questions. Rather than using syntactic features to augment existing statistical classifiers (as in previous work), we build on the idea that questions and their (correct) answers relate to each other via loose but predictable syntactic transformations. We propose a probabilistic quasi-synchronous grammar, inspired by one proposed for machine translation (D. Smith and Eisner, 2006), and parameterized by mixtures of a robust nonlexical syntax/alignment model with a(n optional) lexical-semantics-driven log-linear model. Our model learns soft alignments as a hidden variable in discriminative training. Experimental results using the TREC dataset are shown to significantly outperform strong state-of-the-art baselines.",
"title": ""
}
] |
[
{
"docid": "4a76739b77446025bc209a9c7d7cf1a0",
"text": "Background\nMetabolic syndrome is defined as a cluster of at least three out of five clinical risk factors: abdominal (visceral) obesity, hypertension, elevated serum triglycerides, low serum high-density lipoprotein (HDL) and insulin resistance. It is estimated to affect over 20% of the global adult population. Abdominal (visceral) obesity is thought to be the predominant risk factor for metabolic syndrome and as predictions estimate that 50% of adults will be classified as obese by 2030 it is likely that metabolic syndrome will be a significant problem for health services and a drain on health economies.Evidence shows that regular and consistent exercise reduces abdominal obesity and results in favourable changes in body composition. It has therefore been suggested that exercise is a medicine in its own right and should be prescribed as such.\n\n\nPurpose of this review\nThis review provides a summary of the current evidence on the pathophysiology of dysfunctional adipose tissue (adiposopathy). It describes the relationship of adiposopathy to metabolic syndrome and how exercise may mediate these processes, and evaluates current evidence on the clinical efficacy of exercise in the management of abdominal obesity. The review also discusses the type and dose of exercise needed for optimal improvements in health status in relation to the available evidence and considers the difficulty in achieving adherence to exercise programmes.\n\n\nConclusion\nThere is moderate evidence supporting the use of programmes of exercise to reverse metabolic syndrome although at present the optimal dose and type of exercise is unknown. The main challenge for health care professionals is how to motivate individuals to participate and adherence to programmes of exercise used prophylactically and as a treatment for metabolic syndrome.",
"title": ""
},
{
"docid": "3b38ff37137549b170dc3bdcf0a955c5",
"text": "Little is known about corporate social responsibility (CSR) in lesser developed countries. To address this knowledge gap, we used Chile as a test case, and conducted 44 in-depth interviews with informants who are leading CSR initiatives. Using institutional theory as a lens, we outline the state of CSR practice in Chile, describe the factors that have led to the emergence of CSR, and note the barriers to wider adoption of these initiatives.",
"title": ""
},
{
"docid": "91a56dbdefc08d28ff74883ec10a5d6e",
"text": "A truly autonomous guided vehicle (AGV) must sense its surrounding environment and react accordingly. In order to maneuver an AGV autonomously, it has to overcome navigational and collision avoidance problems. Previous AGV control systems have relied on hand-coded algorithms for processing sensor information. An intelligent distributed fuzzy logic control system (IDFLCS) has been implemented in a mecanum wheeled AGV system in order to achieve improved reliability and to reduce complexity of the development of control systems. Fuzzy logic controllers have been used to achieve robust control of mechatronic systems by fusing multiple signals from noisy sensors, integrating the representation of human knowledge and implementing behaviour-based control using if-then rules. This paper presents an intelligent distributed controller that implements fuzzy logic on an AGV that uses four independently driven mecanum wheels, incorporating laser, inertial and ultrasound sensors. Distributed control system, fuzzy control strategy, navigation and motion control of such an AGV are presented.",
"title": ""
},
{
"docid": "1f28f5efa70a6387b00e335a8cf1e1d0",
"text": "The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. In this paper, we present a novel generative adversarial network based approach. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. Further, to generate more lifelike facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects in a finer manner. The proposed method is applicable to diverse face samples in the presence of variations in pose, expression, makeup, etc., and remarkably vivid aging effects are achieved. Both visual fidelity and quantitative evaluations show that the approach advances the state-of-the-art.",
"title": ""
},
{
"docid": "c08518b806c93dde1dd04fdf3c9c45bb",
"text": "Purpose – The objectives of this article are to develop a multiple-item scale for measuring e-service quality and to study the influence of perceived quality on consumer satisfaction levels and the level of web site loyalty. Design/methodology/approach – First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings – The validation process of scales suggested that perceived quality is a multidimensional construct: web design, customer service, assurance and order management; that perceived quality influences on satisfaction; and that satisfaction influences on consumer loyalty. Moreover, no differences in these conclusions were observed if the total sample is divided between buyers and information searchers. Practical implications – First, the need to develop user-friendly web sites which ease consumer purchasing and searching, thus creating a suitable framework for the generation of higher satisfaction and loyalty levels. Second, the web site manager should enhance service loyalty, customer sensitivity, personalised service and a quick response to complaints. Third, the web site should uphold sufficient security levels in communications and meet data protection requirements regarding the privacy. Lastly, the need for correct product delivery and product manipulation or service is recommended. Originality/value – Most relevant studies about perceived quality in the internet have focused on web design aspects. Moreover, the existing literature regarding internet consumer behaviour has not fully analysed profits generated by higher perceived quality in terms of user satisfaction and loyalty.",
"title": ""
},
{
"docid": "f50342dfacd198dc094ef96415de4899",
"text": "While the ubiquity and importance of nonliteral language are clear, people’s ability to use and understand it remains a mystery. Metaphor in particular has been studied extensively across many disciplines in cognitive science. One approach focuses on the pragmatic principles that listeners utilize to infer meaning from metaphorical utterances. While this approach has generated a number of insights about how people understand metaphor, to our knowledge there is no formal model showing that effects in metaphor understanding can arise from basic principles of communication. Building upon recent advances in formal models of pragmatics, we describe a computational model that uses pragmatic reasoning to interpret metaphorical utterances. We conduct behavioral experiments to evaluate the model’s performance and show that our model produces metaphorical interpretations that closely fit behavioral data. We discuss implications of the model for metaphor understanding, principles of communication, and formal models of language understanding.",
"title": ""
},
{
"docid": "256d8659fe5bca53bd03a2f7a101282b",
"text": "The paper combines and extends the technologies of fuzzy sets and association rules, considering users' differential emphasis on each attribute through fuzzy regions. A fuzzy data mining algorithm is proposed to discovery fuzzy association rules for weighted quantitative data. This is expected to be more realistic and practical than crisp association rules. Discovered rules are expressed in natural language that is more understandable to humans. The paper demonstrates the performance of the proposed approach using a synthetic but realistic dataset",
"title": ""
},
{
"docid": "2af36afd2440a4940873fef1703aab3f",
"text": "In recent years researchers have found that alternations in arterial or venular tree of the retinal vasculature are associated with several public health problems such as diabetic retinopathy which is also the leading cause of blindness in the world. A prerequisite for automated assessment of subtle changes in arteries and veins, is to accurately separate those vessels from each other. This is a difficult task due to high similarity between arteries and veins in addition to variation of color and non-uniform illumination inter and intra retinal images. In this paper a novel structural and automated method is presented for artery/vein classification of blood vessels in retinal images. The proposed method consists of three main steps. In the first step, several image enhancement techniques are employed to improve the images. Then a specific feature extraction process is applied to separate major arteries from veins. Indeed, vessels are divided to smaller segments and feature extraction and vessel classification are applied to each small vessel segment instead of each vessel point. Finally, a post processing step is added to improve the results obtained from the previous step using structural characteristics of the retinal vascular network. In the last stage, vessel features at intersection and bifurcation points are processed for detection of arterial and venular sub trees. Ultimately vessel labels are revised by publishing the dominant label through each identified connected tree of arteries or veins. Evaluation of the proposed approach against two different datasets of retinal images including DRIVE database demonstrates the good performance and robustness of the method. The proposed method may be used for determination of arteriolar to venular diameter ratio in retinal images. Also the proposed method potentially allows for further investigation of labels of thinner arteries and veins which might be found by tracing them back to the major vessels.",
"title": ""
},
{
"docid": "5a2be4e590d31b0cb553215f11776a15",
"text": "This paper presents a review of the state of the art and a discussion on vertical take-off and landing (VTOL) unmanned aerial vehicles (UAVs) applied to the inspection of power utility assets and other similar civil applications. The first part of the paper presents the authors' view on specific benefits and operation constraints associated with the use of UAVs in power industry applications. The second part cites more than 70 recent publications related to this field of application. Among them, some present complete technologies while others deal with specific subsystems relevant to the application of such mobile platforms to power line inspection. The authors close with a discussion of key factors for successful application of VTOL UAVs to power industry infrastructure inspection.",
"title": ""
},
{
"docid": "91e97df8ee68b2aa8219faeba398f20f",
"text": "We propose a method for animating still manga imagery through camera movements. Given a series of existing manga pages, we start by automatically extracting panels, comic characters, and balloons from the manga pages. Then, we use a data-driven graphical model to infer per-panel motion and emotion states from low-level visual patterns. Finally, by combining domain knowledge of film production and characteristics of manga, we simulate camera movements over the manga pages, yielding an animation. The results augment the still manga contents with animated motion that reveals the mood and tension of the story, while maintaining the original narrative. We have tested our method on manga series of different genres, and demonstrated that our method can generate animations that are more effective in storytelling and pacing, with less human efforts, as compared with prior works. We also show two applications of our method, mobile comic reading, and comic trailer generation.",
"title": ""
},
{
"docid": "3e7bac216957b18a24cbd0393b0ff26a",
"text": "This research investigated the influence of parent–adolescent communication quality, as perceived by the adolescents, on the relationship between adolescents’ Internet use and verbal aggression. Adolescents (N = 363, age range 10–16, MT1 = 12.84, SD = 1.93) were examined twice with a six-month delay. Controlling for social support in general terms, moderated regression analyses showed that Internet-related communication quality with parents determined whether Internet use is associated with an increase or a decrease in adolescents’ verbal aggression scores over time. A three way interaction indicated that high Internet-related communication quality with peers can have disadvantageous effects if the communication quality with parents is low. Implications on resources and risk factors related to the effects of Internet use are discussed. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5bca58cbd1ef80ebf040529578d2a72a",
"text": "In this letter, a printable chipless tag with electromagnetic code using split ring resonators is proposed. A 4 b chipless tag that can be applied to paper/plastic-based items such as ID cards, tickets, banknotes and security documents is designed. The chipless tag generates distinct electromagnetic characteristics by various combinations of a split ring resonator. Furthermore, a reader system is proposed to digitize electromagnetic characteristics and convert chipless tag to electromagnetic code.",
"title": ""
},
{
"docid": "8ff8a8ce2db839767adb8559f6d06721",
"text": "Indoor environments present opportunities for a rich set of location-aware applications such as navigation tools for humans and robots, interactive virtual games, resource discovery, asset tracking, location-aware sensor networking etc. Typical indoor applications require better accuracy than what current outdoor location systems provide. Outdoor location technologies such as GPS have poor indoor performance because of the harsh nature of indoor environments. Further, typical indoor applications require different types of location information such as physical space, position and orientation. This dissertation describes the design and implementation of the Cricket indoor location system that provides accurate location in the form of user space, position and orientation to mobile and sensor network applications. Cricket consists of location beacons that are attached to the ceiling of a building, and receivers, called listeners, attached to devices that need location. Each beacon periodically transmits its location information in an RF message. At the same time, the beacon also transmits an ultrasonic pulse. The listeners listen to beacon transmissions and measure distances to nearby beacons, and use these distances to compute their own locations. This active-beacon passive-listener architecture is scalable with respect to the number of users, and enables applications that preserve user privacy. This dissertation describes how Cricket achieves accurate distance measurements between beacons and listeners. Once the beacons are deployed, the MAT and AFL algorithms, described in this dissertation, use measurements taken at a mobile listener to configure the beacons with a coordinate assignment that reflects the beacon layout. This dissertation presents beacon interference avoidance and detection algorithms, as well as outlier rejection algorithms to prevent and filter out outlier distance estimates caused by uncoordinated beacon transmissions. The Cricket listeners can measure distances with an accuracy of 5 cm. The listeners can detect boundaries with an accuracy of 1 cm. Cricket has a position estimation accuracy of 10 cm and an orientation accuracy of 3 degrees. Thesis Supervisor: Hari Balakrishnan Title: Associate Professor of Computer Science and Engineering",
"title": ""
},
{
"docid": "8760b523ca90dccf7a9a197622bda043",
"text": "The increasing need for better performance, protection, and reliability in shipboard power distribution systems, and the increasing availability of power semiconductors is generating the potential for solid state circuit breakers to replace traditional electromechanical circuit breakers. This paper reviews various solid state circuit breaker topologies that are suitable for low and medium voltage shipboard system protection. Depending on the application solid state circuit breakers can have different main circuit topologies, fault detection methods, commutation methods of power semiconductor devices, and steady state operation after tripping. This paper provides recommendations on the solid state circuit breaker topologies that provides the best performance-cost tradeoff based on the application.",
"title": ""
},
{
"docid": "ce41e19933571f6904e317a33b97716b",
"text": "Ivan Voitalov, 2 Pim van der Hoorn, 2 Remco van der Hofstad, and Dmitri Krioukov 2, 4, 5 Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA Network Science Institute, Northeastern University, Boston, Massachusetts 02115, USA Department of Mathematics and Computer Science, Eindhoven University of Technology, Postbus 513, 5600 MB Eindhoven, Netherlands Department of Mathematics, Northeastern University, Boston, Massachusetts 02115, USA Department of Electrical & Computer Engineering, Northeastern University, Boston, Massachusetts 02115, USA",
"title": ""
},
{
"docid": "8f95bf125d4b10acb373e54407c39b9b",
"text": "Research and development irrigation management information systems are the important measures of making irrigation management more modernized and standardized. The difficulties of building information systems have been increased along with the continuous development of information technology and the complexity of information systems, information systems put forward higher request to “shared” and “reuse”. Ontology-based information systems modeling can eliminate semantic differences, and carry out knowledge sharing and interoperability of different systems. In this paper, we introduce several common models which used in information systems modeling briefly; and then we introduce ontology, summarize ontology-based information systems modeling process; finally, we discuss the applications of ontology-based information systems modeling in irrigation management information systems preliminary.",
"title": ""
},
{
"docid": "ca70bf377f8823c2ecb1cdd607c064ec",
"text": "To date, few studies have compared the effectiveness of topical silicone gels versus that of silicone gel sheets in preventing scars. In this prospective study, we compared the efficacy and the convenience of use of the 2 products. We enrolled 30 patients who had undergone a surgical procedure 2 weeks to 3 months before joining the study. These participants were randomly assigned to 2 treatment arms: one for treatment with a silicone gel sheet, and the other for treatment with a topical silicone gel. Vancouver Scar Scale (VSS) scores were obtained for all patients; in addition, participants completed scoring patient questionnaires 1 and 3 months after treatment onset. Our results reveal not only that no significant difference in efficacy exists between the 2 products but also that topical silicone gels are more convenient to use. While previous studies have advocated for silicone gel sheets as first-line therapies in postoperative scar management, we maintain that similar effects can be expected with topical silicone gel. The authors recommend that, when clinicians have a choice of silicone-based products for scar prevention, they should focus on each patient's scar location, lifestyle, and willingness to undergo scar prevention treatment.",
"title": ""
},
{
"docid": "ea8622fad1ceba3f274e30247dd2f678",
"text": "In software engineering it is widely acknowledged that the usage of metrics at the initial phases of the object oriented software life cycle can help designers to make better decisions and to predict external quality attributes, such as maintainability. Following this idea we have carried out three controlled experiments to ascertain if any correlation exists between the structural complexity and the size of UML class diagrams and their maintainability. We used 8 metrics for measuring the structural complexity of class diagrams due to the usage of UML relationships, and 3 metrics to measure their size. With the aim of determining which of these metrics are really relevant to be used as class diagrams maintainability indicators, we present in this work a study based on Principal Component Analysis. The obtained results show that the metrics related to associations, aggregations, generalizations and dependencies, are the most relevant whilst those related to size seem to be redundant.",
"title": ""
},
{
"docid": "cd18d1e77af0e2146b67b028f1860ff0",
"text": "Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.",
"title": ""
},
{
"docid": "a64b7b5a24e75bac31d3a071f5a29025",
"text": "A new hand gesture recognition method based on Input– Output Hidden Markov Models is presented. This method deals with the dynamic aspects of gestures. Gestures are extracted from a sequence of video images by tracking the skin–color blobs corresponding to the hand into a body– face space centered on the face of the user. Our goal is to recognize two classes of gestures: deictic and symbolic.",
"title": ""
}
] |
scidocsrr
|
e3d4a31f2814505c595e0ed7c8f5f23e
|
A new secure model for the use of cloud computing in big data analytics
|
[
{
"docid": "fde3a2559dc66c18923f29350a005597",
"text": "Motivated by privacy and usability requirements in various scenarios where existing cryptographic tools (like secure multi-party computation and functional encryption) are not adequate, we introduce a new cryptographic tool called Controlled Functional Encryption (C-FE). As in functional encryption, C-FE allows a user (client) to learn only certain functions of encrypted data, using keys obtained from an authority. However, we allow (and require) the client to send a fresh key request to the authority every time it wants to evaluate a function on a ciphertext. We obtain efficient solutions by carefully combining CCA2 secure public-key encryption (or rerandomizable RCCA secure public-key encryption, depending on the nature of security desired) with Yao's garbled circuit. Our main contributions in this work include developing and for- mally defining the notion of C-FE; designing theoretical and practical constructions of C-FE schemes achieving these definitions for specific and general classes of functions; and evaluating the performance of our constructions on various application scenarios.",
"title": ""
},
{
"docid": "c0a05cad5021b1e779682b50a53f25fd",
"text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area. ∗Supported by NSF, MURI, and the Packard foundation. †Supported by NSF CNS-0716199, CNS-0915361, and CNS-0952692, Air Force Office of Scientific Research (AFO SR) under the MURI award for “Collaborative policies and assured information sharing” (Project PRESIDIO), Department of Homeland Security Grant 2006-CS-001-000001-02 (subaward 641), and the Alfred P. Sloan Foundation.",
"title": ""
}
] |
[
{
"docid": "30fda7dabb70dffbf297096671802c93",
"text": "Much attention has recently been given to a printing method because they are easily designable, have a low cost, and can be mass produced. Numerous electronic devices are fabricated using printing methods because of these advantages. In paper mechatronics, attempts have been made to fabricate robots by printing on paper substrates. The robots are given structures through self-folding and functions using printed actuators. We developed a new system and device to fabricate more sophisticated printed robots. First, we successfully fabricated complex self-folding structures by applying an automatic cutting. Second, a rapidly created and low-voltage electrothermal actuator was developed using an inkjet printed circuit. Finally, a printed robot was fabricated by combining two techniques and two types of paper; a structure design paper and a circuit design paper. Gripper and conveyor robots were fabricated, and their functions were verified. These works demonstrate the possibility of paper mechatronics for rapid and low-cost prototyping as well as of printed robots.",
"title": ""
},
{
"docid": "d558db90f72342eae413ed7937e9120f",
"text": "Latent Dirichlet Allocation (LDA) models trained without stopword removal often produce topics with high posterior probabilities on uninformative words, obscuring the underlying corpus content. Even when canonical stopwords are manually removed, uninformative words common in that corpus will still dominate the most probable words in a topic. In this work, we first show how the standard topic quality measures of coherence and pointwise mutual information act counter-intuitively in the presence of common but irrelevant words, making it difficult to even quantitatively identify situations in which topics may be dominated by stopwords. We propose an additional topic quality metric that targets the stopword problem, and show that it, unlike the standard measures, correctly correlates with human judgements of quality. We also propose a simple-to-implement strategy for generating topics that are evaluated to be of much higher quality by both human assessment and our new metric. This approach, a collection of informative priors easily introduced into most LDA-style inference methods, automatically promotes terms with domain relevance and demotes domain-specific stop words. We demonstrate this approach’s effectiveness in three very different domains: Department of Labor accident reports, online health forum posts, and NIPS abstracts. Overall we find that current practices thought to solve this problem do not do so adequately, and that our proposal offers a substantial improvement for those interested in interpreting their topics as objects in their own right.",
"title": ""
},
{
"docid": "2ea2c86a3c23ff7238b13b0508a592a1",
"text": "In earlier work we have introduced the “Recursive Sparse Blocks” (RSB) sparse matrix storage scheme oriented towards cache efficient matrix-vector multiplication (SpMV ) and triangular solution (SpSV ) on cache based shared memory parallel computers. Both the transposed (SpMV T ) and symmetric (SymSpMV ) matrix-vector multiply variants are supported. RSB stands for a meta-format: it recursively partitions a rectangular sparse matrix in quadrants; leaf submatrices are stored in an appropriate traditional format — either Compressed Sparse Rows (CSR) or Coordinate (COO). In this work, we compare the performance of our RSB implementation of SpMV, SpMV T, SymSpMV to that of the state-of-the-art Intel Math Kernel Library (MKL) CSR implementation on the recent Intel’s Sandy Bridge processor. Our results with a few dozens of real world large matrices suggest the efficiency of the approach: in all of the cases, RSB’s SymSpMV (and in most cases, SpMV T as well) took less than half of MKL CSR’s time; SpMV ’s advantage was smaller. Furthermore, RSB’s SpMV T is more scalable than MKL’s CSR, in that it performs almost as well as SpMV. Additionally, we include comparisons to the state-of-the art format Compressed Sparse Blocks (CSB) implementation. We observed RSB to be slightly superior to CSB in SpMV T, slightly inferior in SpMV, and better (in most cases by a factor of two or more) in SymSpMV. Although RSB is a non-traditional storage format and thus needs a special constructor, it can be assembled from CSR or any other similar rowordered representation arrays in the time of a few dozens of matrix-vector multiply executions. Thanks to its significant advantage over MKL’s CSR routines for symmetric or transposed matrix-vector multiplication, in most of the observed cases the assembly cost has been observed to amortize with fewer than fifty iterations.",
"title": ""
},
{
"docid": "cc4548925973baa6220ad81082a93c86",
"text": "Usually benefits for transportation investments are analysed within a framework of cost-benefit analysis or its related techniques such as financial analysis, cost-effectiveness analysis, life-cycle costing, economic impact analysis, and others. While these tools are valid techniques in general, their application to intermodal transportation would underestimate the overall economic impact by missing important aspects of productivity enhancement. Intermodal transportation is an example of the so-called general purpose technologies (GPTs) that are characterized by statistically significant spillover effects. Diffusion, secondary innovations, and increased demand for specific human capital are basic features of GPTs. Eventually these features affect major macroeconomic variables, especially productivity. Recent economic literature claims that in order to study GPTs, micro and macro evidence should be combined to establish a better understanding of the connecting mechanisms from the micro level to the overall performance of an economy or the macro level. This study analyses these issues with respect to intermodal transportation. The goal is to understand the basic micro and macro mechanisms behind intermodal transportation in order to further develop a rigorous framework for evaluation of benefits from intermodal transportation. In doing so, lessons from computer simulation of the basic features of intermodal transportation are discussed and conclusions are made regarding an agenda for work in the field. 1 Dr. Yuri V. Yevdokimov, Assistant Professor of Economics and Civil Engineering, University of New Brunswick, Canada, Tel. (506) 447-3221, Fax (506) 453-4514, E-mail: [email protected] Introduction Intermodal transportation can be thought of as a process for transporting freight and passengers by means of a system of interconnected networks, involving various combinations of modes of transportation, in which all of the components are seamlessly linked and efficiently combined. Intermodal transportation is rapidly gaining acceptance as an integral component of the systems approach of conducting business in an increasingly competitive and interdependent global economy. For example, the United States Code with respect to transportation states: AIt is the policy of the United States Government to develop a National Intermodal Transportation System that is economically efficient and environmentally sound, provides the foundation for the United States to compete in the global economy and will move individuals and property in an energy efficient way. The National Intermodal Transportation System shall consist of all forms of transportation in a unified, interconnected manner, including the transportation systems of the future, to reduce energy consumption and air pollution while promoting economic development and supporting the United States= pre-eminent position in international commerce.@ (49 USC, Ch. 55, Sec. 5501, 1998) David Collenette (1997), the Transport Minister of Canada, noted: AWith population growth came development, and the relative advantages and disadvantages of the different modes changed as the transportation system became more advanced.... 
Intermodalism today is about safe, efficient transportation by the most appropriate combination of modes.” (The Summit on North American Intermodal Transportation, 1997) These statements define intermodal transportation as a macroeconomic concept, because an effective transportation system is a vital factor in assuring the efficiency of an economic system as a whole. Moreover, intermodal transportation is an important socio-economic phenomenon which implies that the benefits of intermodal transportation have to be evaluated at the macroeconomic level, or at least at the regional level, involving all elements of the economic system that gain from having a more efficient transportation network in place. Defining Economic Benefits of Intermodal Transportation Traditionally, the benefits of a transportation investment have been primarily evaluated through reduced travel time and reduced vehicle maintenance and operation costs. However, according to Weisbrod and Treyz (1998), such methods underestimate the total benefits of transportation investment by “missing other important aspects of productivity enhancement.” It is so because transportation does not have an intrinsic purpose in itself and is rather intended to enable other economic activities such as production, consumption, leisure, and dissemination of knowledge to take place. Hence, in order to measure total economic benefits of investing in intermodal transportation, it is necessary to understand their basic relationships with different economic activities. Eventually, improvements in transportation reduce transportation costs. The immediate benefit of the reduction is the fall in total cost of production in an economic system under study which results in growth of the system’s output. This conclusion has been known in economic development literature since Tinbergen’s paper in 1957 (Tinbergen, 1957). However, the literature does not explicitly identify why transportation costs will fall. This issue is addressed in this discussion with respect to intermodal transportation. Transportation is a multiple service to multiple users. It is produced in transportation networks that provide infrastructure for economic activities. It appears that transportation networks have economies of scale. As discussed below, intermodal transportation magnifies these scale effects resulting in increasing returns to scale (IRS) of a specific nature. It implies that there are positive externalities that arise because of the scale effects, externalities that can initiate cumulative economic growth at the regional level as well as at the national level (see, for example, Brathen and Hervick, 1997, and Hussain and Westin, 1997). The phenomenon is known as a spill-over effect. Previously the effect has been evaluated through the contribution of transportation infrastructure investment to economic growth. Since Auschauer’s (1989) paper many economists have found evidence of such a contribution (see, for example, Bonaglia and Ferrara, 2000 and Khanam, 1996). Intermodal transportation as it was defined at the very beginning is more than mere improvements in transportation infrastructure. From a theoretical standpoint, it possesses some characteristics of the general-purpose technologies (GPT), and it seems appropriate to regard it as an example of the GPT, which is discussed below. It appears reasonable to study intermodal transportation as a two-way improvement of an economic system’s productivity. 
On the one hand, it improves current operational functions of the system. On the other hand, it expands those functions. Both improvements are achieved by consolidating different transportation systems into a seamless transportation network that utilizes the comparative advantages of different transportation modes. Improvements due to intermodal transportation are associated with the increased productivity of transportation services and a reduction in logistic costs. The former results in an increased volume of transportation per unit cost, while the latter directly reduces costs of commodity production. Expansion of the intermodal transportation network is associated with economies of scale and better accessibility to input and output markets. The overall impact of intermodal transportation can be divided into four elements: (i) an increase in the volume of transportation in an existing transportation network; (ii) a reduction in logistic costs of current operations; (iii) the economies of scale associated with transportation network expansion; (iv) better accessibility to input and output markets. These four elements are discussed below in a sequence. Increase in volume of transportation in the existing network An increase in volume of transportation can lead to economies of density a specific scale effect. The economies of density exist if an increase in the volume of transportation in the network does not require a proportional increase in all inputs of the network. Usually the phenomenon is associated with an increase in the frequency of transportation (traffic) within the existing network (see Boyer, 1998 for a formal definition, Ciccone and Hall, 1996 for general discussion of economies of density, and Fujii, Im and Mak, 1992 for examples of economies of density in transportation). In the case of intermodal transportation, economies of density are achieved through cargo containerization, cargo consolidation and computer-guiding systems at intermodal facilities. Cargo containerization and consolidation result in an increased load factor of transportation vehicles and higher capacity utilization of the transportation fixed facilities, while utilization of computer-guiding systems results in higher labour productivity. For instance, in 1994 Burlington Northern Santa Fe Railway (BNSF) introduced the Alliance Intermodal Facility at Fort Worth, Texas, into its operations between Chicago and Los Angeles. According to OmniTRAX specialists, who operates the facility, BNSF has nearly doubled its volume of throughput at the intermodal facility since 1994. First, containerization of commodities being transported plus hubbing or cargo consolidation at the intermodal facility resulted in longer trains with higher frequency. Second, all day-to-day operations at the intermodal facility are governed by the Optimization Alternatives Strategic Intermodal Scheduler (OASIS) computer system, which allowed BNSF to handle more operations with less labour. Reduction in Logistic Costs Intermodal transportation is characterized by optimal frequency of service and modal choice and increased reliability. Combined, these two features define the just-in-time delivery -a major service produced by intermodal transportation. Furthermore, Blackburn (1991) argues that just-in-time d",
"title": ""
},
{
"docid": "a41bb1fe5670cc865bf540b34848f45f",
"text": "The general idea of discovering knowledge in large amounts of data is both appealing and intuitive. Typically we focus our attention on learning algorithms, which provide the core capability of generalizing from large numbers of small, very specific facts to useful high-level rules; these learning techniques seem to hold the most excitement and perhaps the most substantive scientific content in the knowledge discovery in databases (KDD) enterprise. However, when we engage in real-world discovery tasks, we find that they can be extremely complex, and that induction of rules is only one small part of the overall process. While others have written overviews of \"the concept of KDD, and even provided block diagrams for \"knowledge discovery systems,\" no one has begun to identify all of the building blocks in a realistic KDD process. This is what we attempt to do here. Besides bringing into the discussion several parts of the process that have received inadequate attention in the KDD community, a careful elucidation of the steps in a realistic knowledge discovery process can provide a framework for comparison of different technologies and tools that are almost impossible to compare without a clean model.",
"title": ""
},
{
"docid": "c5ffd6108b05b27172d92ee578437859",
"text": "Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the longterm well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.",
"title": ""
},
{
"docid": "4703b02dc285a55002f15d06d98251e7",
"text": "Nowadays, most Photovoltaic installations are grid connected system. From distribution system point of view, the main point and concern related to PV grid-connected are overvoltage or overcurrent in the distribution network. This paper describes the simulation study which focuses on ferroresonance phenomenon of PV system on lower side of distribution transformer. PSCAD program is selected to simulate the ferroresonance phenomenon in this study. The example of process that creates ferroresonance by the part of PV system and ferroresonance effect will be fully described in detail.",
"title": ""
},
{
"docid": "03764875c88a1480264050b0b0a16437",
"text": "Social media anomaly detection is of critical importance to prevent malicious activities such as bullying, terrorist attack planning, and fraud information dissemination. With the recent popularity of social media, new types of anomalous behaviors arise, causing concerns from various parties. While a large amount of work have been dedicated to traditional anomaly detection problems, we observe a surge of research interests in the new realm of social media anomaly detection. In this paper, we present a survey on existing approaches to address this problem. We focus on the new type of anomalous phenomena in the social media and review the recent developed techniques to detect those special types of anomalies. We provide a general overview of the problem domain, common formulations, existing methodologies and potential directions. With this work, we hope to call out the attention from the research community on this challenging problem and open up new directions that we can contribute in the future.",
"title": ""
},
{
"docid": "bffd230e76ec32eefe70904a9290bf41",
"text": "This paper introduces a new idea in describing people using their first names, i.e., the name assigned at birth. We show that describing people in terms of similarity to a vector of possible first names is a powerful description of facial appearance that can be used for face naming and building facial attribute classifiers. We build models for 100 common first names used in the United States and for each pair, construct a pair wise first-name classifier. These classifiers are built using training images downloaded from the Internet, with no additional user interaction. This gives our approach important advantages in building practical systems that do not require additional human intervention for labeling. We use the scores from each pair wise name classifier as a set of facial attributes. We show several surprising results. Our name attributes predict the correct first names of test faces at rates far greater than chance. The name attributes are applied to gender recognition and to age classification, outperforming state-of-the-art methods with all training images automatically gathered from the Internet.",
"title": ""
},
{
"docid": "31e052aaf959a4c5d6f1f3af6587d6cd",
"text": "We introduce a learning framework called learning using privileged information (LUPI) to the computer vision field. We focus on the prototypical computer vision problem of teaching computers to recognize objects in images. We want the computers to be able to learn faster at the expense of providing extra information during training time. As additional information about the image data, we look at several scenarios that have been studied in computer vision before: attributes, bounding boxes and image tags. The information is privileged as it is available at training time but not at test time. We explore two maximum-margin techniques that are able to make use of this additional source of information, for binary and multiclass object classification. We interpret these methods as learning easiness and hardness of the objects in the privileged space and then transferring this knowledge to train a better classifier in the original space. We provide a thorough analysis and comparison of information transfer from privileged to the original data spaces for both LUPI methods. Our experiments show that incorporating privileged information can improve the classification accuracy. Finally, we conduct user studies to understand which samples are easy and which are hard for human learning, and explore how this information is related to easy and hard samples when learning a classifier.",
"title": ""
},
{
"docid": "fb1092ee4fe5f29394148ae0b134dd08",
"text": "The landscape of online learning has evolved in a synchronous fashion with the development of the every-growing repertoire of technologies, especially with the recent addition of Massive Online Open Courses (MOOCs). Since MOOC platforms allow thousands of students to participate at the same time, MOOC participants can have fairly varied motivation. Meanwhile, a low course completion rate has been observed across different MOOC platforms. The first and initiated stage of the proposed research here is a preliminary attempt to study how different motivational aspects of MOOC learners correlate with course participation and completion, with motivation measured using a survey and participation measured using log analytics. The exploratory stage of the study has been conducted within the context of an educational data mining MOOC, within Coursera. In the long run, research results can be expected to inform future interventions, and the design of MOOCs, as well as increasing understanding of the emergent needs of MOOC learners as data collection extends beyond the current scope by incorporating wider disciplinary areas.",
"title": ""
},
{
"docid": "7e4b20e6fe3030fecbd05d37dc079d63",
"text": "Women's reproductive fertility peaks for a few days in the middle of their cycle around ovulation. Because conception is most likely to occur inside this brief fertile window, evolutionary theories suggest that men possess adaptations designed to maximize their reproductive success by mating with women during their peak period of fertility. In this article, we provide evidence from 3 studies that subtle cues of fertility prime mating motivation in men, thus facilitating psychological and behavioral processes associated with the pursuit of a sexual partner. In Study 1, men exposed to the scent of a woman near peak levels of fertility displayed increased accessibility to sexual concepts. Study 2 demonstrated that, among men who reported being sensitive to odors, scent cues of fertility triggered heightened perceptions of women's sexual arousal. Study 3 revealed that, in a face-to-face interaction, high levels of female fertility were associated with a greater tendency for men to make risky decisions and to behaviorally mimic a female partner. Hence, subtle cues of fertility led to a cascade of mating-related processes-from lower order cognition to overt behavior-that reflected heightened mating motivation. Implications for theories of goal pursuit, romantic attraction, and evolutionary psychology are discussed.",
"title": ""
},
{
"docid": "6090d8c6e8ef8532c5566908baa9a687",
"text": "Cardiovascular diseases (CVD) are known to be the most widespread causes to death. Therefore, detecting earlier signs of cardiac anomalies is of prominent importance to ease the treatment of any cardiac complication or take appropriate actions. Electrocardiogram (ECG) is used by doctors as an important diagnosis tool and in most cases, it's recorded and analyzed at hospital after the appearance of first symptoms or recorded by patients using a device named holter ECG and analyzed afterward by doctors. In fact, there is a lack of systems able to capture ECG and analyze it remotely before the onset of severe symptoms. With the development of wearable sensor devices having wireless transmission capabilities, there is a need to develop real time systems able to accurately analyze ECG and detect cardiac abnormalities. In this paper, we propose a new CVD detection system using Wireless Body Area Networks (WBAN) technology. This system processes the captured ECG using filtering and Undecimated Wavelet Transform (UWT) techniques to remove noises and extract nine main ECG diagnosis parameters, then the system uses a Bayesian Network Classifier model to classify ECG based on its parameters into four different classes: Normal, Premature Atrial Contraction (PAC), Premature Ventricular Contraction (PVC) and Myocardial Infarction (MI). The experimental results on ECGs from real patients databases show that the average detection rate (TPR) is 96.1% for an average false alarm rate (FPR) of 1.3%.",
"title": ""
},
{
"docid": "04cc398c2a95119b4af7e0351d1d798a",
"text": "A 16-year-old boy presented to the Emergency Department having noted the pictured skin markings on his left forearm several hours earlier. He stated that the markings had not been present earlier that afternoon, and had remained unchanged since first noted after track and field practice. There was no history of trauma, ingestions, or any systemic symptoms. The markings were neither tender nor pruritic. His parents denied any family history of malignancy. Physical examination revealed the raised black markings with minimal surrounding erythema, as seen in Figure 1. The rest of the dermatologic and remaining physical examinations were, and remained, unremarkable.",
"title": ""
},
{
"docid": "dc4aba1d336c602b896fbff3e614be39",
"text": "Requirements in computational power have grown dramatically in recent years. This is also the case in many language processing tasks, due to the overwhelming and ever increasing amount of textual information that must be processed in a reasonable time frame. This scenario has led to a paradigm shift in the computing architectures and large-scale data processing strategies used in the Natural Language Processing field. In this paper we present a new distributed architecture and technology for scaling up text analysis running a complete chain of linguistic processors on several virtual machines. Furthermore, we also describe a series of experiments carried out with the goal of analyzing the scaling capabilities of the language processing pipeline used in this setting. We explore the use of Storm in a new approach for scalable distributed language processing across multiple machines and evaluate its effectiveness and efficiency when processing documents on a medium and large scale. The experiments have shown that there is a big room for improvement regarding language processing performance when adopting parallel architectures, and that we might expect even better results with the use of large clusters with many processing",
"title": ""
},
{
"docid": "92c72aa180d3dccd5fcc5504832780e9",
"text": "The site of S1-S2 root activation following percutaneous high-voltage electrical (ES) and magnetic stimulation were located by analyzing the variations of the time interval from M to H soleus responses elicited by moving the stimulus point from lumbar to low thoracic levels. ES was effective in activating S1-S2 roots at their origin. However supramaximal motor root stimulation required a dorsoventral montage, the anode being a large, circular surface electrode placed ventrally, midline between the apex of the xiphoid process and the umbilicus. Responses to magnetic stimuli always resulted from the activation of a fraction of the fiber pool, sometimes limited to the low-thresholds afferent component, near its exit from the intervertebral foramina, or even more distally. Normal values for conduction velocity in motor and 1a afferent fibers in the proximal nerve tract are provided.",
"title": ""
},
{
"docid": "69b909b2aaa2d79b71c1fb4c4ac15724",
"text": "Chronic musculoskeletal pain (CMP) is one of the main reasons for referral to a pediatric rheumatologist and is the third most common cause of chronic pain in children and adolescents. Causes of CMP include amplified musculoskeletal pain, benign limb pain of childhood, hypermobility, overuse syndromes, and back pain. CMP can negatively affect physical, social, academic, and psychological function so it is essential that clinicians know how to diagnose and treat these conditions. This article provides an overview of the epidemiology and impact of CMP, the steps in a comprehensive pain assessment, and the management of the most common CMPs.",
"title": ""
},
{
"docid": "0d82a64bdcc3ca4c0522ca7c945b1d55",
"text": "Thin sheets have long been known to experience an increase in stiffness when they are bent, buckled, or assembled into smaller interlocking structures. We introduce a unique orientation for coupling rigidly foldable origami tubes in a \"zipper\" fashion that substantially increases the system stiffness and permits only one flexible deformation mode through which the structure can deploy. The flexible deployment of the tubular structures is permitted by localized bending of the origami along prescribed fold lines. All other deformation modes, such as global bending and twisting of the structural system, are substantially stiffer because the tubular assemblages are overconstrained and the thin sheets become engaged in tension and compression. The zipper-coupled tubes yield an unusually large eigenvalue bandgap that represents the unique difference in stiffness between deformation modes. Furthermore, we couple compatible origami tubes into a variety of cellular assemblages that can enhance mechanical characteristics and geometric versatility, leading to a potential design paradigm for structures and metamaterials that can be deployed, stiffened, and tuned. The enhanced mechanical properties, versatility, and adaptivity of these thin sheet systems can provide practical solutions of varying geometric scales in science and engineering.",
"title": ""
},
{
"docid": "b9065d678b3a9aab8d9f98d7367ad7bb",
"text": "Ms. Pac-Man is a challenging, classic arcade game that provides an interesting platform for Artificial Intelligence (AI) research. This paper reports the first Monte-Carlo approach to develop a ghost avoidance module of an intelligent agent that plays the game. Our experimental results show that the look-ahead ability of Monte-Carlo simulation often prevents Ms. Pac-Man being trapped by ghosts and reduces the chance of losing Ms. Pac-Man's life significantly. Our intelligent agent has achieved a high score of around 21,000. It is sometimes capable of clearing the first three stages and playing at the level of a novice human player.",
"title": ""
},
{
"docid": "6b27ae277c5ec0fb74d89a13dbba473d",
"text": "This article surveys recent work in active learning aimed at making it more practical for real-world use. In general, active learning systems aim to make machine learning more economical, since they can participate in the acquisition of their own training data. An active learner might iteratively select informative query instances to be labeled by an oracle, for example. Work over the last two decades has shown that such approaches are effective at maintaining accuracy while reducing training set size in many machine learning applications. However, as we begin to deploy active learning in real ongoing learning systems and data annotation projects, we are encountering unexpected problems—due in part to practical realities that violate the basic assumptions of earlier foundational work. I review some of these issues, and discuss recent work being done to address the challenges.",
"title": ""
}
] |
scidocsrr
|
e49fe7b4aa3e5e380870566bc84d5d51
|
A Survey of Cloudlet Based Mobile Computing
|
[
{
"docid": "e3b91b1133a09d7c57947e2cd85a17c7",
"text": "Although mobile devices are gaining more and more capabilities (i.e. CPU power, memory, connectivity, ...), they still fall short to execute complex rich media and data analysis applications. Offloading to the cloud is not always a solution, because of the high WAN latencies, especially for applications with real-time constraints such as augmented reality. Therefore the cloud has to be moved closer to the mobile user in the form of cloudlets. Instead of moving a complete virtual machine from the cloud to the cloudlet, we propose a more fine grained cloudlet concept that manages applications on a component level. Cloudlets do not have to be fixed infrastructure close to the wireless access point, but can be formed in a dynamic way with any device in the LAN network with available resources. We present a cloudlet architecture together with a prototype implementation, showing the advantages and capabilities for a mobile real-time augmented reality application.",
"title": ""
}
] |
[
{
"docid": "7ca863355d1fb9e4954c360c810ece53",
"text": "The detection of community structure is a widely accepted means of investigating the principles governing biological systems. Recent efforts are exploring ways in which multiple data sources can be integrated to generate a more comprehensive model of cellular interactions, leading to the detection of more biologically relevant communities. In this work, we propose a mathematical programming model to cluster multiplex biological networks, i.e. multiple network slices, each with a different interaction type, to determine a single representative partition of composite communities. Our method, known as SimMod, is evaluated through its application to yeast networks of physical, genetic and co-expression interactions. A comparative analysis involving partitions of the individual networks, partitions of aggregated networks and partitions generated by similar methods from the literature highlights the ability of SimMod to identify functionally enriched modules. It is further shown that SimMod offers enhanced results when compared to existing approaches without the need to train on known cellular interactions.",
"title": ""
},
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "dd11d7291d8f0ee2313b74dc5498acfa",
"text": "Going further At this point, the theorem is proved. While for every summarizer σ there exists at least one tuple (θ,O), in practice there exist multiple tuples, and the one proposed by the proof would not be useful to rank models of summary quality. We can formulate an algorithm which constructs θ from σ and which yields an ordering of candidate summaries. Let σD\\{s1,...,sn} be the summarizer σ which still uses D as initial document collection, but which is not allowed to output sentences from {s1, . . . , sn} in the final summary. For a given summary S to score, let Rσ,S be the smallest set of sentences {s1, . . . , sn} that one has to remove fromD such that σD\\R outputs S. Then the definition of θσ follows:",
"title": ""
},
{
"docid": "8ae1ef032c0a949aa31b3ca8bc024cb5",
"text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital",
"title": ""
},
{
"docid": "aba1bbd9163e5f9d16ef2d98d16ce1c2",
"text": "The basic reproduction number (0) is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of (0) where finitely many different categories of individuals are recognized. We clear up confusion that has been around in the literature concerning the construction of this matrix, specifically for the most frequently used so-called compartmental models. We present a detailed easy recipe for the construction of the NGM from basic ingredients derived directly from the specifications of the model. We show that two related matrices exist which we define to be the NGM with large domain and the NGM with small domain. The three matrices together reflect the range of possibilities encountered in the literature for the characterization of (0). We show how they are connected and how their construction follows from the basic model ingredients, and establish that they have the same non-zero eigenvalues, the largest of which is the basic reproduction number (0). Although we present formal recipes based on linear algebra, we encourage the construction of the NGM by way of direct epidemiological reasoning, using the clear interpretation of the elements of the NGM and of the model ingredients. We present a selection of examples as a practical guide to our methods. In the appendix we present an elementary but complete proof that (0) defined as the dominant eigenvalue of the NGM for compartmental systems and the Malthusian parameter r, the real-time exponential growth rate in the early phase of an outbreak, are connected by the properties that (0) > 1 if and only if r > 0, and (0) = 1 if and only if r = 0.",
"title": ""
},
{
"docid": "1e4daa242bfee88914b084a1feb43212",
"text": "In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.",
"title": ""
},
{
"docid": "a934474bb38e37e8246ff561efd74bd3",
"text": "While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings that constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games",
"title": ""
},
{
"docid": "bf784d447f523c89e4863edffb334c8b",
"text": "We investigate the use of a nonlinear control allocation scheme for automotive vehicles. Such a scheme is useful in e.g. yaw or roll stabilization of the vehicle. The control allocation allows a modularization of the control task, such that a higher level control system specifies a desired moment to work on the vehicle, while the control allocation distributes this moment among the individual wheels by commanding appropriate wheel slips. The control allocation problem is defined as a nonlinear optimization problem, to which an explicit piecewise linear approximate solution function is computed off-line. Such a solution function can computationally efficiently be implemented in real time with at most a few hundred arithmetic operations per sample. Yaw stabilization of the vehicle yaw dynamics is used as an example of use of the control allocation. Simulations show that the controller stabilizes the vehicle in an extreme manoeuvre where the vehicle yaw dynamics otherwise becomes unstable.",
"title": ""
},
{
"docid": "bb05c05cb57dbc22afeceaa13a651630",
"text": "In this letter, a broadband and compact phase shifter using omega particles is designed. Bandwidth of the 90 <sup>°</sup> and 45 <sup>°</sup> versions of the designed phase shifter are around 55% with the accuracy of 3 <sup>°</sup> and 60% with the accuracy of 2.5 <sup>°</sup>, respectively. The proposed phase shifter has compact size compared with previously published SIW based phase shifter designs. A prototype of the proposed 90 <sup>°</sup> phase shifter is fabricated and comparison of the measured and simulated results is provided.",
"title": ""
},
{
"docid": "e6bca434e626f770ecab60d022abc2ad",
"text": "This paper presents and investigates Clustered Shading for deferred and forward rendering. In Clustered Shading, view samples with similar properties (e.g. 3D-position and/or normal) are grouped into clusters. This is comparable to tiled shading, where view samples are grouped into tiles based on 2D-position only. We show that Clustered Shading creates a better mapping of light sources to view samples than tiled shading, resulting in a significant reduction of lighting computations during shading. Additionally, Clustered Shading enables using normal information to perform per-cluster back-face culling of lights, again reducing the number of lighting computations. We also show that Clustered Shading not only outperforms tiled shading in many scenes, but also exhibits better worst case behaviour under tricky conditions (e.g. when looking at high-frequency geometry with large discontinuities in depth). Additionally, Clustered Shading enables real-time scenes with two to three orders of magnitudes more lights than previously feasible (up to around one million light sources).",
"title": ""
},
{
"docid": "343ed18e56e6f562fa509710e4cf8dc6",
"text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.",
"title": ""
},
{
"docid": "eedd8784dcda161ef993e67b0ac190f8",
"text": "3 origami (˛ orig¯ a·mi) The Japanese art of making elegant designs using folds in all kinds of paper. One style of functional programming is based purely on recursive equations. Such equations are easy to explain, and adequate for any computational purpose , but hard to use well as programs get bigger and more complicated. In a sense, recursive equations are the 'assembly language' of functional programming , and direct recursion the goto. As computer scientists discovered in the 1960s with structured programming, it is better to identify common patterns of use of such too-powerful tools, and capture these patterns as new constructions and abstractions. In functional programming, in contrast to imperative programming, we can often express the new constructions as higher-order operations within the language, whereas the move from un-structured to structured programming entailed the development of new languages. There are advantages in expressing programs as instances of common patterns, rather than from first principles — the same advantages as for any kind of abstraction. Essentially, one can discover general properties of the abstraction once and for all, and infer those properties of the specific instances for free. These properties may be theorems, design idioms, implementations, optimisations, and so on. In this chapter we will look at folds and unfolds as abstractions. In a precise technical sense, folds and unfolds are the natural patterns of computation over recursive datatypes; unfolds generate data structures and folds consume them. Functional programmers are very familiar with the foldr function on lists, and its directional dual foldl; they are gradually coming to terms with the generalisation to folds on other datatypes (IFPH §3.3, §6.1.3, §6.4). The",
"title": ""
},
{
"docid": "d07a75f66e8fc53cf91904aadd0585c7",
"text": "Hashing techniques have been intensively investigated for large scale vision applications. Recent research has shown that leveraging supervised information can lead to high quality hashing. However, most existing supervised hashing methods only construct similarity-preserving hash codes. Observing that semantic structures carry complementary information, we propose the idea of cotraining for hashing, by jointly learning projections from image representations to hash codes and classification. Specifically, a novel deep semanticpreserving and ranking-based hashing (DSRH) architecture is presented, which consists of three components: a deep CNN for learning image representations, a hash stream of a binary mapping layer by evenly dividing the learnt representations into multiple bags and encoding each bag into one hash bit, and a classification stream. Meanwhile, our model is learnt under two constraints at the top loss layer of hash stream: a triplet ranking loss and orthogonality constraint. The former aims to preserve the relative similarity ordering in the triplets, while the latter makes different hash bit as independent as possible. We have conducted experiments on CIFAR-10 and NUS-WIDE image benchmarks, demonstrating that our approach can provide superior image search accuracy than other state-of-theart hashing techniques.",
"title": ""
},
{
"docid": "5ed744299cb2921bcb42f57cf1809f69",
"text": "Credit risk prediction models seek to predict quality factors such as whether an individual will default (bad applicant) on a loan or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensemble individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. This paper explores the predicted behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy could be improved by using classifier ensembles. Benchmarking results on four credit datasets and comparison with the performance of each individual classifier on predictive accuracy at various attribute noise levels are presented. The experimental evaluation shows that the ensemble of classifiers technique has the potential to improve prediction accuracy. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f21b0f519f4bf46cb61b2dc2861014df",
"text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports are Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.",
"title": ""
},
{
"docid": "05b1a669fe426a4ec29b821800727432",
"text": "In this paper we suggest a user-subjective approach to Personal Information Management (PIM) system design. This approach advocates that PIM systems relate to the subjective value-added attributes that the user gives to the data stored in the PIM system. These attributes should facilitate system use: help the user find the information item again, recall it when needed and use it effectively in the next interaction with the item. Driven from the user-subjective approach are three generic principles which are described and discussed: (a) The subjective classification principle stating that all information items related to the same subjective topic should be classified together regardless of their technological format; (b) The subjective importance principle proposing that the subjective importance of information should determine its degree of visual salience and accessibility; and (c) The subjective context principle suggesting that information should be retrieved and viewed by the user in the same context in which it was previously used. We claim that these principles are only sporadically implemented in operating systems currently available on personal computers, and demonstrate alternatives for interface design. USER-SUBJECTIVE APPROACH TO PIM SYSTEMS 3",
"title": ""
},
{
"docid": "9d4b97f66055979079940b267257758f",
"text": "A model that predicts the static friction for elastic-plastic contact of rough surface presented. The model incorporates the results of accurate finite element analyses elastic-plastic contact, adhesion and sliding inception of a single asperity in a statis representation of surface roughness. The model shows strong effect of the externa and nominal contact area on the static friction coefficient in contrast to the classical of friction. It also shows that the main dimensionless parameters affecting the s friction coefficient are the plasticity index and adhesion parameter. The effect of adh on the static friction is discussed and found to be negligible at plasticity index va larger than 2. It is shown that the classical laws of friction are a limiting case of present more general solution and are adequate only for high plasticity index and n gible adhesion. Some potential limitations of the present model are also discussed ing to possible improvements. A comparison of the present results with those obt from an approximate CEB friction model shows substantial differences, with the l severely underestimating the static friction coefficient. @DOI: 10.1115/1.1609488 #",
"title": ""
},
{
"docid": "6a616f2aaa08ecf57236510cda926cad",
"text": "While much work has focused on the design of actuators for inputting energy into robotic systems, less work has been dedicated to devices that remove energy in a controlled manner, especially in the field of soft robotics. Such devices have the potential to significantly modulate the dynamics of a system with very low required input power. In this letter, we leverage the concept of layer jamming, previously used for variable stiffness devices, to create a controllable, high force density, soft layer jamming brake (SLJB). We introduce the design, modeling, and performance analysis of the SLJB and demonstrate variable tensile resisting forces through the regulation of vacuum pressure. Further, we measure and model the tensile force with respect to different layer materials, vacuum pressures, and lengthening velocities, and show its ability to absorb energy during collisions. We hope to apply the SLJB in a number of applications in wearable technology.",
"title": ""
},
{
"docid": "124f9f9764a05047fca3f8d956dc5d48",
"text": "There is no doubt to say that researchers have made significant contributions by developing numerous tools and techniques of various Requirements Engineering (RE) processes but at the same time, the field still demands further research to come up with the novel solutions for many ongoing issues. Some of the key challenges in RE may be the issues in describing the system limit, issues in understanding among the different groups affected by the improvement of a given system, and challenges in dealing with the explosive nature of requirements. These challenges may lead to poor requirements and the termination of system progress, or else the disappointing or inefficient result of a system, which increases high maintenance costs or suffers from frequent changes. RE can be decomposed into various sub-phases: requirements elicitation, specification, documentation and validation. Through proper requirements elicitation, RE process can be upgraded, resulting in enriched system requirements and possibly a much better system. Keeping in view the importance of the area, major elicitation techniques have already been identified in one of our previous papers. This paper is an extension of our previous work and here, an attempt is made to identify and describe the recurring issues and challenges in various requirements elicitation techniques.",
"title": ""
}
] |
scidocsrr
|
c8c44ec46585285a00a3b9a15a2771fb
|
Faceted Wikipedia Search
|
[
{
"docid": "ee95ad7e7243607b56e92b6cb4228288",
"text": "We have developed an innovative search interface that allows non-expert users to move through large information spaces in a flexible manner without feeling lost. The design goal was to offer users a “browsing the shelves” experience seamlessly integrated with focused search. Key to achieving our goal is the explicit exposure of hierarchical faceted metadata in a manner that is intuitive and inviting to users. After several iterations of design and testing, the usability results are strikingly positive. We believe our approach marks a major step forward in search user interfaces and can serve as a model for web-based collections of up to 100,000 items. Topics: Search User Interfaces, Faceted Metadata INTRODUCTION Although general Web search is steadily improving [30], studies show that search is still the primary usability problem in web site design. A recent report by Vividence Research analyzing 69 web sites found that the most common usability problem was poorly organized search results, affecting 53% of sites studied. The second most common problem was poor information architecture, affecting 32% of sites [27]. Studies of search behavior reveal that good search involves both broadening and narrowing of the query, appropriate selection of terminology, and the ability to modify the query [31]. Still others show that users often express a concern about online search systems since they do not allow a “browsing the shelves” experience afforded by physical libraries [6] and that users like wellstructured hyperlinks but often feel lost when navigating through complex sites [23]. Our goals are to support search usability guidelines [28], while avoiding negative consequences like empty result sets or feelings of being lost. We are especially interested in large collections of similar-style items (such as product catalog sites, sites consisting of collections of images, or text documents on a topic such as medicine or law). Our approach is to follow iterative design practices from the field of human-computer interaction [29], meaning that we first assess the behavior of the target users, then prototype a system, then assess that system with target users, learn from and adjust to the problems found, and repeat until a successful interface is produced. We have applied this method to the problem of creating an information architecture that seamlessly integrates navigation and free-text search into one interface. This system builds on earlier work that shows the importance of query previews [25] for indicating next choices (thus allowing the user to use recognition over recall) and avoiding empty result sets. The approach makes use of faceted hierarchical metadata (described below) as the basis for a navigation structure showing next choices, providing alternative views, and permitting refinement and expansion in new directions, while at the same time maintaining a consistent representation of the collection’s structure [14]. This use of metadata is integrated with free-text search, allowing the user to follow links, then add search terms, then follow more links, without interrupting the interaction flow. Our most recent usability studies show strong, positive results along most measured variables. An added advantage of this framework is that it can be built using off-the-shelf database technology, and it allows the contents of the collection to be changed without requiring the web site maintainer to change the system or the interface. 
For these reasons, we believe these results should influence the design of information architecture of information-centric web sites. In the following sections we define the metadata-based terminology, describe the interface framework as applied to a collection of architectural images, report the results of usability studies, discuss related work, and discuss the implications of these results. Submitted for Publication METADATA Content-oriented category metadata has become more prevalent in the last few years, and many people are interested in standards for describing content in various fields (e.g., Dublin Core and the Semantic Web). Web directories such as Yahoo and the Open Directory Project are familiar examples of the use of metadata for navigation structures. Web search engines have begun to interleave search hits on category labels with other search results. Many individual collections already have rich metadata assigned to their contents; for example, biomedical journal articles have on average a dozen or more content attributes attached to them. Metadata for organizing content collections can be classified along several dimensions: • The metadata may be faceted, that is, composed of orthogonal sets of categories. For example, in the domain of architectural images, some possible facets might be Materials (concrete, brick, wood, etc.), Styles (Baroque, Gothic, Ming, etc.), View Types, People (architects, artists, developers, etc.), Locations, Periods, and so on. • The metadata (or an individual facet) may be hierarchical (“located in Berkeley, California, United States”) or flat (“by Ansel Adams”). • The metadata (or an individual facet) may be singlevalued or multi-valued. That is, the data may be constrained so that at most one value can be assigned to an item (“measures 36 cm tall”) or it may allow multiple values to be assigned to an item (“uses oil paint, ink, and watercolor”). We note that there are a number of issues associated with creation of metadata itself which we are not addressing here. The most pressing problem is how to decide which descriptors are correct or at least most appropriate for a collection of information. Another problem relates to how to assign metadata descriptors to items that currently do not have metadata assigned. We will not be addressing these issues, in part because many other researchers already are, and because the fact remains that there are many existing, important collections whose contents have hierarchical metadata already assigned. RECIPE USABILITY STUDY We are particularly concerned with supporting non-professional searchers in rich information seeking tasks. Specifically we aim to answer the following questions: do users like and understand flexible organizations of metadata from different hierarchies? Are faceted hierarchies preferable to single hierarchies? Do people prefer to follow category-based hyperlinks or do they prefer to issue a keyword-based query and sort through results listings? 1http://dublincore.org, http://www.w3.org/2001/sw 2http://www.yahoo.com, http://dmoz.org Figure 1: The opening page for both interfaces shows a text search box and the first level of metadata terms. Hovering over a facet name yields a tooltip (here shown below Locations) explaining the meaning of the facet. Before developing our system, we tested the idea of using hierarchical faceted metadata on an existing interface that exemplified some of our design goals. 
This preliminary study was conducted using a commercial recipe web site called Epicurious containing five flat facets, 93 metadata terms, and approximately 13,000 recipes. We compared the three available search interfaces:(1) Simple keyword search, with unsorted results list (2) Enhanced search form that exposes metadata using checkboxes and drop-down lists, with unsorted results list. (3) Browse interface that allows user to navigate through the collection, implicitly building up a query consisting of an AND across facets; Selecting a category within a facet (e.g., Pasta within Main Ingredient) narrows results set, and users are shown query previews at every step. In the interests of space, we can only provide a brief summary of this small (9 participant) study: All the participants who liked the site (7 out of 9) said they were likely to use the browse interface again. Only 4 said this about enhanced search and 0 said this about simple search. Participants especially liked the browse interface for open-ended tasks such as “plan a dinner party.” We took this as encouraging support for the faceted metadata approach. However, the recipe browse facility is lacking in several ways. Free-text search is not integrated with metadata browse, the collection and metadata are of only moderate size, and the metadata is organized into flat (non-hierarchical) facets. Finally users are only allowed to refine queries, they cannot broaden 3http://eat.epicurious.com/recipes/browse home/",
"title": ""
}
] |
[
{
"docid": "bd2adf12f6d6bd0c50b7fa6fceb7f568",
"text": "The lack of a common benchmark for the evaluation of the gaze estimation task from RGB and RGB-D data is a serious limitation for distinguishing the advantages and disadvantages of the many proposed algorithms found in the literature. This paper intends to overcome this limitation by introducing a novel database along with a common framework for the training and evaluation of gaze estimation approaches. In particular, we have designed this database to enable the evaluation of the robustness of algorithms with respect to the main challenges associated to this task: i) Head pose variations; ii) Person variation; iii) Changes in ambient and sensing conditions and iv) Types of target: screen or 3D object.",
"title": ""
},
{
"docid": "dd37e97635b0ded2751d64cafcaa1aa4",
"text": "DEVICES, AND STRUCTURES By S.E. Lyshevshi, CRC Press, 2002. This book is the first of the CRC Press “Nanoand Microscience, Engineering, Technology, and Medicine Series,” of which the author of this book is also the editor. This book could be a textbook of a semester course on microelectro mechanical systems (MEMS) and nanoelectromechanical systems (NEMS). The objective is to cover the topic from basic theory to the design and development of structures of practical devices and systems. The idea of MEMS and NEMS is to utilize and further extend the technology of integrated circuits (VLSI) to nanometer structures of mechanical and biological devices for potential applications in molecular biology and medicine. MEMS and NEMS (nanotechnology) are hot topics in the future development of electronics. The interest is not limited to electrical engineers. In fact, many scientists and researchers are interested in developing MEMS and NEMS for biological and medical applications. Thus, this field has attracted researchers from many different fields. Many new books are coming out. This book seems to be the first one aimed to be a textbook for this field, but it is very hard to write a book for readers with such different backgrounds. The author of this book has emphasized computer modeling, mostly due to his research interest in this field. It would be good to provide coverage on biological and medical MEMS, for example, by reporting a few gen or DNA-related cases. Furthermore, the mathematical modeling in term of a large number of nonlinear coupled differential equations, as used in many places in the book, does not appear to have any practical value to the actual physical structures.",
"title": ""
},
{
"docid": "70fa03bcd9c5eec86050052ea77d30fd",
"text": "The importance of SMEs SMEs (small and medium-sized enterprises) account for 60 to 70 per cent of jobs in most OECD countries, with a particularly large share in Italy and Japan, and a relatively smaller share in the United States. Throughout they also account for a disproportionately large share of new jobs, especially in those countries which have displayed a strong employment record, including the United States and the Netherlands. Some evidence points also to the importance of age, rather than size, in job creation: young firms generate more than their share of employment. However, less than one-half of start-ups survive for more than five years and only a fraction develop into the high-growth firms which make important contributions to job creation. High job turnover poses problems for employment security; and small establishments are often exempt from giving notice to their employees. Small firms also tend to invest less in training and rely relatively more on external recruitment for raising competence. The demand for reliable, relevant and internationally comparable data on SMEs is on the rise, and statistical offices have started to expand their collection and publication of data. International comparability is still weak, however, due to divergent size-class definitions and sector classifications. To enable useful policy analysis, OECD governments need to improve their build-up of data, without creating additional obstacles for firms through the burden of excessive paper work. The greater variance in profitability, survival and growth of SMEs compared to larger firms accounts for special problems in financing. SMEs generally tend to be confronted with higher interest rates, as well as credit rationing due to shortage of collateral. The issues that arise in financing differ considerably between existing and new firms, as well as between those which grow slowly and those that grow rapidly. The expansion of private equity markets, including informal markets, has greatly improved the access to venture capital for start-ups and SMEs, but considerable differences remain among countries. Regulatory burdens remain a major obstacle for SMEs as these firms tend to be poorly equipped to deal with the problems arising from regulations. Access to information about regulations should be made available to SMEs at minimum cost. Policy makers must ensure that the compliance procedures associated with, e.g. R&D and new technologies, are not unnecessarily costly, complex or lengthy. Transparency is of particular importance to SMEs, and information technology has great potential to narrow the information …",
"title": ""
},
{
"docid": "9263fd7d4846157332322697a482a68d",
"text": "Mental fatigue is a psychobiological state caused by prolonged periods of demanding cognitive activity. Although the impact of mental fatigue on cognitive and skilled performance is well known, its effect on physical performance has not been thoroughly investigated. In this randomized crossover study, 16 subjects cycled to exhaustion at 80% of their peak power output after 90 min of a demanding cognitive task (mental fatigue) or 90 min of watching emotionally neutral documentaries (control). After experimental treatment, a mood questionnaire revealed a state of mental fatigue (P = 0.005) that significantly reduced time to exhaustion (640 +/- 316 s) compared with the control condition (754 +/- 339 s) (P = 0.003). This negative effect was not mediated by cardiorespiratory and musculoenergetic factors as physiological responses to intense exercise remained largely unaffected. Self-reported success and intrinsic motivation related to the physical task were also unaffected by prior cognitive activity. However, mentally fatigued subjects rated perception of effort during exercise to be significantly higher compared with the control condition (P = 0.007). As ratings of perceived exertion increased similarly over time in both conditions (P < 0.001), mentally fatigued subjects reached their maximal level of perceived exertion and disengaged from the physical task earlier than in the control condition. In conclusion, our study provides experimental evidence that mental fatigue limits exercise tolerance in humans through higher perception of effort rather than cardiorespiratory and musculoenergetic mechanisms. Future research in this area should investigate the common neurocognitive resources shared by physical and mental activity.",
"title": ""
},
{
"docid": "dbe5561dc992bab2b3fbebca5412fd39",
"text": "Detox diets are popular dieting strategies that claim to facilitate toxin elimination and weight loss, thereby promoting health and well-being. The present review examines whether detox diets are necessary, what they involve, whether they are effective and whether they present any dangers. Although the detox industry is booming, there is very little clinical evidence to support the use of these diets. A handful of clinical studies have shown that commercial detox diets enhance liver detoxification and eliminate persistent organic pollutants from the body, although these studies are hampered by flawed methodologies and small sample sizes. There is preliminary evidence to suggest that certain foods such as coriander, nori and olestra have detoxification properties, although the majority of these studies have been performed in animals. To the best of our knowledge, no randomised controlled trials have been conducted to assess the effectiveness of commercial detox diets in humans. This is an area that deserves attention so that consumers can be informed of the potential benefits and risks of detox programmes.",
"title": ""
},
{
"docid": "695af0109c538ca04acff8600d6604d4",
"text": "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.",
"title": ""
},
{
"docid": "c018a5cb5e89ee697f20d634ea360954",
"text": "A comprehensive approach to the design of a stripline for EMC testing is given in this paper. The authors attention has been focused on the design items that are most crucial by the achievement of satisfactory value of the VSWR and the impedance matching at the feeding ports in the extended frequency range from 80 MHz to 1000 GHz. For this purpose, the Vivaldi-structure and other advanced structures were considered. The theoretical approach based on numerical simulations lead to conclusions which have been applied by the physical design and also evaluated by experimental results.",
"title": ""
},
{
"docid": "c8482ed26ba2c4ba1bd3eed6ac0e00b4",
"text": "Virtual Reality (VR) has now emerged as a promising tool in many domains of therapy and rehabilitation (Rizzo, Schultheis, Kerns & Mateer, 2004; Weiss & Jessel, 1998; Zimand, Anderson, Gershon, Graap, Hodges, & Rothbaum, 2002; Glantz, Rizzo & Graap, 2003). Continuing advances in VR technology along with concomitant system cost reductions have supported the development of more usable, useful, and accessible VR systems that can uniquely target a wide range of physical, psychological, and cognitive rehabilitation concerns and research questions. What makes VR application development in the therapy and rehabilitation sciences so distinctively important is that it represents more than a simple linear extension of existing computer technology for human use. VR offers the potential to create systematic human testing, training and treatment environments that allow for the precise control of complex dynamic 3D stimulus presentations, within which sophisticated interaction, behavioral tracking and performance recording is possible. Much like an aircraft simulator serves to test and train piloting ability, virtual environments (VEs) can be developed to present simulations that assess and rehabilitate human functional performance under a range of stimulus conditions that are not easily deliverable and controllable in the real world. When combining these assets within the context of functionally relevant, ecologically enhanced VEs, a fundamental advancement could emerge in how human functioning can be addressed in many rehabilitation disciplines.",
"title": ""
},
{
"docid": "9ed5fdb991edd5de57ffa7f13121f047",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 5 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "41261cf72d8ee3bca4b05978b07c1c4f",
"text": "The association of Sturge-Weber syndrome with naevus of Ota is an infrequently reported phenomenon and there are only four previously described cases in the literature. In this paper we briefly review the literature regarding the coexistence of vascular and pigmentary naevi and present an additional patient with the association of the Sturge-Weber syndrome and naevus of Ota.",
"title": ""
},
{
"docid": "5d5a103852019f1de8455e4d13c0e82a",
"text": "INTRODUCTION The cryptocurrency market has evolved erratically and at unprecedented speed over the course of its short lifespan. Since the release of the pioneer anarchic cryptocurrency, Bitcoin, to the public in January 2009, more than 550 cryptocurrencies have been developed, the majority with only a modicum of success [1]. Research on the industry is still scarce. The majority of it is singularly focused on Bitcoin rather than a more diverse spread of cryptocurrencies and is steadily being outpaced by fluid industry developments, including new coins, technological progression, and increasing government regulation of the markets. Though the fluidity of the industry does, admittedly, present a challenge to research, a thorough evaluation of the cryptocurrency industry writ large is necessary. This paper seeks to provide a concise yet comprehensive analysis of the cryptocurrency industry with particular analysis of Bitcoin, the first decentralized cryptocurrency. Particular attention will be given to examining theoretical economic differences between existing coins. Section 1 of this paper provides an overview of the industry. Section 1.1 provides a brief history of digital currencies, which segues into a discussion of Bitcoin in section 1.2. Section 2 of this paper provides an in-depth analysis of coin economics, partitioning the major currencies by their network security protocol mechanisms, and discussing the long-term theoretical implications that these classes entail. Section 2.1 will discuss network security protocol. The mechanisms will be discussed in the order that follows. Section 2.2 will discuss the proof-of-work (PoW) mechanism used in the Bitcoin protocol and various altcoins. Section 2.3 will discuss the proof-of-stake (PoS) protocol scheme first introduced by Peercoin in 2011, which relies on a less energy intensive security mechanism than PoW. Section 2.4 will discuss a hybrid PoW/PoS mechanism. Section 2.5 will discuss the Byzantine Consensus mechanism. Section 2.6 presents the results of a systematic review of 21 cryptocurrencies. Section 3 provides an overview of factors affecting industry growth, focusing heavily on the regulatory environment in section 3.1. Section 3.2 discusses public perception and acceptance of cryptocurrency as a payment system in the current retail environment. Section 4 concludes the analysis. A note on sources: Because the cryptocurrency industry is still young and factors that impact it are changing on a daily basis, few comprehensive or fully updated academic sources exist on the topic. While academic work was of course consulted for this project, the majority of the information that informs this paper was derived from …",
"title": ""
},
{
"docid": "fec2b6b7cdef1ddf88dffd674fe7111a",
"text": "This paper introduces Dex, a reinforcement learning environment toolkit specialized for training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present the novel continual learning method of incremental learning, where a challenging environment is solved using optimal weight initialization learned from first solving a similar easier environment. We show that incremental learning can produce vastly superior results than standard methods by providing a strong baseline method across ten Dex environments. We finally develop a saliency method for qualitative analysis of reinforcement learning, which shows the impact incremental learning has on network attention.",
"title": ""
},
{
"docid": "4a26443fd7e16c7af86bcf07c6ba39ca",
"text": "This study proposes representative figures of merit for circadian and vision performance for healthy and efficient use of smartphone displays. The recently developed figures of merit for circadian luminous efficacy of radiation (CER) and circadian illuminance (CIL) related to human health and circadian rhythm were measured to compare three kinds of commercial smartphone displays. The CIL values for social network service (SNS) messenger screens from all three displays were higher than 41.3 biolux (blx) in a dark room at night, and the highest CIL value reached 50.9 blx. These CIL values corresponded to melatonin suppression values (MSVs) of 7.3% and 11.4%, respectively. Moreover, smartphone use in a bright room at night had much higher CIL and MSV values (58.7 ~ 105.2 blx and 15.4 ~ 36.1%, respectively). This study also analyzed the nonvisual and visual optical properties of the three smartphone displays while varying the distance between the screen and eye and controlling the brightness setting. Finally, a method to possibly attenuate the unhealthy effects of smartphone displays was proposed and investigated by decreasing the emitting wavelength of blue LEDs in a smartphone LCD backlight and subsequently reducing the circadian effect of the display.",
"title": ""
},
{
"docid": "165fcc5242321f6fed9c353cc12216ff",
"text": "Fingerprint alteration represents one of the newest challenges in biometric identification. The aim of fingerprint mutilation is to destroy the structure of the papillary ridges so that the identity of the offender cannot be recognized by the biometric system. The problem has received little attention and there is a lack of a real world altered fingerprints database that would allow researchers to develop new algorithms and techniques for altered fingerprints detection. The major contribution of this paper is that it provides a new public database of synthetically altered fingerprints. Starting from the cases described in the literature, three methods for generating simulated altered fingerprints are proposed.",
"title": ""
},
{
"docid": "2c5eb3fb74c6379dfd38c1594ebe85f4",
"text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.",
"title": ""
},
{
"docid": "c02e7ece958714df34539a909c2adb7d",
"text": "Despite the growing evidence of the association between shame experiences and eating psychopathology, the specific effect of body image-focused shame memories on binge eating remains largely unexplored. The current study examined this association and considered current body image shame and self-criticism as mediators. A multi-group path analysis was conducted to examine gender differences in these relationships. The sample included 222 women and 109 men from the Portuguese general and college student populations who recalled an early body image-focused shame experience and completed measures of the centrality of the shame memory, current body image shame, binge eating symptoms, depressive symptoms, and self-criticism. For both men and women, the effect of the centrality of shame memories related to body image on binge eating symptoms was fully mediated by body image shame and self-criticism. In women, these effects were further mediated by self-criticism focused on a sense of inadequacy and also on self-hatred. In men, only the form of self-criticism focused on a sense of inadequacy mediated these associations. The present study has important implications for the conceptualization and treatment of binge eating symptoms. Findings suggest that, in both genders, body image-focused shame experiences are associated with binge eating symptoms via their effect on current body image shame and self-criticism.",
"title": ""
},
{
"docid": "f66f9e04fe16dd4a1de20554e25ec902",
"text": "Motor imagery (MI) based brain-computer interface (BCI) plays a crucial role in various scenarios ranging from post-traumatic rehabilitation to control prosthetics. Computer-aided interpretation of MI has augmented prior mentioned scenarios since decades but failed to address interpersonal variability. Such variability further escalates in case of multi-class MI, which is currently a common practice. The failures due to interpersonal variability can be attributed to handcrafted features as they failed to extract more generalized features. The proposed approach employs convolution neural network (CNN) based model with both filtering (through axis shuffling) and feature extraction to avail end-to-end training. Axis shuffling is performed adopted in initial blocks of the model for 1D preprocessing and reduce the parameters required. Such practice has avoided the overfitting which resulted in an improved generalized model. Publicly available BCI Competition-IV 2a dataset is considered to evaluate the proposed model. The proposed model has demonstrated the capability to identify subject-specific frequency band with an average and highest accuracy of 70.5% and S3.6% respectively. Proposed CNN model can classify in real time without relying on accelerated computing device like GPU.",
"title": ""
},
{
"docid": "ee997fc4bf329ef2918d5dbe021b3be3",
"text": "This study examines the potential link of Facebook group participation with viral advertising responses. The results suggest that college-aged Facebook group members engage in higher levels of self-disclosure and maintain more favorable attitudes toward social media and advertising in general than do nongroup members. However, Facebook group participation does not exert an influence on users' viral advertising pass-on behaviors. The results also identify variations in predictors of passon behaviors between group members and nonmembers. These findings have theoretical and managerial implications for viral advertising on Facebook.",
"title": ""
},
{
"docid": "124d740d3796d6a707100e0d8c384f1f",
"text": "We present Nodeinfo, an unsupervised algorithm for anomaly detection in system logs. We demonstrate Nodeinfo's effectiveness on data from four of the world's most powerful supercomputers: using logs representing over 746 million processor-hours, in which anomalous events called alerts were manually tagged for scoring, we aim to automatically identify the regions of the log containing those alerts. We formalize the alert detection task in these terms, describe how Nodeinfo uses the information entropy of message terms to identify alerts, and present an online version of this algorithm, which is now in production use. This is the first work to investigate alert detection on (several) publicly-available supercomputer system logs, thereby providing a reproducible performance baseline.",
"title": ""
},
{
"docid": "ec9810e7def2ae57493996b460540af0",
"text": "PURPOSE\nTo describe the results of a diabetic retinopathy screening program implemented in a primary care area.\n\n\nMETHODS\nA retrospective study was conducted using data automatically collected since the program began on 1 January 2007 until 31 December 2015.\n\n\nRESULTS\nThe number of screened diabetic patients has progressively increased, from 7,173 patients in 2007 to 42,339 diabetic patients in 2015. Furthermore, the ability of family doctors to correctly interpret retinographies has improved, with the proportion of retinal images classified as normal having increased from 55% in 2007 to 68% at the end of the study period. The proportion of non-evaluable retinographies decreased to 7% in 2015, having peaked at 15% during the program. This was partly due to a change in the screening program policy that allowed the use of tropicamide. The number of severe cases detected has declined, from 14% with severe non-proliferative and proliferativediabetic retinopathy in the initial phase of the program to 3% in 2015.\n\n\nCONCLUSIONS\nDiabetic eye disease screening by tele-ophthalmology has shown to be a valuable method in a growing population of diabetics. It leads to a regular medical examination of patients, helps ease the workload of specialised care services and favours the early detection of treatable cases. However, the results of implementing a program of this type are not immediate, achieving only modest results in the early years of the project that have improved over subsequent years.",
"title": ""
}
] |
scidocsrr
|
74d7dcad0b1dfec38eec24f8fccef8b9
|
Audio recapture detection using deep learning
|
[
{
"docid": "3223563162967868075a43ca86c1d31a",
"text": "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these",
"title": ""
}
] |
[
{
"docid": "34e21b8051f3733c077d7087c035be2f",
"text": "This paper deals with the synthesis of a speed control strategy for a DC motor drive based on an output feedback backstepping controller. The backstepping method takes into account the non linearities of the system in the design control law and leads to a system asymptotically stable in the context of Lyapunov theory. Simulated results are displayed to validate the feasibility and the effectiveness of the proposed strategy.",
"title": ""
},
{
"docid": "a06d00c783ef31008a622a8500a4ca86",
"text": "Wandering is a common and risky behavior in people with dementia (PWD). In this paper, we present a mobile healthcare application to detect wandering patterns in indoor settings. The application harnesses consumer electronics devices including WiFi access points and mobile phones and has been tested successfully in a home environment. Experimental results show that the mobile-health application is able to detect wandering patterns including lapping, pacing and random in real-time. Once wandering is detected, an alert message is sent using SMS (Short Message Service) to attending caregivers or physicians for further examination and timely interventions.",
"title": ""
},
{
"docid": "7555bad7391b1fe2f0336648d035c6f4",
"text": "A signal analysis technique is developed for discriminating a set of lower arm and wrist functions using surface EMG signals. Data wete obtained from four electrodes placed around the proximal forearm. The functions analyzed included wrist flexion/extension, wrist abduction/adduction, and forearm pronation/supination. Multivariate autoregression models were derived for each function; discrimination was performed using a multiple-model hypothesis detection technique. This approach extends the work of Graupe and Cline [1] by including spatial correlations and by using a more generalized detection philosophy, based on analysis of the time history of all limb function probabilities. These probabilities are the sufficient statistics for the problem if the EMG data are stationary Gauss-Markov processes. Experimental results on-normal subjects are presented which demonstrate the advantages of using the spatial and time correlation of the signals. This technique should be useful in generating control signals for prosthetic devices.",
"title": ""
},
{
"docid": "6951f051c3fe9ab24259dcc6f812fc68",
"text": "User Generated Content has become very popular since the birth of web services such as YouTube allowing the distribution of such user-produced media content in an easy manner. YouTube-like services are different from existing traditional VoD services because the service provider has only limited control over the creation of new content. We analyze how the content distribution in YouTube is realized and then conduct a measurement study of YouTube traffic in a large university campus network. The analysis of the traffic shows that: (1) No strong correlation is observed between global and local popularity; (2) neither time scale nor user population has an impact on the local popularity distribution; (3) video clips of local interest have a high local popularity. Using our measurement data to drive trace-driven simulations, we also demonstrate the implications of alternative distribution infrastructures on the performance of a YouTube-like VoD service. The results of these simulations show that client-based local caching, P2P-based distribution, and proxy caching can reduce network traffic significantly and allow faster access to video clips.",
"title": ""
},
{
"docid": "47fcf50c200818440def43ed97d2edd1",
"text": "A unique case of accidental hanging due to compression of the neck of an adult by the branches of a coffee tree is reported. The decedent was a 42-year-old male who was found dead in a semi prone position on a slope. His neck was lodged in a wedge formed by two branches of a coffee tree, with his legs angled downwards on the slope. Autopsy revealed two friction abrasions located horizontally on either side of the front of the neck, just above the larynx. The findings were compatible with compression of the neck by the branches of the tree, with the body weight of the decedent contributing to compression. Subsequent complete autopsy examination confirmed the cause of death as hanging. Following an inquest the death was ruled to be accidental.",
"title": ""
},
{
"docid": "89eee86640807e11fa02d0de4862b3a5",
"text": "The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, for example, higher data rates, excellent end-to-end performance, and user-coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g. power control, cell association) in these networks with shared spectrum access (i.e. when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.",
"title": ""
},
{
"docid": "2b471e61a6b95221d9ca9c740660a726",
"text": "We propose a low-overhead sampling infrastructure for gathering information from the executions experienced by a program's user community. Several example applications illustrate ways to use sampled instrumentation to isolate bugs. Assertion-dense code can be transformed to share the cost of assertions among many users. Lacking assertions, broad guesses can be made about predicates that predict program errors and a process of elimination used to whittle these down to the true bug. Finally, even for non-deterministic bugs such as memory corruption, statistical modeling based on logistic regression allows us to identify program behaviors that are strongly correlated with failure and are therefore likely places to look for the error.",
"title": ""
},
{
"docid": "2bf619a1af1bab48b4b6f57df8f29598",
"text": "Alcoholism and drug addiction have marked impacts on the ability of families to function. Much of the literature has been focused on adult members of a family who present with substance dependency. There is limited research into the effects of adolescent substance dependence on parenting and family functioning; little attention has been paid to the parents' experience. This qualitative study looks at the parental perspective as they attempted to adapt and cope with substance dependency in their teenage children. The research looks into family life and adds to family functioning knowledge when the identified client is a youth as opposed to an adult family member. Thirty-one adult caregivers of 21 teenagers were interviewed, resulting in eight significant themes: (1) finding out about the substance dependence problem; (2) experiences as the problems escalated; (3) looking for explanations other than substance dependence; (4) connecting to the parent's own history; (5) trying to cope; (6) challenges of getting help; (7) impact on siblings; and (8) choosing long-term rehabilitation. Implications of this research for clinical practice are discussed.",
"title": ""
},
{
"docid": "332db7a0d5bf73f65e55c6f2e97dd22c",
"text": "The knowledge of surface electromyography (SEMG) and the number of applications have increased considerably during the past ten years. However, most methodological developments have taken place locally, resulting in different methodologies among the different groups of users.A specific objective of the European concerted action SENIAM (surface EMG for a non-invasive assessment of muscles) was, besides creating more collaboration among the various European groups, to develop recommendations on sensors, sensor placement, signal processing and modeling. This paper will present the process and the results of the development of the recommendations for the SEMG sensors and sensor placement procedures. Execution of the SENIAM sensor tasks, in the period 1996-1999, has been handled in a number of partly parallel and partly sequential activities. A literature scan was carried out on the use of sensors and sensor placement procedures in European laboratories. In total, 144 peer-reviewed papers were scanned on the applied SEMG sensor properties and sensor placement procedures. This showed a large variability of methodology as well as a rather insufficient description. A special workshop provided an overview on the scientific and clinical knowledge of the effects of sensor properties and sensor placement procedures on the SEMG characteristics. Based on the inventory, the results of the topical workshop and generally accepted state-of-the-art knowledge, a first proposal for sensors and sensor placement procedures was defined. Besides containing a general procedure and recommendations for sensor placement, this was worked out in detail for 27 different muscles. This proposal was evaluated in several European laboratories with respect to technical and practical aspects and also sent to all members of the SENIAM club (>100 members) together with a questionnaire to obtain their comments. Based on this evaluation the final recommendations of SENIAM were made and published (SENIAM 8: European recommendations for surface electromyography, 1999), both as a booklet and as a CD-ROM. In this way a common body of knowledge has been created on SEMG sensors and sensor placement properties as well as practical guidelines for the proper use of SEMG.",
"title": ""
},
{
"docid": "9407bdf78114e1369e6cc90283fbe892",
"text": "Making machines understand human expressions enables various useful applications in human-machine interaction. In this article, we present a novel facial expression recognition approach with 3D Mesh Convolutional Neural Networks (3DMCNN) and a visual analytics-guided 3DMCNN design and optimization scheme. From an RGBD camera, we first reconstruct a 3D face model of a subject with facial expressions and then compute the geometric properties of the surface. Instead of using regular Convolutional Neural Networks (CNNs) to learn intensities of the facial images, we convolve the geometric properties on the surface of the 3D model using 3DMCNN. We design a geodesic distance-based convolution method to overcome the difficulties raised from the irregular sampling of the face surface mesh. We further present interactive visual analytics for the purpose of designing and modifying the networks to analyze the learned features and cluster similar nodes in 3DMCNN. By removing low-activity nodes in the network, the performance of the network is greatly improved. We compare our method with the regular CNN-based method by interactively visualizing each layer of the networks and analyze the effectiveness of our method by studying representative cases. Testing on public datasets, our method achieves a higher recognition accuracy than traditional image-based CNN and other 3D CNNs. The proposed framework, including 3DMCNN and interactive visual analytics of the CNN, can be extended to other applications.",
"title": ""
},
{
"docid": "f8836ddc384c799d9264b8ea43f9685a",
"text": "Pattern matching has proved an extremely powerful and durable notion in functional programming. This paper contributes a new programming notation for type theory which elaborates the notion in various ways. First, as is by now quite well-known in the type theory community, definition by pattern matching becomes a more discriminating tool in the presence of dependent types, since it refines the explanation of types as well as values. This becomes all the more true in the presence of the rich class of datatypes known as inductive families (Dybjer, 1991). Secondly, as proposed by Peyton Jones (1997) for Haskell, and independently rediscovered by us, subsidiary case analyses on the results of intermediate computations, which commonly take place on the right-hand side of definitions by pattern matching, should rather be handled on the left. In simply-typed languages, this subsumes the trivial case of Boolean guards; in our setting it becomes yet more powerful. Thirdly, elementary pattern matching decompositions have a well-defined interface given by a dependent type; they correspond to the statement of an induction principle for the datatype. More general, user-definable decompositions may be defined which also have types of the same general form. Elementary pattern matching may therefore be recast in abstract form, with a semantics given by translation. Such abstract decompositions of data generalize Wadler’s (1987) notion of ‘view’. The programmer wishing to introduce a new view of a type T , and exploit it directly in pattern matching, may do so via a standard programming idiom. The type theorist, looking through the Curry–Howard lens, may see this as proving a theorem, one which establishes the validity of a new induction principle for T . We develop enough syntax and semantics to account for this high-level style of programming in dependent type theory. We close with the development of a typechecker for the simply-typed lambda calculus, which furnishes a view of raw terms as either being well-typed, or containing an error. The implementation of this view is ipso facto a proof that typechecking is decidable.",
"title": ""
},
{
"docid": "850f29a1d3c5bc96bb36787aba428331",
"text": "In this paper, we introduce a novel framework for WEakly supervised Learning of Deep cOnvolutional neural Networks (WELDON). Our method is dedicated to automatically selecting relevant image regions from weak annotations, e.g. global image labels, and encompasses the following contributions. Firstly, WELDON leverages recent improvements on the Multiple Instance Learning paradigm, i.e. negative evidence scoring and top instance selection. Secondly, the deep CNN is trained to optimize Average Precision, and fine-tuned on the target dataset with efficient computations due to convolutional feature sharing. A thorough experimental validation shows that WELDON outperforms state-of-the-art results on six different datasets.",
"title": ""
},
{
"docid": "cebeaf1d155d5d7e4c62ec84cf36c087",
"text": "This paper presents the comparison of power captured by vertical and horizontal axis wind turbine (VAWT and HAWT). According to Betz, the limit of maximum coefficient power (CP) is 0.59. In this case CP is important parameter that determines the power extracted by a wind turbine we made. This paper investigates the impact of wind speed variation of wind turbine to extract the power. For VAWT we used H-darrieus type whose swept area is 3.14 m2 and so is HAWT. The wind turbines have 3 blades for each type. The air foil of both wind turbines are NACA 4412. We tested the model of wind turbine with various wind velocity which affects the performance. We have found that CP of HAWT is 0.54 with captured maximum power is 1363.6 Watt while the CP of VAWT is 0.34 with captured maximum power is 505.69 Watt. The power extracted of both wind turbines seems that HAWT power is much better than VAWT power.",
"title": ""
},
{
"docid": "1c80fdc30b2b37443367dae187fbb376",
"text": "The web is a catalyst for drawing people together around shared goals, but many groups never reach critical mass. It can thus be risky to commit time or effort to a goal: participants show up only to discover that nobody else did, and organizers devote significant effort to causes that never get off the ground. Crowdfunding has lessened some of this risk by only calling in donations when an effort reaches a collective monetary goal. However, it leaves unsolved the harder problem of mobilizing effort, time and participation. We generalize the concept into activation thresholds, commitments that are conditioned on others' participation. With activation thresholds, supporters only need to show up for an event if enough other people commit as well. Catalyst is a platform that introduces activation thresholds for on-demand events. For more complex coordination needs, Catalyst also provides thresholds based on time or role (e.g., a bake sale requiring commitments for bakers, decorators, and sellers). In a multi-month field deployment, Catalyst helped users organize events including food bank volunteering, on-demand study groups, and mass participation events like a human chess game. Our results suggest that activation thresholds can indeed catalyze a large class of new collective efforts.",
"title": ""
},
{
"docid": "4f8ef942fdc47b08ac864f93c33c0fab",
"text": "Managing risks in construction projects has been recognised as a very important management process in order to achieve the project objectives in terms of time, cost, quality, safety and environmental sustainability. However, until now most research has focused on some aspects of construction risk management rather than using a systematic and holistic approach to identify risks and analyse the likelihood of occurrence and impacts of these risks. This paper aims to identify and analyse the risks associated with the development of construction projects from project stakeholder and life cycle perspectives. Postal questionnaire surveys were used to collect data. Based on a comprehensive assessment of the likelihood of occurrence and their impacts on the project objectives, this paper identifies twenty major risk factors. This research found that these risks are mainly related to (in ranking) contractors, clients and designers, with few related to government bodies, subcontractors/suppliers and external issues. Among them, “tight project schedule” is recognised to influence all project objectives maximally, whereas “design variations”, “excessive approval procedures in administrative government departments”, “high performance/quality expectation”, “unsuitable construction program planning”, as well as “variations of construction program” are deemed to impact at least four aspects of project objectives. This research also found that these risks spread through the whole project life cycle and many risks occur at more than one phase, with the construction stage as the most risky phase, followed by the feasibility stage. It is concluded that clients, designers and government bodies must work cooperatively from the feasibility phase onwards to address potential risks in time, and contractors and subcontractors with robust construction and management knowledge must be employed early to make sound preparation for carrying out safe, efficient and quality construction activities.",
"title": ""
},
{
"docid": "09ada66e157c6a99c6317a7cb068367f",
"text": "Experience design is a relatively new approach to product design. While there are several possible starting points in designing for positive experiences, we start with experience goals that state a profound source for a meaningful experience. In this paper, we investigate three design cases that used experience goals as the starting point for both incremental and radical design, and analyse them from the perspective of their potential for design space expansion. Our work addresses the recent call for design research directed toward new interpretations of what could be meaningful to people, which is seen as the source for creating new meanings for products, and thereby, possibly leading to radical innovations. Based on this idea, we think about the design space as a set of possible concepts derived from deep meanings that experience goals help to communicate. According to our initial results from the small-scale touchpoint design cases, the type of experience goals we use seem to have the potential to generate not only incremental but also radical design ideas.",
"title": ""
},
{
"docid": "bb2153c927ceff61687f5f183d3b9e65",
"text": "A new clock gated flip-flop is presented. The circuit is based on a new clock gating approach to reduce the consumption of clock signal's switching power. It operates with no redundant clock cycles and has reduced number of transistors to minimize the overhead and to make it suitable for data signals with higher switching activity. The proposed flip-flop is used to design 10 bits binary counter and 14 bits successive approximation register. These applications have been designed up to the layout level with 1 V power supply in 90 nm CMOS technology and have been simulated using Spectre. Simulations with the inclusion of parasitics have shown the effectiveness of the new approach on power consumption and transistor count.",
"title": ""
},
{
"docid": "2b98fd7a61fd7c521758651191df74d0",
"text": "Nowadays, a great effort is done to find new alternative renewable energy sources to replace part of nuclear energy production. In this context, this paper presents a new axial counter-rotating turbine for small-hydro applications which is developed to recover the energy lost in release valves of water supply. The design of the two PM-generators, their mechanical integration in a bulb placed into the water conduit and the AC-DC Vienna converter developed for these turbines are presented. The sensorless regulation of the two generators is also briefly discussed. Finally, measurements done on the 2-kW prototype are analyzed and compared with the simulation.",
"title": ""
},
{
"docid": "53b1ac64f63cab0d99092764eed4f829",
"text": "We present a new unsupervised topic discovery model for a collection of text documents. In contrast to the majority of the state-of-the-art topic models, our model does not break the document's structure such as paragraphs and sentences. In addition, it preserves word order in the document. As a result, it can generate two levels of topics of different granularity, namely, segment-topics and word-topics. In addition, it can generate n-gram words in each topic. We also develop an approximate inference scheme using Gibbs sampling method. We conduct extensive experiments using publicly available data from different collections and show that our model improves the quality of several text mining tasks such as the ability to support fine grained topics with n-gram words in the correlation graph, the ability to segment a document into topically coherent sections, document classification, and document likelihood estimation.",
"title": ""
}
] |
scidocsrr
|
d575078642d0e8562ffe6821d1d849fa
|
Joint Latent Dirichlet Allocation for Social Tags
|
[
{
"docid": "70e34d4ccd294d7811e344616638a3af",
"text": "The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.",
"title": ""
},
{
"docid": "120e36cc162f4ce602da810c80c18c7d",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
}
] |
[
{
"docid": "ebb70af20b550c911a63757b754c6619",
"text": "This paper presents a vehicle price prediction system by using the supervised machine learning technique. The research uses multiple linear regression as the machine learning prediction method which offered 98% prediction precision. Using multiple linear regression, there are multiple independent variables but one and only one dependent variable whose actual and predicted values are compared to find precision of results. This paper proposes a system where price is dependent variable which is predicted, and this price is derived from factors like vehicle’s model, make, city, version, color, mileage, alloy rims and power steering.",
"title": ""
},
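The passage above describes price prediction with multiple linear regression over vehicle attributes. A minimal sketch of that setup in Python with scikit-learn follows; the column names and toy data are illustrative assumptions, not the dataset or preprocessing used in the paper.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# toy rows; real features per the abstract include model, make, city, version, etc.
df = pd.DataFrame({
    "make": ["honda", "toyota", "honda", "suzuki"],
    "mileage": [42000, 30500, 15000, 80000],
    "power_steering": [1, 1, 1, 0],
    "price": [11500, 14200, 16800, 6200],
})

categorical, numeric = ["make"], ["mileage", "power_steering"]
pipeline = Pipeline([
    ("encode", ColumnTransformer([("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
                                 remainder="passthrough")),
    ("reg", LinearRegression()),          # one dependent variable: price
])
pipeline.fit(df[categorical + numeric], df["price"])
print(pipeline.predict(df[categorical + numeric]))
```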
{
"docid": "a4d294547c92296a2ea3222dc8d92afe",
"text": "Energy theft is a very common problem in countries like India where consumers of energy are increasing consistently as the population increases. Utilities in electricity system are destroying the amounts of revenue each year due to energy theft. The newly designed AMR used for energy measurements reveal the concept and working of new automated power metering system but this increased the Electricity theft forms administrative losses because of not regular interval checkout at the consumer's residence. It is quite impossible to check and solve out theft by going every customer's door to door. In this paper, a new procedure is followed based on MICROCONTROLLER Atmega328P to detect and control the energy meter from power theft and solve it by remotely disconnect and reconnecting the service (line) of a particular consumer. An SMS will be sent automatically to the utility central server through GSM module whenever unauthorized activities detected and a separate message will send back to the microcontroller in order to disconnect the unauthorized supply. A unique method is implemented by interspersed the GSM feature into smart meters with Solid state relay to deal with the non-technical losses, billing difficulties, and voltage fluctuation complication.",
"title": ""
},
{
"docid": "c49ed75ce48fb92db6e80e4fe8af7127",
"text": "The One Class Classification (OCC) problem is different from the conventional binary/multi-class classification problem in the sense that in OCC, the negative class is either not present or not properly sampled. The problem of classifying positive (or target) cases in the absence of appropriately-characterized negative cases (or outliers) has gained increasing attention in recent years. Researchers have addressed the task of OCC by using different methodologies in a variety of application domains. In this paper we formulate a taxonomy with three main categories based on the way OCC has been envisaged, implemented and applied by various researchers in different application domains. We also present a survey of current state-of-the-art OCC algorithms, their importance, applications and limitations.",
"title": ""
},
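The survey above concerns one-class classification, where only the target class is properly sampled. As a minimal illustration of the general setting (not of any specific algorithm from the survey), the sketch below fits a one-class SVM on target-only data with scikit-learn; the synthetic data and the `nu`/`gamma` values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
targets = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # only positive/target samples
outliers = rng.uniform(low=-6, high=6, size=(20, 2))       # never seen during training

occ = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(targets)

# +1 = accepted as target class, -1 = rejected as outlier
print("targets accepted:", (occ.predict(targets) == 1).mean())
print("outliers rejected:", (occ.predict(outliers) == -1).mean())
```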
{
"docid": "610068a7b1737375034960f0bf4d208d",
"text": "Polymorphic malware detection is challenging due to the continual mutations miscreants introduce to successive instances of a particular virus. Such changes are akin to mutations in biological sequences. Recently, high-throughput methods for gene sequence classification have been developed by the bioinformatics and computational biology communities. In this paper, we argue that these methods can be usefully applied to malware detection. Unfortunately, gene classification tools are usually optimized for and restricted to an alphabet of four letters (nucleic acids). Consequently, we have selected the Strand gene sequence classifier, which offers a robust classification strategy that can easily accommodate unstructured data with any alphabet including source code or compiled machine code. To demonstrate Stand's suitability for classifying malware, we execute it on approximately 500GB of malware data provided by the Kaggle Microsoft Malware Classification Challenge (BIG 2015) used for predicting 9 classes of polymorphic malware. Experiments show that, with minimal adaptation, the method achieves accuracy levels well above 95% requiring only a fraction of the training times used by the winning team's method.",
"title": ""
},
{
"docid": "e1c04d30c7b8f71d9c9b19cb2bb36a33",
"text": "This Guide has been written to provide guidance for individuals involved in curriculum design who wish to develop research skills and foster the attributes in medical undergraduates that help develop research. The Guide will provoke debate on an important subject, and although written specifically with undergraduate medical education in mind, we hope that it will be of interest to all those involved with other health professionals' education. Initially, the Guide describes why research skills and its related attributes are important to those pursuing a medical career. It also explores the reasons why research skills and an ethos of research should be instilled into professionals of the future. The Guide also tries to define what these skills and attributes should be for medical students and lays out the case for providing opportunities to develop research expertise in the undergraduate curriculum. Potential methods to encourage the development of research-related attributes are explored as are some suggestions as to how research skills could be taught and assessed within already busy curricula. This publication also discusses the real and potential barriers to developing research skills in undergraduate students, and suggests strategies to overcome or circumvent these. Whilst we anticipate that this Guide will appeal to all levels of expertise in terms of student research, we hope that, through the use of case studies, we will provide practical advice to those currently developing this area within their curriculum.",
"title": ""
},
{
"docid": "ccafd3340850c5c1a4dfbedd411f1d62",
"text": "The paper predicts changes in global and regional incidences of armed conflict for the 2010–2050 period. The predictions are based on a dynamic multinomial logit model estimation on a 1970–2009 cross-sectional dataset of changes between no armed conflict, minor conflict, and major conflict. Core exogenous predictors are population size, infant mortality rates, demographic composition, education levels, oil dependence, ethnic cleavages, and neighborhood characteristics. Predictions are obtained through simulating the behavior of the conflict variable implied by the estimates from this model. We use projections for the 2011–2050 period for the predictors from the UN World Population Prospects and the International Institute for Applied Systems Analysis. We treat conflicts, recent conflict history, and neighboring conflicts as endogenous variables. Out-of-sample validation of predictions for 2007–2009 (based on estimates for the 1970–2000 period) indicates that the model predicts well, with an AUC of 0.937. Using a p > 0.30 threshold for positive prediction, the True Positive Rate 7–9 years into the future is 0.79 and the False Positive Rate 0.085. We predict a continued decline in the proportion of the world’s countries that have internal armed conflict, from about 15% in 2009 to 7% in 2050. The decline is particularly strong in the Western Asia and North Africa region, and less clear in Africa South of Sahara. The remaining conflict countries will increasingly be concentrated in East, Central, and Southern Africa and in East and South Asia. ∗An earlier version of this paper was presented to the ISA Annual Convention 2009, New York, 15–18 Feb. The research was funded by the Norwegian Research Council grant no. 163115/V10. Thanks to Ken Benoit, Mike Colaresi, Scott Gates, Nils Petter Gleditsch, Joe Hewitt, Bjørn Høyland, Andy Mack, Näıma Mouhleb, Gerald Schneider, and Phil Schrodt for valuable comments.",
"title": ""
},
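The passage above rests on a dynamic multinomial logit over three conflict states. A minimal sketch of a multinomial logit classifier over country-year features in Python with scikit-learn follows; the feature names, synthetic data, and probability threshold are illustrative assumptions and do not reproduce the paper's estimation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# toy country-year rows: [log population, infant mortality, years since last conflict]
X = rng.normal(size=(500, 3))
y = rng.integers(0, 3, size=500)          # 0 = no conflict, 1 = minor, 2 = major

model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)

proba = model.predict_proba(X)            # per-state probabilities
# flag a country-year as "predicted conflict" if p(minor) + p(major) > 0.30
flagged = (proba[:, 1] + proba[:, 2]) > 0.30
print("share flagged:", flagged.mean())
```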
{
"docid": "000a7813bebebedf0308849ae3a8c237",
"text": "Computers are increasingly used to make decisions that have significant impact on people's lives. Often, these predictions can affect different population subgroups disproportionately. As a result, the issue of fairness has received much recent interest, and a number of fairness-enhanced classifiers have appeared in the literature. This paper seeks to study the following questions: how do these different techniques fundamentally compare to one another, and what accounts for the differences? Specifically, we seek to bring attention to many under-appreciated aspects of such fairness-enhancing interventions that require investigation for these algorithms to receive broad adoption.\n We present the results of an open benchmark we have developed that lets us compare a number of different algorithms under a variety of fairness measures and existing datasets. We find that although different algorithms tend to prefer specific formulations of fairness preservations, many of these measures strongly correlate with one another. In addition, we find that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition (simulated in our benchmark by varying training-test splits) and to different forms of preprocessing, indicating that fairness interventions might be more brittle than previously thought.",
"title": ""
},
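The passage above benchmarks fairness-enhancing classifiers and notes their sensitivity to train-test splits. The sketch below illustrates one ingredient of such a benchmark in Python: computing a statistical (demographic) parity difference for a classifier's predictions and repeating it over different random splits. The synthetic data, the protected attribute, and the base classifier are illustrative assumptions, not the benchmark's actual algorithms or datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def statistical_parity_difference(y_pred, protected):
    """P(yhat = 1 | protected = 1) - P(yhat = 1 | protected = 0)."""
    return y_pred[protected == 1].mean() - y_pred[protected == 0].mean()

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(size=n), rng.normal(size=n) + 0.5 * protected])
y = (X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# sensitivity check: same pipeline, different random splits
for seed in range(3):
    X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
        X, y, protected, test_size=0.3, random_state=seed)
    pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)
    print(seed, round(statistical_parity_difference(pred, p_te), 3))
```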
{
"docid": "23bf7564a02a2926e39af2ef1d5499ad",
"text": "Welcome to PLoS Computational Biology, a community journal from the Public Library of Science dedicated to reporting biological advances achieved through computation. The journal is published in partnership with the International Society for Computational Biology (ISCB). The importance of this partnership is described in the accompanying letter from Michael Gribskov, ISCB president. What motivates us to start a new journal at this time? Computation, driven in part by the influx of large amounts of data at all biological scales, has become a central feature of research and discovery in the life sciences. This work tends to be published either in methods journals that are not read by experimentalists or in one of the numerous journals reporting novel biology, each of which publishes only small amounts of computational research. Hence, the impact of this research is diluted. PLoS Computational Biology provides a home for important biological research driven by computation—a place where computational biologists can find the best work produced by their colleagues, and where the broader biological community can see the myriad ways computation is advancing our understanding of biological systems. PLoS Computational Biology is governed by one overarching principle: scientific quality. This quality is reflected in the editorial board and the editorial staff. The editorial board members are leaders in their respective scientific areas and have agreed to give their valuable time to support a quality journal in their field. Behind the scenes, through a rigorous presubmission process, three quality reviews for each paper, and an acceptance rate below 20%, the editors and staff already knew in the six months since the journal was launched that we were producing a first-rate product. The scientific content is now here for all of you to see and will continue to build in the months and years to come.",
"title": ""
},
{
"docid": "15e31918fcebb95beaf381d93d7605a5",
"text": "One challenge for UHF RFID passive tag design is to obtain a low-profile antenna that minimizes the influence of near-body or attached objects without sacrificing both read range and universal UHF RFID band interoperability. A new improved design of a RFID passive tag antenna is presented that performs well near problematic surfaces (human body, liquids, metals) across most of the universal UHF RFID (840-960 MHz) band. The antenna is based on a low-profile printed configuration with slots, and it is evaluated through extensive simulations and experimental tests.",
"title": ""
},
{
"docid": "f20f924fc0e975e0a4b2107692e6bd4c",
"text": "One of the ultimate goals of open ended learning systems is to take advantage of experience to get a future benefit. We can identify two levels in learning. One builds directly over the data : it captures the pattern and regularities which allow for reliable predictions on new samples. The other starts from such an obtained source knowledge and focuses on how to generalize it to new target concepts : this is also known as learning to learn. Most of the existing machine learning methods stop at the first level and are able of reliable future decisions only if a large amount of training samples is available. This work is devoted to the second level of learning and focuses on how to transfer information from prior knowledge, exploiting it on a new learning problem with possibly scarce labeled data. We propose several algorithmic solutions by leveraging over prior models or features. One possibility is to constrain any target learning model to be close to the linear combination of several source models. Alternatively the prior knowledge can be used as an expert which judges over the target samples and considers the obtained output as an extra feature descriptor. All the proposed approaches evaluate automatically the relevance of prior knowledge and decide from where and how much to transfer without any need of external supervision or heuristically hand tuned parameters. A thorough experimental analysis shows the effectiveness of the defined methods both in case of interclass transfer and for adaptation across different domains. The last part of this work is dedicated to moving forward knowledge transfer towards life long learning. We show how to combine transfer and online learning to obtain a method which processes continuously new data guided by information acquired in the past. We also present an approach to exploit the large variety of existing visual data resources every time it is necessary to solve a new situated learning problem. We propose an image representation that decomposes orthogonally into a specific and a generic part. The last one can be used as an un-biased reference knowledge for future learning tasks.",
"title": ""
},
{
"docid": "413d6b01d62148fa86627f7cede5c53a",
"text": "Each day, anti-virus companies receive tens of thousands samples of potentially harmful executables. Many of the malicious samples are variations of previously encountered malware, created by their authors to evade pattern-based detection. Dealing with these large amounts of data requires robust, automatic detection approaches. This paper studies malware classification based on call graph clustering. By representing malware samples as call graphs, it is possible to abstract certain variations away, enabling the detection of structural similarities between samples. The ability to cluster similar samples together will make more generic detection techniques possible, thereby targeting the commonalities of the samples within a cluster. To compare call graphs mutually, we compute pairwise graph similarity scores via graph matchings which approximately minimize the graph edit distance. Next, to facilitate the discovery of similar malware samples, we employ several clustering algorithms, including k-medoids and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Clustering experiments are conducted on a collection of real malware samples, and the results are evaluated against manual classifications provided by human malware analysts. Experiments show that it is indeed possible to accurately detect malware families via call graph clustering. We anticipate that in the future, call graphs can be used to analyse the emergence of new malware families, and ultimately to automate implementation of generic detection schemes.",
"title": ""
},
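The passage above clusters malware call graphs by approximate graph edit distance. The sketch below shows the overall shape of that pipeline in Python, pairing NetworkX's exact (hence tiny-graph-only) graph edit distance with DBSCAN over a precomputed distance matrix; the toy graphs and the `eps`/`min_samples` values are illustrative assumptions, not the approximate matching or parameters used in the paper.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import DBSCAN

# toy "call graphs": nodes are functions, edges are calls
g1 = nx.DiGraph([("main", "decrypt"), ("decrypt", "send")])
g2 = nx.DiGraph([("main", "decrypt"), ("decrypt", "send"), ("main", "sleep")])
g3 = nx.DiGraph([("start", "scan"), ("scan", "report"), ("report", "exit")])
graphs = [g1, g2, g3]

# pairwise graph edit distances (exact GED; real systems approximate this step)
n = len(graphs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = nx.graph_edit_distance(graphs[i], graphs[j])
        dist[i, j] = dist[j, i] = d

labels = DBSCAN(eps=3.0, min_samples=1, metric="precomputed").fit_predict(dist)
print(labels)   # samples sharing a label are treated as one malware family
```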
{
"docid": "b4a50cddb96379dc55ce2476dad01dfa",
"text": "Many of the industrial and research databases are plagued by the problem of missing values. Some evident examples include databases associated with instrument maintenance, medical applications, and surveys. One of the common ways to cope with missing values is to complete their imputation (filling in). Given the rapid growth of sizes of databases, it becomes imperative to come up with a new imputation methodology along with efficient algorithms. The main objective of this paper is to develop a unified framework supporting a host of imputation methods. In the development of this framework, we require that its usage should (on average) lead to the significant improvement of accuracy of imputation while maintaining the same asymptotic computational complexity of the individual methods. Our intent is to provide a comprehensive review of the representative imputation techniques. It is noticeable that the use of the framework in the case of a low-quality single-imputation method has resulted in the imputation accuracy that is comparable to the one achieved when dealing with some other advanced imputation techniques. We also demonstrate, both theoretically and experimentally, that the application of the proposed framework leads to a linear computational complexity and, therefore, does not affect the asymptotic complexity of the associated imputation method.",
"title": ""
},
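The passage above is about a unified framework hosting multiple imputation methods. The framework itself is not specified here; the sketch below only illustrates, in Python with scikit-learn, two of the standard imputation building blocks (mean and k-NN imputation) that such a framework could wrap; the toy matrix and the k value are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, np.nan, 6.0],
    [5.0, 4.0, 3.0],
    [np.nan, 8.0, 9.0],
])

mean_filled = SimpleImputer(strategy="mean").fit_transform(X)   # column-mean fill
knn_filled = KNNImputer(n_neighbors=2).fit_transform(X)         # fill from nearest rows

print(mean_filled)
print(knn_filled)
```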
{
"docid": "8a92594dbd75885002bad0dc2e658e10",
"text": "Exposure to some music, in particular classical music, has been reported to produce transient increases in cognitive performance. The authors investigated the effect of listening to an excerpt of Vivaldi's Four Seasons on category fluency in healthy older adult controls and Alzheimer's disease patients. In a counterbalanced repeated-measure design, participants completed two, 1-min category fluency tasks whilst listening to an excerpt of Vivaldi and two, 1-min category fluency tasks without music. The authors report a positive effect of music on category fluency, with performance in the music condition exceeding performance without music in both the healthy older adult control participants and the Alzheimer's disease patients. In keeping with previous reports, the authors conclude that music enhances attentional processes, and that this can be demonstrated in Alzheimer's disease.",
"title": ""
},
{
"docid": "856012f3cf81a1527916da8a5136ce79",
"text": "Folk psychology postulates a spatial unity of self and body, a \"real me\" that resides in one's body and is the subject of experience. The spatial unity of self and body has been challenged by various philosophical considerations but also by several phenomena, perhaps most notoriously the \"out-of-body experience\" (OBE) during which one's visuo-spatial perspective and one's self are experienced to have departed from their habitual position within one's body. Here the authors marshal evidence from neurology, cognitive neuroscience, and neuroimaging that suggests that OBEs are related to a failure to integrate multisensory information from one's own body at the temporo-parietal junction (TPJ). It is argued that this multisensory disintegration at the TPJ leads to the disruption of several phenomenological and cognitive aspects of self-processing, causing illusory reduplication, illusory self-location, illusory perspective, and illusory agency that are experienced as an OBE.",
"title": ""
},
{
"docid": "57fcce4eeac895ef56945008e2c4cd59",
"text": "BACKGROUND\nComputational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i. e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity.\n\n\nMETHODS\nA typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance.\n\n\nRESULTS\nThe symbolic domain model was found to have more than 10(8) states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure.\n\n\nCONCLUSIONS\nOur results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance.",
"title": ""
},
{
"docid": "c07ea25fe12ec56e6bf7df9508a6b494",
"text": "The psychological and anthropological literature on cultural variations in emotions is reviewed. The literature has been interpreted within the framework of a cognitive-process model of emotions. Both cross-cultural differences and similarities were identified in each phase of the emotion process; similarities in 1 phase do not necessarily imply similarities in other phases. Whether cross-cultural differences or similarities are found depends to an important degree on the level of description of the emotional phenomena. Cultural differences in emotions appear to be due to differences in event types or schemas, in culture-specific appraisal propensities, in behavior repertoires, or in regulation processes. Differences in taxonomies of emotion words sometimes reflect true emotion differences like those just mentioned, but they may also just result from differences in which emotion-process phase serves as the basis for categorization.",
"title": ""
},
{
"docid": "c8a31c20e77a8b10f2845120bae8a3e8",
"text": "Line arresters are considered as an effective way to improve the lightning performance of transmission lines, especially in parts of line that suffer from high soil resistivity and lightning ground flash density. This paper presents results of the application of line surge arresters on the 132KV double circuit transmission line in EMTP-RV and all the practical scenarios for installation of surge arresters. The study has shown that a significant level of improvement can be reached by installing arresters at all or only some of the line phases. It can increase the strength of the line to withstand lightning currents up to -292kA. the probability of having this lightning current, is practically zero.",
"title": ""
},
{
"docid": "2bb4366b813728af555be714da0ee241",
"text": "A case of acute mathamphetamine (MA) poisoning death was occasionally found in autopsy by leaking into alimentary tract from package in drug traffic. A Korean man (39-year-old) was found dead in his apartment in Shenyang and 158 columned-shaped packages (390 g) of MA were found in his alimentary tract by autopsy, in which four packages were found in the esophagus, 118 in the stomach and 36 in the lower part of small intestine. The packages were wrapped with tinfoil and plastic film, from which one package in the stomach was empty and ruptured. Extreme pulmonary edema, congestion and hemorrhage as well as moderate edema, congestion and petechial hemorrhage in the other viscera were observed at autopsy and microscopically. Simultaneously AMP (amphatamine) in urine was tested positive by Trige DOA kit. Quantitative analysis was performed by gas chromatography/mass spectrometry. Extremely high concentrations of MA were found in the cardiac blood (24.8 microg/mL), the urine (191 microg/mL), the liver (116 microg/mL) and the gastric contents (1045 microg/mL), and no alcohol and other conventional drugs or poisons were detected in the same samples. The poisoning dosage is 5 microg/mL in the plasma and lethal dosage is 10-40 microg/mL in the plasma according the report. This high concentrations of MA in blood indicated that the cause of death was result from acute MA poisoning due to MA leaking into stomach. Much attention must be paid in the body packer of drugs in illegal drug traffic.",
"title": ""
},
{
"docid": "e9c523662963a7c609eb59a4c19eff7f",
"text": "We propose a sampling theory for signals that are supported on either directed or undirected graphs. The theory follows the same paradigm as classical sampling theory. We show that perfect recovery is possible for graph signals bandlimited under the graph Fourier transform. The sampled signal coefficients form a new graph signal, whose corresponding graph structure preserves the first-order difference of the original graph signal. For general graphs, an optimal sampling operator based on experimentally designed sampling is proposed to guarantee perfect recovery and robustness to noise; for graphs whose graph Fourier transforms are frames with maximal robustness to erasures as well as for Erdös-Rényi graphs, random sampling leads to perfect recovery with high probability. We further establish the connection to the sampling theory of finite discrete-time signal processing and previous work on signal recovery on graphs. To handle full-band graph signals, we propose a graph filter bank based on sampling theory on graphs. Finally, we apply the proposed sampling theory to semi-supervised classification of online blogs and digit images, where we achieve similar or better performance with fewer labeled samples compared to previous work.",
"title": ""
},
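The passage above states that bandlimited graph signals can be perfectly recovered from samples on a subset of vertices. The sketch below illustrates that statement numerically in Python: it builds a small graph Laplacian, forms a signal bandlimited to the first k graph Fourier modes, samples k vertices, and recovers the signal by least squares. The graph, the choice of sampled vertices, and the use of the Laplacian eigenbasis as the graph Fourier transform are illustrative assumptions, not the paper's optimal sampling operator.

```python
import numpy as np

# small undirected path graph on 6 vertices
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                     # combinatorial Laplacian

eigvals, U = np.linalg.eigh(L)                     # graph Fourier basis (columns of U)
k = 3
coeffs = np.array([1.0, -0.5, 0.25])               # nonzero only on the first k frequencies
x = U[:, :k] @ coeffs                              # bandlimited graph signal

sampled = [0, 2, 5]                                # vertices we actually observe
x_samples = x[sampled]

# recover: solve U_k[sampled, :] c = x_samples, then x_hat = U_k c
c_hat, *_ = np.linalg.lstsq(U[sampled, :k], x_samples, rcond=None)
x_hat = U[:, :k] @ c_hat
print(np.allclose(x, x_hat))                       # True: perfect recovery
```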
{
"docid": "d3ce627360a466ac95de3a61d64995e1",
"text": "The large size of power systems makes behavioral analysis of electricity markets computationally taxing. Reducing the system into a smaller equivalent, based on congestion zones, can substantially reduce the computational requirements. In this paper, we propose a scheme to determine the equivalent reactance of interfaces of a reduced system based upon the zonal power transfer distribution factors of the original system. The dc power flow model is used to formulate the problem. Test examples are provided using both an illustrative six-bus system and a more realistically sized 12 925-bus system.",
"title": ""
}
] |
scidocsrr
|
2bc0c55d3a97ffaf2cc378f46c87773b
|
An empirical evaluation of information metrics for low-rate and high-rate DDoS attack detection
|
[
{
"docid": "24b62b4d3ecee597cffef75e0864bdd8",
"text": "Botnets can cause significant security threat and huge loss to organizations, and are difficult to discover their existence. Therefore they have become one of the most severe threats on the Internet. The core component of botnets is their command and control channel. Botnets often use IRC (Internet Relay Chat) as a communication channel through which the botmaster can control the bots to launch attacks or propagate more infections. In this paper, anomaly score based botnet detection is proposed to identify the botnet activities by using the similarity measurement and the periodic characteristics of botnets. To improve the detection rate, the proposed system employs two-level correlation relating the set of hosts with same anomaly behaviors. The proposed method can differentiate the malicious network traffic generated by infected hosts (bots) from that by normal IRC clients, even in a network with only a very small number of bots. The experiment results show that, regardless the size of the botnet in a network, the proposed approach efficiently detects abnormal IRC traffic and identifies botnet activities. © 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "3ae51aede5a7a551cfb2aecbc77a9ecb",
"text": "We present the Crossfire attack -- a powerful attack that degrades and often cuts off network connections to a variety of selected server targets (e.g., servers of an enterprise, a city, a state, or a small country) by flooding only a few network links. In Crossfire, a small set of bots directs low intensity flows to a large number of publicly accessible servers. The concentration of these flows on the small set of carefully chosen links floods these links and effectively disconnects selected target servers from the Internet. The sources of the Crossfire attack are undetectable by any targeted servers, since they no longer receive any messages, and by network routers, since they receive only low-intensity, individual flows that are indistinguishable from legitimate flows. The attack persistence can be extended virtually indefinitely by changing the set of bots, publicly accessible servers, and target links while maintaining the same disconnection targets. We demonstrate the attack feasibility using Internet experiments, show its effects on a variety of chosen targets (e.g., servers of universities, US states, East and West Coasts of the US), and explore several countermeasures.",
"title": ""
}
] |
[
{
"docid": "ee0ba4a70bfa4f53d33a31b2d9063e89",
"text": "Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poisson-based models. However, since that original data set was collected, both link speeds and the number of Internet-connected hosts have increased by more than three orders of magnitude. Thus, we now revisit the Poisson assumption, by studying a combination of historical traces and new measurements obtained from a major backbone link belonging to a Tier 1 ISP. We show that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales. At multisecond scales, we find a distinctive piecewise-linear nonstationarity, together with evidence of long-range dependence. Combining our observations across both time scales leads to a time-dependent Poisson characterization of network traffic that, when viewed across very long time scales, exhibits the observed long-range dependence. This traffic characterization reconciliates the seemingly contradicting observations of Poisson and long-memory traffic characteristics. It also seems to be in general agreement with recent theoretical models for large-scale traffic aggregation",
"title": ""
},
{
"docid": "41dfc6647b8937b161c00a1372e986c2",
"text": "Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.",
"title": ""
},
{
"docid": "6f1877d360251e601b3ce63e7b991052",
"text": "In education research, there is a widely-cited result called \"Bloom's two sigma\" that characterizes the differences in learning outcomes between students who receive one-on-one tutoring and those who receive traditional classroom instruction. Tutored students scored in the 95th percentile, or two sigmas above the mean, on average, compared to students who received traditional classroom instruction. In human-robot interaction research, however, there is relatively little work exploring the potential benefits of personalizing a robot's actions to an individual's strengths and weaknesses. In this study, participants solved grid-based logic puzzles with the help of a personalized or non-personalized robot tutor. Participants' puzzle solving times were compared between two non-personalized control conditions and two personalized conditions (n=80). Although the robot's personalizations were less sophisticated than what a human tutor can do, we still witnessed a \"one-sigma\" improvement (68th percentile) in post-tests between treatment and control groups. We present these results as evidence that even relatively simple personalizations can yield significant benefits in educational or assistive human-robot interactions.",
"title": ""
},
{
"docid": "dd3161062dac2962ce37f46217b1a0c7",
"text": "Many current applications use recommendations in order to modify the natural user behavior, such as to increase the number of sales or the time spent on a website. This results in a gap between the final recommendation objective and the classical setup where recommendation candidates are evaluated by their coherence with past user behavior, by predicting either the missing entries in the user-item matrix, or the most likely next event. To bridge this gap, we optimize a recommendation policy for the task of increasing the desired outcome versus the organic user behavior. We show this is equivalent to learning to predict recommendation outcomes under a fully random recommendation policy. To this end, we propose a new domain adaptation algorithm that learns from logged data containing outcomes from a biased recommendation policy and predicts recommendation outcomes according to random exposure. We compare our method against state-of-the-art factorization methods, in addition to new approaches of causal recommendation and show significant improvements.",
"title": ""
},
{
"docid": "1676b186ee41c2eb0f059c9d936e58a4",
"text": "A sample of 169 first- and third-grade children, selected because of their high exposure to television violence, was randomly divided into an experimental and a control group. Over the course of 2 years, the experimental subjects were exposed to two treatments designed to reduce the likelihood of their imitating the aggressive behaviors they observed on TV. The control group received comparable neutral treatments. By the end of the second year, the experimental subjects were rated as significantly less aggressive by their peers, and the relation between violence viewing and aggressiveness was diminished in the experimental group.",
"title": ""
},
{
"docid": "5543b8931fc51ec25cdcab07bd1d09e2",
"text": "Given a dataset P and a preference function f, a top-k query retrieves the k tuples in P with the highest scores according to f. Even though the problem is well-studied in conventional databases, the existing methods are inapplicable to highly dynamic environments involving numerous long-running queries. This paper studies continuous monitoring of top-k queries over a fixed-size window W of the most recent data. The window size can be expressed either in terms of the number of active tuples or time units. We propose a general methodology for top-k monitoring that restricts processing to the sub-domains of the workspace that influence the result of some query. To cope with high stream rates and provide fast answers in an on-line fashion, the data in W reside in main memory. The valid records are indexed by a grid structure, which also maintains book-keeping information. We present two processing techniques: the first one computes the new answer of a query whenever some of the current top-k points expire; the second one partially pre-computes the future changes in the result, achieving better running time at the expense of slightly higher space requirements. We analyze the performance of both algorithms and evaluate their efficiency through extensive experiments. Finally, we extend the proposed framework to other query types and a different data stream model.",
"title": ""
},
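The passage above monitors top-k results over a sliding window, recomputing the answer when current top-k tuples expire. The sketch below gives a simplified count-based version of that idea in Python: it keeps the last W tuples and recomputes the top-k from the valid window after each arrival. The window size, k, and the scoring function are illustrative assumptions; the grid index and partial pre-computation from the paper are not reproduced.

```python
import heapq
from collections import deque

def monitor_topk(stream, W=5, k=2, score=lambda t: t):
    """Yield the top-k of the last W tuples after each arrival (count-based window)."""
    window = deque()                       # holds the W most recent tuples
    for t in stream:
        window.append(t)
        if len(window) > W:
            window.popleft()               # oldest tuple expires
        # naive recomputation over the valid window; the paper restricts this
        # to the sub-domains of the workspace that influence some query
        yield heapq.nlargest(k, window, key=score)

stream = [3, 9, 1, 7, 4, 8, 2, 6]
for answer in monitor_topk(stream):
    print(answer)
```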
{
"docid": "609c3a75308eb951079373feb88432ae",
"text": "We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie one from Wikipedia and the other from IMDb written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version. This unique characteristic of DuoRC where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design, that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating external background knowledge. Additionally, the narrative style of passages arising from movie plots (as opposed to typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-ofthe-art neural RC models which have achieved near human performance on the SQuAD dataset (Rajpurkar et al., 2016b), even when coupled with traditional NLP techniques to address the challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42% on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other RC datasets to explore novel neural approaches for studying language understanding.",
"title": ""
},
{
"docid": "51fe6376956593cb8a2e4de3b37cb8fe",
"text": "The human musculoskeletal system is supposed to play an important role in doing various static and dynamic tasks. From this standpoint, some musculoskeletal humanoid robots have been developed in recent years. However, existing musculoskeletal robots did not have upper body with several DOFs to balance their bodies statically or did not have enough power to perform dynamic tasks. We think the musculoskeletal structure has two significant properties: whole-body flexibility and whole-body coordination. Using these two properties can enable us to make robots' performance better than before. In this study, we developed a humanoid robot with a musculoskeletal system that is driven by pneumatic artificial muscles. To demonstrate the robot's capability in static and dynamic tasks, we conducted two experiments. As a static task, we conducted a standing experiment using a simple feedback control and evaluated the stability by applying an impulse to the robot. As a dynamic task, we conducted a walking experiment using a feedforward controller with human muscle activation patterns and confirmed that the robot was able to perform the dynamic task.",
"title": ""
},
{
"docid": "f2380d3c6d7bb8c8173b3d310f93c6e1",
"text": "This study assessed the breastfeeding knowledge and complementary feeding knowledge as well as their practices among mothers in Enugu state, Nigeria. A multi-stage sampling technique was used to select 419 mothers with children between 6-24months from 9 randomly selected communities. A semistructured interviewer administered questionnaire which included socio-demographic characteristics, 8-point knowledge scale and 5-point practice scale of both breastfeeding and complementary feeding. The data collected was analyzed using SPSS version 20.0 and presented using descriptive and inferential statistics. The mean age of the respondents was 28.4±6 years and 67% had secondary school education. The knowledge of the respondents indicated that 66.6% were aware of breastfeeding initiation within one hour of birth, 44.5% reported the introduction of water and herbal drinks while 62.8% agreed that breastfeeding should be continued until the child is 24months. Seven out of every 10 agreed with the commencement of complementary feeding at 6 month and also agreed that local foods should be used as the main complementary foods for the infants. Nearly all the responded were in agreement with the inclusion of foods such as staples, legumes as well as eggs and other animal protein as the main complementary diet to the infants from 6 months up until 24months. The feeding practice revealed that only 14.5% of the mothers introduced breastmilk within 1 hour of birth and 75% had introduced prelacteal feeds. Exclusive breastfeeding was practiced by 24.3% and a quarter of the respondents reported to have been discouraged on the practice of EBF. In all, 68.7% of the respondents had good knowledge towards infant feeding while the eventual practice of the mothers revealed that only 22.4% had adequate practice of infant feeding. No significant association was found between knowledge of mothers and infant feeding practice. This study found suboptimal breastfeeding and complementary feeding despite their high level of adequate knowledge. There is the need to further explore the factors responsible for suboptimum feeding practice of mothers.",
"title": ""
},
{
"docid": "239644f4ecd82758ca31810337a10fda",
"text": "This paper discusses a design of stable filters withH∞ disturbance attenuation of Takagi–Sugeno fuzzy systemswith immeasurable premise variables. When we consider the filter design of Takagi–Sugeno fuzzy systems, the selection of premise variables plays an important role. If the premise variable is the state of the system, then a fuzzy system describes a wide class of nonlinear systems. In this case, however, a filter design of fuzzy systems based on parallel distributed compensator idea is infeasible. To avoid such a difficulty, we consider the premise variables uncertainties. Then we consider a robust H∞ filtering problem for such an uncertain system. A solution of the problem is given in terms of linear matrix inequalities (LMIs). Some numerical examples are given to illustrate our theory. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5bf8b65e644f0db9920d3dd7fdf4d281",
"text": "Software developers face a number of challenges when creating applications that attempt to keep important data confidential. Even with diligent attention paid to correct software design and implementation practices, secrets can still be exposed through a single flaw in any of the privileged code on the platform, code which may have been written by thousands of developers from hundreds of organizations throughout the world. Intel is developing innovative security technology which provides the ability for software developers to maintain control of the security of sensitive code and data by creating trusted domains within applications to protect critical information during execution and at rest. This paper will describe how this technology has been effectively used in lab exercises to protect private information in applications including enterprise rights management, video chat, trusted financial transactions, and others. Examples will include both protection of local processing and the establishment of secure communication with cloud services. It will illustrate useful software design patterns that can be followed to create many additional types of trusted software solutions.",
"title": ""
},
{
"docid": "4e85b664458771705c9d417bc1aace7a",
"text": "This paper outlines a framework of the temporal interpretation in Chinese with a special focus on complement and relative clauses. It argues that not only Chinese has no morphological tenses but there is no need to resort to covert semantic features under a tense node in order to interpret time in Chinese. Instead, it utilizes various factors such as the information provided by default aspect, the tense-aspect particles, and pragmatic reasoning to determine the temporal interpretation of sentences. It is shown that aspectual markers in Chinese play the role that tense plays in a tense language. This result implies that the Chinese phrase structure has AspP above VP but no TP is above AspP.",
"title": ""
},
{
"docid": "7ef14aed74249f10adffe2cc49475229",
"text": "We prove that idealised discriminative Bayesian neural networks, capturing perfect epistemic uncertainty, cannot have adversarial examples: Techniques for crafting adversarial examples will necessarily fail to generate perturbed images which fool the classifier. This suggests why MC dropout-based techniques have been observed to be fairly effective against adversarial examples. We support our claims mathematically and empirically. We experiment with HMC on synthetic data derived from MNIST for which we know the ground truth image density, showing that near-perfect epistemic uncertainty correlates to density under image manifold, and that adversarial images lie off the manifold. Using our new-found insights we suggest a new attack for MC dropout-based models by looking for imperfections in uncertainty estimation, and also suggest a mitigation. Lastly, we demonstrate our mitigation on a cats-vs-dogs image classification task with a VGG13 variant.",
"title": ""
},
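The passage above connects adversarial robustness to epistemic uncertainty estimated with MC dropout. A minimal sketch of test-time MC dropout in Python with PyTorch is shown below: dropout is kept active at prediction time and the spread across stochastic forward passes is used as an uncertainty signal. The toy network, the number of passes, and the inputs are illustrative assumptions; this is not the HMC experiment or the attack/mitigation from the paper.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, in_dim=20, hidden=64, classes=3, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, passes=50):
    model.train()                      # keep dropout stochastic at test time
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    mean = probs.mean(dim=0)           # predictive mean over stochastic passes
    # predictive entropy as a simple epistemic-uncertainty proxy
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy

model = SmallNet()
x = torch.randn(4, 20)                 # four unlabeled inputs
mean, entropy = mc_dropout_predict(model, x)
print(mean.argmax(dim=-1), entropy)
```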
{
"docid": "de9ad7595925f15c94d484fb00c328e3",
"text": "Although livedo reticularis is a known adverse effect of amantadine, only limited studies have addressed this association. Livedo racemosa in contrast to livedo reticularis is characterized by a striking violaceous netlike pattern of the skin similar to livedo reticularis with a different histopathology and morphology (irregular, broken circular segments). In this case report, we present 2 cases of livedo racemosa and edema of lower extremities following amantadine treatment. The cutaneous biopsies in both cases showed intraluminal thrombi in subcutaneous blood vessels without evidence of vasculitis, which is consistent with livedo racemosa.",
"title": ""
},
{
"docid": "f571329b93779ae073184d9d63eb0c6c",
"text": "Retailers are now the dominant partners in most suply systems and have used their positions to re-engineer operations and partnership s with suppliers and other logistic service providers. No longer are retailers the pass ive recipients of manufacturer allocations, but instead are the active channel con trollers organizing supply in anticipation of, and reaction to consumer demand. T his paper reflects on the ongoing transformation of retail supply chains and logistics. If considers this transformation through an examination of the fashion, grocery and selected other retail supply chains, drawing on practical illustrations. Current and fut ure challenges are then discussed. Introduction Retailers were once the passive recipients of produ cts allocated to stores by manufacturers in the hope of purchase by consumers and replenished o nly at the whim and timing of the manufacturer. Today, retailers are the controllers of product supply in anticipation of, and reaction to, researched, understood, and real-time customer demand. Retailers now control, organise, and manage the supply chain from producti on to consumption. This is the essence of the retail logistics and supply chain transforma tion that has taken place since the latter part of the twentieth century. Retailers have become the channel captains and set the pace in logistics. Having extended their channel control and focused on corporate effi ci ncy and effectiveness, retailers have",
"title": ""
},
{
"docid": "1c17535a4f1edc36b698295136e9711a",
"text": "Massive digital acquisition and preservation of deteriorating historical and artistic documents is of particular importance due to their value and fragile condition. The study and browsing of such digital libraries is invaluable for scholars in the Cultural Heritage field but requires automatic tools for analyzing and indexing these datasets. We present two completely automatic methods requiring no human intervention: text height estimation and text line extraction. Our proposed methods have been evaluated on a huge heterogeneous corpus of illuminated medieval manuscripts of different writing styles and with various problematic attributes, such as holes, spots, ink bleed-through, ornamentation, background noise, and overlapping text lines. Our experimental results demonstrate that these two new methods are efficient and reliable, even when applied to very noisy and damaged old handwritten manuscripts.",
"title": ""
},
{
"docid": "a48dc6a2e9baf084e548bf4066075b64",
"text": "In the implementation of high-performance CMOS over-sampling A/D converters, high-speed comparators are indispensable. This paper discusses the design and analysis of a low-power regenerative latched CMOS comparator, based on an analytical approach which gives a deeper insight into the associated trade-offs. Calculation details and simulation results for a 20 MHz clocked comparator in a 0.5/spl mu/m technology are presented.",
"title": ""
},
{
"docid": "1de46f2eee8db2fad444faa6fbba4d1c",
"text": "Hyunsook Yoon Dongguk University, Korea This paper reports on a qualitative study that investigated the changes in students’ writing process associated with corpus use over an extended period of time. The primary purpose of this study was to examine how corpus technology affects students’ development of competence as second language (L2) writers. The research was mainly based on case studies with six L2 writers in an English for Academic Purposes writing course. The findings revealed that corpus use not only had an immediate effect by helping the students solve immediate writing/language problems, but also promoted their perceptions of lexicogrammar and language awareness. Once the corpus approach was introduced to the writing process, the students assumed more responsibility for their writing and became more independent writers, and their confidence in writing increased. This study identified a wide variety of individual experiences and learning contexts that were involved in deciding the levels of the students’ willingness and success in using corpora. This paper also discusses the distinctive contributions of general corpora to English for Academic Purposes and the importance of lexical and grammatical aspects in L2 writing pedagogy.",
"title": ""
},
{
"docid": "47ee81ef9fb8a9bc792ee6edc9a2b503",
"text": "Current image captioning approaches generate descriptions which lack specific information, such as named entities that are involved in the images. In this paper we propose a new task which aims to generate informative image captions, given images and hashtags as input. We propose a simple but effective approach to tackle this problem. We first train a convolutional neural networks long short term memory networks (CNN-LSTM) model to generate a template caption based on the input image. Then we use a knowledge graph based collective inference algorithm to fill in the template with specific named entities retrieved via the hashtags. Experiments on a new benchmark dataset collected from Flickr show that our model generates news-style image descriptions with much richer information. Our model outperforms unimodal baselines significantly with various evaluation metrics.",
"title": ""
},
{
"docid": "c10a6f61d7202184785cf68150ecce80",
"text": "This paper gives a comprehensive analysis of security with respect to NFC. It is not limited to a certain application of NFC, but it uses a systematic approach to analyze the various aspects of security whenever an NFC interface is used. The authors want to clear up many misconceptions about security and NFC in various applications. The paper lists the threats, which are applicable to NFC, and describes solutions to protect against these threats. All of this is given in the context of currently available NFC hardware, NFC applications and possible future developments of NFC.",
"title": ""
}
] |
scidocsrr
|
5c0596a9af4526f9b91dd88091c6016f
|
Modeling the impact of short- and long-term behavior on search personalization
|
[
{
"docid": "649797f21efa24c523361afee80419c5",
"text": "Web search engines typically provide search results without considering user interests or context. We propose a personalized search approach that can easily extend a conventional search engine on the client side. Our mapping framework automatically maps a set of known user interests onto a group of categories in the Open Directory Project (ODP) and takes advantage of manually edited data available in ODP for training text classifiers that correspond to, and therefore categorize and personalize search results according to user interests. In two sets of controlled experiments, we compare our personalized categorization system (PCAT) with a list interface system (LIST) that mimics a typical search engine and with a nonpersonalized categorization system (CAT). In both experiments, we analyze system performances on the basis of the type of task and query length. We find that PCAT is preferable to LIST for information gathering types of tasks and for searches with short queries, and PCAT outperforms CAT in both information gathering and finding types of tasks, and for searches associated with free-form queries. From the subjects' answers to a questionnaire, we find that PCAT is perceived as a system that can find relevant Web pages quicker and easier than LIST and CAT.",
"title": ""
},
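The passage above maps user interests to directory categories and trains per-category text classifiers to personalize result ranking. A minimal sketch of that kind of pipeline in Python with scikit-learn follows; the category names, training snippets, and results are illustrative assumptions and do not reproduce the ODP data or the PCAT system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# tiny stand-in for category training text (a real system would use ODP pages)
train_texts = [
    "big cats jaguar habitat wildlife rainforest",
    "jaguar car dealership engine luxury vehicle",
]
train_labels = ["Science/Animals", "Shopping/Autos"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

user_interests = {"Shopping/Autos"}                 # known user interest categories
results = ["jaguar xk review and price", "jaguar population decline in the amazon"]

# boost results whose predicted category matches a user interest
ranked = sorted(results, key=lambda r: clf.predict([r])[0] in user_interests, reverse=True)
print(ranked)
```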
{
"docid": "28ea3d754c1a28ccfeb8a6e884898f96",
"text": "Understanding users'search intent expressed through their search queries is crucial to Web search and online advertisement. Web query classification (QC) has been widely studied for this purpose. Most previous QC algorithms classify individual queries without considering their context information. However, as exemplified by the well-known example on query \"jaguar\", many Web queries are short and ambiguous, whose real meanings are uncertain without the context information. In this paper, we incorporate context information into the problem of query classification by using conditional random field (CRF) models. In our approach, we use neighboring queries and their corresponding clicked URLs (Web pages) in search sessions as the context information. We perform extensive experiments on real world search logs and validate the effectiveness and effciency of our approach. We show that we can improve the F1 score by 52% as compared to other state-of-the-art baselines.",
"title": ""
}
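The passage above classifies queries with a CRF that uses neighboring queries and clicked URLs in a session as context. The sketch below shows the shape of a linear-chain CRF over query sessions in Python using the sklearn-crfsuite package; the feature templates, toy sessions, and labels are illustrative assumptions rather than the features or data used in the paper.

```python
import sklearn_crfsuite

def query_features(session, i):
    """Simple per-query features with context from the previous query/click."""
    q, clicked_domain = session[i]
    feats = {"query": q, "clicked": clicked_domain}
    if i > 0:
        prev_q, prev_domain = session[i - 1]
        feats.update({"prev_query": prev_q, "prev_clicked": prev_domain})
    return feats

# toy sessions: (query text, clicked URL domain) with a category label per query
sessions = [
    [("jaguar price", "cars.com"), ("jaguar xk review", "caranddriver.com")],
    [("jaguar habitat", "nationalgeographic.com"), ("big cats diet", "wikipedia.org")],
]
labels = [["Autos", "Autos"], ["Animals", "Animals"]]

X = [[query_features(s, i) for i in range(len(s))] for s in sessions]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```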
] |
[
{
"docid": "87b969368332c8f1ad4ddeb4c98c1867",
"text": "A comprehensive understanding of individual customer value is crucial to any successful customer relationship management strategy. It is also the key to building products for long-term value returns. Modeling customer lifetime value (CLTV) can be fraught with technical difficulties, however, due to both the noisy nature of user-level behavior and the potentially large customer base. Here we describe a new CLTV system that solves these problems. This was built at Groupon, a large global e-commerce company, where confronting the unique challenges of local commerce means quickly iterating on new products and the optimal inventory to appeal to a wide and diverse audience. Given current purchaser frequency we need a faster way to determine the health of individual customers, and given finite resources we need to know where to focus our energy. Our CLTV system predicts future value on an individual user basis with a random forest model which includes features that account for nearly all aspects of each customer's relationship with our platform. This feature set includes those quantifying engagement via email and our mobile app, which give us the ability to predict changes in value far more quickly than models based solely on purchase behavior. We further model different customer types, such as one-time buyers and power users, separately so as to allow for different feature weights and to enhance the interpretability of our results. Additionally, we developed an economical scoring framework wherein we re-score a user when any trigger events occur and apply a decay function otherwise, to enable frequent scoring of a large customer base with a complex model. This system is deployed, predicting the value of hundreds of millions of users on a daily cadence, and is actively being used across our products and business initiatives.",
"title": ""
},
{
"docid": "04e6982e32933f49ce6e0821362a96ff",
"text": "As one of the methods to stack the 3D-IC fast, the collective bonding process using TCB (thermo-compression bonder) attracts attention [1-3]. In the collective bonding process, we can improve the throughput considerably by postbonding the multilayered pre-bonded chips at a time. However, when the number of the stacked chips increased, we found that the temperature difference between the upper layer and lower layer became large and the good solder connection was not obtained in all the layer in conventional TCB. Therefore, we reported that the collective bonding could be realized by using the heat insulation stage which can prevent an outflow of the heat from the bonding head [1]. By using the heat insulation stage, we could reduce the temperature difference to less than 10oC for four layers, but it was necessary to reduce temperature difference more, depending on the kind of NCF. Besides, it was difficult to reduce difference of temperature when, for example, the number of the stacked chips increased more than eight layers. In this presentation, we report about new collective bonding process that is able to reduce the temperature difference by using high temperature backup stage. We could enable the high temperature process of the stage by using the wafer-handling mechanism which lifts a substrate from the stage every one bonding cycle.",
"title": ""
},
{
"docid": "2b2398bf61847843e18d1f9150a1bccc",
"text": "We present a robust method for capturing articulated hand motions in realtime using a single depth camera. Our system is based on a realtime registration process that accurately reconstructs hand poses by fitting a 3D articulated hand model to depth images. We register the hand model using depth, silhouette, and temporal information. To effectively map low-quality depth maps to realistic hand poses, we regularize the registration with kinematic and temporal priors, as well as a data-driven prior built from a database of realistic hand poses. We present a principled way of integrating such priors into our registration optimization to enable robust tracking without severely restricting the freedom of motion. A core technical contribution is a new method for computing tracking correspondences that directly models occlusions typical of single-camera setups. To ensure reproducibility of our results and facilitate future research, we fully disclose the source code of our implementation.",
"title": ""
},
{
"docid": "ce031463581fd08813991404d6178014",
"text": "With the development of social networks, fake news for various commercial and political purposes has been appearing in large numbers and gotten widespread in the online world. With deceptive words, people can get infected by the fake news very easily and will share them without any fact-checking. For instance, during the 2016 US president election, various kinds of fake news about the candidates widely spread through both official news media and the online social networks. These fake news is usually released to either smear the opponents or support the candidate on their side. The erroneous information in the fake news is usually written to motivate the voters’ irrational emotion and enthusiasm. Such kinds of fake news sometimes can bring about devastating effects, and an important goal in improving the credibility of online social networks is to identify the fake news timely. In this paper, we propose to study the “fake news detection” problem. Automatic fake news identification is extremely hard, since pure model based fact-checking for news is still an open problem, and few existing models can be applied to solve the problem. With a thorough investigation of a fake news data, lots of useful explicit features are identified from both the text words and images used in the fake news. Besides the explicit features, there also exist some hidden patterns in the words and images used in fake news, which can be captured with a set of latent features extracted via the multiple convolutional layers in our model. A model named as TI-CNN (Text and Image information based Convolutinal Neural Network) is proposed in this paper. By projecting the explicit and latent features into a unified feature space, TI-CNN is trained with both the text and image information simultaneously. Extensive experiments carried on the real-world fake news datasets have demonstrate the effectiveness of TI-CNN in solving the fake new detection problem.",
"title": ""
},
{
"docid": "a09866f7077022fa5b00b3380dd70b24",
"text": "Light can elicit acute physiological and alerting responses in humans, the magnitude of which depends on the timing, intensity, and duration of light exposure. Here, we report that the alerting response of light as well as its effects on thermoregulation and heart rate are also wavelength dependent. Exposure to 2 h of monochromatic light at 460 nm in the late evening induced a significantly greater melatonin suppression than occurred with 550-nm monochromatic light, concomitant with a significantly greater alerting response and increased core body temperature and heart rate ( approximately 2.8 x 10(13) photons/cm(2)/sec for each light treatment). Light diminished the distal-proximal skin temperature gradient, a measure of the degree of vasoconstriction, independent of wavelength. Nonclassical ocular photoreceptors with peak sensitivity around 460 nm have been found to regulate circadian rhythm function as measured by melatonin suppression and phase shifting. Our findings-that the sensitivity of the human alerting response to light and its thermoregulatory sequelae are blue-shifted relative to the three-cone visual photopic system-indicate an additional role for these novel photoreceptors in modifying human alertness, thermophysiology, and heart rate.",
"title": ""
},
{
"docid": "5e95aaa54f8acf073ccc11c08c148fe0",
"text": "Billions of dollars of loss are caused every year due to fraudulent credit card transactions. The design of efficient fraud detection algorithms is key for reducing these losses, and more and more algorithms rely on advanced machine learning techniques to assist fraud investigators. The design of fraud detection algorithms is however particularly challenging due to non stationary distribution of the data, highly imbalanced classes distributions and continuous streams of transactions. At the same time public data are scarcely available for confidentiality issues, leaving unanswered many questions about which is the best strategy to deal with them. In this paper we provide some answers from the practitioner’s perspective by focusing on three crucial issues: unbalancedness, non-stationarity and assessment. The analysis is made possible by a real credit card dataset provided by our industrial partner.",
"title": ""
},
{
"docid": "994bebd20ef2594f5337387d97c6bd12",
"text": "In complex, open, and heterogeneous environments, agents must be able to reorganize towards the most appropriate organizations to adapt unpredictable environment changes within Multi-Agent Systems (MAS). Types of reorganization can be seen from two different levels. The individual agents level (micro-level) in which an agent changes its behaviors and interactions with other agents to adapt its local environment. And the organizational level (macro-level) in which the whole system changes it structure by adding or removing agents. This chapter is dedicated to overview different aspects of what is called MAS Organization including its motivations, paradigms, models, and techniques adopted for statically or dynamically organizing agents in MAS.",
"title": ""
},
{
"docid": "589da022358bee9f14b337db42536067",
"text": "To represent a text as a bag of properly identified “phrases” and use the representation for processing the text is proved to be useful. The key question here is how to identify the phrases and represent them. The traditional method of utilizing n-grams can be regarded as an approximation of the approach. Such a method can suffer from data sparsity, however, particularly when the length of n-gram is large. In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as “region embeddings”. Without loss of generality we address text classification. We specifically propose two models for region embeddings. In our models, the representation of a word has two parts, the embedding of the word itself, and a weighting matrix to interact with the local context, referred to as local context unit. The region embeddings are learned and used in the classification task, as parameters of the neural network classifier. Experimental results show that our proposed method outperforms existing methods in text classification on several benchmark datasets. The results also indicate that our method can indeed capture the salient phrasal expressions in the texts.",
"title": ""
},
{
"docid": "a86056ab9e6fc98247459e9798aa9949",
"text": "We address the problem of 3D rotation equivariance in convolutional neural networks. 3D rotations have been a challenging nuisance in 3D classification tasks requiring higher capacity and extended data augmentation in order to tackle it. We model 3D data with multivalued spherical functions and we propose a novel spherical convolutional network that implements exact convolutions on the sphere by realizing them in the spherical harmonic domain. Resulting filters have local symmetry and are localized by enforcing smooth spectra. We apply a novel pooling on the spectral domain and our operations are independent of the underlying spherical resolution throughout the network. We show that networks with much lower capacity and without requiring data augmentation can exhibit performance comparable to the state of the art in standard retrieval and classification benchmarks.",
"title": ""
},
{
"docid": "525c35dd86ac9370bf7f42d0afb00327",
"text": "Objectives: This study aimed to assess the levels of breast cancer awareness among Saudi females, and to compare between house wives and employees women regarding knowledge and practical of breast cancer. Methods: This cross sectional study was conducted among 300 women in Taif city. Data were collected using a self administrated questionnaire which included questions on socio-demographic data, knowledge of risk factors of breast cancer, breast self examination, clinical breast examination and awareness of mammogram. Results: Age of respondents was 16 to 45years, employee (51%), educated (90%) and married (71%). The majority had good knowledge about risk factors of breast cancer and breast self examination (93.3%, 87% respectively) and indicated TV, magazines and breast cancer campaigns as their source of information (33.7%, 29% respectively). No significant difference between employees, house wives and students, regarding breast cancer knowledge (p≥0.05). 73.3% of women were unaware of clinical breast exam and 80.3% of mammogram. Conclusion: Most women were aware of risk factors of breast cancer. However, the knowledge about clinical breast examination and awareness of mammogram were inadequate. It is recommended that the level of knowledge should be raised among women, especially breast cancer screen procedure CBE, and mammogram.",
"title": ""
},
{
"docid": "e3dca51d6d427f9751b4ee566515ddb1",
"text": "We discuss the design, realization and experimental characterization of a GaN-based hybrid Doherty power amplifier for wideband operation in the 3-3.6-GHz frequency range. The design adopts a novel, simple approach based on wideband compensator networks. Second-harmonic tuning is exploited for the main amplifier at the upper limit of the frequency band, thus improving gain equalization over the amplifier bandwidth. The realized amplifier is based on a packaged GaN HEMT and shows, at 6 dB of output power back-off, a drain efficiency higher than 38% in the 3-3.6-GHz band, gain around 10 dB, and maximum power between 43 and 44 dBm, with saturated efficiency between 55% and 66%. With respect to the state of the art, we obtain, at a higher frequency, a wideband amplifier with similar performances in terms of bandwidth, output power, and efficiency, through a simpler approach. Moreover, the measured constant maximum output power of 20 W suggests that the power utilization factor of the 10-W (Class A) GaN HEMT is excellent over the amplifier band.",
"title": ""
},
{
"docid": "72a51dfdcdf5ff70c94922a048f218d1",
"text": "We have synthesized thermodynamically metastable Ca2IrO4 thin-films on YAlO3 (110) substrates by pulsed laser deposition. The epitaxial Ca2IrO4 thin-films are of K2NiF4-type tetragonal structure. Transport and optical spectroscopy measurements indicate that the electronic structure of the Ca2IrO4 thin-films is similar to that of Jeff = 1/2 spin-orbit-coupled Mott insulator Sr2IrO4 and Ba2IrO4, with the exception of an increased gap energy. The gap increase is to be expected in Ca2IrO4 due to its increased octahedral rotation and tilting, which results in enhanced electron-correlation, U/W. Our results suggest that the epitaxial stabilization growth of metastable-phase thin-films can be used effectively for investigating layered iridates and various complex-oxide systems.",
"title": ""
},
{
"docid": "804ddcaf56ef34b0b578cc53d7cca304",
"text": "This review article describes two protocols adapted from lung ultrasound: the bedside lung ultrasound in emergency (BLUE)-protocol for the immediate diagnosis of acute respiratory failure and the fluid administration limited by lung sonography (FALLS)-protocol for the management of acute circulatory failure. These applications require the mastery of 10 signs indicating normal lung surface (bat sign, lung sliding, A-lines), pleural effusions (quad and sinusoid sign), lung consolidations (fractal and tissue-like sign), interstitial syndrome (lung rockets), and pneumothorax (stratosphere sign and the lung point). These signs have been assessed in adults, with diagnostic accuracies ranging from 90% to 100%, allowing consideration of ultrasound as a reasonable bedside gold standard. In the BLUE-protocol, profiles have been designed for the main diseases (pneumonia, congestive heart failure, COPD, asthma, pulmonary embolism, pneumothorax), with an accuracy > 90%. In the FALLS-protocol, the change from A-lines to lung rockets appears at a threshold of 18 mm Hg of pulmonary artery occlusion pressure, providing a direct biomarker of clinical volemia. The FALLS-protocol sequentially rules out obstructive, then cardiogenic, then hypovolemic shock for expediting the diagnosis of distributive (usually septic) shock. These applications can be done using simple grayscale machines and one microconvex probe suitable for the whole body. Lung ultrasound is a multifaceted tool also useful for decreasing radiation doses (of interest in neonates where the lung signatures are similar to those in adults), from ARDS to trauma management, and from ICUs to points of care. If done in suitable centers, training is the least of the limitations for making use of this kind of visual medicine.",
"title": ""
},
{
"docid": "28df21c82806cb660713d720f6a6d324",
"text": "Neural Networks are very successful in acquiring hidden knowledge in datasets. Their most important weakness is that the knowled ge they acquire is represented in a form not understandable to humans. Understandability problem of Neural Networks can be solved by extracti ng Decision Rules or Decision Trees from the trained network. There are s everal Decision Rule extraction methods and Mark Craven’s TREPAN which extracts MofN type Decision Trees from trained networks. We introduced new splitting techniques for extracting classical Decision Trees fr om trained Neural Networks. We showed that the new method (DecText) is effecti ve in extracting high fidelity trees from trained networks. We also introduced a new discretization technique to make DecText be able to hand le continuous features and a new pruning technique for finding simplest tree with the highest fidelity.",
"title": ""
},
{
"docid": "3b2ddbef9ee3e5db60e2b315064a02c3",
"text": "It is indispensable to understand and analyze industry structure and company relations from documents, such as news articles, in order to make management decisions concerning supply chains, selection of business partners, etc. Analysis of company relations from news articles requires both a macro-viewpoint, e.g., overviewing competitor groups, and a micro-viewpoint, e.g., grasping the descriptions of the relationship between a specific pair of companies collaborating. Research has typically focused on only the macro-viewpoint, classifying each company pair into a specific relation type. In this paper, to support company relation analysis from both macro-and micro-viewpoints, we propose a method that extracts collaborative/competitive company pairs from individual sentences in Web news articles by applying a Markov logic network and gather extracted relations from each company pair. By this method, we are able not only to perform clustering of company pairs into competitor groups based on the dominant relations of each pair (macro-viewpoint) but also to know how each company pair is described in individual sentences (micro-viewpoint). We empirically confirmed that the proposed method is feasible through analysis of 4,661 Web news articles on the semiconductor and related industries.",
"title": ""
},
{
"docid": "753e0af8b59c8bfd13b63c3add904ffe",
"text": "Background: Surgery of face and parotid gland may cause injury to branches of the facial nerve, which results in paralysis of muscles of facial expression. Knowledge of branching patterns of the facial nerve and reliable landmarks of the surrounding structures are essential to avoid this complication. Objective: Determine the facial nerve branching patterns, the course of the marginal mandibular branch (MMB), and the extraparotid ramification in relation to the lateral palpebral line (LPL). Materials and methods: One hundred cadaveric half-heads were dissected for determining the facial nerve branching patterns according to the presence of anastomosis between branches. The course of the MMB was followed until it entered the depressor anguli oris in 49 specimens. The vertical distance from the mandibular angle to this branch was measured. The horizontal distance from the LPL to the otobasion superious (LPL-OBS) and the apex of the parotid gland (LPL-AP) were measured in 52 specimens. Results: The branching patterns of the facial nerve were categorized into six types. The least common (1%) was type I (absent of anastomosis), while type V, the complex pattern was the most common (29%). Symmetrical branching pattern occurred in 30% of cases. The MMB was coursing below the lower border of the mandible in 57% of cases. The mean vertical distance was 0.91±0.22 cm. The mean horizontal distances of LPL-OBS and LPLAP were 7.24±0.6 cm and 3.95±0.96 cm, respectively. The LPL-AP length was 54.5±11.4% of LPL-OBS. Conclusion: More complex branching pattern of the facial nerve was found in this population and symmetrical branching pattern occurred less of ten. The MMB coursed below the lower border of the angle of mandible with a mean vertical distance of one centimeter. The extraparotid ramification of the facial nerve was located in the area between the apex of the parotid gland and the LPL.",
"title": ""
},
{
"docid": "43fa16b19c373e2d339f45c71a0a2c22",
"text": "McKusick-Kaufman syndrome is a human developmental anomaly syndrome comprising mesoaxial or postaxial polydactyly, congenital heart disease and hydrometrocolpos. This syndrome is diagnosed most frequently in the Old Order Amish population and is inherited in an autosomal recessive pattern with reduced penetrance and variable expressivity. Homozygosity mapping and linkage analyses were conducted using two pedigrees derived from a larger pedigree published in 1978. The PedHunter software query system was used on the Amish Genealogy Database to correct the previous pedigree, derive a minimal pedigree connecting those affected sibships that are in the database and determine the most recent common ancestors of the affected persons. Whole genome short tandem repeat polymorphism (STRP) screening showed homozygosity in 20p12, between D20S162 and D20S894 , an area that includes the Alagille syndrome critical region. The peak two-point LOD score was 3.33, and the peak three-point LOD score was 5.21. The physical map of this region has been defined, and additional polymorphic markers have been isolated. The region includes several genes and expressed sequence tags (ESTs), including the jagged1 gene that recently has been shown to be haploinsufficient in the Alagille syndrome. Sequencing of jagged1 in two unrelated individuals affected with McKusick-Kaufman syndrome has not revealed any disease-causing mutations.",
"title": ""
},
{
"docid": "f5e14c4bf03acb092abc4b00d913e6f3",
"text": "In incoherent Direct Sequence Optical Code Division Multiple Access system (DSOCDMA), the Multiple Access Interference (MAI) is one of the main limitations. To mitigate the MAI, many types of codes can be used to remove the contributions from users. In this paper, we study two types of unipolar codes used in DS-OCDMA system incoherent which are optical orthogonal codes (OOC) and the prime code (PC). We developed the characteristics of these codes i,e factors correlations, and the theoretical upper bound of the probability of error. The simulation results showed that PC codes have better performance than OOC codes.",
"title": ""
},
{
"docid": "39d522e6db7971ccf8a9d3bd3a915a10",
"text": "The Internet of Things (IoT) is next generation technology that is intended to improve and optimize daily life by operating intelligent sensors and smart objects together. At application layer, communication of resourceconstrained devices is expected to use constrained application protocol (CoAP).Communication security is an important aspect of IoT environment. However closed source security solutions do not help in formulating security in IoT so that devices can communicate securely with each other. To protect the transmission of confidential information secure CoAP uses datagram transport layer security (DTLS) as the security protocol for communication and authentication of communicating devices. DTLS was initially designed for powerful devices that are connected through reliable and high bandwidth link. This paper proposes a collaboration of DTLS and CoAP for IoT. Additionally proposed DTLS header compression scheme that helps to reduce packet size, energy consumption and avoids fragmentation by complying the 6LoWPAN standards. Also proposed DTLS header compression scheme does not compromises the point-to-point security provided by DTLS. Since DTLS has chosen as security protocol underneath the CoAP, enhancement to the existing DTLS also provided by introducing the use of raw public key in DTLS.",
"title": ""
},
{
"docid": "7c6fa8d48ad058f1c65f1c775b71e4b5",
"text": "A new method for determining nucleotide sequences in DNA is described. It is similar to the \"plus and minus\" method [Sanger, F. & Coulson, A. R. (1975) J. Mol. Biol. 94, 441-448] but makes use of the 2',3'-dideoxy and arabinonucleoside analogues of the normal deoxynucleoside triphosphates, which act as specific chain-terminating inhibitors of DNA polymerase. The technique has been applied to the DNA of bacteriophage varphiX174 and is more rapid and more accurate than either the plus or the minus method.",
"title": ""
}
] |
scidocsrr
|
e025cf07f4fc45b82405b8b15a768e93
|
Comparative Study of ID3/C4.5 Decision Tree and Multilayer Perceptron Algorithms for the Prediction of Typhoid Fever
|
[
{
"docid": "06f575b18d1421472a178c555d31987b",
"text": "In recent, growth of higher education has increased rapidly. Many new institutions, colleges and universities are being established by both the private and government sectors for the growth of education and welfare of the students. Each institution aims at producing higher and exemplary education rates by employing various teaching and grooming methods. But still there are cases of unemployment that exists among the medium and low risk students. This paper describes the use of data mining techniques to improve the efficiency of academic performance in the educational institutions. Various data mining techniques such as decision tree, association rule, nearest neighbors, neural networks, genetic algorithms, exploratory factor analysis and stepwise regression can be applied to the higher education process, which in turn helps to improve student’s performance. This type of approach gives high confidence to students in their studies. This method helps to identify the students who need special advising or counseling by the teacher which gives high quality of education. Keywords-component; Data Mining; KDD; EDM; Association Rule",
"title": ""
}
] |
[
{
"docid": "d3c9785f2981670430e58ebabb25f564",
"text": "A model of category effects on reports from memory is presented. The model holds that stimuli are represented at 2 levels of detail: a fine-grain value and a category. When memory is inexact but people must report an exact value, they use estimation processes that combine the remembered stimulus value with category information. The proposed estimation processes include truncation at category boundaries and weighting with a central (prototypic) category value. These processes introduce bias in reporting even when memory is unbiased, but nevertheless may improve overall accuracy (by decreasing the variability of reports). Four experiments are presented in which people report the location of a dot in a circle. Subjects spontaneously impose horizontal and vertical boundaries that divide the circle into quadrants. They misplace dots toward a central (prototypic) location in each quadrant, as predicted by the model. The proposed model has broad implications; notably, it has the potential to explain biases of the sort described in psychophysics (contraction bias and the bias captured by Weber's law) as well as symmetries in similarity judgments, without positing distorted representations of physical scales.",
"title": ""
},
{
"docid": "5bd483e895de779f8b91ca8537950a2f",
"text": "To evaluate the efficacy of pregabalin in facilitating taper off chronic benzodiazepines, outpatients (N = 106) with a lifetime diagnosis of generalized anxiety disorder (current diagnosis could be subthreshold) who had been treated with a benzodiazepine for 8-52 weeks were stabilized for 2-4 weeks on alprazolam in the range of 1-4 mg/day. Patients were then randomized to 12 weeks of double-blind treatment with either pregabalin 300-600 mg/day or placebo while undergoing a gradual benzodiazepine taper at a rate of 25% per week, followed by a 6-week benzodiazepine-free phase during which they continued double-blind study treatment. Outcome measures included ability to remain benzodiazepine-free (primary) as well as changes in Hamilton Anxiety Rating Scale (HAM)-A and Physician Withdrawal Checklist (PWC). At endpoint, a non-significant higher proportion of patients remained benzodiazepine-free receiving pregabalin compared with placebo (51.4% vs 37.0%). Treatment with pregabalin was associated with significantly greater endpoint reduction in the HAM-A total score versus placebo (-2.5 vs +1.3; p < 0.001), and lower endpoint mean PWC scores (6.5 vs 10.3; p = 0.012). Thirty patients (53%) in the pregabalin group and 19 patients (37%) in the placebo group completed the study, reducing the power to detect a significant difference on the primary outcome. The results on the anxiety and withdrawal severity measures suggest that switching to pregabalin may be a safe and effective method for discontinuing long-term benzodiazepine therapy.",
"title": ""
},
{
"docid": "49c19e5417aa6a01c59f666ba7cc3522",
"text": "The effect of various drugs on the extracellular concentration of dopamine in two terminal dopaminergic areas, the nucleus accumbens septi (a limbic area) and the dorsal caudate nucleus (a subcortical motor area), was studied in freely moving rats by using brain dialysis. Drugs abused by humans (e.g., opiates, ethanol, nicotine, amphetamine, and cocaine) increased extracellular dopamine concentrations in both areas, but especially in the accumbens, and elicited hypermotility at low doses. On the other hand, drugs with aversive properties (e.g., agonists of kappa opioid receptors, U-50,488, tifluadom, and bremazocine) reduced dopamine release in the accumbens and in the caudate and elicited hypomotility. Haloperidol, a neuroleptic drug, increased extracellular dopamine concentrations, but this effect was not preferential for the accumbens and was associated with hypomotility and sedation. Drugs not abused by humans [e.g., imipramine (an antidepressant), atropine (an antimuscarinic drug), and diphenhydramine (an antihistamine)] failed to modify synaptic dopamine concentrations. These results provide biochemical evidence for the hypothesis that stimulation of dopamine transmission in the limbic system might be a fundamental property of drugs that are abused.",
"title": ""
},
{
"docid": "391b2716b952c1613d964fe58d70ee5f",
"text": "BACKGROUND\nDue to an increasing number of norovirus infections in the last years rapid, specific, and sensitive diagnostic tools are needed. Reverse transcriptase-polymerase chain reactions (RT-PCR) have become the methods of choice. To minimize the working time and the risk of carryover contamination during the multi-step procedure of PCR the multiplex real-time RT-PCR for the simultaneous detection of genogroup I (GI) and II (GII) offers advantages for the handling of large amounts of clinical specimens.\n\n\nMETHODS\nWe have developed and evaluated a multiplex one-tube RT-PCR using a combination of optimized GI and GII specific primers located in the junction between ORF1 and ORF2 of the norovirus genome. For the detection of GI samples, a 3'-minor groove binder-DNA probe (GI-MGB-probe) were designed and used for the multiplex real-time PCR.\n\n\nRESULTS\nComparable results to those of our in-house nested PCR and monoplex real-time-PCR were only obtained using the GI specific MGB-probe. The MGB-probe forms extremely stable duplexes with single-stranded DNA targets, which enabled us to design a shorter probe (length 15 nucleotides) hybridizing to a more conserved part of the GI sequences. 97% of 100 previously norovirus positive specimens (tested by nested PCR and/or monoplex real-time PCR) were detected by the multiplex real-time PCR. A broad dynamic range from 2 x 10(1) to 2 x 10(7) genomic equivalents per assay using plasmid DNA standards for GI and GII were obtained and viral loads between 2.5 x 10(2) and 2 x 10(12) copies per ml stool suspension were detected.\n\n\nCONCLUSION\nThe one-tube multiplex RT real-time PCR using a minor groove binder-DNA probe for GI is a fast, specific, sensitive and cost-effective tool for the detection of norovirus infections in both mass outbreaks and sporadic cases and may have also applications in food and environmental testing.",
"title": ""
},
{
"docid": "fdb0a20c089535aa8129e298bdc6ef35",
"text": "A compact dual band micro strip patch antenna is designed for C (4-8 GHz) band and X (8-12 GHz) band applications. The proposed antenna consists of a rectangular patch having four U-slots and one I-slot with H-shaped DGS (Defected Ground Structure). The antenna has overall size of 25 mm by 23 mm and gives bandwidth of about 140 MHz from 5.85 GHz to 6 GHz and of about 1.21 GHz from 7.87 to 9 GHz at resonating frequency of 5.9 GHz and 8.8 GHz respectively with DGS. The antenna without DGS mainly resonates at 6 GHz and 8.7 GHz. The antenna with DGS has return losses -16.29 dB at 5.9 GHz and -18.28 dB at 8.8 GHz, gain 1.2 dBi for 5.9 GHz and 4.4 dBi for 8.8 GHz. This antenna has been analyzed using IE3D electromagnetic solver.",
"title": ""
},
{
"docid": "1811a3b1fb9f5a492e15b0cc845d29f5",
"text": "We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.",
"title": ""
},
{
"docid": "737b07a559fccc77c62a51abcad49f2b",
"text": "Markov Logic Networks (MLNs) are well-suited for expressing statistics such as “with high probability a smoker knows another smoker” but not for expressing statements such as “there is a smoker who knows most other smokers”, which is necessary for modeling, e.g. influencers in social networks. To overcome this shortcoming, we study quantified MLNs which generalize MLNs by introducing statistical universal quantifiers, allowing to express also the latter type of statistics in a principled way. Our main technical contribution is to show that the standard reasoning tasks in quantified MLNs, maximum a posteriori and marginal inference, can be reduced to their respective MLN counterparts in polynomial time.",
"title": ""
},
{
"docid": "fb970fec75df61b9f991d8ebd0edc50e",
"text": "The Voronoi diagram is a famous structure of computational geometry. We show that there is a straightforward equivalent in graph theory which can be eeciently computed. In particular, we give two algorithms for the computation of graph Voronoi diagrams, prove a lower bound on the problem, and we identify cases where the algorithms presented are optimal. The space requirement of a graph Voronoi diagram is modest, since it needs no more space than the graph itself. The investigation of graph Voronoi diagrams is motivated by many applications and problems on networks that can be easily solved with their help. This includes the computation of nearest facilities, all nearest neighbors and closest pairs, some kind of collision free moving, and anti-centers and closest points.",
"title": ""
},
{
"docid": "741ba628eacb59d7b9f876520406e600",
"text": "Awareness of the physical location for each node is required by many wireless sensor network applications. The discovery of the position can be realized utilizing range measurements including received signal strength, time of arrival, time difference of arrival and angle of arrival. In this paper, we focus on localization techniques based on angle of arrival information between neighbor nodes. We propose a new localization and orientation scheme that considers beacon information multiple hops away. The scheme is derived under the assumption of noisy angle measurements. We show that the proposed method achieves very good accuracy and precision despite inaccurate angle measurements and a small number of beacons",
"title": ""
},
{
"docid": "56d9033f4a624e0ae9cad99cd62b6af0",
"text": "Municipal solid waste management (MSWM) is one of the major environmental problems of Indian cities. Improper management of municipal solid waste (MSW) causes hazards to inhabitants. Various studies reveal that about 90% of MSW is disposed of unscientifically in open dumps and landfills, creating problems to public health and the environment. In the present study, an attempt has been made to provide a comprehensive review of the characteristics, generation, collection and transportation, disposal and treatment technologies of MSW practiced in India. The study pertaining to MSWM for Indian cities has been carried out to evaluate the current status and identify the major problems. Various adopted treatment technologies for MSW are critically reviewed, along with their advantages and limitations. The study is concluded with a few fruitful suggestions, which may be beneficial to encourage the competent authorities/researchers to work towards further improvement of the present system.",
"title": ""
},
{
"docid": "466f4ed7a59f9b922a8b87685d8f3a77",
"text": "Ten cases of oral hairy leukoplakia (OHL) in HIV- negative patients are presented. Eight of the 10 patients were on steroid treatment for chronic obstructive pulmonary disease, 1 patient was on prednisone as part of a therapeutic regimen for gastrointestinal stromal tumor, and 1 patient did not have any history of immunosuppression. There were 5 men and 5 women, ages 32-79, with mean age being 61.8 years. Nine out of 10 lesions were located unilaterally on the tongue, whereas 1 lesion was located at the junction of the hard and soft palate. All lesions were described as painless, corrugated, nonremovable white plaques (leukoplakias). Histologic features were consistent with Epstein-Barr virus-associated hyperkeratosis suggestive of OHL, and confirmatory in situ hybridization was performed in all cases. Candida hyphae and spores were present in 8 cases. Pathologists should be aware of OHL presenting not only in HIV-positive and HIV-negative organ transplant recipients but also in patients receiving steroid treatment, and more important, certain histologic features should raise suspicion for such diagnosis without prior knowledge of immunosuppression.",
"title": ""
},
{
"docid": "229d891e8b899236480ef2ec5683886d",
"text": "In many applications the process of generating label information is expensive and time consuming. We present a new method that combines active and semi-supervised deep learning to achieve high generalization performance from a deep convolutional neural network with as few known labels as possible. In a setting where a small amount of labeled data as well as a large amount of unlabeled data is available, our method first learns the labeled data set. This initialization is followed by an expectation maximization algorithm, where further training reduces classification entropy on the unlabeled data by targeting a low entropy fit which is consistent with the labeled data. In addition the algorithm asks at a specified frequency an oracle for labels of data with entropy above a certain entropy quantile. Using this active learning component we obtain an agile labeling process that achieves high accuracy, but requires only a small amount of known labels. For the MNIST dataset we report an error rate of 2.06% using only 300 labels and 1.06% for 1,000 labels. These results are obtained without employing any special network architecture or data augmentation.",
"title": ""
},
{
"docid": "5ed4c23e1fcfb3f18c18bb1eb6f408ab",
"text": "In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.",
"title": ""
},
{
"docid": "7a84328148fac2738d8954976b09aa45",
"text": "The region was covered by 1:250 000 mapping by the Geological Survey of Canada during the mid 1940s (Lord, 1948). A number of showings were found. One of these, the Marmot, was the focus of the first modern exploration (1960s) in the general area. At the same time there was significant exploration activity for porphyry copper and molybdenum mineralization in the intrusive belt running north and south through the McConnell Range. A large gossan was discovered in 1966 at the present site of the Kemess North prospect and led to similar exploration on nearby ground. Falconbridge Nickel Ltd., during a reconnaissance helicopter flight in 1971, discovered a malachite-stained bed in the Sustut drainage that was traceable for over 2500 feet. Their assessment suggested a replacement copper deposi t hosted by volcaniclastic rocks in the upper part of the Takla Group. Numerous junior and major resource companies acquired ground in the area. In 1972 copper was found on the Willow cliffs on the opposite side of the Sustut River and a porphyry style target was identified at the Day. In 1973 the B.C. Geological Survey conducted a mineral deposit study of the Sustut copper area (Church, 1974a). The Geological Survey of Canada returned to pursue general and detailed studies within the McConnell sheet (Richards 1976, and Monger 1977). Monger and Church (1976) revised the stratigraphic nomenclature based on breaks and lithological changes in the volcanic succession supported by fossil data and field observations. In 1983, follow up of a gold-copper-molybdenum soil anomaly led to the discovery of the Kemess South porphyry deposit.",
"title": ""
},
{
"docid": "83444eb9853ef051ef2a8092e1a336b9",
"text": "The problem of distributing gas through a network of pipelines is formulated as a cost minimization subject to nonlinear flow-pressure relations, material balances and pressure bounds. The solution method is based on piecewise linear approximations of the nonlinear flowpressure relations. The approximated problem is solved by an extension of the Simplex method. The solution method is tested on real world data and compared with alternative solution methods.",
"title": ""
},
{
"docid": "c70466f8b1e70fcdd4b7fe3f2cb772b2",
"text": "We present Tor, a circuit-based low-latency anonymous communication service. This second-generation Onion Routing system addresses limitations in the original design. Tor adds perfect forward secrecy, congestion control, directory servers, integrity checking, configurable exit policies, and a practical design for rendezvous points. Tor works on the real-world Internet, requires no special privileges or kernel modifications, requires little synchronization or coordination between nodes, and provides a reasonable tradeoff between anonymity, usability, and efficiency. We briefly describe our experiences with an international network of more than a dozen hosts. We close with a list of open problems in anonymous communication.",
"title": ""
},
{
"docid": "f41c9b1bcc36ed842f15d7570ff67f92",
"text": "Game and creation are activities which have good potential for computational thinking skills. In this paper we present T-Maze, an economical tangible programming tool for children aged 5-9 to build computer programs in maze games by placing wooden blocks. Through the use of computer vision technology, T-Maze provides a live programming interface with real-time graphical and voice feedback. We conducted a user study with 7 children using T-Maze to play two levels of maze-escape games and create their own mazes. The results show that T-Maze is not only easy to use, but also has the potential to help children cultivate computational thinking like abstraction, problem decomposition, and creativity.",
"title": ""
},
{
"docid": "3cb0232cd4b75a8691f9aa4f1d663e9a",
"text": "We introduce an approach for realtime segmentation of a scene into foreground objects, background, and object shadows superimposed on the background. To segment foreground objects, we use an adaptive thresholding method, which is able to deal with rapid changes of the overall brightness. The segmented image usually includes shadows cast by the objects onto the background. Our approach is able to robustly remove the shadow from the background while preserving the silhouette of the foreground object. We discuss a similarity measure for comparing color pixels, which improves the quality of shadow removal significantly. As the image segmentation is part of a real-time interaction environment, real-time processing is needed. Our implementation allows foreground segmentation and robust shadow removal with 15 Hz.",
"title": ""
},
{
"docid": "6fc6167d1ef6b96d239fea03b9653865",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.",
"title": ""
},
{
"docid": "d961d5b1e310513cb3a70376cb65e5e4",
"text": "Defect prediction models help software quality assurance teams to effectively allocate their limited resources to the most defect-prone software modules. A variety of classification techniques have been used to build defect prediction models ranging from simple (e.g., logistic regression) to advanced techniques (e.g., Multivariate Adaptive Regression Splines (MARS)). Surprisingly, recent research on the NASA dataset suggests that the performance of a defect prediction model is not significantly impacted by the classification technique that is used to train it. However, the dataset that is used in the prior study is both: (a) noisy, i.e., contains erroneous entries and (b) biased, i.e., only contains software developed in one setting. Hence, we set out to replicate this prior study in two experimental settings. First, we apply the replicated procedure to the same (known-to-be noisy) NASA dataset, where we derive similar results to the prior study, i.e., the impact that classification techniques have appear to be minimal. Next, we apply the replicated procedure to two new datasets: (a) the cleaned version of the NASA dataset and (b) the PROMISE dataset, which contains open source software developed in a variety of settings (e.g., Apache, GNU). The results in these new datasets show a clear, statistically distinct separation of groups of techniques, i.e., the choice of classification technique has an impact on the performance of defect prediction models. Indeed, contrary to earlier research, our results suggest that some classification techniques tend to produce defect prediction models that outperform others.",
"title": ""
}
] |
scidocsrr
|
450f00608de97d4ee3b9ea41eaea964d
|
PIRM Challenge on Perceptual Image Enhancement on Smartphones: Report
|
[
{
"docid": "36f8d1e7cd7a6e2a68c3dd4336e91da8",
"text": "Although the accuracy of super-resolution (SR) methods based on convolutional neural networks (CNN) soars high, the complexity and computation also explode with the increased depth and width of the network. Thus, we propose the convolutional anchored regression network (CARN) for fast and accurate single image super-resolution (SISR). Inspired by locally linear regression methods (A+ and ARN), the new architecture consists of regression blocks that map input features from one feature space to another. Different from A+ and ARN, CARN is no longer relying on or limited by hand-crafted features. Instead, it is an end-to-end design where all the operations are converted to convolutions so that the key concepts, i.e., features, anchors, and regressors, are learned jointly. The experiments show that CARN achieves the best speed and accuracy trade-off among the SR methods. The code is available at https://github.com/ofsoundof/CARN.",
"title": ""
}
] |
[
{
"docid": "008ef5b90cf6e9bac922c6a8a1b4a4eb",
"text": "Durante los días 5 y 6 de mayo de 2011 se celebró en el Campus de Teruel de la Universidad de Zaragoza (España) la Segunda Conferencia Internacional en Fomento e Innovación con Nuevas Tecnologías en la Docencia de la Ingeniería (FINTDI) (http://fintdi.unizar.es/). Esta conferencia, promovida por el Capítulo Español de la Sociedad de Educación del IEEE, surgió en el año 2009 con el objetivo de dar a conocer y poner en común las experiencias de innovación docente que se están desarrollando en el ámbito de la ingeniería en las diferentes universidades españolas e iberoamericanas. Así mismo, con su creación se pretendía implantar un foro donde valorar conjuntamente las repercusiones de la utilización de nuevos métodos, materiales y herramientas docentes, con los que los profesores universitarios están trabajando desde la implantación del EEES, en una búsqueda continuada por aumentar la calidad de su docencia. En esta segunda edición, y con el fin de promover los puntos de encuentros, debate y reflexión, se introdujeron algunos cambios de formato. Entre otros: a) Se incorporaron sesiones de póster, para dar visibilidad a los trabajos que todavía estaban en proceso de desarrollo (Work in Progress), pero cuyas ideas podrían servir como referente para otros docentes o como punto de partida para futuras colaboraciones entre universidades. b) Se modificó el formato de las sesiones: como ya es habitual los participantes tenían un tiempo limitado para exponer, pero en esta ocasión el turno de preguntas se reservó para el final de la sesión. Se intentó de este modo que los asistentes a una sesión participaran hasta el final, fomentando el debate y la reflexión conjunta sobre los trabajos presentados. c) Con idea de apoyar el aprendizaje de los asistentes, se impartieron tres talleres de forma gratuita, gracias a la desinteresada labor de los ponentes. En concreto, se impartieron los talleres: 1) “Generación de Objetos Educativos Reutilizables” – a cargo de D. Oscar Martínez Bonastre, 2) “Uso del laboratorio remoto VISIR para circuitos electrónicos básicos Entorno VISIR (laboratorio remoto para electrónica) y sus aplicaciones en el aula”, a cargo de D. Unai Hernández Jayo y D. Javier García Zubía y 3) “Tabletas + tinta digital: una oportunidad para mejorar la interacción en nuestras aulas”, impartido por D. José-V. Benlloch-Dualde. Además, el congreso se vio enriquecido con dos conferencias plenarias: * “Caught in the Storm: Engineers, Ethics, and Hurricane Katrina” – a cargo del Dr. Charles Fleddermann, Editor Jefe de la revista internacional IEEETransactions on Education. * “Developing skill to work on multidisciplinar teams: taking part in the PDT project” – a cargo de la doctora Paloma Díaz, Catedrática del Departamento de Informática de la Universidad Carlos III de Madrid (Escuela Politécnica Superior).",
"title": ""
},
{
"docid": "ab1c7ede012bd20f30bab66fcaec49fa",
"text": "Visual-inertial navigation systems (VINS) have prevailed in various applications, in part because of the complementary sensing capabilities and decreasing costs as well as sizes. While many of the current VINS algorithms undergo inconsistent estimation, in this paper we introduce a new extended Kalman filter (EKF)-based approach towards consistent estimates. To this end, we impose both state-transition and obervability constraints in computing EKF Jacobians so that the resulting linearized system can best approximate the underlying nonlinear system. Specifically, we enforce the propagation Jacobian to obey the semigroup property, thus being an appropriate state-transition matrix. This is achieved by parametrizing the orientation error state in the global, instead of local, frame of reference, and then evaluating the Jacobian at the propagated, instead of the updated, state estimates. Moreover, the EKF linearized system ensures correct observability by projecting the most-accurate measurement Jacobian onto the observable subspace so that no spurious information is gained. The proposed algorithm is validated by both Monte-Carlo simulation and real-world experimental tests.",
"title": ""
},
{
"docid": "de4c44363fd6bb6da7ec0c9efd752213",
"text": "Modeling the structure of coherent texts is a task of great importance in NLP. The task of organizing a given set of sentences into a coherent order has been commonly used to build and evaluate models that understand such structure. In this work we propose an end-to-end neural approach based on the recently proposed set to sequence mapping framework to address the sentence ordering problem. Our model achieves state-of-the-art performance in the order discrimination task on two datasets widely used in the literature. We also consider a new interesting task of ordering abstracts from conference papers and research proposals and demonstrate strong performance against recent methods. Visualizing the sentence representations learned by the model shows that the model has captured high level logical structure in these paragraphs. The model also learns rich semantic sentence representations by learning to order texts, performing comparably to recent unsupervised representation learning methods in the sentence similarity and paraphrase detection tasks.",
"title": ""
},
{
"docid": "3623bb72ecc6c178c1b9412745025354",
"text": "Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely-used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.",
"title": ""
},
{
"docid": "b82f7b7a317715ba0c7ca87db92c7bf6",
"text": "Regions of hypoxia in tumours can be modelled in vitro in 2D cell cultures with a hypoxic chamber or incubator in which oxygen levels can be regulated. Although this system is useful in many respects, it disregards the additional physiological gradients of the hypoxic microenvironment, which result in reduced nutrients and more acidic pH. Another approach to hypoxia modelling is to use three-dimensional spheroid cultures. In spheroids, the physiological gradients of the hypoxic tumour microenvironment can be inexpensively modelled and explored. In addition, spheroids offer the advantage of more representative modelling of tumour therapy responses compared with 2D culture. Here, we review the use of spheroids in hypoxia tumour biology research and highlight the different methodologies for spheroid formation and how to obtain uniformity. We explore the challenge of spheroid analyses and how to determine the effect on the hypoxic versus normoxic components of spheroids. We discuss the use of high-throughput analyses in hypoxia screening of spheroids. Furthermore, we examine the use of mathematical modelling of spheroids to understand more fully the hypoxic tumour microenvironment.",
"title": ""
},
{
"docid": "1fd8b9ea33ad60c23fa90b3b971be111",
"text": "Precise positioning of an automobile to within lane-level precision can enable better navigation and context-awareness. However, GPS by itself cannot provide such precision in obstructed urban environments. In this paper, we present a system called CARLOC for lane-level positioning of automobiles. CARLOC uses three key ideas in concert to improve positioning accuracy: it uses digital maps to match the vehicle to known road segments; it uses vehicular sensors to obtain odometry and bearing information; and it uses crowd-sourced location of estimates of roadway landmarks that can be detected by sensors available in modern vehicles. CARLOC unifies these ideas in a probabilistic position estimation framework, widely used in robotics, called the sequential Monte Carlo method. Through extensive experiments on a real vehicle, we show that CARLOC achieves sub-meter positioning accuracy in an obstructed urban setting, an order-of-magnitude improvement over a high-end GPS device.",
"title": ""
},
{
"docid": "c2c85e02b2eb3c73ece4e43aae42ff28",
"text": "The security of many computer systems hinges on the secrecy of a single word – if an adversary obtains knowledge of a password, they will gain access to the resources controlled by this password. Human users are the ‘weakest link’ in password control, due to our propensity to reuse passwords and to create weak ones. Policies which forbid such unsafe password practices are often violated, even if these policies are well-advertised. We have studied how users perceive their accounts and their passwords. Our participants mentally classified their accounts and passwords into a few groups, based on a small number of perceived similarities. Our participants used stronger passwords, and reused passwords less, in account groups which they considered more important. Our participants thus demonstrated awareness of the basic tenets of password safety, but they did not behave safely in all respects. Almost half of our participants reused at least one of the passwords in their high-importance accounts. Our findings add to the body of evidence that a typical computer user suffers from ‘password overload’. Our concepts of password and account grouping point the way toward more intuitive user interfaces for passwordand account-management systems. .",
"title": ""
},
{
"docid": "5116079b69aeb1858177429fabd10f80",
"text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.",
"title": ""
},
{
"docid": "b2c24d93d1326ac8ce62cb5c5328689d",
"text": "The effects of a training program consisting of weight lifting combined with plyometric exercises on kicking performance, myosin heavy-chain composition (vastus lateralis), physical fitness, and body composition (using dual-energy X-ray absorptiometry (DXA)) was examined in 37 male physical education students divided randomly into a training group (TG: 16 subjects) and a control group (CG: 21 subjects). The TG followed 6 weeks of combined weight lifting and plyometric exercises. In all subjects, tests were performed to measure their maximal angular speed of the knee during in-step kicks on a stationary ball. Additional tests for muscle power (vertical jump), running speed (30 m running test), anaerobic capacity (Wingate and 300 m running tests), and aerobic power (20 m shuttle run tests) were also performed. Training resulted in muscle hypertrophy (+4.3%), increased peak angular velocity of the knee during kicking (+13.6%), increased percentage of myosin heavy-chain (MHC) type IIa (+8.4%), increased 1 repetition maximum (1 RM) of inclined leg press (ILP) (+61.4%), leg extension (LE) (+20.2%), leg curl (+15.9%), and half squat (HQ) (+45.1%), and enhanced performance in vertical jump (all p < or = 0.05). In contrast, MHC type I was reduced (-5.2%, p < or = 0.05) after training. In the control group, these variables remained unchanged. In conclusion, 6 weeks of strength training combining weight lifting and plyometric exercises results in significant improvement of kicking performance, as well as other physical capacities related to success in football (soccer).",
"title": ""
},
{
"docid": "e1826cd431b40bc4ac7c853eee6bf1b6",
"text": "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Inspired by a blog post [1], we tried to predict the probability of an image getting a high number of likes on Instagram. We modified a pre-trained AlexNet ImageNet CNN model using Caffe on a new dataset of Instagram images with hashtag ‘me’ to predict the likability of photos. We achieved a cross validation accuracy of 60% and a test accuracy of 57% using different approaches. Even though this task is difficult because of the inherent noise in the data, we were able to train the model to identify certain characteristics of photos which result in more likes.",
"title": ""
},
{
"docid": "83f6d9a404f5050b3b7eef68e1de6206",
"text": "We propose a simple, yet effective approach for real-time hand pose estimation from single depth images using three-dimensional Convolutional Neural Networks (3D CNNs). Image based features extracted by 2D CNNs are not directly suitable for 3D hand pose estimation due to the lack of 3D spatial information. Our proposed 3D CNN taking a 3D volumetric representation of the hand depth image as input can capture the 3D spatial structure of the input and accurately regress full 3D hand pose in a single pass. In order to make the 3D CNN robust to variations in hand sizes and global orientations, we perform 3D data augmentation on the training data. Experiments show that our proposed 3D CNN based approach outperforms state-of-the-art methods on two challenging hand pose datasets, and is very efficient as our implementation runs at over 215 fps on a standard computer with a single GPU.",
"title": ""
},
{
"docid": "18c885e8cb799086219585e419140ba5",
"text": "Reaction-time and eye-fixation data are analyzed to investigate how people infer the kinematics of simple mechanical systems (pulley systems) from diagrams showing their static configuration. It is proposed that this mental animation process involves decomposing the representation of a pulley system into smaller units corresponding to the machine components and animating these components in a sequence corresponding to the causal sequence of events in the machine's operation. Although it is possible for people to make inferences against the chain of causality in the machine, these inferences are more difficult, and people have a preference for inferences in the direction of causality. The mental animation process reflects both capacity limitations and limitations of mechanical knowledge.",
"title": ""
},
{
"docid": "5fbd1f14c8f4e8dc82bc86ad8b27c115",
"text": "Computer-animated characters are common in popular culture and have begun to be used as experimental tools in social cognitive neurosciences. Here we investigated how appearance of these characters' influences perception of their actions. Subjects were presented with different characters animated either with motion data captured from human actors or by interpolating between poses (keyframes) designed by an animator, and were asked to categorize the motion as biological or artificial. The response bias towards 'biological', derived from the Signal Detection Theory, decreases with characters' anthropomorphism, while sensitivity is only affected by the simplest rendering style, point-light displays. fMRI showed that the response bias correlates positively with activity in the mentalizing network including left temporoparietal junction and anterior cingulate cortex, and negatively with regions sustaining motor resonance. The absence of significant effect of the characters on the brain activity suggests individual differences in the neural responses to unfamiliar artificial agents. While computer-animated characters are invaluable tools to investigate the neural bases of social cognition, further research is required to better understand how factors such as anthropomorphism affect their perception, in order to optimize their appearance for entertainment, research or therapeutic purposes.",
"title": ""
},
{
"docid": "e261d9989f23831be7b1269755a43bf6",
"text": "Taxi services and product delivery services are instrumental for our modern society. Thanks to the emergence of sharing economy, ride-sharing services such as Uber, Didi, Lyft and Google's Waze Rider are becoming more ubiquitous and grow into an integral part of our everyday lives. However, the efficiency of these services are severely limited by the sub-optimal and imbalanced matching between the supply and demand. We need a generalized framework and corresponding efficient algorithms to address the efficient matching, and hence optimize the performance of these markets. Existing studies for taxi and delivery services are only applicable in scenarios of the one-sided market. In contrast, this work investigates a highly generalized model for the taxi and delivery services in the market economy (abbreviated as\"taxi and delivery market\") that can be widely used in two-sided markets. Further, we present efficient online and offline algorithms for different applications. We verify our algorithm with theoretical analysis and trace-driven simulations under realistic settings.",
"title": ""
},
{
"docid": "c5d74c69c443360d395a8371055ef3e2",
"text": "The supply of oxygen and nutrients and the disposal of metabolic waste in the organs depend strongly on how blood, especially red blood cells, flow through the microvascular network. Macromolecular plasma proteins such as fibrinogen cause red blood cells to form large aggregates, called rouleaux, which are usually assumed to be disaggregated in the circulation due to the shear forces present in bulk flow. This leads to the assumption that rouleaux formation is only relevant in the venule network and in arterioles at low shear rates or stasis. Thanks to an excellent agreement between combined experimental and numerical approaches, we show that despite the large shear rates present in microcapillaries, the presence of either fibrinogen or the synthetic polymer dextran leads to an enhanced formation of robust clusters of red blood cells, even at haematocrits as low as 1%. Robust aggregates are shown to exist in microcapillaries even for fibrinogen concentrations within the healthy physiological range. These persistent aggregates should strongly affect cell distribution and blood perfusion in the microvasculature, with putative implications for blood disorders even within apparently asymptomatic subjects.",
"title": ""
},
{
"docid": "d2a0ff28b7163203a03be27977b9b425",
"text": "The various types of shadows are characterized. Most existing shadow algorithms are described, and their complexities, advantages, and shortcomings are discussed. Hard shadows, soft shadows, shadows of transparent objects, and shadows for complex modeling primitives are considered. For each type, shadow algorithms within various rendering techniques are examined. The aim is to provide readers with enough background and insight on the various methods to allow them to choose the algorithm best suited to their needs and to help identify the areas that need more research and point to possible solutions.<<ETX>>",
"title": ""
},
{
"docid": "6275b2b0fea6478f5af3c6d7e71eff18",
"text": "The immune system of fish is very similar to vertebrates, although there are some important differences. Fish are free-living organisms from the embryonic stage of life in their aquatic environment. They have mechanisms to protect themselves from a wide variety of microorganisms. Consequently, fish rely on their innate immune system for an extended period of time, beginning at the early stages of embryogenesis. The components of the innate immune response are divided into physical, cellular and humoral factors and include humoral and cellular receptor molecules that are soluble in plasma and other body fluids. The lymphoid organs found in fish include the thymus, spleen and kidney. Immunoglobulins are the principal components of the immune response against pathogenic organisms. Immunomodulatory products, including nucleotides, glucans and probiotics, are increasingly used in aquaculture production. The use of these products reduces the need for therapeutic treatments, enhances the effects of vaccines and, in turn, improves the indicators of production. The aim of this review is to provide a review of the immune system in fish, including the ontogeny, mechanisms of unspecific and acquired immunity and the action of some immunomodulators.",
"title": ""
},
{
"docid": "b00ec93bf47aab14aa8ced69612fc39a",
"text": "In today’s increasingly rich material life, people are shifting their focus from the physical world to the spiritual world. In order to identify and care for people’s emotions, human-machine interaction systems have been created. The currently available human-machine interaction systems often support the interaction between human and robot under the line-of-sight (LOS) propagation environment, while most communications in terms of human-to-human and human-to-machine are non-LOS (NLOS). In order to break the limitation of the traditional human–machine interaction system, we propose the emotion communication system based on NLOS mode. Specifically, we first define the emotion as a kind of multimedia which is similar to voice and video. The information of emotion can not only be recognized, but can also be transmitted over a long distance. Then, considering the real-time requirement of the communications between the involved parties, we propose an emotion communication protocol, which provides a reliable support for the realization of emotion communications. We design a pillow robot speech emotion communication system, where the pillow robot acts as a medium for user emotion mapping. Finally, we analyze the real-time performance of the whole communication process in the scene of a long distance communication between a mother-child users’ pair, to evaluate the feasibility and effectiveness of emotion communications.",
"title": ""
},
{
"docid": "f6c1aa22e2afd24a6ad111d5dfdfc3f3",
"text": "This work describes the development of a social chatbot for the football domain. The chatbot, named chatbol, aims at answering a wide variety of questions related to the Spanish football league “La Liga”. Chatbol is deployed as a Slack client for text-based input interaction with users. One of the main Chatbol’s components, a NLU block, is trained to extract the intents and associated entities related to user’s questions about football players, teams, trainers and fixtures. The information for the entities is obtained by making sparql queries to Wikidata site in real time. Then, the retrieved data is used to update the specific chatbot responses. As a fallback strategy, a retrieval-based conversational engine is incorporated to the chatbot system. It allows for a wider variety and freedom of responses, still football oriented, for the case when the NLU module was unable to reply with high confidence to the user. The retrieval-based response database is composed of real conversations collected both from a IRC football channel and from football-related excerpts picked up across movie captions, extracted from the OpenSubtitles database.",
"title": ""
},
{
"docid": "503101a7b0f923f8fecb6dc9bb0bde37",
"text": "In-vehicle electronic equipment aims to increase safety, by detecting risk factors and taking/suggesting corrective actions. This paper presents a knowledge-based framework for assisting a driver via her PDA. Car data extracted under On Board Diagnostics (OBD-II) protocol, data acquired from PDA embedded micro-devices and information retrieved from the Web are properly combined: a simple data fusion algorithm has been devised to collect and semantically annotate relevant safety events. Finally, a logic-based matchmaking allows to infer potential risk factors, enabling the system to issue accurate and timely warnings. The proposed approach has been implemented in a prototypical application for the Apple iPhone platform, in order to provide experimental evaluation in real-world test drives for corroborating the approach. Keywords-Semantic Web; On Board Diagnostics; Ubiquitous Computing; Data Fusion; Intelligent Transportation Systems",
"title": ""
}
] |
scidocsrr
|
93996156638bfb537083fa8e5bbf2e82
|
Modelling Information Needs in Collaborative Search Conversations
|
[
{
"docid": "1a44645ee469e4bbaa978216d01f7e0d",
"text": "The growing popularity of mobile search and the advancement in voice recognition technologies have opened the door for web search users to speak their queries, rather than type them. While this kind of voice search is still in its infancy, it is gradually becoming more widespread. In this paper, we examine the logs of a commercial search engine's mobile interface, and compare the spoken queries to the typed-in queries. We place special emphasis on the semantic and syntactic characteristics of the two types of queries. %Our analysis suggests that voice queries focus more on audio-visual content and question answering, and less on social networking and adult domains. We also conduct an empirical evaluation showing that the language of voice queries is closer to natural language than typed queries. Our analysis reveals further differences between voice and text search, which have implications for the design of future voice-enabled search tools.",
"title": ""
}
] |
[
{
"docid": "b010e6982626ffe76da4ade5d5a6800b",
"text": "In this communication, a triangular-shaped dielectric antenna fed by substrate-integrated waveguide (SIW) is proposed and researched. The effect of the extended substrate's length on performance of the proposed antenna is first researched. In order to reduce sidelobe level (SLL) at high frequency as well as increase the adjustability of the proposed antenna while maintaining its planar structure, air vias are periodically perforated into the loaded substrate to modify the permittivity. The variation trend of modifying permittivity with changing performance of the proposed antenna is studied, followed by analyzing function of the transition stage(s) of the extended substrate. Through optimizing the dielectric length and modifying the diameters of air vias to change the permittivity, the proposed antenna with wide operating bandwidth and low SLL can be realized. Measured results indicate that the proposed antenna works from 17.6 to 26.7 GHz, which almost covers the whole K band. Besides, stable end-fire radiation patterns in the whole operating band are obtained. Moreover, at least 8.3-dBi peak gains with low SLL are achieved as well.",
"title": ""
},
{
"docid": "ed9f79cab2dfa271ee436b7d6884bc13",
"text": "This study conducts a phylogenetic analysis of extant African papionin craniodental morphology, including both quantitative and qualitative characters. We use two different methods to control for allometry: the previously described narrow allometric coding method, and the general allometric coding method, introduced herein. The results of this study strongly suggest that African papionin phylogeny based on molecular systematics, and that based on morphology, are congruent and support a Cercocebus/Mandrillus clade as well as a Papio/Lophocebus/Theropithecus clade. In contrast to previous claims regarding papionin and, more broadly, primate craniodental data, this study finds that such data are a source of valuable phylogenetic information and removes the basis for considering hard tissue anatomy \"unreliable\" in phylogeny reconstruction. Among highly sexually dimorphic primates such as papionins, male morphologies appear to be particularly good sources of phylogenetic information. In addition, we argue that the male and female morphotypes should be analyzed separately and then added together in a concatenated matrix in future studies of sexually dimorphic taxa. Character transformation analyses identify a series of synapomorphies uniting the various papionin clades that, given a sufficient sample size, should potentially be useful in future morphological analyses, especially those involving fossil taxa.",
"title": ""
},
{
"docid": "d87e7ccbc2938bc32ca1943d90a3b817",
"text": "Initial timing acquisition in narrow-band IoT (NB- IoT) devices is done by detecting a periodically transmitted known sequence. The detection has to be done at lowest possible latency, because the RF- transceiver, which dominates downlink power consumption of an NB-IoT modem, has to be turned on throughout this time. Auto-correlation detectors show low computational complexity from a signal processing point of view at the price of a higher detection latency. In contrast a maximum likelihood cross-correlation detector achieves low latency at a higher complexity as shown in this paper. We present a hardware implementation of the maximum likelihood cross-correlation detection. The detector achieves an average detection latency which is a factor of two below that of an auto- correlation method and is able to reduce the required energy per timing acquisition by up to 34%.",
"title": ""
},
{
"docid": "36c3bd9e1203b9495d92a40c5fa5f2c0",
"text": "A 14-year-old boy presented with asymptomatic right hydronephrosis detected on routine yearly ultrasound examination. Previously, he had at least two normal renal ultrasonograms, 4 years after remission of acute myeloblastic leukemia, treated by AML-BFM-93 protocol. A function of the right kidney and no damage on the left was confirmed by a DMSA scan. Right retroperitoneoscopic nephrectomy revealed 3 renal arteries with the lower pole artery lying on the pelviureteric junction. Histologically chronic tubulointerstitial nephritis was detected. In the pathogenesis of this severe unilateral renal damage, we suspect the exacerbation of deleterious effects of cytostatic therapy on kidneys with intermittent hydronephrosis.",
"title": ""
},
{
"docid": "20707cdc68b15fe46aaece52ca6aff62",
"text": "The potential cardiovascular benefits of several trending foods and dietary patterns are still incompletely understood, and nutritional science continues to evolve. However, in the meantime, a number of controversial dietary patterns, foods, and nutrients have received significant media exposure and are mired by hype. This review addresses some of the more popular foods and dietary patterns that are promoted for cardiovascular health to provide clinicians with accurate information for patient discussions in the clinical setting.",
"title": ""
},
{
"docid": "65783d05434b9f176c6983a1da042d36",
"text": "We present a novel technique that optimizes the dispatching of incident tickets to the agents in an IT Service Support Environment. Unlike the common skill-based dispatching, our approach also takes empirical evidence on the agent's speed from historical data into account. Our solution consists of two parts. First, a novel technique clusters historic tickets into incident categories that are discriminative in terms of agent's performance. Second, a dispatching policy selects, for an incoming ticket, the fastest available agent according to the target cluster. We show that, for ticket data collected from several Service Delivery Units, our new dispatching technique can reduce service time between $35\\%$ and $44\\%$.",
"title": ""
},
{
"docid": "4fa68f011f7cb1b4874dd4b10070be17",
"text": "This paper demonstrates the development of ontology for satellite databases. First, I create a computational ontology for the Union of Concerned Scientists (UCS) Satellite Database (UCSSD for short), called the UCS Satellite Ontology (or UCSSO). Second, in developing UCSSO I show that The Space Situational Awareness Ontology (SSAO)-—an existing space domain reference ontology—-and related ontology work by the author (Rovetto 2015, 2016) can be used either (i) with a database-specific local ontology such as UCSSO, or (ii) in its stead. In case (i), local ontologies such as UCSSO can reuse SSAO terms, perform term mappings, or extend it. In case (ii), the author_s orbital space ontology work, such as the SSAO, is usable by the UCSSD and organizations with other space object catalogs, as a reference ontology suite providing a common semantically-rich domain model. The SSAO, UCSSO, and the broader Orbital Space Environment Domain Ontology project is online at https://purl.org/space-ontology and GitHub. This ontology effort aims, in part, to provide accurate formal representations of the domain for various applications. Ontology engineering has the potential to facilitate the sharing and integration of satellite data from federated databases and sensors for safer spaceflight.",
"title": ""
},
{
"docid": "39436b8277d09c49b85c60b0078a638b",
"text": "This review paper is intended for scholars with different backgrounds, possibly in only one of the subjects covered, and therefore little background knowledge is assumed. The first part is an introduction to classical and quantum information theory (CIT, QIT): basic definitions and tools of CIT are introduced, such as the information content of a random variable, the typical set, and some principles of data compression. Some concepts and results of QIT are then introduced, such as the qubit, the pure and mixed states, the Holevo theorem, the no-cloning theorem, and the quantum complementarity. In the second part, two applications of QIT to open problems in theoretical physics are discussed. The black hole (BH) information paradox is related to the phenomenon of the Hawking radiation (HR). Considering a BH starting in a pure state, after its complete evaporation only the Hawking radiation will remain, which is shown to be in a mixed state. This either describes a non-unitary evolution of an isolated system, contradicting the evolution postulate of quantum mechanics and violating the no-cloning theorem, or it implies that the initial information content can escape the BH, therefore contradicting general relativity. The progress toward the solution of the paradox is discussed. The renormalization group (RG) aims at the extraction of the macroscopic description of a physical system from its microscopic description. This passage from microscopic to macroscopic can be described in terms of several steps from one scale to another, and is therefore formalized as the action of a group. The c-theorem proves the existence, under certain conditions, of a function which is monotonically decreasing along the group transformations. This result suggests an interpretation of this function as entropy, and its use to study the information flow along the RG transformations.",
"title": ""
},
{
"docid": "96d8e375616a7ee137276d385c14a18a",
"text": "Constructivism is a theory of learning which claims that students construct knowledge rather than merely receive and store knowledge transmitted by the teacher. Constructivism has been extremely influential in science and mathematics education, but not in computer science education (CSE). This paper surveys constructivism in the context of CSE, and shows how the theory can supply a theoretical basis for debating issues and evaluating proposals.",
"title": ""
},
{
"docid": "b82805187bdfd14a4dd5efc6faf70f10",
"text": "8 Cloud computing has gained tremendous popularity in recent years. By outsourcing computation and 9 storage requirements to public providers and paying for the services used, customers can relish upon the 10 advantages of this new paradigm. Cloud computing provides with a comparably lower-cost, scalable, a 11 location-independent platform for managing clients’ data. Compared to a traditional model of computing, 12 which uses dedicated in-house infrastructure, cloud computing provides unprecedented benefits regarding 13 cost and reliability. Cloud storage is a new cost-effective paradigm that aims at providing high 14 availability, reliability, massive scalability and data sharing. However, outsourcing data to a cloud service 15 provider introduces new challenges from the perspectives of data correctness and security. Over the years, 16 many data integrity schemes have been proposed for protecting outsourced data. This paper aims to 17 enhance the understanding of security issues associated with cloud storage and highlights the importance 18 of data integrity schemes for outsourced data. In this paper, we have presented a taxonomy of existing 19 data integrity schemes use for cloud storage. A comparative analysis of existing schemes is also provided 20 along with a detailed discussion on possible security attacks and their mitigations. Additionally, we have 21 discussed design challenges such as computational efficiency, storage efficiency, communication 22 efficiency, and reduced I/O in these schemes. Furthermore; we have highlighted future trends and open 23 issues, for future research in cloud storage security. 24",
"title": ""
},
{
"docid": "8615959de53d6579613e1213a53e6525",
"text": "This paper addresses the problem of frequency domain packet scheduling (FDPS) incorporating spatial division multiplexing (SDM) multiple input multiple output (MIMO) techniques on the 3GPP Long Term Evolution (LTE) downlink. We impose the LTE MIMO constraint of selecting only one MIMO mode (spatial multiplexing or transmit diversity) per user per transmission time interval (TTI). First, we address the optimal MIMO mode selection (multiplexing or diversity) per user in each TTI in order to maximize the proportional fair (PF) criterion extended to frequency and spatial domains. We prove that the SU-MIMO (single-user MIMO) FDPS problem under the LTE requirement is NP-hard and therefore, we develop two approximation algorithms (one with full channel feedback and the other with partial channel feedback) with provable performance bounds. Based on 3GPP LTE system model simulations, the approximation algorithm with partial channel feedback is shown to have comparable performance to the one with full channel feedback, while significantly reducing the channel feedback overhead by nearly 50%.",
"title": ""
},
{
"docid": "6176cd7b7c3e38dbc3053c2e011b5060",
"text": "This paper addresses the low power mechanisms provided by the ZigBee and the 6LoWPAN Protocol, providing comparative assessments based on the results obtained by different researchers and available in specialized literature, running through experimental measurements on digital test banks. For a performance comparison, the parameters of each protocol have been adjusted so that it is able to function properly in low power mode and make the measurement scenarios equivalent in terms of traffic and energy efficiency. The comparison focuses on the impact of the mechanisms of low power in the performance of the network. Experimental evaluations mentioned, show strengths and weaknesses of both protocols when working in a low power mode.",
"title": ""
},
{
"docid": "e083b5fdf76bab5cdc8fcafc77db23f7",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "a39f988fa6f7a55662f5a8821e9ad87c",
"text": "We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor. We build a dataset with more than 500 times the number of unique patients than previously studied corpora. On this dataset, we train a 34-layer convolutional neural network which maps a sequence of ECG samples to a sequence of rhythm classes. Committees of boardcertified cardiologists annotate a gold standard test set on which we compare the performance of our model to that of 6 other individual cardiologists. We exceed the average cardiologist performance in both recall (sensitivity) and precision (positive predictive value).",
"title": ""
},
{
"docid": "72b763ff3360d87696f8c43606b3cc8f",
"text": "This paper presents Netbait, a planetary-scale service for distributed detection of Internet worms. Netbait allows users to pose queries that identify which machines on a given network have been compromised based on the collective view of a geographically distributed set of machines. It is based on a distributed query processing architecture that evaluates queries expressed using a subset of SQL against a single logical database table. This single logical table is realized using a distributed set of relational databases, each populated by local intrusion detection systems running on Netbait server nodes. For speed, queries in Netbait are processed in parallel by distributing them over dynamically constructed query processing trees built over Tapestry, a distributed object and location routing (DOLR) layer. For efficiency, query results are compressed using application-specific aggregation and compact encodings. We have implemented a prototype system based on a simplified version of the architecture and have deployed it on 90 nodes of the PlanetLab testbed at 42 sites spread across three continents. The system has been continuously running for over a month now and has been collecting probe information from machines compromised by both the Code Red and Nimda worms. Early results based on this data are promising. First, we observe that by having multiple machines sharing probe information from infected machines, we can identify a substantially larger set of infected hosts that would be possible otherwise. Second, we also observe that by having multiple viewpoints of the network, Netbait is able to identify compromised machines that otherwise would have been difficult to detect in cases where worms have an affinity to certain regions of the IP address space.",
"title": ""
},
{
"docid": "c2a297417553cb46fd98353d8b8351ac",
"text": "Recent advances in methods and techniques enable us to develop an interactive overlay to the global map of science based on aggregated citation relations among the 9,162 journals contained in the Science Citation Index and Social Science Citation Index 2009 combined. The resulting mapping is provided by VOSViewer. We first discuss the pros and cons of the various options: cited versus citing, multidimensional scaling versus spring-embedded algorithms, VOSViewer versus Gephi, and the various clustering algorithms and similarity criteria. Our approach focuses on the positions of journals in the multidimensional space spanned by the aggregated journal-journal citations. A number of choices can be left to the user, but we provide default options reflecting our preferences. Some examples are also provided; for example, the potential of using this technique to assess the interdisciplinarity of organizations and/or document sets.",
"title": ""
},
{
"docid": "7bce92a72a19aef0079651c805883eb5",
"text": "Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shape and requiring labor-intensive process, challenge the problem of automatic modeling. This paper studies the problem and solutions to automatic modeling of animatable virtual humans. Methods for capturing the shape of real people, parameterization techniques for modeling static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.",
"title": ""
},
{
"docid": "bef4cf486ddc37d8ff4d5ed7a2b72aba",
"text": "We propose an on-line algorithm for simultaneous localization and mapping of dynamic environments. Our algorithm is capable of differentiating static and dynamic parts of the environment and representing them appropriately on the map. Our approach is based on maintaining two occupancy grids. One grid models the static parts of the environment, and the other models the dynamic parts of the environment. The union of the two grid maps provides a complete description of the environment over time. We also maintain a third map containing information about static landmarks detected in the environment. These landmarks provide the robot with localization. Results in simulation and real robots experiments show the efficiency of our approach and also show how the differentiation of dynamic and static entities in the environment and SLAM can be mutually beneficial.",
"title": ""
},
{
"docid": "ce79fa8530128024c5a9abeff10fd244",
"text": "A bi-directional chip scale power MOSFET device is introduced for use in cell phone battery protection circuits. This device utilises the monolithic integration of two common drain power MOSFETs and solder bump technology to produce an ultra low footprint solution. Battery protection MOSFET packaging technology is reviewed along with cell phone current requirements. Comparisons are drawn between the in circuit performance of the bi-directional device and an industry standard TSSOP8. Device related losses in relation to IC drive strategy and conclusions on the effects of Rdson and package volume on cell phone talk and standby times are presented.",
"title": ""
},
{
"docid": "5854e4fe2abddc407273b5df65b0c97b",
"text": "This reprint is provided for personal and noncommercial use. For any other use, please send a request to Permissions,",
"title": ""
}
] |
scidocsrr
|
a81044866d1f14ec97f4e4553db82ebe
|
Deanonymization in the Bitcoin P 2 P Network
|
[
{
"docid": "49911f2cf2d6dbef9545c1cb56648128",
"text": "Bitcoin is a digital currency which relies on a distributed set of miners to mint coins and on a peer-to-peer network to broadcast transactions. The identities of Bitcoin users are hidden behind pseudonyms (public keys) which are recommended to be changed frequently in order to increase transaction unlinkability.\n We present an efficient method to deanonymize Bitcoin users, which allows to link user pseudonyms to the IP addresses where the transactions are generated. Our techniques work for the most common and the most challenging scenario when users are behind NATs or firewalls of their ISPs. They allow to link transactions of a user behind a NAT and to distinguish connections and transactions of different users behind the same NAT. We also show that a natural countermeasure of using Tor or other anonymity services can be cut-off by abusing anti-DoS countermeasures of the Bitcoin network. Our attacks require only a few machines and have been experimentally verified. The estimated success rate is between 11% and 60% depending on how stealthy an attacker wants to be. We propose several countermeasures to mitigate these new attacks.",
"title": ""
},
{
"docid": "f2c6a7f205f1aa6b550418cd7e93f7d2",
"text": "This paper addresses the problem of a single rumor source detection with multiple observations, from a statistical point of view of a spreading over a network, based on the susceptible-infectious model. For tree networks, multiple sequential observations for one single instance of rumor spreading cannot improve over the initial snapshot observation. The situation dramatically improves for multiple independent observations. We propose a unified inference framework based on the union rumor centrality, and provide explicit detection performance for degree-regular tree networks. Surprisingly, even with merely two observations, the detection probability at least doubles that of a single observation, and further approaches one, i.e., reliable detection, with increasing degree. This indicates that a richer diversity enhances detectability. For general graphs, a detection algorithm using a breadth-first search strategy is also proposed and evaluated. Besides rumor source detection, our results can be used in network forensics to combat recurring epidemic-like information spreading such as online anomaly and fraudulent email spams.",
"title": ""
},
{
"docid": "f53e743819b577a5460e17910907fb11",
"text": "The Bitcoin network relies on peer-to-peer broadcast to distribute pending transactions and confirmed blocks. The topology over which this broadcast is distributed affects which nodes have advantages and whether some attacks are feasible. As such, it is particularly important to understand not just which nodes participate in the Bitcoin network, but how they are connected. In this paper, we introduce AddressProbe, a technique that discovers peer-to-peer links in Bitcoin, and apply this to the live topology. To support AddressProbe and other tools, we develop CoinScope, an infrastructure to manage short, but large-scale experiments in Bitcoin. We analyze the measured topology to discover both highdegree nodes and a well connected giant component. Yet, efficient propagation over the Bitcoin backbone does not necessarily result in a transaction being accepted into the block chain. We introduce a “decloaking” method to find influential nodes in the topology that are well connected to a mining pool. Our results find that in contrast to Bitcoin’s idealized vision of spreading mining responsibility to each node, mining pools are prevalent and hidden: roughly 2% of the (influential) nodes represent threequarters of the mining power.",
"title": ""
}
] |
[
{
"docid": "ef6d25f1fc67962876100301d8bdb6a5",
"text": "The Strategic Information Systems Planning (SISP) process is critical for ensuring the effectiveness of the contribution of Information Technology (IT)/Information Systems (IS) to the organisation. A sophisticated SISP process can greatly increase the chances of positive planning outcomes. While effective IS capabilities are seen as crucial to an organisation’s ability to generate IT-enabled competitive advantages, there exists a gap in the understanding of the IS competencies which contribute to the forming of an effective SISP capability. In light of these gaps, this study investigates how do IS competencies impact the SISP process, and its outcomes? To address this question, a model for investigating the impact of IS collaboration and IS personnel competencies on the SISP process is proposed. Further research is planned to undertake a survey of top Australian organisations in industries characterised by high IT innovation and competition, to test the proposed model and hypotheses.",
"title": ""
},
{
"docid": "81c02e708a21532d972aca0b0afd8bb5",
"text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.",
"title": ""
},
{
"docid": "53b2e1524dad8dbb9bbfb967d5ce2736",
"text": "Cardiac output (CO) monitoring is essential for indicating the perfusion status of the human cardiovascular system under different physiological conditions. However, it is currently limited to hospital use due to the need for either skilled operators or big, expensive measurement devices. Therefore, in this paper we devise a new CO indicator which can easily be incorporated into existing wearable devices. To this end, we propose an index, the inflection and harmonic area ratio (IHAR), from standard photoplethysmographic (PPG) signals, which can be used to continuously monitor CO. We evaluate the success of our index by testing on sixteen normotensive subjects before and after bicycle exercise. The results showed a strong intra-subject correlation between IHAR and COimp measured by the bio-impedance method in fifteen subjects (mean r = 3D 0.82, p<0.01). After least squares linear regression, the precision between COimp and CO estimated from IHAR (COIHAR) was 1.40 L/min. The total percentage error of the results was 16.2%, which was well below the clinical acceptance limit of 30%. The results suggest that IHAR is a promising indicator for wearable and noninvasive CO monitoring.",
"title": ""
},
{
"docid": "8b79f49d57b301e8572a0bb59f9a92fb",
"text": "This article is the second of a two-part synthesis of research regarding the effects of occupational therapy to improve activity and participation and to reduce impairment for persons with stroke. Part I synthesized research findings for restoration of role participation and activity performance. Part II synthesizes research findings regarding the effects of occupational therapy to remediate psychosocial, cognitive-perceptual, and sensorimotor impairments. Only 29 studies involving 832 participants (mean age = 64.3 years) addressed these goals. No studies directly researched the effects of occupational therapy on depression after stroke. Eight studies addressed cognitive-perceptual abilities. The findings indicated that homemaking tasks resulted in greater improvement of cognitive ability than paper-and-pencil drills and that tasks that forced awareness of neglected space, including movement of the opposite limb into that space, improved unilateral neglect. Fifteen studies examined the effect of occupational therapy on various motor capacities after stroke. Coordinated movement improved under these conditions: (a) following written and illustrated guides for movement exercises, (b) using meaningful goal objects as targets, (c) practicing movements with specific goals, (d) moving both arms simultaneously but independently, and (e) imagining functional use of the affected limb. Research on inhibitory splinting was inconclusive. Based on these few studies and lack of replication, we could make only tentative recommendations for practice. Further definitive research is needed.",
"title": ""
},
{
"docid": "ea4b4c2182392311b72da0630297fa11",
"text": "The popularity of microstrip antennas is increasing day by day because of ease of analysis and fabrication. This paper concerned on enhancement of gain for microstrip patch planar array antenna by using air substrate. The structure in this project is 2 by 2 microstrip patch planar array antenna with substrate εr= 1at frequency 5.8 GHz. Frequency for 5.8GHz can be applied to unlicensed WiMAX. In this project, the simulation is performed by using the simulation software Computer Simulation Technology (CST) Microwave Studio which is a commercially available electromagnetic simulator based on finite difference time domain technique. In this design the application of air to replace regular Flame Retardant 4 (FR-4) materials as the substrate. The substrate used for the design is air which is the thickness equal to 3mm . The thickness for the patch in this design is 0.12mm while the thickness for the ground is equal to 0.035mm. The performance of the designed antenna was analyzed in term of bandwidth, gain, return loss, VSWR, and radiation pattern.",
"title": ""
},
{
"docid": "30a5f7d18d57e4bc295fa42a9ce1b36a",
"text": "The idea of democracy had come a long way before it was given its first modern forms in the liberal ideas of the 17th and 18th centuries. Now the premises of this hierarchical and representative political system are crumbling, and we must seriously consider the need to revitalize democracy. This article aims at clarifying the overall preconditions for the revitalization of democracy, and demonstrates how to build a comprehensive framework for a multidimensional institutional design in which the potentials of ICTs are made to serve relevant democratic purposes. What conditions the functioning of any contemporary democratic system includes such contextual factors as increased global interdependency, extended use of market-based mechanisms, significant impacts of media and ICTs, new forms of governance, and individualism in its various forms. One of the most burning issues is how to develop new democracy in such a complex setting so that it accords with people’s ways of thinking and acting. To ensure this, citizens with all their collective actions and willingness to influence public affairs must be placed in the overall framework of e-transformation in politics [11]. This implies that we go beyond the dichotomous discourse that suggests that we have a choice to make between democracy-as-usual and direct e-democracy [9].",
"title": ""
},
{
"docid": "46d3cec76fc52fb7141fc6d999931d6e",
"text": "Numerous studies suggest that infants delivered by cesarean section are at a greater risk of non-communicable diseases than their vaginal counterparts. In particular, epidemiological studies have linked Cesarean delivery with increased rates of asthma, allergies, autoimmune disorders, and obesity. Mode of delivery has also been associated with differences in the infant microbiome. It has been suggested that these differences are attributable to the \"bacterial baptism\" of vaginal birth, which is bypassed in cesarean deliveries, and that the abnormal establishment of the early-life microbiome is the mediator of later-life adverse outcomes observed in cesarean delivered infants. This has led to the increasingly popular practice of \"vaginal seeding\": the iatrogenic transfer of vaginal microbiota to the neonate to promote establishment of a \"normal\" infant microbiome. In this review, we summarize and critically appraise the current evidence for a causal association between Cesarean delivery and neonatal dysbiosis. We suggest that, while Cesarean delivery is certainly associated with alterations in the infant microbiome, the lack of exposure to vaginal microbiota is unlikely to be a major contributing factor. Instead, it is likely that indication for Cesarean delivery, intrapartum antibiotic administration, absence of labor, differences in breastfeeding behaviors, maternal obesity, and gestational age are major drivers of the Cesarean delivery microbial phenotype. We, therefore, call into question the rationale for \"vaginal seeding\" and support calls for the halting of this practice until robust evidence of need, efficacy, and safety is available.",
"title": ""
},
{
"docid": "e7bfcc9cf345ae1570f7dfddb8cf2444",
"text": "Motivated by the need to provide services to alleviate range anxiety of electric vehicles, we consider the problem of balancing charging demand across a network of charging stations. Our objective is to reduce the potential for excessively long queues to build up at some charging stations, although other charging stations are underutilized. A stochastic balancing algorithm is presented to achieve these goals. A further feature of this algorithm is that it is fully decentralized and facilitates a plug-and-play type of behavior. Using our system, the charging stations can join and leave the network without any changes to, or communication with, a centralized infrastructure. Analysis and simulations are presented to illustrate the efficacy of our algorithm.",
"title": ""
},
{
"docid": "0a755c9777bb41d22c3adc81b516f8f1",
"text": "BACKGROUND\nThe characteristics and the clinical course of antiphospholipid syndrome (APS) in high-risk patients that are positive for all three recommended tests that detect the presence of antiphospholipid (aPL) antibodies have not been described.\n\n\nMETHODS\nThis retrospective analysis of prospectively collected data examined patients referred to Italian Thrombosis Centers that were diagnosed with definite APS and tested positive for aPL [lupus anticoagulant (LA), anti-cardiolipin (aCL), and anti-beta2-glycoprotein I (beta2GPI) antibodies]. Laboratory data were confirmed in a central reference laboratory.\n\n\nRESULTS\nOne hundred and sixty patients were enrolled in this cohort study. The qualifying events at diagnosis were venous thromboembolism (76 cases; 47.5%), arterial thromboembolism (69 cases; 43.1%) and pregnancy morbidity (11 cases; 9.7%). The remaining four patients (2.5%) suffered from catastrophic APS. The cumulative incidence of thromboembolic events in the follow-up period was 12.2% (95% CI, 9.6-14.8) after 1 year, 26.1% (95% CI, 22.3-29.9) after 5 years and 44.2% (95% CI, 38.6-49.8) after 10 years. This was significantly higher in those patients not taking oral anticoagulants as compared with those on treatment (HR=2.4 95% CI 1.3-4.1; P<0.003). Major bleeding associated with oral anticoagulant therapy was low (0.8% patient/years). Ten patients died (seven were cardiovascular deaths).\n\n\nCONCLUSIONS\nPatients with APS and triple positivity for aPL are at high risk of developing future thromboembolic events. Recurrence remains frequent despite the use of oral anticoagulants, which significantly reduces the risk of thromboembolism.",
"title": ""
},
{
"docid": "ed5185ea36f61a9216c6f0183b81d276",
"text": "Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.",
"title": ""
},
{
"docid": "2e6d63bd9daf8b6fca3c911b9ada52e4",
"text": "In this paper, an ultra low power front-end circuit for an UHF RFID tag is presented. In order to minimize the power consumption, a novel data decoder is proposed, which removes the need for high frequency oscillator. In addition, a dual voltage multiplier scheme is employed, which increases the power efficiency. Simulation results shows that the proposed circuit reduces the power consumption by an order magnitude compared to conventional RFID front-end circuits that use high frequency oscillators and single voltage multiplier.",
"title": ""
},
{
"docid": "7626820b6a57768fa363b4dd03fafa5a",
"text": "To facilitate experimentation with creating new, complex refactorings, we want to reuse existing transformation and analysis code as orchestrated parts of a larger refactoring: i.e., to script refactorings. The language we use to perform this scripting must be able to deal with the diversity of languages, tools, analyses, and transformations that arise in practice. To illustrate one solution to this problem, in this paper we describe, in detail, a specific refactoring script for switching from the Visitor design pattern to the Interpreter design pattern. This script, written in the meta-programming language Rascal, and targeting an interpreter written in Java, extracts facts from the interpreter code using the Eclipse JDT, performs the needed analysis in Rascal, and then transforms the interpreter code using a combination of Rascal code and existing JDT refactorings. Using this script we illustrate how a new, real and complex refactoring can be scripted in a few hundred lines of code and within a short timeframe. We believe the key to successfully building such refactorings is the ability to pair existing tools, focused on specific languages, with general-purpose meta-programming languages.",
"title": ""
},
{
"docid": "a205d93fb0ce6dfc24a4367dd3461055",
"text": "Smart devices are gaining popularity in our homes with the promise to make our lives easier and more comfortable. However, the increased deployment of such smart devices brings an increase in potential security risks. In this work, we propose an intrusion detection and mitigation framework, called IoT-IDM, to provide a network-level protection for smart devices deployed in home environments. IoT-IDM monitors the network activities of intended smart devices within the home and investigates whether there is any suspicious or malicious activity. Once an intrusion is detected, it is also capable of blocking the intruder in accessing the victim device on the fly. The modular design of IoT-IDM gives its users the flexibility to employ customized machine learning techniques for detection based on learned signature patterns of known attacks. Software-defined networking technology and its enabling communication protocol, OpenFlow, are used to realise this framework. Finally, a prototype of IoT-IDM is developed and the applicability and efficiency of proposed framework demonstrated through a real IoT device: a smart light bulb.",
"title": ""
},
{
"docid": "5e5123b8641c7154d311b58e9b1c6524",
"text": "Human designers understand a range of potential purposes behind objects and configurations when creating content, which are only partially addressed in typical procedural content generation techniques. This paper describes our research into the provision and use of semantic information to guide logical solver-based content generation, in order to feasibly generate meaningful and valid content. Initial results show we can use answer set programming to generate basic roguelike dungeon layouts from a provided semantic knowledge base, and we intend to extend this to generate a range of other content types. By using semantic models as input for a content-agnostic generation system, we hope to provide more domain-general content generation.",
"title": ""
},
{
"docid": "9309ce05609d1cbdadcdc89fe8937473",
"text": "There is an increase use of ontology-driven approaches to support requirements engineering (RE) activities, such as elicitation, analysis, specification, validation and management of requirements. However, the RE community still lacks a comprehensive understanding of how ontologies are used in RE process. Thus, the main objective of this work is to investigate and better understand how ontologies support RE as well as identify to what extent they have been applied to this field. In order to meet our goal, we conducted a systematic literature review (SLR) to identify the primary studies on the use of ontologies in RE, following a predefined review protocol. We then identified the main RE phases addressed, the requirements modelling styles that have been used in conjunction with ontologies, the types of requirements that have been supported by the use of ontologies and the ontology languages that have been adopted. We also examined the types of contributions reported and looked for evidences of the benefits of ontology-driven RE. In summary, the main findings of this work are: (1) there are empirical evidences of the benefits of using ontologies in RE activities both in industry and academy, specially for reducing ambiguity, inconsistency and incompleteness of requirements; (2) the majority of studies only partially address the RE process; (3) there is a great diversity of RE modelling styles supported by ontologies; (4) most studies addressed only functional requirements; (5) several studies describe the use/development of tools to support different types of ontology-driven RE approaches; (6) about half of the studies followed W3C recommendations on ontology-related languages; and (7) a great variety of RE ontologies were identified; nevertheless, none of them has been broadly adopted by the community. Finally, we conclude this work by showing several promising research opportunities that are quite important and interesting but underexplored in current research and practice.",
"title": ""
},
{
"docid": "3c6dcd92cbbf0cf4a5175dc61b401aae",
"text": "Increased number of malware samples have created many challenges for Antivirus companies. One of these challenges is clustering the large number of malware samples they receive daily. Malware authors use malware generation kits to create different instances of the same malware. So most of these malicious samples are polymorphic instances of previously known malware family only. Clustering these large number of samples rapidly and accurately without spending much time on processing the sample have become a critical requirement. In this paper we proposed, implemented and evaluated a method, called ByteFreq that can cluster large number of samples using byte frequency. Byte frequency is represented as time series and SAX (Symbolic Aggregation approXimation)[1] is used to convert the time series in symbolic representation. We evaluated proposed system on real world malware samples and achieved 0.92 precision and 0.96 recall accuracy.",
"title": ""
},
{
"docid": "fbff176c8731cdb9dcbf354cf72b3148",
"text": "Polar code, newly formulated by Erdal Arikan, has got a wide recognition from the information theory community. Polar code achieves the capacity of the class of symmetric binary memory less channels. In this paper, we propose efficient hardware architecture on a FPGA platform using Xilinx Virtex VI for implementing the advanced encoding and decoding schemes. The performance of the proposed architecture out performs the existing techniques such as: successive cancellation decoder, list successive cancellation, belief propagation etc; with respect to bit error rate and resource utilization.",
"title": ""
},
{
"docid": "3451c521dd27c90c324f66360991178c",
"text": "Compliant motion of a manipulator occurs when the manipulator position is constrained by the task geometry. Compliant motion may be produced either by a passive mechanical compliance built in to the manipulator, or by an active compliance implemented in the control servo loop. The second method, called force control, is the subject of this paper. In particular a theory of force control based on formal models of the manipulator and the task geometry is presented. The ideal effector is used to model the manipulator, the ideal surface is used to model the task geometry, and the goal trajectory is used to model the desired behavior of the manipulator. Models are also defined for position control and force control, providing a precise semantics for compliant motion primitives in manipulation programming languages. The formalism serves as a simple interface between the manipulator and the programmer, isolating the programmer from the fundamental complexity of low-level manipulator control. A method of automatically synthesizing a restricted class of manipulator programs based on the formal models of task and goal trajectory is also provided by the formalism.",
"title": ""
},
{
"docid": "7bdf177ec07c613e15ec154aea9e2751",
"text": "The state-of-the-art mobile edge applications are generating intense traffic and posing rigorous latency requirements to service providers. While resource sharing across multiple service providers today requires a centralized, trusted repository maintained by all parties for service providers to share status. We propose EdgeChain, a blockchain-based architecture to make mobile edge application placement decisions for multiple service providers, based on a stochastic programming problem minimizing the placement cost for mobile edge application placement scenarios. All placement transactions are stored on the blockchain and are traceable by every mobile edge service provider and application vendor who consumes resources at the mobile edge.",
"title": ""
},
{
"docid": "05c93e5ddb9cb3e7abd3a1ea38bc32dc",
"text": "BACKGROUND\nThis national study focused on posttreatment outcomes of community treatments of cocaine dependence. Relapse to weekly (or more frequent) cocaine use in the first year after discharge from 3 major treatment modalities was examined in relation to patient problem severity at admission to the treatment program and length of stay.\n\n\nMETHODS\nWe studied 1605 cocaine-dependent patients from 11 cities located throughout the United States using a naturalistic, nonexperimental evaluation design. They were sequentially admitted from November 1991 to December 1993 to 55 community-based treatment programs in the national Drug Abuse Treatment Outcome Studies. Included were 542 patients admitted to 19 long-term residential programs, 458 patients admitted to 24 outpatient drug-free programs, and 605 patients admitted to 12 short-term inpatient programs.\n\n\nRESULTS\nOf 1605 patients, 377 (23.5%) reported weekly cocaine use in the year following treatment (dropping from 73.1% in the year before admission). An additional 18.0% had returned to another drug treatment program. Higher severity of patient problems at program intake and shorter stays in treatment (<90 days) were related to higher cocaine relapse rates.\n\n\nCONCLUSIONS\nPatients with the most severe problems were more likely to enter long-term residential programs, and better outcomes were reported by those treated 90 days or longer. Dimensions of psychosocial problem severity and length of stay are, therefore, important considerations in the treatment of cocaine dependence. Cocaine relapse rates for patients with few problems at program intake were most favorable across all treatment conditions, but better outcomes for patients with medium- to high-level problems were dependent on longer treatment stays.",
"title": ""
}
] |
scidocsrr
|
8526f702de583cb96afe0aa4ab60d277
|
Mining the impact of object oriented metrics for change prediction using Machine Learning and Search-based techniques
|
[
{
"docid": "00d512bce77790afd830ffc4fa49c317",
"text": "How can we find data for quality prediction? Early in the life cycle, projects may lack the data needed to build such predictors. Prior work assumed that relevant training data was found nearest to the local project. But is this the best approach? This paper introduces the Peters filter which is based on the following conjecture: When local data is scarce, more information exists in other projects. Accordingly, this filter selects training data via the structure of other projects. To assess the performance of the Peters filter, we compare it with two other approaches for quality prediction. Within-company learning and cross-company learning with the Burak filter (the state-of-the-art relevancy filter). This paper finds that: 1) within-company predictors are weak for small data-sets; 2) the Peters filter+cross-company builds better predictors than both within-company and the Burak filter+cross-company; and 3) the Peters filter builds 64% more useful predictors than both within-company and the Burak filter+cross-company approaches. Hence, we recommend the Peters filter for cross-company learning.",
"title": ""
},
{
"docid": "60bbbe8b7df7155565af9758116db66c",
"text": "Cross-project defect prediction is very appealing because (i) it allows predicting defects in projects for which the availability of data is limited, and (ii) it allows producing generalizable prediction models. However, existing research suggests that cross-project prediction is particularly challenging and, due to heterogeneity of projects, prediction accuracy is not always very good. This paper proposes a novel, multi-objective approach for cross-project defect prediction, based on a multi-objective logistic regression model built using a genetic algorithm. Instead of providing the software engineer with a single predictive model, the multi-objective approach allows software engineers to choose predictors achieving a compromise between number of likely defect-prone artifacts (effectiveness) and LOC to be analyzed/tested (which can be considered as a proxy of the cost of code inspection). Results of an empirical evaluation on 10 datasets from the Promise repository indicate the superiority and the usefulness of the multi-objective approach with respect to single-objective predictors. Also, the proposed approach outperforms an alternative approach for cross-project prediction, based on local prediction upon clusters of similar classes.",
"title": ""
}
] |
[
{
"docid": "ff04d4c2b6b39f53e7ddb11d157b9662",
"text": "Chiu proposed a clustering algorithm adjusting the numeric feature weights automatically for k-anonymity implementation and this approach gave a better clustering quality over the traditional generalization and suppression methods. In this paper, we propose an improved weighted-feature clustering algorithm which takes the weight of categorical attributes and the thesis of optimal k-partition into consideration. To show the effectiveness of our method, we do some information loss experiments to compare it with greedy k-member clustering algorithm.",
"title": ""
},
{
"docid": "6f1550434a03ff0cf47c73ae9592a2f6",
"text": "This paper presents focused synthetic aperture radar (SAR) processing of airborne radar sounding data acquired with the High-Capability Radar Sounder system at 60 MHz. The motivation is to improve basal reflection analysis for water detection and to improve layer detection and tracking. The processing and reflection analyses are applied to data from Kamb Ice Stream, West Antarctica. The SAR processor correlates the radar data with reference echoes from subsurface point targets. The references are 1-D responses limited by the pulse nadir footprint or 2-D responses that include echo tails. Unfocused SAR and incoherent integration are included for comparison. Echoes are accurately preserved from along-track slopes up to about 0.5deg for unfocused SAR, 3deg for 1-D correlations, and 10deg for 2-D correlations. The noise/clutter levels increase from unfocused SAR to 1-D and 2-D correlations, but additional gain compensates at the basal interface. The basal echo signal-to-noise ratio improvement is typically about 5 dB, and up to 10 dB for 2-D correlations in rough regions. The increased noise degrades the clarity of internal layers in the 2-D correlations, but detection of layers with slopes greater than 3deg is improved. Reflection coefficients are computed for basal water detection, and the results are compared for the different processing methods. There is a significant increase in the detected water from unfocused SAR to 1-D correlations, indicating that substantial basal water exists on moderately sloped interfaces. Very little additional water is detected from the 2-D correlations. The results from incoherent integration are close to the focused SAR results, but the noise/clutter levels are much greater.",
"title": ""
},
{
"docid": "a1b4d28871c8b2f8b38d314db52c00b0",
"text": "Relationships between 43 high-risk adolescents and their caregivers were examined qualitatively. Parents and other formal and informal caregivers such as youth workers and foster parents were found to exert a large influence on the behaviors that bolster mental health among high-risk youth marginalized by poverty, social stigma, personal and physical characteristics, ethnicity, and poor social or academic performance. Participants' accounts of their intergenerational relationships with caregivers showed that teenagers seek close relationships with adults in order to negotiate for powerful self-constructions as resilient. High-risk teens say they want the adults in their lives to serve as an audience in front of whom they can perform the identities they construct both inside and outside their homes. This pattern was evident even among youth who presented as being more peer-than family-oriented. The implications of these findings to interventions with caregivers and teens is discussed.",
"title": ""
},
{
"docid": "b44a9da1f384680742270f6c82ee9e31",
"text": "Person re-identification aims at finding a person of interest in an image gallery by comparing the probe image of this person with all the gallery images. It is generally treated as a retrieval problem, where the affinities between the probe image and gallery images (P2G affinities) are used to rank the retrieved gallery images. However, most existing methods only consider P2G affinities but ignore the affinities between all the gallery images (G2G affinity). Some frameworks incorporated G2G affinities into the testing process, which is not end-to-end trainable for deep neural networks. In this paper, we propose a novel group-shuffling random walk network for fully utilizing the affinity information between gallery images in both the training and testing processes. The proposed approach aims at end-to-end refining the P2G affinities based on G2G affinity information with a simple yet effective matrix operation, which can be integrated into deep neural networks. Feature grouping and group shuffle are also proposed to apply rich supervisions for learning better person features. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets by large margins, which demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "b0b076cad755cd05380f1ede06eda104",
"text": "• Sentiment Analysis (SA) requires large human labeled data; which is costly to obtain. • Domain Adaptation(DA) techniques help in performing SA with minimum human labeled data. • Two techniques, Feedback EM and Rocchio SVM are proposed for data selection/filtering. • Use of Mutual Information(MI) and Cosine Distance(CD) to measure similarity between In and Out-Domain distributions.",
"title": ""
},
{
"docid": "837c34e3999714c0aa0dcf901aa278cf",
"text": "A novel high temperature superconducting interdigital bandpass filter is proposed by using coplanar waveguide quarter-wavelength resonators. The CPW resonators are arranged in parallel, and consequently the filter becomes very compact. The filter is a 5-pole Chebyshev BPF with a midband frequency of 5.0GHz and an equal-ripple fractional bandwidth of 3.2%. It is fabricated using a YBCO film deposited on an MgO substrate. The measured filtering characteristics agree well with EM simulations and show a low insertion loss in spite of the small size of the filter.",
"title": ""
},
{
"docid": "c678ea5e9bc8852ec80a8315a004c7f0",
"text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.",
"title": ""
},
{
"docid": "61a50d735f6cc037f8c383fc29365f9a",
"text": "Traffic sign detection is a technology by which a vehicle is able to recognize the different traffic signs located on the road and used to regulate the traffic. Traffic signs are detected by analyzes color information contained on the images having capability of detection and recognition of traffic signs even with bad visual artifacts those originate from different conditions. The feature based method is intended for traffic sign detection, in this method two sets of features are to be detected in the reference and sensed images, identifying key points in the images and match among those points to find the similarity, the SURF descriptor is used for key points and point matching. After detecting the shape of the traffic sign the optical character recognition (OCR) method is used to recognize the character present in the detected shape. A technique, based on Maximally Stable Extremal Regions (MSER) region and canny edge detector is supervised for character recognition in traffic sign detection.",
"title": ""
},
{
"docid": "9a1bb9370031cbe9b6b3175b216aeea5",
"text": "The area of an image multi-label classification is increase continuously in last few years, in machine learning and computer vision. Multi-label classification has attracted significant attention from researchers and has been applied to an image annotation. In multi-label classification, each instance is assigned to multiple classes; it is a common problem in data analysis. In this paper, represent general survey on the research work is going on in the field of multi-label classification. Finally, paper is concluded towards challenges in multi-label classification for images for future research.",
"title": ""
},
{
"docid": "a5999023893d996f0485abcf991ffbe1",
"text": "In this paper, we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curve equipped with different kinds of prior can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning.",
"title": ""
},
{
"docid": "333fd7802029f38bda35cd2077e7de59",
"text": "Human shape estimation is an important task for video editing, animation and fashion industry. Predicting 3D human body shape from natural images, however, is highly challenging due to factors such as variation in human bodies, clothing and viewpoint. Prior methods addressing this problem typically attempt to fit parametric body models with certain priors on pose and shape. In this work we argue for an alternative representation and propose BodyNet, a neural network for direct inference of volumetric body shape from a single image. BodyNet is an end-to-end trainable network that benefits from (i) a volumetric 3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them results in performance improvement as demonstrated by our experiments. To evaluate the method, we fit the SMPL model to our network output and show state-of-the-art results on the SURREAL and Unite the People datasets, outperforming recent approaches. Besides achieving state-of-the-art performance, our method also enables volumetric bodypart segmentation.",
"title": ""
},
{
"docid": "e937bc38ca1627bc330da67fc2f31e22",
"text": "In this paper, we propose and describe efficient multiclass classification and recognition of unconstrained handwritten Arabic words using machine learning approaches which include the K-nearest neighbor (K-NN) clustering, and the neural network (NN). The technical details are presented in terms of three stages, namely preprocessing, feature extraction and classification. Firstly, words are segmented from input scripts and also normalized in size. Secondly, from each of the segmented words various feature extraction methods are introduced. Finally, these features are utilized to train the K-NN and the NN classifiers for classification. In order to validate the proposed techniques, extensive experiments are conducted using the K-NN and the NN. The proposed algorithms are tested on the IFN/ENIT database which contains 32492 Arabic words; the proposed algorithms give good accuracy when compared with other methods.",
"title": ""
},
{
"docid": "e4c23ebf305f9f1a3e3d016b6f22e683",
"text": "Accurate detection of the human metaphase chromosome centromere is a critical element of cytogenetic diagnostic techniques, including chromosome enumeration, karyotyping and radiation biodosimetry. Existing centromere detection methods tends to perform poorly in the presence of irregular boundaries, shape variations and premature sister chromatid separation. We present a centromere detection algorithm that uses a novel contour partitioning technique to generate centromere candidates followed by a machine learning approach to select the best candidate that enhances the detection accuracy. The contour partitioning technique evaluates various combinations of salient points along the chromosome boundary using a novel feature set and is able to identify telomere regions as well as detect and correct for sister chromatid separation. This partitioning is used to generate a set of centromere candidates which are then evaluated based on a second set of proposed features. The proposed algorithm outperforms previously published algorithms and is shown to do so with a larger set of chromosome images. A highlight of the proposed algorithm is the ability to rank this set of centromere candidates and create a centromere confidence metric which may be used in post-detection analysis. When tested with a larger metaphase chromosome database consisting of 1400 chromosomes collected from 40 metaphase cell images, the proposed algorithm was able to accurately localize 1220 centromere locations yielding a detection accuracy of 87%.",
"title": ""
},
{
"docid": "d3b03d65b61b98db03445bda899b44ba",
"text": "Positioning is basis for providing location information to mobile users, however, with the growth of wireless and mobile communications technologies. Mobile phones are equipped with several radio frequency technologies for driving the positioning information like GSM, Wi-Fi or Bluetooth etc. In this way, the objective of this thesis was to implement an indoor positioning system relying on Bluetooth Received Signal Strength (RSS) technology and it integrates into the Global Positioning Module (GPM) to provide precise information inside the building. In this project, we propose indoor positioning system based on RSS fingerprint and footprint architecture that smart phone users can get their position through the assistance collections of Bluetooth signals, confining RSSs by directions, and filtering burst noises that can overcome the server signal fluctuation problem inside the building. Meanwhile, this scheme can raise more accuracy in finding the position inside the building.",
"title": ""
},
{
"docid": "c47f251cc62b405be1eb1b105f443466",
"text": "The conceptualization of gender variant populations within studies have consisted of imposed labels and a diversity of individual identities that preclude any attempt at examining the variations found among gender variant populations, while at the same time creating artificial distinctions between groups that may not actually exist. Data were collected from 90 transgender/transsexual people using confidential, self-administered questionnaires. Factors like age of transition, being out to others, and participant's race and class were associated with experiences of transphobic life events. Discrimination can have profound impact on transgender/transsexual people's lives, but different factors can influence one's experience of transphobia. Further studies are needed to examine how transphobia manifests, and how gender characteristics impact people's lives.",
"title": ""
},
{
"docid": "1fa7c954f5e352679c33d8946f4cac4e",
"text": "In some cases, such as in the estimation of impulse responses, it has been found that for plausible sample sizes the coverage accuracy of single bootstrap confidence intervals can be poor. The error in the coverage probability of single bootstrap confidence intervals may be reduced by the use of double bootstrap confidence intervals. The computer resources required for double bootstrap confidence intervals are often prohibitive, especially in the context of Monte Carlo studies. Double bootstrap confidence intervals can be estimated using computational algorithms incorporating simple deterministic stopping rules that avoid unnecessary computations. These algorithms may make the use and Monte Carlo evaluation of double bootstrap confidence intervals feasible in cases where otherwise they would not be feasible. The efficiency gains due to the use of these algorithms are examined by means of a Monte Carlo study for examples of confidence intervals for a mean and for the cumulative impulse response in a second order autoregressive model.",
"title": ""
},
{
"docid": "406a7130ad2bff751f99ed9248cf667b",
"text": "In this paper, a novel hybrid cubemap projection (HCP) is proposed to improve the 360-degree video coding efficiency. HCP allows adaptive sampling adjustments in the horizontal and vertical directions within each cube face. HCP parameters of each cube face can be adjusted based on the input 360-degree video content characteristics for a better sampling efficiency. The HCP parameters can be updated periodically to adapt to temporal content variation. An efficient HCP parameter estimation algorithm is proposed to reduce the computational complexity of parameter estimation. Experimental results demonstrate that HCP format achieves on average luma (Y) BD-rate reduction of 11.51%, 8.0%, and 0.54% compared to equirectangular projection format, cubemap projection format, and adjusted cubemap projection format, respectively, in terms of end-to-end WS-PSNR.",
"title": ""
},
{
"docid": "d74874cf15642c87c7de51e54275f9be",
"text": "We used a three layer Convolutional Neural Network (CNN) to make move predictions in chess. The task was defined as a two-part classification problem: a piece-selector CNN is trained to score which white pieces should be made to move, and move-selector CNNs for each piece produce scores for where it should be moved. This approach reduced the intractable class space in chess by a square root. The networks were trained using 20,000 games consisting of 245,000 moves made by players with an ELO rating higher than 2000 from the Free Internet Chess Server. The piece-selector network was trained on all of these moves, and the move-selector networks trained on all moves made by the respective piece. Black moves were trained on by using a data augmentation to frame it as a move made by the",
"title": ""
},
{
"docid": "897efb599e554bf453a7b787c5874d48",
"text": "The Rampant growth of wireless technology and Mobile devices in this era is creating a great impact on our lives. Some early efforts have been made to combine and utilize both of these technologies in advancement of hospitality industry. This research work aims to automate the food ordering process in restaurant and also improve the dining experience of customers. In this paper we discuss about the design & implementation of automated food ordering system with real time customer feedback (AOS-RTF) for restaurants. This system, implements wireless data access to servers. The android application on user’s mobile will have all the menu details. The order details from customer’s mobile are wirelessly updated in central database and subsequently sent to kitchen and cashier respectively. The restaurant owner can manage the menu modifications easily. The wireless application on mobile devices provide a means of convenience, improving efficiency and accuracy for restaurants by saving time, reducing human errors and real-time customer feedback. This system successfully over comes the drawbacks in earlier PDA based food ordering system and is less expensive and more effective than the multi-touchable restaurant management systems.",
"title": ""
},
{
"docid": "f33f6263ef10bd702ddb18664b68a09f",
"text": "Research over the past five years has shown significant performance improvements using a technique called adaptive compilation. An adaptive compiler uses a compile-execute-analyze feedback loop to find the combination of optimizations and parameters that minimizes some performance goal, such as code size or execution time.Despite its ability to improve performance, adaptive compilation has not seen widespread use because of two obstacles: the large amounts of time that such systems have used to perform the many compilations and executions prohibits most users from adopting these systems, and the complexity inherent in a feedback-driven adaptive system has made it difficult to build and hard to use.A significant portion of the adaptive compilation process is devoted to multiple executions of the code being compiled. We have developed a technique called virtual execution to address this problem. Virtual execution runs the program a single time and preserves information that allows us to accurately predict the performance of different optimization sequences without running the code again. Our prototype implementation of this technique significantly reduces the time required by our adaptive compiler.In conjunction with this performance boost, we have developed a graphical-user interface (GUI) that provides a controlled view of the compilation process. By providing appropriate defaults, the interface limits the amount of information that the user must provide to get started. At the same time, it lets the experienced user exert fine-grained control over the parameters that control the system.",
"title": ""
}
] |
scidocsrr
|
c47223d6ff536a6c6dbc850fd922fd87
|
Data is More Than Knowledge: Implications of the Reversed Knowledge Hierarchy for Knowledge Management and Organizational Memory
|
[
{
"docid": "11f84f99de269ca5ca43fc6d761504b7",
"text": "Effective use of distributed collaboration environments requires shared mental models that guide users in sensemaking and categorization. In Lotus Notes -based collaboration systems, such shared models are usually implemented as views and document types. TeamRoom, developed at Lotus Institute, implements in its design a theory of effective social process that creates a set of team-specific categories, which can then be used as a basis for knowledge sharing, collaboration, and team memory. This paper reports an exploratory study in collective concept formation in the TeamRoom environment. The study was run in an ecological setting, while the team members used the system for their everyday work. We apply theory developed by Lev Vygotsky, and use a modified version of an experiment on concept formation, devised by Lev Sakharov, and discussed in Vygotsky (1986). Vygotsky emphasized the role of language, cognitive artifacts, and historical and social sources in the development of thought processes. Within the Vygotskian framework it becomes clear that development of thinking does not end in adolescence. In teams of adult people, learning and knowledge creation are continuous processes. New concepts are created, shared, and developed into systems. The question, then, becomes how spontaneous concepts are collectively generated in teams, how they become integrated as systems, and how computer mediated collaboration environments affect these processes. d in ittle ons",
"title": ""
}
] |
[
{
"docid": "d954d72a2674bb8fdde3b189deca152b",
"text": "measures are worthless. To use a performance measure—to extract information from it—a manager needs a specific, comparative gauge, plus an understanding of the relevant context. A truck has been driven 6.0 million. Six million what? Six million miles? That’s impressive. Six million feet? That’s only 1,136 miles. Six million inches? That’s not even 95 miles. Big deal—unless those 95 miles were driven in two hours along a dirt road on a very rainy night. To use performance measures to achieve any of these eight purposes, the public manager needs some kind of standard with which the measure can be compared. 1. To use a measure to evaluate performance, public managers need some kind of desired result with which to compare the data, and thus judge performance. 2. To use a measure of performance to control behavior, public managers need first to establish the desired behavioral or input standard from which to gauge individual or collective deviance. 3. To use efficiency measures to budget, public managers need an idea of what is a good, acceptable, or poor level of efficiency. 4. To use performance measures to motivate people, public managers need some sense of what are reasonable and significant targets. 5. To use performance measures to promote an agency’s competence, public managers need to understand what the public cares about. 6. To use performance measures to celebrate, public managers need to discern the kinds of achievements that employees and collaborators think are worth celebrating. 7. To use performance measures to learn, public managers need to be able to detect unexpected (and significant) developments and anticipate a wide variety of common organizational, human, and societal behaviors. 8. To use performance measures to improve, public managers need an understanding (or prediction) of how their actions affect the inside-the-black-box behavior of the people who contribute to their desired outputs and outcomes. All of the eight purposes require (explicitly or implicitly) a baseline with which the measure can be compared. And, of course, the appropriate baseline depends on the context. The standard against which to compare current performance can come from a variety of sources—each with its own advantages and liabilities. The agency may use its historical record as a baseline, looking to see how much it has improved. It may use comparative information from similar organizations, such as the data collected by the Comparative Performance Measurement Consortium organized by the International City/County Management Association (1999), or the effort to measure and compare the performance of local jurisdictions in North Carolina organized by the University of North Carolina (Rivenbark and Few 2000). Of course, comparative data also may come from dissimilar organizations; citizens may compare—implicitly or quite explicitly—the ease of navigating a government Web site with the ease of navigating those created by private businesses. Or the standard may be an explicit performance target established by the legislature, by political executives, or by career managers. Even to",
"title": ""
},
{
"docid": "2ddc4919771402dabedd2020649d1938",
"text": "Increase in energy demand has made the renewable resources more attractive. Additionally, use of renewable energy sources reduces combustion of fossil fuels and the consequent CO2 emission which is the principal cause of global warming. The concept of photovoltaic-Wind hybrid system is well known and currently thousands of PV-Wind based power systems are being deployed worldwide, for providing power to small, remote, grid-independent applications. This paper shows the way to design the aspects of a hybrid power system that will target remote users. It emphasizes the renewable hybrid power system to obtain a reliable autonomous system with the optimization of the components size and the improvement of the cost. The system can provide electricity for a remote located village. The main power of the hybrid system comes from the photovoltaic panels and wind generators, while the batteries are used as backup units. The optimization software used for this paper is HOMER. HOMER is a design model that determines the optimal architecture and control strategy of the hybrid system. The simulation results indicate that the proposed hybrid system would be a feasible solution for distributed generation of electric power for stand-alone applications at remote locations",
"title": ""
},
{
"docid": "92b2a85eedb6f3614f75199a55faa963",
"text": "Anonymity online is important to people at times in their lives. Anonymous communication applications such as Whisper and YikYak enable people to communicate with strangers anonymously through their smartphones. We report results from semi-structured interviews with 18 users of these apps. The goal of our study was to identify why and how people use anonymous apps, their perceptions of their audience and interactions on the apps, and how these apps compare with other online social communities. We present a typology of the content people share, and their motivations for participation in anonymous apps. People share various types of content that range from deep confessions and secrets to lighthearted jokes and momentary feelings. An important driver for participation and posting is to get social validation from others, even though they are anonymous strangers. We also find that participants believe these anonymous apps allow more honesty, openness, and diversity of opinion than they can find elsewhere. Our results provide implications for how anonymity in mobile apps can encourage expressiveness and interaction among users.",
"title": ""
},
{
"docid": "ba5b5732dd7c48874e4f216903bba0b1",
"text": "This article presents a review of the application of insole plantar pressure sensor system in recognition and analysis of the hemiplegic gait in stroke patients. Based on the review, tailor made 3D insoles for plantar pressure measurement were designed and fabricated. The function is to compare with that of conventional flat insoles. Tailor made 3D contour of the insole can improve the contact between insole and foot and enable sampling plantar pressure at a high reproducibility.",
"title": ""
},
{
"docid": "4b8fc6a74f10dcded2b533ead98905e0",
"text": "For a safe, natural and effective human-robot social interaction, it is essential to develop a system that allows a robot to demonstrate the perceivable responsive behaviors to complex human behaviors. We introduce the Multimodal Deep Attention Recurrent Q-Network using which the robot exhibits human-like social interaction skills after 14 days of interacting with people in an uncontrolled real world. Each and every day during the 14 days, the system gathered robot interaction experiences with people through a hit-and-trial method and then trained the MDARQN on these experiences using end-to-end reinforcement learning approach. The results of interaction based learning indicate that the robot has learned to respond to complex human behaviors in a perceivable and socially acceptable manner.",
"title": ""
},
{
"docid": "d01321dc65ef31beedb6a92689ab91be",
"text": "This paper proposes a content-constrained spatial (CCS) model to recover the mathematical layout (M-layout, or MLme) of an mathematical expression (ME) from its font setting layout (F-layout, or FLme). The M-layout can be used for content analysis applications such as ME based indexing and retrieval of documents. The first of the two-step process is to divide a compounded ME into blocks based on explicit mathematical structure primitives such as fraction lines, radical signs, fence, etc. Subscripts and superscripts within a block are resolved by probabilistic inference of their likelihood based on a global optimization model. The dual peak distributions of the features to capture the relative position between sibling blocks as super/subscript call for a sampling based non-parametric probability distribution estimation method to resolve their ambiguity. The notion of spatial constraint indicators is proposed to reduce the search space while improving the prediction performance. The proposed scheme is tested using the InftyCDB data set to achieve the F1 score of 0.98.",
"title": ""
},
{
"docid": "2ec768f19fc39d392ffb86ae59497004",
"text": "Recently, adversarial erasing for weakly-supervised object attention has been deeply studied due to its capability in localizing integral object regions. However, such a strategy raises one key problem that attention regions will gradually expand to non-object regions as training iterations continue, which significantly decreases the quality of the produced attention maps. To tackle such an issue as well as promote the quality of object attention, we introduce a simple yet effective SelfErasing Network (SeeNet) to prohibit attentions from spreading to unexpected background regions. In particular, SeeNet leverages two self-erasing strategies to encourage networks to use reliable object and background cues for learning to attention. In this way, integral object regions can be effectively highlighted without including much more background regions. To test the quality of the generated attention maps, we employ the mined object regions as heuristic cues for learning semantic segmentation models. Experiments on Pascal VOC well demonstrate the superiority of our SeeNet over other state-of-the-art methods.",
"title": ""
},
{
"docid": "78829447a6cbf0aa020ef098a275a16d",
"text": "Black soldier fly (BSF), Hermetia illucens (L.) is widely used in bio-recycling of human food waste and manure of livestock. Eggs of BSF were commonly collected by egg-trapping technique for mass rearing. To find an efficient lure for BSF egg-trapping, this study compared the number of egg batch trapped by different lures, including fruit, food waste, chicken manure, pig manure, and dairy manure. The result showed that fruit wastes are the most efficient on trapping BSF eggs. To test the effects of fruit species, number of egg batch trapped by three different fruit species, papaya, banana, and pineapple were compared, and no difference were found among fruit species. Environmental factors including temperature, relative humidity, and light intensity were measured and compared in different study sites to examine their effects on egg-trapping. The results showed no differences on temperature, relative humidity, and overall light intensity between sites, but the stability of light environment differed between sites. BSF tend to lay more eggs in site with stable light environment.",
"title": ""
},
{
"docid": "997adb89f1e02b66f8e3edc6f2b6aed2",
"text": "Chimeric antigen receptor (CAR)-engineered T cells (CAR-T cells) have yielded unprecedented efficacy in B cell malignancies, most remarkably in anti-CD19 CAR-T cells for B cell acute lymphoblastic leukemia (B-ALL) with up to a 90% complete remission rate. However, tumor antigen escape has emerged as a main challenge for the long-term disease control of this promising immunotherapy in B cell malignancies. In addition, this success has encountered significant hurdles in translation to solid tumors, and the safety of the on-target/off-tumor recognition of normal tissues is one of the main reasons. In this mini-review, we characterize some of the mechanisms for antigen loss relapse and new strategies to address this issue. In addition, we discuss some novel CAR designs that are being considered to enhance the safety of CAR-T cell therapy in solid tumors.",
"title": ""
},
{
"docid": "3a0275d7834a6fb1359bb7d3bef14e97",
"text": "With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve quality of service (QoS) in IoT networks is becoming a challenging problem. Currently most interaction between the IoT devices and the supporting back-end servers is done through large scale cloud data centers. However, with the exponential growth of IoT devices and the amount of data they produce, communication between \"things\" and cloud will be costly, inefficient, and in some cases infeasible. Fog computing serves as solution for this as it provides computation, storage, and networking resource for IoT, closer to things and users. One of the promising advantages of fog is reducing service delay for end user applications, whereas cloud provides extensive computation and storage capacity with a higher latency. Thus it is necessary to understand the interplay between fog computing and cloud, and to evaluate the effect of fog computing on the IoT service delay and QoS. In this paper we will introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.",
"title": ""
},
{
"docid": "aee5eb38d6cbcb67de709a30dd37c29a",
"text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. By developing a minimal, testable, model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. We propose that uncoating initiates after the first strand transfer of reverse transcription.",
"title": ""
},
{
"docid": "d28ab4d2979872bf868ef9b7fe8487bb",
"text": "We have developed an easy-to-use and cost-effective system to construct textured 3D animated face models from videos with minimal user interaction. This is a particularly challenging task for faces due to a lack of prominent textures. We develop a robust system by following a model-based approach: we make full use of generic knowledge of faces in head motion determination, head tracking, model fitting, and multiple-view bundle adjustment. Our system first takes, with an ordinary video camera, images of a face of a person sitting in front of the camera turning their head from one side to the other. After five manual clicks on two images to indicate the position of the eye corners, nose tip and mouth corners, the system automatically generates a realistic looking 3D human head model that can be animated immediately (different poses, facial expressions and talking). A user, with a PC and a video camera, can use our system to generate his/her face model in a few minutes. The face model can then be imported in his/her favorite game, and the user sees themselves and their friends take part in the game they are playing. We have demonstrated the system on a laptop computer live at many events, and constructed face models for hundreds of people. It works robustly under various environment settings.",
"title": ""
},
{
"docid": "0ad00a5bed02bf2deff12ad9c3dfd2c6",
"text": "This letter presents a micromachined silicon Lorentz force magnetometer, which consists of a flexural beam resonator coupled to current-carrying silicon beams via a microleverage mechanism. The flexural beam resonator is a force sensor, which measures the magnetic field through resonant frequency shift induced by the Lorentz force, which acts as an axial load. Previous frequency-modulated Lorentz force magnetometers suffer from low sensitivity, limited by both fabrication restrictions and lack of a force amplification mechanism. In this letter, the microleverage mechanism amplifies the Lorentz force, thereby enhancing the sensitivity of the magnetometer by a factor of 42. The device has a measured sensitivity of 6687 ppm/(mA · T), which is two orders of magnitude larger than the prior state-of-the-art. The measured results agree with an analytical model and finite-element analysis. The frequency stability of the sensor is limited by the quality factor (Q) of 540, which can be increased through improved vacuum packaging.",
"title": ""
},
{
"docid": "5ded801b3c778d012a78aa467e01bd89",
"text": "To overcome limitations of fusion welding of the AA7050-T7451aluminum alloy friction stir welding (FSW) has become a prominent process which uses a non-consumable FSW tool to weld the two abutting plates of the workpiece. The FSW produces a joint with advantages of high joint strength, lower distortion and absence of metallurgical defects. Process parameters such as tool rotational speed, tool traverse speed and axial force and tool dimensions play an important role in obtaining a specific temperature distribution and subsequent flow stresses within the material being welded. Friction stir welding of AA7050-T7451 aluminum alloy has been simulated to obtain the temperature profiles & flow stresses using a recent FEA software called HyperWorks.; the former controlling the microstruture and in turn, mechanical properties and later, the flow of material which depends up on the peak temperatures obtained during FSW. A software based study has been carried out to avoid the difficulty in measuring the temperatures directly and explore the capabilities of the same to provide a basis for further research work related to the said aluminum alloy.",
"title": ""
},
{
"docid": "289694f2395a6a2afc7d86d475b9c02d",
"text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.",
"title": ""
},
{
"docid": "1d50d61d6b0abb0d5bec74d613ffe172",
"text": "We propose a novel hardware-accelerated voxelization algorithm for polygonal models. Compared with previous approaches, our algorithm has a major advantage that it guarantees the conservative correctness in voxelization: every voxel intersecting the input model is correctly recognized. This property is crucial for applications like collision detection, occlusion culling and visibility processing. We also present an efficient and robust implementation of the algorithm in the GPU. Experiments show that our algorithm has a lower memory consumption than previous approaches and is more efficient when the volume resolution is high. In addition, our algorithm requires no preprocessing and is suitable for voxelizing deformable models.",
"title": ""
},
{
"docid": "944d467bb6da4991127b76310fec585b",
"text": "One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publically available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publically.",
"title": ""
},
{
"docid": "d95c080140dd50d8131bc7d43a4358e2",
"text": "The link between affect, defined as the capacity for sentimental arousal on the part of a message, and virality, defined as the probability that it be sent along, is of significant theoretical and practical importance, e.g. for viral marketing. The basic measure of virality in Twitter is the probability of retweet and we are interested in which dimensions of the content of a tweet leads to retweeting. We hypothesize that negative news content is more likely to be retweeted, while for non-news tweets positive sentiments support virality. To test the hypothesis we analyze three corpora: A complete sample of tweets about the COP15 climate summit, a random sample of tweets, and a general text corpus including news. The latter allows us to train a classifier that can distinguish tweets that carry news and non-news information. We present evidence that negative sentiment enhances virality in the news segment, but not in the non-news segment. Our findings may be summarized ’If you want to be cited: Sweet talk your friends or serve bad news to the public’.",
"title": ""
},
{
"docid": "d5e3eb5555cc149ef7fd8dea60eb0c9f",
"text": "Cognitive radio ad hoc networks (CRAHNs) constitute a viable solution to solve the current problems of inefficiency in the spectrum allocation, and to deploy highly reconfigurable and self-organizingwireless networks. Cognitive radio (CR) devices are envisaged to utilize the spectrum in an opportunistic way by dynamically accessing different licensed portions of the spectrum. To this aim, most of the recent research has mainly focused on devising spectrum sensing and sharing algorithms at the link layer, so that CR devices can operate without interfering with the transmissions of other licensed users, also called primary users (PUs). However, it is also important to consider the impact of such schemes on the higher layers of the protocol stack, in order to provide efficient end-to-end data delivery. At present, routing and transport layer protocols constitute an important yet not deeply investigated area of research over CRAHNs. This paper provides three main contributions on the modeling and performance evaluation of end-to-end protocols (e.g. routing and transport layer protocols) for CRAHNs. First, we describe NS2-CRAHN, an extension of the NS-2 simulator, which is designed to support realistic simulation of CRAHNs. NS2-CRAHN contains an accurate yet flexible modeling of the activities of PUs and of the cognitive cycle implemented by each CR user. Second, we analyze the impact of CRAHNs characteristics over the route formation process, by considering different routing metrics and route discovery algorithms. Finally, we study TCP performance over CRAHNs, by considering the impact of three factors ondifferent TCP variants: (i) spectrumsensing cycle, (ii) interference from PUs and (iii) channel heterogeneity. Simulation results highlight the differences of CRAHNs with traditional ad hoc networks and provide useful directions for the design of novel end-to-end protocols for CRAHNs. © 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0342f89c44e0b86026953196de34b608",
"text": "In this paper, we introduce an approach for recognizing the absence of opposing arguments in persuasive essays. We model this task as a binary document classification and show that adversative transitions in combination with unigrams and syntactic production rules significantly outperform a challenging heuristic baseline. Our approach yields an accuracy of 75.6% and 84% of human performance in a persuasive essay corpus with various topics.",
"title": ""
}
] |
scidocsrr
|
ae39984567ca197bd17ab7b5e78f7f87
|
Deep Learning in Semantic Kernel Spaces
|
[
{
"docid": "1e464db177e96b6746f8f827c582cc31",
"text": "In order to respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This work presents the first work on a machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. We show accurate results on a large collection of free-form questions used in TREC 10 and 11.",
"title": ""
}
] |
[
{
"docid": "b2e2402a13c075f52ca9161f2a19fa11",
"text": "The rise of machine learning as a service multiplies scenarios where one faces a privacy dilemma: either sensitive user data must be revealed to the entity that evaluates the cognitive model (e.g., in the Cloud), or the model itself must be revealed to the user so that the evaluation can take place locally. Fully Homomorphic Encryption (FHE) offers an elegant way to reconcile these conflicting interests in the Cloudbased scenario and also preserve non-interactivity. However, due to the inefficiency of existing FHE schemes, most applications prefer to use Somewhat Homomorphic Encryption (SHE), where the complexity of the computation to be performed has to be known in advance, and the efficiency of the scheme depends on this global complexity. In this paper, we present a new framework for homomorphic evaluation of neural networks, that we call FHE–DiNN, whose complexity is strictly linear in the depth of the network and whose parameters can be set beforehand. To obtain this scale-invariance property, we rely heavily on the bootstrapping procedure. We refine the recent FHE construction by Chillotti et al. (ASIACRYPT 2016) in order to increase the message space and apply the sign function (that we use to activate the neurons in the network) during the bootstrapping. We derive some empirical results, using TFHE library as a starting point, and classify encrypted images from the MNIST dataset with more than 96% accuracy in less than 1.7 seconds. Finally, as a side contribution, we analyze and introduce some variations to the bootstrapping technique of Chillotti et al. that offer an improvement in efficiency at the cost of increasing the storage requirements.",
"title": ""
},
{
"docid": "7436bf163d0dcf6d2fbe8ccf66431caf",
"text": "Zh h{soruh ehkdylrudo h{sodqdwlrqv iru vxe0rswlpdo frusrudwh lqyhvwphqw ghflvlrqv1 Irfxvlqj rq wkh vhqvlwlylw| ri lqyhvwphqw wr fdvk rz/ zh dujxh wkdw shuvrqdo fkdudfwhulvwlfv ri fklhi h{hfxwlyh r fhuv/ lq sduwlfxodu ryhufrq ghqfh/ fdq dffrxqw iru wklv zlghvsuhdg dqg shuvlvwhqw lqyhvwphqw glvwruwlrq1 Ryhufrq ghqw FHRv ryhuhvwlpdwh wkh txdolw| ri wkhlu lqyhvwphqw surmhfwv dqg ylhz h{whuqdo qdqfh dv xqgxo| frvwo|1 Dv d uhvxow/ wkh| lqyhvw pruh zkhq wkh| kdyh lqwhuqdo ixqgv dw wkhlu glvsrvdo1 Zh whvw wkh ryhufrq ghqfh k|srwkhvlv/ xvlqj gdwd rq shuvrqdo sruwirolr dqg frusrudwh lqyhvwphqw ghflvlrqv ri FHRv lq Iruehv 833 frpsdqlhv1 Zh fodvvli| FHRv dv ryhufrq ghqw li wkh| uhshdwhgo| idlo wr h{huflvh rswlrqv wkdw duh kljko| lq wkh prqh|/ ru li wkh| kdelwxdoo| dftxluh vwrfn ri wkhlu rzq frpsdq|1 Wkh pdlq uhvxow lv wkdw lqyhvwphqw lv vljql fdqwo| pruh uhvsrqvlyh wr fdvk rz li wkh FHR glvsod|v ryhufrq ghqfh1 Lq dgglwlrq/ zh lghqwli| shuvrqdo fkdudfwhulvwlfv rwkhu wkdq ryhufrq ghqfh +hgxfdwlrq/ hpsor|phqw edfnjurxqg/ frkruw/ plolwdu| vhuylfh/ dqg vwdwxv lq wkh frpsdq|, wkdw vwurqjo| d hfw wkh fruuhodwlrq ehwzhhq lqyhvwphqw dqg fdvk rz1",
"title": ""
},
{
"docid": "8d98529cd3fc92eba091e09ea223df4e",
"text": "Exploring small connected and induced subgraph patterns (CIS patterns, or graphlets) has recently attracted considerable attention. Despite recent efforts on computing the number of instances a specific graphlet appears in a large graph (i.e., the total number of CISes isomorphic to the graphlet), little attention has been paid to characterizing a node’s graphlet degree, i.e., the number of CISes isomorphic to the graphlet that include the node, which is an important metric for analyzing complex networks such as social and biological networks. Similar to global graphlet counting, it is challenging to compute node graphlet degrees for a large graph due to the combinatorial nature of the problem. Unfortunately, previous methods of computing global graphlet counts are not suited to solve this problem. In this paper we propose sampling methods to estimate node graphlet degrees for undirected and directed graphs, and analyze the error of our estimates. To the best of our knowledge, we are the first to study this problem and give a fast scalable solution. We conduct experiments on a variety of real-word datasets that demonstrate that our methods accurately and efficiently estimate node graphlet degrees for graphs with millions of edges.",
"title": ""
},
{
"docid": "5897b87a82d5bc11757e33a8a46b1f21",
"text": "BACKGROUND\nProspective data from over 10 years of follow-up were used to examine neighbourhood deprivation, social fragmentation and trajectories of health.\n\n\nMETHODS\nFrom the third phase (1991-93) of the Whitehall II study of British civil servants, SF-36 health functioning was measured on up to five occasions for 7834 participants living in 2046 census wards. Multilevel linear regression models assessed the Townsend deprivation index and social fragmentation index as predictors of initial health and health trajectories.\n\n\nRESULTS\nIndependent of individual socioeconomic factors, deprivation was inversely associated with initial SF-36 physical component summary (PCS) score. Social fragmentation was not associated with PCS scores. Deprivation and social fragmentation were inversely associated with initial mental component summary (MCS) score. Neighbourhood characteristics were not associated with trajectories of PCS score or MCS score for the whole set. However, restricted analysis on longer term residents revealed that residents in deprived or socially fragmented neighbourhoods had lowest initial and smallest improvements in MCS score.\n\n\nCONCLUSIONS\nThis longitudinal study provides evidence that residence in a deprived or fragmented neighbourhood is associated with poorer mental health and that longer exposure to such neighbourhood environments has incremental effects. Associations between physical health functioning and neighbourhood characteristics were less clear. Mindful of the importance of individual socioeconomic factors, the findings warrant more detailed examination of materially and socially deprived neighbourhoods and their consequences for health.",
"title": ""
},
{
"docid": "173d791e05859ec4cc28b9649c414c62",
"text": "Breast cancer is the most common invasive cancer in females worldwide. It usually presents with a lump in the breast with or without other manifestations. Diagnosis of breast cancer depends on physical examination, mammographic findings and biopsy results. Treatment of breast cancer depends on the stage of the disease. Lines of treatment include mainly surgical removal of the tumor followed by radiotherapy or chemotherapy. Other lines including immunotherapy, thermochemotherapy and alternative medicine may represent a hope for breast cancer",
"title": ""
},
{
"docid": "3d238cc92a56e64f32f08e0833d117b3",
"text": "The efficiency of two biomass pretreatment technologies, dilute acid hydrolysis and dissolution in an ionic liquid, are compared in terms of delignification, saccharification efficiency and saccharide yields with switchgrass serving as a model bioenergy crop. When subject to ionic liquid pretreatment (dissolution and precipitation of cellulose by anti-solvent) switchgrass exhibited reduced cellulose crystallinity, increased surface area, and decreased lignin content compared to dilute acid pretreatment. Pretreated material was characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, Raman spectroscopy and chemistry methods. Ionic liquid pretreatment enabled a significant enhancement in the rate of enzyme hydrolysis of the cellulose component of switchgrass, with a rate increase of 16.7-fold, and a glucan yield of 96.0% obtained in 24h. These results indicate that ionic liquid pretreatment may offer unique advantages when compared to the dilute acid pretreatment process for switchgrass. However, the cost of the ionic liquid process must also be taken into consideration.",
"title": ""
},
{
"docid": "cb4518f95b82e553b698ae136362bd59",
"text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is su¢ ciently broad and yet su¢ ciently detailed when it comes to key concepts. The text is not tailored to the
eld of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. The chapter is organized in the following sections:",
"title": ""
},
{
"docid": "4028060b859f413accca51792773d68c",
"text": "This study develops an original framework to explore the influence of excessive product packaging on green brand attachment and to discuss the mediation roles of green brand attitude and green brand image. Structural Equation Modeling (SEM) is applied to verify the research framework. The results from a dataset of 238 valid questionnaires show that excessive product packaging has no direct effect on green brand attachment. However, green brand attitude and green brand image fully mediate the negative relationship between excessive product packaging and green brand attachment. Managerially, this study helps firms understand that excessive product packaging may bring damage to green brand attitude and green brand image, which positively relate to green brand attachment. Thus, committing to promoting the functional benefit of green products, firms must not neglect the negative effects of excessive product packaging.",
"title": ""
},
{
"docid": "58612d7c22f6bd0bf1151b7ca5da0f7c",
"text": "In this paper we present a novel method for clustering words in micro-blogs, based on the similarity of the related temporal series. Our technique, named SAX*, uses the Symbolic Aggregate ApproXimation algorithm to discretize the temporal series of terms into a small set of levels, leading to a string for each. We then define a subset of “interesting” strings, i.e. those representing patterns of collective attention. Sliding temporal windows are used to detect co-occurring clusters of tokens with the same or similar string. To assess the performance of the method we first tune the model parameters on a 2-month 1 % Twitter stream, during which a number of world-wide events of differing type and duration (sports, politics, disasters, health, and celebrities) occurred. Then, we evaluate the quality of all discovered events in a 1-year stream, “googling” with the most frequent cluster n-grams and manually assessing how many clusters correspond to published news in the same temporal slot. Finally, we perform a complexity evaluation and we compare SAX* with three alternative methods for event discovery. Our evaluation shows that SAX* is at least one order of magnitude less complex than other temporal and non-temporal approaches to micro-blog clustering.",
"title": ""
},
{
"docid": "f34e6c34a499b7b88c18049eec221d36",
"text": "The double-gimbal mechanism (DGM) is a multibody mechanical device composed of three rigid bodies, namely, a base, an inner gimbal, and an outer gimbal, interconnected by two revolute joints. A typical DGM, where the cylindrical base is connected to the outer gimbal by a revolute joint, and the inner gimbal, which is the disk-shaped payload, is connected to the outer gimbal by a revolute joint. The DGM is an integral component of an inertially stabilized platform, which provides motion to maintain line of sight between a target and a platform payload sensor. Modern, commercially available gimbals use two direct-drive or gear-driven motors on orthogonal axes to actuate the joints. Many of these mechanisms are constrained to a reduced operational region, while moresophisticated models use a slip ring to allow continuous rotation about an axis. Angle measurements for each axis are obtained from either a rotary encoder or a resolver. The DGM is a fundamental component of pointing and tracking applications that include missile guidance systems, ground-based telescopes, antenna assemblies, laser communication systems, and close-in weapon systems (CIWSs) such as the Phalanx 1B.",
"title": ""
},
{
"docid": "ad9f00a73306cba20073385c7482ba43",
"text": "We present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm.",
"title": ""
},
{
"docid": "647f8e9ece2c7663e2b8767f0694fec5",
"text": "Modern retrieval systems are often driven by an underlying machine learning model. The goal of such systems is to identify and possibly rank the few most relevant items for a given query or context. Thus, such systems are typically evaluated using a ranking-based performance metric such as the area under the precision-recall curve, the Fβ score, precision at fixed recall, etc. Obviously, it is desirable to train such systems to optimize the metric of interest. In practice, due to the scalability limitations of existing approaches for optimizing such objectives, large-scale retrieval systems are instead trained to maximize classification accuracy, in the hope that performance as measured via the true objective will also be favorable. In this work we present a unified framework that, using straightforward building block bounds, allows for highly scalable optimization of a wide range of ranking-based objectives. We demonstrate the advantage of our approach on several real-life retrieval problems that are significantly larger than those considered in the literature, while achieving substantial improvement in performance over the accuracyobjective baseline. Proceedings of the 20 International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, Fort Lauderdale, Florida, USA. JMLR: W&CP volume 54. Copyright 2017 by the author(s).",
"title": ""
},
{
"docid": "d57555ce6b3fdd12052ea667bff915ed",
"text": "This paper presents a novel structure for ultra broadband 4:1 broadside-coupled PCB impedance transformer. Analysis, simulations and measurements of the developed transformer are introduced and discussed. Three prototypes of the proposed structure are implemented at center frequencies 5.65 GHz, 4.35 GHz and 3.65 GHz, respectively with fractional bandwidth of greater than 180 %. The implemented transformers show an ultra broadband performance with a transmission loss less than 1 dB and return loss at least 10 dB across the desired bandwidth. During comparison, simulations and measurements are found very close to each other. To the author's best knowledge the achieved performance of the designed transformer is better than so far published state of the art results.",
"title": ""
},
{
"docid": "01a70ee73571e848575ed992c1a3a578",
"text": "BACKGROUND\nNursing turnover is a major issue for health care managers, notably during the global nursing workforce shortage. Despite the often hierarchical structure of the data used in nursing studies, few studies have investigated the impact of the work environment on intention to leave using multilevel techniques. Also, differences between intentions to leave the current workplace or to leave the profession entirely have rarely been studied.\n\n\nOBJECTIVE\nThe aim of the current study was to investigate how aspects of the nurse practice environment and satisfaction with work schedule flexibility measured at different organisational levels influenced the intention to leave the profession or the workplace due to dissatisfaction.\n\n\nDESIGN\nMultilevel models were fitted using survey data from the RN4CAST project, which has a multi-country, multilevel, cross-sectional design. The data analysed here are based on a sample of 23,076 registered nurses from 2020 units in 384 hospitals in 10 European countries (overall response rate: 59.4%). Four levels were available for analyses: country, hospital, unit, and individual registered nurse. Practice environment and satisfaction with schedule flexibility were aggregated and studied at the unit level. Gender, experience as registered nurse, full vs. part-time work, as well as individual deviance from unit mean in practice environment and satisfaction with work schedule flexibility, were included at the individual level. Both intention to leave the profession and the hospital due to dissatisfaction were studied.\n\n\nRESULTS\nRegarding intention to leave current workplace, there is variability at both country (6.9%) and unit (6.9%) level. However, for intention to leave the profession we found less variability at the country (4.6%) and unit level (3.9%). Intention to leave the workplace was strongly related to unit level variables. Additionally, individual characteristics and deviance from unit mean regarding practice environment and satisfaction with schedule flexibility were related to both outcomes. Major limitations of the study are its cross-sectional design and the fact that only turnover intention due to dissatisfaction was studied.\n\n\nCONCLUSIONS\nWe conclude that measures aiming to improve the practice environment and schedule flexibility would be a promising approach towards increased retention of registered nurses in both their current workplaces and the nursing profession as a whole and thus a way to counteract the nursing shortage across European countries.",
"title": ""
},
{
"docid": "ddade87617f832b3b93719c7788d2363",
"text": "Attributed network embedding has been widely used in modeling real-world systems. The obtained low-dimensional vector representations of nodes preserve their proximity in terms of both network topology and node attributes, upon which different analysis algorithms can be applied. Recent advances in explanation-based learning and human-in-the-loop models show that by involving experts, the performance of many learning tasks can be enhanced. It is because experts have a better cognition in the latent information such as domain knowledge, conventions, and hidden relations. It motivates us to employ experts to transform their meaningful cognition into concrete data to advance network embedding. However, learning and incorporating the expert cognition into the embedding remains a challenging task. Because expert cognition does not have a concrete form, and is difficult to be measured and laborious to obtain. Also, in a real-world network, there are various types of expert cognition such as the comprehension of word meaning and the discernment of similar nodes. It is nontrivial to identify the types that could lead to a significant improvement in the embedding. In this paper, we study a novel problem of exploring expert cognition for attributed network embedding and propose a principled framework NEEC. We formulate the process of learning expert cognition as a task of asking experts a number of concise and general queries. Guided by the exemplar theory and prototype theory in cognitive science, the queries are systematically selected and can be generalized to various real-world networks. The returned answers from the experts contain their valuable cognition. We model them as new edges and directly add into the attributed network, upon which different embedding methods can be applied towards a more informative embedding representation. Experiments on real-world datasets verify the effectiveness and efficiency of NEEC. ACM Reference Format: Xiao Huang, Qingquan Song, Jundong Li, Xia Hu. 2018. Exploring Expert Cognition for Attributed Network Embedding. In Proceedings of WSDM’18. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3159652.3159655",
"title": ""
},
{
"docid": "246f9ed9cd1abc235bc2b2f9cd03f448",
"text": "This paper presents a comprehensive review of the studies conducted in the application of data mining techniques focus on credit scoring from 2000 to 2012. Yet, there isn‟t adequate literature reviews in the field of data mining applications in credit scoring. Using a novel research approach, this paper investigates academic and systematic literature review and includes all of the journals in the Science direct online journal database. The studies are categorized and classified into enterprise, individual and small and midsized (SME) companies credit scoring. Data mining techniques are also categorized to single classifier, Hybrid methods and Ensembles. Variable selection methods are also investigated separately because there is a major issue in a credit scoring problem. The findings of this literature review reveals that data mining techniques are mostly applied to an individual credit score and there is inadequate research on enterprise and SME credit scoring. Also ensemble methods, support vector machines and neural network methods are the most favorite techniques used recently. Hybrid methods are investigated in four categories and two of the frequently used combinations are “classification and classification” and “clustering and classification”. This review of literature analysis provides scope for future research and concludes with some helpful suggestions for further research.",
"title": ""
},
{
"docid": "1c6e9cbb9d935cdbe8e2f361b07398d9",
"text": "We present a fluid-dynamic model for the simulation of urban traffic networks with road sections of different lengths and capacities. The model allows one to efficiently simulate the transitions between free and congested traffic, taking into account congestion-responsive traffic assignment and adaptive traffic control. We observe dynamic traffic patterns which significantly depend on the respective network topology. Synchronization is only one interesting example and implies the emergence of green waves. In this connection, we will discuss adaptive strategies of traffic light control which can considerably improve throughputs and travel times, using self-organization principles based on local interactions between vehicles and traffic lights. Similar adaptive control principles can be applied to other queueing networks such as production systems. In fact, we suggest to turn push operation of traffic systems into pull operation: By removing vehicles as fast as possible from the network, queuing effects can be most efficiently avoided. The proposed control concept can utilize the cheap sensor technologies available in the future and leads to reasonable operation modes. It is flexible,",
"title": ""
},
{
"docid": "dcc23635c83035dcc6d535dc27842abe",
"text": "A Ballbot is a self-balanced mobile robot designed for omnidirectional mobility. The structure self-balanced on a ball giving to the system only one contact point with the ground. In this paper the dynamical model of a Ballbot system is investigated in order to find a linearized model which is able to describe the three-dimensional dynamics of the mechatronic system by a simpler set of equations. Due to the system's complexity, the equations of motion are often obtained by the energy method of Lagrange, they consist of a vast nonlinear ordinary differential equations (ODE), which are often numerically linearized for small perturbations. The present paper proposes to model the whole 3D dynamics of the Ballbot with the Newton-Euler formalism and Tait-Bryan angles in order to describe the model in terms of the system's physical parameters without resorting to numeric solution. This physical modelling is introduced to allow the simplification of the dynamic motion control of the ballbot.",
"title": ""
},
{
"docid": "78afd117aa7fba5987481de3a2a605b8",
"text": "Character-based sequence labeling framework is flexible and efficient for Chinese word segmentation (CWS). Recently, many character-based neural models have been applied to CWS. While they obtain good performance, they have two obvious weaknesses. The first is that they heavily rely on manually designed bigram feature, i.e. they are not good at capturing n-gram features automatically. The second is that they make no use of full word information. For the first weakness, we propose a convolutional neural model, which is able to capture rich n-gram features without any feature engineering. For the second one, we propose an effective approach to integrate the proposed model with word embeddings. We evaluate the model on two benchmark datasets: PKU and MSR. Without any feature engineering, the model obtains competitive performance — 95.7% on PKU and 97.3% on MSR. Armed with word embeddings, the model achieves state-of-the-art performance on both datasets — 96.5% on PKU and 98.0% on MSR, without using any external labeled resource.",
"title": ""
},
{
"docid": "9c98023ef208a8c15515bd46737b056e",
"text": "Web usage Mining is an area of web mining which deals with the extraction of interesting knowledge from logging information produced by web server. Different data mining techniques can be applied on web usage data to extract user access patterns and this knowledge can be used in variety of applications such as system improvement, web site modification, business intelligence etc. Web usage mining requires data abstraction for pattern discovery. This data abstraction is achieved through data preprocessing. In this paper we survey about the data preprocessing activities like data cleaning, data reduction and related algorithms.",
"title": ""
}
] |
scidocsrr
|
a02464fbe216c481ad21f81a990c7add
|
Improving Short Text Classification Using Unlabeled Background Knowledge
|
[
{
"docid": "a2fd33f276a336e2a33d84c2a0abc283",
"text": "The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves e ectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window de nition of \\local\", we achieve a 16% improvement.",
"title": ""
},
{
"docid": "70e6148316bd8915afd8d0908fb5ab0d",
"text": "We consider the problem of using a large unla beled sample to boost performance of a learn ing algorithm when only a small set of labeled examples is available In particular we con sider a problem setting motivated by the task of learning to classify web pages in which the description of each example can be partitioned into two distinct views For example the de scription of a web page can be partitioned into the words occurring on that page and the words occurring in hyperlinks that point to that page We assume that either view of the example would be su cient for learning if we had enough labeled data but our goal is to use both views together to allow inexpensive unlabeled data to augment a much smaller set of labeled ex amples Speci cally the presence of two dis tinct views of each example suggests strategies in which two learning algorithms are trained separately on each view and then each algo rithm s predictions on new unlabeled exam ples are used to enlarge the training set of the other Our goal in this paper is to provide a PAC style analysis for this setting and more broadly a PAC style framework for the general problem of learning from both labeled and un labeled data We also provide empirical results on real web page data indicating that this use of unlabeled examples can lead to signi cant improvement of hypotheses in practice This paper is to appear in the Proceedings of the Conference on Computational Learning Theory This research was supported in part by the DARPA HPKB program under contract F and by NSF National Young Investigator grant CCR INTRODUCTION In many machine learning settings unlabeled examples are signi cantly easier to come by than labeled ones One example of this is web page classi cation Suppose that we want a program to electronically visit some web site and download all the web pages of interest to us such as all the CS faculty member pages or all the course home pages at some university To train such a system to automatically classify web pages one would typically rely on hand labeled web pages These labeled examples are fairly expensive to obtain because they require human e ort In contrast the web has hundreds of millions of unlabeled web pages that can be inexpensively gathered using a web crawler Therefore we would like our learning algorithm to be able to take as much advantage of the unlabeled data as possible This web page learning problem has an interesting feature Each example in this domain can naturally be described using several di erent kinds of information One kind of information about a web page is the text appearing on the document itself A second kind of information is the anchor text attached to hyperlinks pointing to this page from other pages on the web The two problem characteristics mentioned above availability of both labeled and unlabeled data and the availability of two di erent kinds of information about examples suggest the following learning strat egy Using an initial small set of labeled examples nd weak predictors based on each kind of information for instance we might nd that the phrase research inter ests on a web page is a weak indicator that the page is a faculty home page and we might nd that the phrase my advisor on a link is an indicator that the page being pointed to is a faculty page Then attempt to bootstrap from these weak predictors using unlabeled data For instance we could search for pages pointed to with links having the phrase my advisor and use them as probably positive examples to further train a 
learning algorithm based on the words on the text page and vice versa We call this type of bootstrapping co training and it has a close connection to bootstrapping from incomplete data in the Expectation Maximization setting see for instance The question this raises is is there any reason to believe co training will help Our goal is to address this question by developing a PAC style theoretical framework to better understand the issues involved in this approach We also give some preliminary empirical results on classifying university web pages see Section that are encouraging in this context More broadly the general question of how unlabeled examples can be used to augment labeled data seems a slippery one from the point of view of standard PAC as sumptions We address this issue by proposing a notion of compatibility between a data distribution and a target function Section and discuss how this relates to other approaches to combining labeled and unlabeled data Section",
"title": ""
}
] |
[
{
"docid": "9cebb39b2eb340a21c4f64c1bb42217e",
"text": "Text characters and strings in natural scene can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.",
"title": ""
},
{
"docid": "c6134f5691f1422921f6ee260b936745",
"text": "High-level feature extraction and hierarchical feature representation of image objects with a convolutional neural network (CNN) can overcome the limitations of the traditional building detection models using middle/low-level features extracted from a complex background. Aiming at the drawbacks of manual village location, high cost, and the limited accuracy of building detection in the existing rural building detection models, a two-stage CNN model is proposed in this letter to detect rural buildings in high-resolution imagery. Simulating the hierarchical processing mechanism of human vision, the proposed model is constructed with two CNNs, whose architectures can automatically locate villages and efficiently detect buildings, respectively. This two-stage CNN model effectively reduces the complexity of the background and improves the efficiency of rural building detection. The experiments showed that the proposed model could automatically locate all the villages in the two study areas, achieving a building detection accuracy of 88%. Compared with the existing models, the proposed model was proved to be effective in detecting buildings in rural areas with a complex background.",
"title": ""
},
{
"docid": "cb5ec5bc55e825289fc8c3251c5b8f92",
"text": "This research presents a review of the psychometric measures on boredom that have been developed over the past 25 years. Specifically, the author examined the Boredom Proneness Scale (BPS; R. Farmer & N. D. Sundberg, 1986), the job boredom scales by E. A. Grubb (1975) and T. W. Lee (1986), a boredom coping measure (J. A. Hamilton, R. J. Haier, & M. S. Buchsbaum, 1984), 2 scales that assess leisure and free-time boredom (S. E. Iso-Ahola & E. Weissinger, 1990; M. G. Ragheb & S. P. Merydith, 2001), the Sexual Boredom Scale (SBS; J. D. Watt & J. E. Ewing, 1996), and the Boredom Susceptibility (BS) subscale of the Sensation Seeking Scale (M. Zuckerman, 1979a). Particular attention is devoted to discussing the literature regarding the psychometric properties of the BPS because it is the only full-scale measure on the construct of boredom.",
"title": ""
},
{
"docid": "16560cdfe50fc908ae46abf8b82e620f",
"text": "While there seems to be a general agreement that next years' systems will include many processing cores, it is often overlooked that these systems will also include an increasing number of different cores (we already see dedicated units for graphics or network processing). Orchestrating the diversity of processing functionality is going to be a major challenge in the upcoming years, be it to optimize for performance or for minimal energy consumption.\n We expect field-programmable gate arrays (FPGAs or \"programmable hardware\") to soon play the role of yet another processing unit, found in commodity computers. It is clear that the new resource is going to be too precious to be ignored by database systems, but it is unclear how FPGAs could be integrated into a DBMS. With a focus on database use, this tutorial introduces into the emerging technology, demonstrates its potential, but also pinpoints some challenges that need to be addressed before FPGA-accelerated database systems can go mainstream. Attendees will gain an intuition of an FPGA development cycle, receive guidelines for a \"good\" FPGA design, but also learn the limitations that hardware-implemented database processing faces. Our more high-level ambition is to spur a broader interest in database processing on novel hardware technology.",
"title": ""
},
{
"docid": "af9c5eb4d0e9173a214fc9741056b8e4",
"text": "Point cloud matching is one of the key technologies of optical three-dimensional contour measurement. Most of the point cloud matching without landmark used the iterative closest point algorithm. In order to improve the performance of the iterative closest point algorithm, the two-step iterative closest point algorithm was proposed. The improved algorithm is divided into a rough matching step and accurate matching step. Rough matching used the principal component analysis algorithm, while the fine matching used the improved iterative closest point algorithm. Compared with the classic iterative closest point algorithm, the improved algorithm can match the partial coincident point cloud. At the same time, the experiment can validate the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "fc820c587741b24e1b0ac436077b8947",
"text": "In this paper, we will examine the problem of clustering massive graph streams. Graph clustering poses significant challenges because of the complex structures which may be present in the underlying data. The massive size of the underlying graph makes explicit structural enumeration very difficult. Consequently, most techniques for clustering multi-dimensional data are difficult to generalize to the case of massive graphs. Recently, methods have been proposed for clustering graph data, though these methods are designed for static data, and are not applicable to the case of graph streams. Furthermore, these techniques are especially not effective for the case of massive graphs, since a huge number of distinct edges may need to be tracked simultaneously. This results in storage and computational challenges during the clustering process. In order to deal with the natural problems arising from the use of massive disk-resident graphs, we will propose a technique for creating hash-compressed micro-clusters from graph streams. The compressed micro-clusters are designed by using a hash-based compression of the edges onto a smaller domain space. We will provide theoretical results which show that the hash-based compression continues to maintain bounded accuracy in terms of distance computations. We will provide experimental results which illustrate the accuracy and efficiency of the underlying method.",
"title": ""
},
{
"docid": "a7181a3ddebed92d352ecf67e76c6e81",
"text": "Empirical, hypothesis-driven, experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is comprised of a back-end ontology which can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a query-able form and makes it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also expandable to support custom experimental measurements and data types through a plug-in visualization widget architecture. We demonstrate VEEVVIE through several case studies of visual analysis, performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.",
"title": ""
},
{
"docid": "c1820cff90539d3455ce8e552dce3ddc",
"text": "Java is becoming a viable platform for hard real-time computing. There are production and research real-time Java VMs, as well as applications in both military and civil sector. Technological advances and increased adoption of Real-time Java contrast significantly with the lack of real-time benchmarks. The few benchmarks that exist are either low-level synthetic micro-benchmarks, or benchmarks used internally by companies, making it difficult to independently verify and repeat reported results.\n This paper presents the x (Collision Detector) benchmark suite, an open source application benchmark suite that targets different hard and soft real-time virtual machines. x is, at its core, a real-time benchmark with a single periodic task, which implements aircraft collision detection based on simulated radar frames. The benchmark can be configured to use different sets of real-time features and comes with a number of workloads. We describe the architecture of the benchmark and characterize the workload based on input parameters.",
"title": ""
},
{
"docid": "838c50eaf711cfb30839feb826e30171",
"text": "Security is a concern in the design of a wide range of embedded systems. Extensive research has been devoted to the development of cryptographic algorithms that provide the theoretical underpinnings of information security. Functional security mechanisms, such as security protocols, suitably employ these mathematical primitives in order to achieve the desired security objectives. However, functional security mechanisms alone cannot ensure security, since most embedded systems present attackers with an abundance of opportunities to observe or interfere with their implementation, and hence to compromise their theoretical strength. This paper surveys various tamper or attack techniques, and explains how they can be used to undermine or weaken security functions in embedded systems. Tamper-resistant design refers to the process of designing a system architecture and implementation that is resistant to such attacks. We outline approaches that have been proposed to design tamper-resistant embedded systems, with examples drawn from recent commercial products.",
"title": ""
},
{
"docid": "40bdadc044f5342534ba5387c47c6456",
"text": "A numerical study of atmospheric turbulence effects on wind-turbine wakes is presented. Large-eddy simulations of neutrally-stratified atmospheric boundary layer flows through stand-alone wind turbines were performed over homogeneous flat surfaces with four different aerodynamic roughness lengths. Emphasis is placed on the structure and characteristics of turbine wakes in the cases where the incident flows to the turbine have the same mean velocity at the hub height but different mean wind shears and turbulence intensity levels. The simulation results show that the different turbulence intensity levels of the incoming flow lead to considerable influence on the spatial distribution of the mean velocity deficit, turbulence intensity, and turbulent shear stress in the wake region. In particular, when the turbulence intensity level of the incoming flow is higher, the turbine-induced wake (velocity deficit) recovers faster, and the locations of the maximum turbulence intensity and turbulent stress are closer to the turbine. A detailed analysis of the turbulence kinetic energy budget in the wakes reveals also an important effect of the incoming flow turbulence level on the magnitude and spatial distribution of the shear production and transport terms.",
"title": ""
},
{
"docid": "519ca18e1450581eb3a7387568dce7cf",
"text": "This paper illustrates the design of a process compensated bias for asynchronous CML dividers for a low power, high performance LO divide chain operating at 4Ghz of input RF frequency. The divider chain provides division by 4,8,12,16,20, and 24. It provides a differential CML level signal for the in-loop modulated transmitter, and 25% duty cycle non-overlapping rail to rail waveforms for I/Q receiver for driving passive mixer. Asynchronous dividers have been used to realize divide by 3 and 5 with 50% duty cycle, quadrature outputs. All the CML dividers use a process compensated bias to compensate for load resistor variation and tail current variation using dual analog feedback loops. Frabricated in 180nm CMOS technology, the divider chain operate over industrial temperature range (−40 to 90°C), and provide outputs in 138–960Mhz range, consuming 2.2mA from 1.8V regulated supply at the highest output frequency.",
"title": ""
},
{
"docid": "f56f2119b3e65970db35676fe1cac9ba",
"text": "While behavioral and social sciences occupations comprise one of the largest portions of the \"STEM\" workforce, most studies of diversity in STEM overlook this population, focusing instead on fields such as biomedical or physical sciences. This study evaluates major demographic trends and productivity in the behavioral and social sciences research (BSSR) workforce in the United States during the past decade. Our analysis shows that the demographic trends for different BSSR fields vary. In terms of gender balance, there is no single trend across all BSSR fields; rather, the problems are field-specific, and disciplines such as economics and political science continue to have more men than women. We also show that all BSSR fields suffer from a lack of racial and ethnic diversity. The BSSR workforce is, in fact, less representative of racial and ethnic minorities than are biomedical sciences or engineering. Moreover, in many BSSR subfields, minorities are less likely to receive funding. We point to various funding distribution patterns across different demographic groups of BSSR scientists, and discuss several policy implications.",
"title": ""
},
{
"docid": "86d705256c19f63dac90162b33818a9b",
"text": "Despite the recent success of deep-learning based semantic segmentation, deploying a pre-trained road scene segmenter to a city whose images are not presented in the training set would not achieve satisfactory performance due to dataset biases. Instead of collecting a large number of annotated images of each city of interest to train or refine the segmenter, we propose an unsupervised learning approach to adapt road scene segmenters across different cities. By utilizing Google Street View and its timemachine feature, we can collect unannotated images for each road scene at different times, so that the associated static-object priors can be extracted accordingly. By advancing a joint global and class-specific domain adversarial learning framework, adaptation of pre-trained segmenters to that city can be achieved without the need of any user annotation or interaction. We show that our method improves the performance of semantic segmentation in multiple cities across continents, while it performs favorably against state-of-the-art approaches requiring annotated training data.",
"title": ""
},
{
"docid": "dbe636c3e37e8ee83b696274e27ee6df",
"text": "Redox state is a term used widely in the research field of free radicals and oxidative stress. Unfortunately, it is used as a general term referring to relative changes that are not well defined or quantitated. In this review we provide a definition for the redox environment of biological fluids, cell organelles, cells, or tissue. We illustrate how the reduction potential of various redox couples can be estimated with the Nernst equation and show how pH and the concentrations of the species comprising different redox couples influence the reduction potential. We discuss how the redox state of the glutathione disulfide-glutathione couple (GSSG/2GSH) can serve as an important indicator of redox environment. There are many redox couples in a cell that work together to maintain the redox environment; the GSSG/2GSH couple is the most abundant redox couple in a cell. Changes of the half-cell reduction potential (E(hc)) of the GSSG/2GSH couple appear to correlate with the biological status of the cell: proliferation E(hc) approximately -240 mV; differentiation E(hc) approximately -200 mV; or apoptosis E(hc) approximately -170 mV. These estimates can be used to more fully understand the redox biochemistry that results from oxidative stress. These are the first steps toward a new quantitative biology, which hopefully will provide a rationale and understanding of the cellular mechanisms associated with cell growth and development, signaling, and reductive or oxidative stress.",
"title": ""
},
{
"docid": "a16c21e6a296c95ccc647e5bb6d2bb61",
"text": "A noncoherent amplitude shift keying (ASK)-based RF-interconnect (RF-I) system design for off-chip communication is analyzed. The proposed RF-I system exploits the simple architecture and characteristics of noncoherent ASK modulation. This provides an efficient way of increasing interconnect bandwidth by transmitting an RF-modulated data stream simultaneously with a conventional baseband counterpart over a shared off-chip transmission line. Both analysis and tested results prove that the performance of the proposed dual-band (RF+baseband) interconnect system is not limited by thermal noise interference. Therefore, a more sophisticated modulation scheme and/or coherent receiving scheme becomes unnecessary within the scope of system requirements. In addition, it confirms that the proposed inductive coupling network is able to support simultaneous bidirectional communications without using complicated replica circuits or additional filters to isolate simultaneous baseband and RF-band data streams.",
"title": ""
},
{
"docid": "d23c5fc626d0f7b1d9c6c080def550b8",
"text": "Gamification of education is a developing approach for increasing learners’ motivation and engagement by incorporating game design elements in educational environments. With the growing popularity of gamification and yet mixed success of its application in educational contexts, the current review is aiming to shed a more realistic light on the research in this field by focusing on empirical evidence rather than on potentialities, beliefs or preferences. Accordingly, it critically examines the advancement in gamifying education. The discussion is structured around the used gamification mechanisms, the gamified subjects, the type of gamified learning activities, and the study goals, with an emphasis on the reliability and validity of the reported outcomes. To improve our understanding and offer a more realistic picture of the progress of gamification in education, consistent with the presented evidence, we examine both the outcomes reported in the papers and how they have been obtained. While the gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach.",
"title": ""
},
{
"docid": "bd3637e0bd664392670d22c6d29b8f33",
"text": "Traffic prediction has drawn increasing attention in AI research field due to the increasing availability of large-scale traffic data and its importance in the real world. For example, an accurate taxi demand prediction can assist taxi companies in pre-allocating taxis. The key challenge of traffic prediction lies in how to model the complex spatial dependencies and temporal dynamics. Although both factors have been considered in modeling, existing works make strong assumptions about spatial dependence and temporal dynamics, i.e., spatial dependence is stationary in time, and temporal dynamics is strictly periodical. However, in practice the spatial dependence could be dynamic (i.e., changing from time to time), and the temporal dynamics could have some perturbation from one period to another period. In this paper, we make two important observations: (1) the spatial dependencies between locations are dynamic; and (2) the temporal dependency follows daily and weekly pattern but it is not strictly periodic for its dynamic temporal shifting. To address these two issues, we propose a novel Spatial-Temporal Dynamic Network (STDN), in which a flow gating mechanism is introduced to learn the dynamic similarity between locations, and a periodically shifted attention mechanism is designed to handle long-term periodic temporal shifting. To the best of our knowledge, this is the first work that tackle both issues in a unified framework. Our experimental results on real-world traffic datasets verify the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "7ea07852113eff21d045579f2f089507",
"text": "The E.U. Framework Directive of 2008 draws a clear strategy for waste management, which places prevention as a priority, followed by reuse and recovery of materials and energy. Specific objectives are fixed by European legislation [1]. Italy has set a reduction target of municipal solid wastes and special nonhazardous wastes per unit of population by 5% in 2020 compared to 2010, while has also fully implemented the European target for recovery of materials by 2020 and is preparing to adapt the target of recycling [2]. Wastes production in Italy is equal to 170 million tons in 2010 [3] and shows a slight increase compared to 2009. This includes wastes called “special” (divided into hazardous and non hazardous); the municipal solid wastes constitute about one fifth of the total. In 2010, the special wastes amounted to around 138 million tons, of which about 130 Mt are non-hazardous wastes from construction and demolition, industries and waste treatment processes.",
"title": ""
},
{
"docid": "6eace0f6216d17b9041f1bed42459c40",
"text": "Predicting possible code-switching points can help develop more accurate methods for automatically processing mixed-language text, such as multilingual language models for speech recognition systems and syntactic analyzers. We present in this paper exploratory results on learning to predict potential codeswitching points in Spanish-English. We trained different learning algorithms using a transcription of code-switched discourse. To evaluate the performance of the classifiers, we used two different criteria: 1) measuring precision, recall, and F-measure of the predictions against the reference in the transcription, and 2) rating the naturalness of artificially generated code-switched sentences. Average scores for the code-switched sentences generated by our machine learning approach were close to the scores of those generated by humans.",
"title": ""
},
{
"docid": "d1eed1d7875930865944c98fbab5f7e1",
"text": "Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises of two steps. First, the OD location is identified using template matching and directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines the region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD location methodology obtained 98.3% success rate, while fovea location achieved 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and “gold standard” is 10.5% of estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.",
"title": ""
}
] |
scidocsrr
|
e99eb6b1b6c60663175a0234bb02946a
|
A data protection model for fog computing
|
[
{
"docid": "7d308c302065253ee1adbffad04ff3f1",
"text": "Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of certain file, which potentially puts the quality of the so-called `auditing-as-a-service' at risk; Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-sized blocks as basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.",
"title": ""
},
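The passage above contrasts coarse-grained updates, where a small edit forces the authenticator of an entire fixed-size block to be recomputed, with fine-grained updates. The sketch below illustrates only that bookkeeping idea using plain SHA-256 block digests; it is a simplified stand-in, not the BLS-signature scheme the passage describes, and the 4 KiB block size and same-length overwrite are assumptions made for the example.

```python
# Illustrative per-block authenticators: a small edit re-hashes only the touched block.
import hashlib

BLOCK_SIZE = 4096  # bytes; arbitrary block size chosen for the sketch

def block_digests(data: bytes) -> list[bytes]:
    """One SHA-256 digest per fixed-size block of the file."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def apply_small_update(data: bytearray, digests: list[bytes],
                       offset: int, new_bytes: bytes) -> None:
    """Overwrite a few bytes in place and refresh only the digests of affected blocks."""
    data[offset:offset + len(new_bytes)] = new_bytes
    first = offset // BLOCK_SIZE
    last = (offset + len(new_bytes) - 1) // BLOCK_SIZE
    for b in range(first, last + 1):
        digests[b] = hashlib.sha256(data[b * BLOCK_SIZE:(b + 1) * BLOCK_SIZE]).digest()

# Example: a 5-byte edit touches one block, so one digest is recomputed
# instead of re-authenticating the whole file.
f = bytearray(b"\x00" * (64 * BLOCK_SIZE))
tags = block_digests(bytes(f))
apply_small_update(f, tags, offset=10_000, new_bytes=b"hello")
```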
{
"docid": "7512d936d3d170774ad34bac9b8adef3",
"text": "Recently, the concept of Internet of Things (IoT) is attracting much attention due to the huge potential. IoT uses the Internet as a key infrastructure to interconnect numerous geographically diversified IoT nodes which usually have scare resources, and therefore cloud is used as a key back-end supporting infrastructure. In the literature, the collection of the IoT nodes and the cloud is collectively called as an IoT cloud. Unfortunately, the IoT cloud suffers from various drawbacks such as huge network latency as the volume of data which is being processed within the system increases. To alleviate this issue, the concept of fog computing is introduced, in which foglike intermediate computing buffers are located between the IoT nodes and the cloud infrastructure to locally process a significant amount of regional data. Compared to the original IoT cloud, the communication latency as well as the overhead at the backend cloud infrastructure could be significantly reduced in the fog computing supported IoT cloud, which we will refer as IoT fog. Consequently, several valuable services, which were difficult to be delivered by the traditional IoT cloud, can be effectively offered by the IoT fog. In this paper, however, we argue that the adoption of IoT fog introduces several unique security threats. We first discuss the concept of the IoT fog as well as the existing security measures, which might be useful to secure IoT fog. Then, we explore potential threats to IoT fog.",
"title": ""
}
] |
[
{
"docid": "380838601f3233b01a40b5e0314b507e",
"text": "Effective organizational beaming requires high absorptive capacity, which has two major elements: prior knowledge base and intensity of effort. Hyundai Motor Company, the most dynamic automobile producer in developing countries, pursued a strategy of independence in developing absorptive capacity. In its process of advancing from one phase to the next through the preparation for and acquisition, assimilation, and improvement of foreign technologies, Hyundai acquired migratory knowledge to expand its prior knowledge base and proactively constructed crises as a strategic means of intensifying its beaming effort. Unlike externally evoked crises, proactively constructed internal crises present a clew performance gap, shift beaming orientation from imitation to innovation, and increase the intensity of effort in organizational learning. Such crisis construction is an evocative and galvanizing device in the personal repertoires of proactive top managers. A similar process of opportunistic learning is also evident in other industries in Korea. (Organizational Learning; Absorptive Capacity; Crisis Construction; Knowledge; Catching-up; Hyundai Motor; Korea) Organizational learning and innovation have become crucially important subjects in management. Research on these subjects, however, is concentrated mainly in advanced countries (e.g., Argyris and Schon 1978, Dodgson 1993, Nonaka and Takeuchi 1995, Utterback 1994, von Hippel 1988). Despite the fact that many developing countries have made significant progress in industrial, educational, and technological development, research on learning, capability building, and innovation in those countries is scanty (e.g., Fransman and King 1984, Kim 1997, Kim and Kirn 1985). Models that capture organizational learning and technological change in developing countries are essential to understand the dynamic process of capability building in catching-up in such countries and to extend the theories developed in advanced countries. Understanding the catching-up process is also relevant and important to firins in advanced countries. Not all firrns can be pioneers of novel breakdiroughs, even in those countries. Most firms must invest in second-hand learning to remain competitive. Nevertheless, much less attention is paid to the imitative catching-up process than to the innovative pioneering process. For instance, ABI/Inform, a computerized business database, lists a total of 9,006 articles on the subject of innovation but only 145 on imitation (Schnaars 1994). A crisis is usually regarded as an unpopular, largely negative phenomenon in management. It can, however, be an appropriate metaphor for strategic and technological transformation. Several observers postulate that constructing and then resolving organizational crises can be an effective means of opportunistic learning (e.g., Nonaka 1988, Pitt 1990, Schon 1967, Weick 1988), but no one has clearly linked the construct variable to corresponding empirical evidence. The purpose of this article is to develop a model of organizational beaming in an imitative catching-up process, and at the same time a model of crisis construction and organizational learning, by empirically analysing the history of technological transformation at the Hyundai Motor Company (hereinafter Hyundai), the most dynamic automaker in developing countries, as a case in point. 
Despite the prediction that none of South Korea's automakers will survive the global shakeout of the 1990s, having been driven out or relegated to niche markets dependent on alliances with leading foreign car producers (Far Eastern Economic Review 1992), Hyundai is determined to become a leading automaker on its own. Unlike most other automobile companies in developing countries, Hyundai followed an explicit policy of maintaining full ownership of all of its 45 subsidiaries, entering the auto industry in 1967 as a latecomer without foreign equity participation. Hyundai has progressed remarkably since then. In quantitative terms, Hyundai increased its production more than tenfold every decade, from 614 cars in 1968, to 7,009 in 1973, to 103,888 in 1983, and to 1,134,611 in 1994, rapidly surpassing other automakers in Korea, and steadily ascending from being the sixteenth-largest producer in the world in 1991 to being the thirteenth largest in 1994. Hyundai is now the largest automobile producer in a developing country. It produced its one millionth car in January 1986, taking 18 years to reach that level of production in contrast to 29 years for Toyota and 43 years for Mazda (Hyun and Lee 1989). In qualitative terms, Hyundai began assembling a Ford compact car on a knockdown basis in 1967. It rapidly assimilated foreign technology and developed sufficient capability to unveil its own designs, Accent and Avante, in 1994 and 1995, respectively. The company thus eliminated the royalty payment on the foreign license and was able to export production and design technology abroad. Hyundai's rapid surge raises several research questions: (1) How did Hyundai acquire the technological capability to transform itself so expeditiously from imitative \"learning by doing\" to innovative \"learning by research\"? (2) How does learning in the catching-up process in a developing country differ from learning in the pioneering process in advanced countries? (3) Why is crisis construction an effective mechanism for organizational learning? (4) Can Hyundai's learning model be emulated by other catching-up firms? (5) What are the implications of Hyundai's model for future research? The following section briefly reviews theories related to organizational learning and knowledge creation. Then Hyundai is analyzed as a case in point to illustrate how the Korean firm has expedited organizational learning and to answer the research questions. Crises and Organizational Learning Organizational learning, whether to imitate or to innovate, takes place at two levels: the individual and organizational. The prime actors in the process of organizational learning are individuals within the firm. Organizational learning is not, however, the simple sum of individual learning (Hedberg 1981); rather, it is the process whereby knowledge is created, is distributed across the organization, is communicated among organization members, has consensual validity, and is integrated into the strategy and management of the organization (Duncan and Weiss 1978). Individual learning is therefore an indispensable condition for organizational learning but cannot be the sufficient condition. Organizations learn only when individual insights and skills become embodied in organizational routines, practices, and beliefs (Attewell 1992). Only effective organizations can translate individual learning into organizational learning (Hedberg 1980, Kim 1993, Shrivastava 1983).
Absorptive Capacity Organizational learning is a function of an organization's absorptive capacity. Absorptive capacity requires learning capability and develops problem-solving skills. Learning capability is the capacity to assimilate knowledge (for imitation), whereas problem-solving skills represent a capacity to create new knowledge (for innovation). Absorptive capacity has two important elements, prior knowledge base and intensity of effort (Cohen and Levinthal 1990). Prior knowledge base consists of individual units of knowledge available within the organization. Accumulated prior knowledge increases the ability to make sense of and to assimilate and use new knowledge. Relevant prior knowledge base comprises basic skills and general knowledge in the case of developing countries, but includes the most recent scientific and technological knowledge in the case of industrially advanced countries. Hence, prior knowledge base should be assessed in relation to task difficulty (Kim 1995). Intensity of effort represents the amount of energy expended by organizational members to solve problems. Exposure of a firm to relevant external knowledge is insufficient unless an effort is made to internalize it. Learning how to solve problems is usually accomplished through many practice trials involving related problems (Harlow 1959). Hence, considerable time and effort must be directed to learning how to solve problems before complex problems can be addressed. Such effort intensifies interaction among organizational members, thus facilitating knowledge conversion and creation at the organizational level. As shown in Figure 1, prior knowledge base and intensity of effort in the organization constitute a 2 X 2 matrix that indicates the level of absorptive capacity. When both are high (quadrant 1), absorptive capacity is high; when both are low (quadrant 4), absorptive capacity is low. Organizations with high prior knowledge in relation to task difficulty and low intensity of effort (quadrant 2) will gradually lose their absorptive capacity, moving rapidly down to quadrant 4, because their prior knowledge base will become obsolete as task-related technology moves along its trajectory. In contrast, organizations with low prior knowledge in relation to task difficulty and high intensity of effort (quadrant 3) will be able to acquire absorptive capacity, moving progressively to quadrant 1, as repeated efforts to learn and solve problems elevate the level of relevant prior knowledge (Kim 1995). Knowledge and Learning Many social scientists have attempted to delineate knowledge dimensions (Garud and Nayyar 1994, Kogut and Zander 1992, Polanyi 1966, Rogers 1983, Winter 1987). Polanyi's two dimensions, explicit and tacit, are the most widely accepted. Explicit knowledge is knowledge that is codified and transmittable in formal, systematic language. It therefore can be acquired in the form of books, technical specifications, and designs, or as embodied in machines. Tacit knowledge, in contrast, is so deeply rooted in the human mind and body that it is difficult to codify and communicate and can be expressed only th",
"title": ""
},
{
"docid": "5d963d172ac029ba8d3c414e8650db7b",
"text": "Although the Internet has transformed the way our world operates, it has also served as a venue for cyberbullying, a serious form of misbehavior among youth. With many of today's youth experiencing acts of cyberbullying, a growing body of literature has begun to document the prevalence, predictors, and outcomes of this behavior, but the literature is highly fragmented and lacks theoretical focus. Therefore, our purpose in the present article is to provide a critical review of the existing cyberbullying research. The general aggression model is proposed as a useful theoretical framework from which to understand this phenomenon. Additionally, results from a meta-analytic review are presented to highlight the size of the relationships between cyberbullying and traditional bullying, as well as relationships between cyberbullying and other meaningful behavioral and psychological variables. Mixed effects meta-analysis results indicate that among the strongest associations with cyberbullying perpetration were normative beliefs about aggression and moral disengagement, and the strongest associations with cyberbullying victimization were stress and suicidal ideation. Several methodological and sample characteristics served as moderators of these relationships. Limitations of the meta-analysis include issues dealing with causality or directionality of these associations as well as generalizability for those meta-analytic estimates that are based on smaller sets of studies (k < 5). Finally, the present results uncover important areas for future research. We provide a relevant agenda, including the need for understanding the incremental impact of cyberbullying (over and above traditional bullying) on key behavioral and psychological outcomes.",
"title": ""
},
{
"docid": "a338df86cf504d246000c42512473f93",
"text": "Natural Language Processing (NLP) has emerged with a wide scope of research in the area. The Burmese language, also called the Myanmar Language is a resource scarce, tonal, analytical, syllable-timed and principally monosyllabic language with Subject-Object-Verb (SOV) ordering. NLP of Burmese language is also challenged by the fact that it has no white spaces and word boundaries. Keeping these facts in view, the current paper is a first formal attempt to present a bibliography of research works pertinent to NLP tasks in Burmese language. Instead of presenting mere catalogue, the current work is also specifically elaborated by annotations as well as classifications of NLP task research works in NLP related categories. The paper presents the state-of-the-art of Burmese NLP tasks. Both annotations and classifications of NLP tasks of Burmese language are useful to the scientific community as it shows where the field of research in Burmese NLP is going. In fact, to the best of author’s knowledge, this is first work of its kind worldwide for any language. For a period spanning more than 25 years, the paper discusses Burmese language Word Identification, Segmentation, Disambiguation, Collation, Semantic Parsing and Tokenization followed by Part-Of-Speech (POS) Tagging, Machine Translation Systems (MTS), Text Keying/Input, Recognition and Text Display Methods. Burmese language WordNet, Search Engine and influence of other languages on Burmese language are also discussed.",
"title": ""
},
{
"docid": "54a6a5a6dfb38861a94f779d001bacb4",
"text": "The information security community has come to realize that the weakest link in a cybersecurity chain is human behavior. To develop effective cybersecurity training programs for employees in the workplace, it is necessary to identify factors that contribute to employees’ cybersecurity behaviors and then build a theoretical model to understand how these factors affect employees’ self-reported security behavior in the workplace. Supported by a grant from the National Science Foundation (NSF), we developed a model for studying employees’ self-reported cybersecurity behaviors, and conducted a survey study to investigate the cybersecurity behavior and beliefs of employees. Five-hundred-seventy-nine employees from various U.S. organizations and companies completed an online survey with 87 items carefully designed by six experts in cybersecurity, information technology, psychology, and decision science. The results from statistical analysis of the cybersecurity behavior survey questionnaire will be presented in this TREO Talk. Some of the key findings include: Prior Experience was correlated with self-reported cyber security behavior. However, it was not identified as a unique predictor in our regression analysis. This suggests that the prior training may indirectly affect cybersecurity behavior through other variables. Peer Behavior was not a unique predictor of self-reported cybersecurity behavior. Perceptions of peer behavior may reflect people’s own self-efficacy with cybersecurity and their perceptions of the benefits from cybersecurity behaviors. The regression model revealed four unique predictors of self-reported cybersecurity behavior: Computer Skill, Perceived Benefits, Perceived Barriers, and Security Self-efficacy. These variables should be assessed to identify employees who are at risk of cyber attacks and could be the target of interventions. There are statistically significant gender-wise differences in terms of computer skills, prior experience, cues-to-action, security self-efficacy and self-reported cybersecurity behaviors. Since women’s self-efficacy is significantly lower than men, women’s self-efficacy may be a target for intervention.",
"title": ""
},
{
"docid": "b2ebad4a19cdfce87e6b69a25ba6ab49",
"text": "Collaborative filtering have become increasingly important with the development of Web 2.0. Online shopping service providers aim to provide users with quality list of recommended items that will enhance user satisfaction and loyalty. Matrix factorization approaches have become the dominant method as they can reduce the dimension of the data set and alleviate the sparsity problem. However, matrix factorization approaches are limited because they depict each user as one preference vector. In practice, we observe that users may have different preferences when purchasing different subsets of items, and the periods between purchases also vary from one user to another. In this work, we propose a probabilistic approach to learn latent clusters in the large user-item matrix, and incorporate temporal information into the recommendation process. Experimental results on a real world dataset demonstrate that our approach significantly improves the conversion rate, precision and recall of state-of-the-art methods.",
"title": ""
},
{
"docid": "c4d1d0d636e23c377473fe631022bef1",
"text": "Electronic concept mapping tools provide a flexible vehicle for constructing concept maps, linking concept maps to other concept maps and related resources, and distributing concept maps to others. As electronic concept maps are constructed, it is often helpful for users to consult additional resources, in order to jog their memories or to locate resources to link to the map under construction. The World Wide Web provides a rich range of resources for these tasks—if the right resources can be found. This paper presents ongoing research on how to automatically generate Web queries from concept maps under construction, in order to proactively suggest related information to aid concept mapping. First, it examines how concept map structure and content can be exploited to automatically select terms to include in initial queries, based on studies of (1) how concept map structure influences human judgments of concept importance, and (2) the relative value of including information from concept labels and linking phrases. Second, it examines how a concept map can be used to refine future queries by reinforcing the weights of terms that have proven to be good discriminators for the topic of the concept map. The described methods are being applied to developing “intelligent suggesters” to support the concept mapping process.",
"title": ""
},
{
"docid": "8ce498cdbdec9bda55970d39bd9d6bee",
"text": "This paper is about the good side of modal logic, the bad side of modal logic, and how hybrid logic takes the good and fixes the bad. In essence, modal logic is a simple formalism for working with relational structures (or multigraphs). But modal logic has no mechanism for referring to or reasoning about the individual nodes in such structures, and this lessens its effectiveness as a representation formalism. In their simplest form, hybrid logics are upgraded modal logics in which reference to individual nodes is possible. But hybrid logic is a rather unusual modal upgrade. It pushes one simple idea as far as it will go: represent all information as formulas. This turns out to be the key needed to draw together a surprisingly diverse range of work (for example, feature logic, description logic and labelled deduction). Moreover, it displays a number of knowledge representation issues in a new light, notably the importance of sorting.",
"title": ""
},
{
"docid": "5c9652bf8620394b8f87cd898ad0699c",
"text": "FIHT2 algorithm defined by p = x . cos 0 + y .sin 0 + (a/(21()) . x . s ine at 0 5 6 < a / 2 and at p = x . cose + y . sin 0 + (aJ(2It')) . y . cos 0 at a / 2 5 0 < a is a Hough transform which requires nothing of the trigonometric and functional operations to generate the Hough distributions. It is demonstrated in this paper that the FIHT2 is a complete alternative of the usual Hough transform(HT) defined by p = x.cos O+y.sin 0 in the sense that the both transforms could work perfectly as a line detector. It is easy to show that the Hough curves of the FIIJT2 can be generated in a incremental way where addition operation is exclusively needed. It is also investigated that the difference between HT and FIHT2 could be estimated to be neglected.",
"title": ""
},
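For reference alongside the FIHT2 passage above, the sketch below accumulates votes for the conventional Hough transform ρ = x·cosθ + y·sinθ, which is the baseline the passage compares against; it is not the incremental, addition-only FIHT2 formulation, and the 1-degree angular quantization and integer ρ binning are arbitrary choices.

```python
# Conventional Hough transform for line detection (baseline HT, not FIHT2).
import numpy as np

def hough_lines(edge_img: np.ndarray, n_theta: int = 180):
    """Accumulate votes in (rho, theta) space for every edge pixel."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))                # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))            # 0 .. pi in 1-degree steps
    accumulator = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)                      # edge-pixel coordinates
    for theta_idx, theta in enumerate(thetas):
        rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(accumulator[:, theta_idx], rho, 1)   # one vote per edge pixel
    return accumulator, thetas, diag

# Peaks in the accumulator correspond to detected lines (rho, theta).
```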
{
"docid": "caea6d9ec4fbaebafc894167cfb8a3d6",
"text": "Although the positive effects of different kinds of physical activity (PA) on cognitive functioning have already been demonstrated in a variety of studies, the role of cognitive engagement in promoting children's executive functions is still unclear. The aim of the current study was therefore to investigate the effects of two qualitatively different chronic PA interventions on executive functions in primary school children. Children (N = 181) aged between 10 and 12 years were assigned to either a 6-week physical education program with a high level of physical exertion and high cognitive engagement (team games), a physical education program with high physical exertion but low cognitive engagement (aerobic exercise), or to a physical education program with both low physical exertion and low cognitive engagement (control condition). Executive functions (updating, inhibition, shifting) and aerobic fitness (multistage 20-m shuttle run test) were measured before and after the respective condition. Results revealed that both interventions (team games and aerobic exercise) have a positive impact on children's aerobic fitness (4-5% increase in estimated VO2max). Importantly, an improvement in shifting performance was found only in the team games and not in the aerobic exercise or control condition. Thus, the inclusion of cognitive engagement in PA seems to be the most promising type of chronic intervention to enhance executive functions in children, providing further evidence for the importance of the qualitative aspects of PA.",
"title": ""
},
{
"docid": "fdbcf90ffeebf9aab41833df0fff23e6",
"text": "(Under the direction of Anselmo Lastra) For image synthesis in computer graphics, two major approaches for representing a surface's appearance are texture mapping, which provides spatial detail, such as wallpaper, or wood grain; and the 4D bi-directional reflectance distribution function (BRDF) which provides angular detail, telling how light reflects off surfaces. I combine these two modes of variation to form the 6D spatial bi-directional reflectance distribution function (SBRDF). My compact SBRDF representation simply stores BRDF coefficients at each pixel of a map. I propose SBRDFs as a surface appearance representation for computer graphics and present a complete system for their use. I acquire SBRDFs of real surfaces using a device that simultaneously measures the BRDF of every point on a material. The system has the novel ability to measure anisotropy (direction of threads, scratches, or grain) uniquely at each surface point. I fit BRDF parameters using an efficient nonlinear optimization approach specific to BRDFs. SBRDFs can be rendered using graphics hardware. My approach yields significantly more detailed, general surface appearance than existing techniques for a competitive rendering cost. I also propose an SBRDF rendering method for global illumination using prefiltered environment maps. This improves on existing prefiltered environment map techniques by decoupling the BRDF from the environment maps, so a single set of maps may be used to illuminate the unique BRDFs at each surface point. I demonstrate my results using measured surfaces including gilded wallpaper, plant leaves, upholstery fabrics, wrinkled gift-wrapping paper and glossy book covers. iv To Tiffany, who has worked harder and sacrificed more for this than have I. ACKNOWLEDGMENTS I appreciate the time, guidance and example of Anselmo Lastra, my advisor. I'm grateful to Steve Molnar for being my mentor throughout graduate school. I'm grateful to the other members of my committee, Henry Fuchs, Gary Bishop, and Lars Nyland for helping and teaching me and creating an environment that allows research to be done successfully and pleasantly. I am grateful for the effort and collaboration of Ben Cloward, who masterfully modeled the Carolina Inn lobby, patiently worked with my software, and taught me much of how artists use computer graphics. I appreciate the collaboration of Wolfgang Heidrich, who worked hard on this project and helped me get up to speed on shading with graphics hardware. I'm thankful to Steve Westin, for patiently teaching me a great deal about surface appearance and light measurement. I'm grateful for …",
"title": ""
},
{
"docid": "2d5d72944f12446a93e63f53ffce7352",
"text": "Standardization of transanal total mesorectal excision requires the delineation of the principal procedural components before implementation in practice. This technique is a bottom-up approach to a proctectomy with the goal of a complete mesorectal excision for optimal outcomes of oncologic treatment. A detailed stepwise description of the approach with technical pearls is provided to optimize one's understanding of this technique and contribute to reducing the inherent risk of beginning a new procedure. Surgeons should be trained according to standardized pathways including online preparation, observational or hands-on courses as well as the potential for proctorship of early cases experiences. Furthermore, technological pearls with access to the \"video-in-photo\" (VIP) function, allow surgeons to link some of the images in this article to operative demonstrations of certain aspects of this technique.",
"title": ""
},
{
"docid": "08addfff95406d97e22246967d14efbc",
"text": "Oral squamous papillomas are benign proliferating lesions induced by human papilloma virus. These lesions are painless and slowly growing masses. As an oral lesion, it raises concern because of its clinical appearance. These lesions commonly occur between age 30 and 50 years, and sometimes can occur before the age of 10 years. Oral squamous papilloma accounts for 8% of all oral tumors in children. Common site predilection for the lesion is the tongue and soft palate, and may occur on any other surface of the oral cavity such as the uvula and vermilion of the lip. Here, we are presenting a case of squamous papilloma on the palate.",
"title": ""
},
{
"docid": "3a106eb1d70a5a867d13a7a976f8c49a",
"text": "Many large organizations are adopting agile software development as part of their continuous push towards higher flexibility and shorter lead times, yet few reports on large-scale agile transformations are available in the literature. In this paper we report how Ericsson introduced agile in a new R&D product development program developing a XaaS platform and a related set of services, while simultaneously scaling it up aggressively. The overarching goal for the R&D organization, distributed to five sites at two continents, was to achieve continuous feature delivery. This single case study is based on 45 semi-structured interviews during visits at four sites, and five observation sessions at three sites. We describe how the organization experimented with different set-ups for their tens of agile teams aiming for rapid end-to-end development: from component-based virtual teams to totally cross-functional, cross-component, cross-site teams. Moreover, we discuss the challenges the organization faced and how they mitigated them on their journey towards continuous and rapid software engineering. We present four lessons learned for large-scale agile transformations: 1) consider using an experimental approach to transformation, 2) consider implementing the transformation step-wise in complex large-scale settings, 3) team inter-changeability can be limited in a complex large-scale product — specialization might be needed, and 4) not using a common agile framework for the whole organization, in combination with insufficient common trainings and coaching may lead to a lack of common direction in the agile implementation. Further in-depth case studies on large-scale agile transformations, on customizing agile to large-scale settings, as well as on the use of scaling frameworks are needed.",
"title": ""
},
{
"docid": "e58d7f537b0d703fa1381eee2d721a34",
"text": "BACKGROUND\nProvision of high quality transitional care is a challenge for health care providers in many western countries. This systematic review was conducted to (1) identify and synthesise research, using randomised control trial designs, on the quality of transitional care interventions compared with standard hospital discharge for older people with chronic illnesses, and (2) make recommendations for research and practice.\n\n\nMETHODS\nEight databases were searched; CINAHL, Psychinfo, Medline, Proquest, Academic Search Complete, Masterfile Premier, SocIndex, Humanities and Social Sciences Collection, in addition to the Cochrane Collaboration, Joanna Briggs Institute and Google Scholar. Results were screened to identify peer reviewed journal articles reporting analysis of quality indicator outcomes in relation to a transitional care intervention involving discharge care in hospital and follow-up support in the home. Studies were limited to those published between January 1990 and May 2013. Study participants included people 60 years of age or older living in their own homes who were undergoing care transitions from hospital to home. Data relating to study characteristics and research findings were extracted from the included articles. Two reviewers independently assessed studies for risk of bias.\n\n\nRESULTS\nTwelve articles met the inclusion criteria. Transitional care interventions reported in most studies reduced re-hospitalizations, with the exception of general practitioner and primary care nurse models. All 12 studies included outcome measures of re-hospitalization and length of stay indicating a quality focus on effectiveness, efficiency, and safety/risk. Patient satisfaction was assessed in six of the 12 studies and was mostly found to be high. Other outcomes reflecting person and family centred care were limited including those pertaining to the patient and carer experience, carer burden and support, and emotional support for older people and their carers. Limited outcome measures were reported reflecting timeliness, equity, efficiencies for community providers, and symptom management.\n\n\nCONCLUSIONS\nGaps in the evidence base were apparent in the quality domains of timeliness, equity, efficiencies for community providers, effectiveness/symptom management, and domains of person and family centred care. Further research that involves the person and their family/caregiver in transitional care interventions is needed.",
"title": ""
},
{
"docid": "759a44aa610befecc766e7c4cbe19734",
"text": "This survey introduces the current state of the art in image and video retargeting and describes important ideas and technologies that have influenced the recent work. Retargeting is the process of adapting an image or video from one screen resolution to another to fit different displays, for example, when watching a wide screen movie on a normal television screen or a mobile device. As there has been considerable work done in this field already, this survey provides an overview of the techniques. It is meant to be a starting point for new research in the field. We include explanations of basic terms and operators, as well as the basic workflow of the different methods.",
"title": ""
},
{
"docid": "42ab434d5628a3bfc01ca866c85b2545",
"text": "This work discusses the design of a GaN power amplifier demonstrating high efficiency over more than a decade bandwidth using coaxial baluns and transformer matching networks to achieve over a 50MHz-500 MHz bandwidth. The power amplifier demonstrates a power added efficiency of 83%-64% over full bandwidth with 15 dB compressed gain at peak PAE.",
"title": ""
},
{
"docid": "d6a6ee23cd1d863164c79088f75ece30",
"text": "In our work, 3D objects classification has been dealt with convolutional neural networks which is a common paradigm recently in image recognition. In the first phase of experiments, 3D models in ModelNet10 and ModelNet40 data sets were voxelized and scaled with certain parameters. Classical CNN and 3D Dense CNN architectures were designed for training the pre-processed data. In addition, the two trained CNNs were ensembled and the results of them were observed. A success rate of 95.37% achieved on ModelNet10 by using 3D dense CNN, a success rate of 91.24% achieved with ensemble of two CNNs on ModelNet40.",
"title": ""
},
{
"docid": "c7dc157e36d443924c17c9d607097873",
"text": "This paper presents an extremely simple human detection algorithm based on correlating edge magnitude images with a filter. The key is the technology used to train the filter: Average of Synthetic Exact Filters (ASEF). The ASEF based detector can process images at over 25 frames per second and achieves a 94.5% detection rate with less than one false detection per frame for sparse crowds. Filter training is also fast, taking only 12 seconds to train the detector on 32 manually annotated images. Evaluation is performed on the PETS 2009 dataset and results are compared to the OpenCV cascade classifier and a state-of-the-art deformable parts based person detector.",
"title": ""
},
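The passage above trains a detector by averaging, in the Fourier domain, one exact filter per training image (ASEF). A bare-bones NumPy sketch of that averaging step follows; the Gaussian target shape, the regularizer eps, and the omission of edge-magnitude preprocessing and training-set perturbations are simplifications for illustration, not details taken from the paper.

```python
# Average of Synthetic Exact Filters (ASEF), bare-bones sketch.
import numpy as np

def gaussian_target(shape, center, sigma=2.0):
    """Synthetic desired correlation output: a Gaussian peak at the object center."""
    ys, xs = np.indices(shape)
    cy, cx = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def train_asef(images, centers, eps=1e-3):
    """Average the per-image exact filters in the Fourier domain."""
    h_conj_sum = np.zeros(images[0].shape, dtype=complex)
    for img, center in zip(images, centers):
        F = np.fft.fft2(img)
        G = np.fft.fft2(gaussian_target(img.shape, center))
        h_conj_sum += (G * np.conj(F)) / (F * np.conj(F) + eps)   # exact filter, conjugated
    return h_conj_sum / len(images)

def correlate(image, h_conj):
    """Correlation response map; the peak marks the detected location."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * h_conj))
```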
{
"docid": "a4fd46d3a28f6642cb800209355728a0",
"text": "Bidens pilosa L. var. radiata Scherff (BP) is a plant used as a traditional folk medicine. BP, cultivated with only green manure on Miyako Island, Okinawa prefecture, was processed to powder and is referred to as MMBP. We have reported that MMBP has antioxidant, anti-inflammatory, and anti-allergy properties. In this study, we investigated the effects of MMBP on several experimental gastric lesions induced by HCl/EtOH, a non-steroidal anti-inflammatory drug, or cold-restraint stress, comparing these results with those of rutin or anti-ulcerogenic drugs (cimetidine or sucralfate) based on the lesion index and hemorrhage from the gastric lesions. Orally administered MMBP prevented the progression of the gastric lesions. Moreover, treatment with MMBP, rutin, or sucralfate, which had potent antioxidative activity, inhibited increases in the levels of thiobarbituric acid reactive substances (TBARS) in the gastric mucosal lesions. The inhibition of the gastric mucosal TBARS content by MMBP may have been due to the antioxidant effects of MMBP. These results indicate that MMBP prevents the progression of acute gastric mucosal lesions, possibly by suppressing oxidative stress in the gastric mucosa.",
"title": ""
},
{
"docid": "8a73a42bed30751cbb6798398b81571d",
"text": "In this paper, we study the problem of learning image classification models with label noise. Existing approaches depending on human supervision are generally not scalable as manually identifying correct or incorrect labels is time-consuming, whereas approaches not relying on human supervision are scalable but less effective. To reduce the amount of human supervision for label noise cleaning, we introduce CleanNet, a joint neural embedding network, which only requires a fraction of the classes being manually verified to provide the knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both of the label noise detection task and the image classification on noisy data task on several large-scale datasets. Experimental results show that CleanNet can reduce label noise detection error rate on held-out classes where no human supervision available by 41.5% compared to current weakly supervised methods. It also achieves 47% of the performance gain of verifying all images with only 3.2% images verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io/CleanNetProject.",
"title": ""
}
] |
scidocsrr
|
31466155e3a889256e90b8bddec07ce0
|
Performance Evaluation of DES and Blowfish Algorithms
|
[
{
"docid": "34ceb0e84b4e000b721f87bcbec21094",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
}
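The positive passage above compares DES, 3DES, AES and Blowfish by timing encryption of inputs of varying sizes. A minimal benchmarking sketch in that spirit is given below, assuming the PyCryptodome library; the ECB mode, key lengths, 8 MiB payload and run count are illustrative choices rather than the paper's experimental setup, so absolute numbers will differ from the reported results.

```python
# Rough throughput comparison of DES, 3DES, AES and Blowfish (illustrative sketch).
import time
from Crypto.Cipher import DES, DES3, AES, Blowfish
from Crypto.Random import get_random_bytes

PAYLOAD = get_random_bytes(8 * 1024 * 1024)  # 8 MiB; a multiple of every block size used here

def make_ciphers():
    return {
        "DES":      DES.new(get_random_bytes(8), DES.MODE_ECB),
        "3DES":     DES3.new(DES3.adjust_key_parity(get_random_bytes(24)), DES3.MODE_ECB),
        "AES-128":  AES.new(get_random_bytes(16), AES.MODE_ECB),
        "Blowfish": Blowfish.new(get_random_bytes(16), Blowfish.MODE_ECB),
    }

def benchmark(runs=5):
    for name, cipher in make_ciphers().items():
        start = time.perf_counter()
        for _ in range(runs):
            cipher.encrypt(PAYLOAD)
        elapsed = time.perf_counter() - start
        mb_per_s = runs * len(PAYLOAD) / (1024 * 1024) / elapsed
        print(f"{name:9s} {mb_per_s:8.1f} MB/s")

if __name__ == "__main__":
    benchmark()
```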
] |
[
{
"docid": "8ea2dadd6024e2f1b757818e0c5d76fa",
"text": "BACKGROUND\nLysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study.\n\n\nMETHOD\nA total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peter's Delusions Inventory were issued at baseline and 2 weeks after each session.\n\n\nRESULTS\nLSD produced robust psychological effects; including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking.\n\n\nCONCLUSIONS\nThe present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.",
"title": ""
},
{
"docid": "4c48737ffa2a1e385cd93255ce440584",
"text": "Even though the emerging field of user experience generally acknowledges the importance of aesthetic qualities in interactive products and services, there is a lack of approaches recognizing the fundamentally temporal nature of interaction aesthetics. By means of interaction criticism, I introduce four concepts that begin to characterize the aesthetic qualities of interaction. Pliability refers to the sense of malleability and tightly coupled interaction that makes the use of an interactive visualization captivating. Rhythm is an important characteristic of certain types of interaction, from the sub-second pacing of musical interaction to the hour-scale ebb and flow of peripheral emotional communication. Dramaturgical structure is not only a feature of online role-playing games, but plays an important role in several design genres from the most mundane to the more intellectually sophisticated. Fluency is a way to articulate the gracefulness with which we are able to handle multiple demands for our attention and action in augmented spaces.",
"title": ""
},
{
"docid": "a212a2969c0c72894dcde880bbf29fa7",
"text": "Machine learning is useful for building robust learning models, and it is based on a set of features that identify a state of an object. Unfortunately, some data sets may contain a large number of features making, in some cases, the learning process time consuming and the generalization capability of machine learning poor. To make a data set easy to learn and understand, it is typically recommended to remove the most irrelevant features from the set. However, choosing what data should be kept or eliminated may be performed by complex selection algorithms, and optimal feature selection may require an exhaustive search of all possible subsets of features which is computationally expensive. This paper proposes a simple method to perform feature selection using artificial neural networks. It is shown experimentally that genetic algorithms in combination with artificial neural networks can easily be used to extract those features that are required to produce a desired result. Experimental results show that very few hidden neurons are required for feature selection as artificial neural networks are only used to assess the quality of an individual, which is a chosen subset of features.",
"title": ""
},
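The passage above uses a genetic algorithm to search binary feature masks while an artificial neural network scores each candidate subset. One plausible arrangement is sketched below with scikit-learn; the breast-cancer dataset, population size, mutation rate and tiny MLP are all assumptions made for illustration rather than the authors' experimental settings.

```python
# GA-driven feature selection scored by a small neural network (illustrative sketch).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated accuracy of a tiny MLP trained on the selected features."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def evolve(pop_size=20, generations=10, mutation_rate=0.05):
    population = rng.integers(0, 2, size=(pop_size, n_features)).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]    # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                        # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < mutation_rate            # bit-flip mutation
            children.append(child ^ flip)
        population = np.vstack([parents, np.array(children)])
    best = max(population, key=fitness)
    return best, fitness(best)

mask, score = evolve()
print(f"selected {mask.sum()} of {n_features} features, CV accuracy {score:.3f}")
```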
{
"docid": "f3a7e0f63d85c069e3f2ab75dcedc671",
"text": "The commit processing in a Distributed Real Time Database (DRTDBS) can significantly increase execution time of a transaction. Therefore, designing a good commit protocol is important for the DRTDBS; the main challenge is the adaptation of standard commit protocol into the real time database system and so, decreasing the number of missed transaction in the systems. In these papers we review the basic commit protocols and the other protocols depend on it, for enhancing the transaction performance in DRTDBS. We propose a new commit protocol for reducing the number of transaction that missing their deadline. Keywords— DRTDBS, Commit protocols, Commit processing, 2PC protocol, 3PC protocol, Missed Transaction, Abort Transaction.",
"title": ""
},
{
"docid": "6a85677755a82b147cb0874ae8299458",
"text": "Data mining involves the process of recovering related, significant and credential information from a large collection of aggregated data. A major area of current research in data mining is the field of clinical investigations that involve disease diagnosis, prognosis and drug therapy. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. The machine learning techniques employed to train the proposed system are based on feature relevance analysis and classification algorithms. Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository is utilized by means of data mining techniques to completely train the system on 198 individual cases, each comprising of 33 predictor values. This paper highlights the performance of feature reduction and classification algorithms on the training dataset. We evaluate the number of attributes for split in the Random tree algorithm and the confidence level and minimum size of the leaves in the C4.5 algorithm to produce 100 percent classification accuracy. Our results demonstrate that Random Tree and Quinlan’s C4.5 classification algorithm produce 100 percent accuracy in the training and test phase of classification with proper evaluation of algorithmic parameters.",
"title": ""
},
{
"docid": "af5aaf2d834eec9bf5e47a89be6a30d8",
"text": "An often-cited advantage of automatic speech recognition (ASR) is that it is ‘fast’; it is quite easy for a person to speak at several hundred words a minute, well above the rates that are possible using other modes of data entry. However, in order to conduct a fair comparison between alternative data entry methods, it is necessary to consider not the input rate per se, but the rate at which it is possible to enter information that is fully correct. This paper describes a model for predicting the relative success of alternative method of data entry in terms of the effective ‘throughput’ that is achievable taking into account typical input data entry rates, error rates and error correction times. Results are presented for the entry of both conventional and SMS-style text.",
"title": ""
},
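The passage above describes a model of effective throughput that combines raw entry rate, error rate and error-correction time. The paper's exact formulation is not reproduced here; the expression below is only one simple way such a model can be set up, with r the raw entry rate in words per minute, e the fraction of words entered incorrectly, and t_c the average correction time per error in minutes.

```latex
% A hedged sketch of an effective-throughput model (assumed form, not the paper's).
% Total time to enter N words = raw entry time + error-correction time:
T(N) = \frac{N}{r} + N\,e\,t_{c}
\qquad\Longrightarrow\qquad
\text{effective throughput} = \frac{N}{T(N)} = \frac{r}{1 + r\,e\,t_{c}}
```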
{
"docid": "267f3d176f849bf24dfab7e78d93b153",
"text": "The long-running debate between the ‘rational design’ and ‘emergent process’ schools of strategy formation has involved caricatures of firms’ strategic planning processes, but little empirical evidence of whether and how companies plan. Despite the presumption that environmental turbulence renders conventional strategic planning all but impossible, the evidence from the corporate sector suggests that reports of the demise of strategic planning are greatly exaggerated. The goal of this paper is to fill this empirical gap by describing the characteristics of the strategic planning systems of multinational, multibusiness companies faced with volatile, unpredictable business environments. In-depth case studies of the planning systems of eight of the world’s largest oil companies identified fundamental changes in the nature and role of strategic planning since the end of the 1970s. The findings point to a possible reconciliation of ‘design’ and ‘process’ approaches to strategy formulation. The study pointed to a process of planned emergence in which strategic planning systems provided a mechanism for coordinating decentralized strategy formulation within a structure of demanding performance targets and clear corporate guidelines. The study shows that these planning systems fostered adaptation and responsiveness, but showed limited innovation and analytical sophistication. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "612330c4bfbfddd07251ee0a07912526",
"text": "Radiofrequency-induced calf muscle volume reduction is a commonly used method for cosmetic shaping of the lower leg contour. Functional disabilities associated with the use of the radiofrequency (RF) technique, with this procedure targeting the normal gastrocnemius muscle, still have not been reported. However, the authors have experienced several severe ankle equinus cases after RF-induced calf muscle volume reduction. This study retrospectively reviewed 19 calves of 12 patients who showed more than 20° of fixed equinus even though they underwent physical therapy for more than 6 months. All were women with a mean age of 32 years (range, 23–41 years). Of the 12 patients, 7 were bilateral. All the patients received surgical Achilles lengthening for deformity correction. To evaluate the clinical outcome, serial ankle dorsiflexion was measured, and the American Orthopedic Foot and Ankle Society (AOFAS) score was evaluated at the latest follow-up visit. The presence of soleus muscle involvement and an ongoing lesion that might affect the postoperative results of preoperative magnetic resonance imaging (MRI) were investigated. Statistical analysis was conducted to analyze preoperative factors strongly associated with patient clinical outcomes. The mean follow-up period after surgery was 18.6 months (range, 12–28 months). At the latest follow-up visit, the mean ankle dorsiflexion was 9° (range, 0–20°), and the mean AOFAS score was 87.7 (range, 80–98). On preoperative MRI, 13 calves showed soleus muscle involvement. Seven calves had ongoing lesions. Five of the ongoing lesions were muscle edema, and the remaining two lesions were cystic mass lesions resulting from muscle necrosis. Ankle dorsiflexion and AOFAS scores at the latest follow-up evaluation were insufficient in the ongoing lesions group. Although RF-induced calf muscle reduction is believed to be a safer method than conventional procedures, careful handling is needed because of the side effects that may occur in some instances. The slow progression of fibrosis could be observed after RF-induced calf reduction. Therefore, long-term follow-up evaluation is needed after the procedure. Therapeutic case series.",
"title": ""
},
{
"docid": "36e3fc3b9a24277a8eb5a736047f9525",
"text": "The quantitative analysis of a randomized system, modeled by a Markov decision process, against an LTL formula can be performed by a combination of graph algorithms, automata-theoretic concepts and numerical methods to compute maximal or minimal reachability probabilities. In this paper, we present various reduction techniques that serve to improve the performance of the quantitative analysis, and report on their implementation on the top of the probabilistic model checker \\LiQuor. Although our techniques are purely heuristic and cannot improve the worst-case time complexity of standard algorithms for the quantitative analysis, a series of examples illustrates that the proposed methods can yield a major speed-up.",
"title": ""
},
{
"docid": "3e3dc575858c21806edbe6149475f5e0",
"text": "This paper describes a new model for understanding natural language commands given to autonomous systems that perform navigation and mobile manipulation in semi-structured environments. Previous approaches have used models with fixed structure to infer the likelihood of a sequence of actions given the environment and the command. In contrast, our framework, called Generalized Grounding Graphs (G), dynamically instantiates a probabilistic graphical model for a particular natural language command according to the command’s hierarchical and compositional semantic structure. Our system performs inference in the model to successfully find and execute plans corresponding to natural language commands such as “Put the tire pallet on the truck.” The model is trained using a corpus of commands collected using crowdsourcing. We pair each command with robot actions and use the corpus to learn the parameters of the model. We evaluate the robot’s performance by inferring plans from natural language commands, executing each plan in a realistic robot simulator, and asking users to evaluate the system’s performance. We demonstrate that our system can successfully follow many natural language commands from the corpus.",
"title": ""
},
{
"docid": "8ae986abd5f31a06d04a2762ee3bcb91",
"text": "Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes.",
"title": ""
},
{
"docid": "1499fd10ee703afd1d5b3ec35defa26b",
"text": "It is challenging to analyze the aerial locomotion of bats because of the complicated and intricate relationship between their morphology and flight capabilities. Developing a biologically inspired bat robot would yield insight into how bats control their body attitude and position through the complex interaction of nonlinear forces (e.g., aerodynamic) and their intricate musculoskeletal mechanism. The current work introduces a biologically inspired soft robot called Bat Bot (B2). The overall system is a flapping machine with 5 Degrees of Actuation (DoA). This work reports on some of the preliminary untethered flights of B2. B2 has a nontrivial morphology and it has been designed after examining several biological bats. Key DoAs, which contribute significantly to bat flight, are picked and incorporated in B2's flight mechanism design. These DoAs are: 1) forelimb flapping motion, 2) forelimb mediolateral motion (folding and unfolding) and 3) hindlimb dorsoventral motion (upward and downward movement).",
"title": ""
},
{
"docid": "78ca8af920cce95476fe87bd7b015b6f",
"text": "The popularity of Bayesian optimization methods for efficient exploration of parameter spaces has lead to a series of papers applying Gaussian processes as surrogates in the optimization of functions. However, most proposed approaches only allow the exploration of the parameter space to occur sequentially. Often, it is desirable to simultaneously propose batches of parameter values to explore. This is particularly the case when large parallel processing facilities are available. These could either be computational or physical facets of the process being optimized. Batch methods, however, require the modeling of the interaction between the different evaluations in the batch, which can be expensive in complex scenarios. We investigate this issue and propose a highly effective heuristic based on an estimate of the function’s Lipschitz constant that captures the most important aspect of this interaction— local repulsion—at negligible computational overhead. A penalized acquisition function is used to collect batches of points minimizing the non-parallelizable computational effort. The resulting algorithm compares very well, in run-time, with much more elaborate alternatives.",
"title": ""
},
{
"docid": "ad062a5906071caa1b555fdcb32bba2e",
"text": "The world's ageing population and prevalence of chronic diseases have lead to high demand for tele-home healthcare, in which vital-signs monitoring is essential. An overview of state-of-art wearable technologies for remote patient-monitoring is presented, followed by case studies on a cuffless blood pressure meter, ring-type heart rate monitor, and Bluetooth/spl trade/-based ECG monitor. Aim of our project is to develop a tele-home healthcare system which utilizes wearable devices, wireless communication technologies, and multisensor data fusion methods. As an important part of this system, a cuffless BP meter has been developed and tested on 30 subjects in a total of 71 trials over a period of five months. Preliminary results show a mean error (ME) of 1.82 mmHg and standard deviation of error (SDE) of 7.62 mmHg in systolic pressure; while ME and SDE in diastolic pressure are 0.45 mmHg and 5.27 mmHg, respectively.",
"title": ""
},
{
"docid": "cf02d97cdcc1a4be51ed0af2af771b7d",
"text": "Bowen's disease is a squamous cell carcinoma in situ and has the potential to progress to a squamous cell carcinoma. The authors treated two female patients (a 39-year-old and a 41-year-old) with Bowen's disease in the vulva area using topical photodynamic therapy (PDT), involving the use of 5-aminolaevulinic acid and a light-emitting diode device. The light was administered at an intensity of 80 mW/cm(2) for a dose of 120 J/cm(2) biweekly for 6 cycles. The 39-year-old patient showed excellent clinical improvement, but the other patient achieved only a partial response. Even though one patient underwent a total excision 1 year later due to recurrence, both patients were satisfied with the cosmetic outcomes of this therapy and the partial improvement over time. The common side effect of PDT was a stinging sensation. PDT provides a relatively effective and useful alternative treatment for Bowen's disease in the vulva area.",
"title": ""
},
{
"docid": "8ea957bbc072c395efa6248b39764fa6",
"text": "The space of graphs is often characterised by a non-trivial geometry, which complicates performing inference in practical applications. A common approach is to use embedding techniques to represent graphs as points in a conventional Euclidean space, but non-Euclidean spaces have often been shown to be better suited for embedding graphs. Among these, constantcurvature Riemannian manifolds (CCMs) offer embedding spaces suitable for studying the statistical properties of a graph distribution, as they provide ways to easily compute metric geodesic distances. In this paper, we focus on the problem of detecting changes in a stream of attributed graphs. To this end, we introduce a novel change detection framework based on neural networks and CCMs that takes into account the non-Euclidean nature of graphs. Our contributions in this work are twofold. First, via a novel approach based on adversarial learning, we compute graph embeddings by training an autoencoder to represent graphs on CCMs. Second, we introduce two novel change detection tests operating on CCMs. We perform experiments on synthetic graph streams, and on sequences of functional networks extracted from intracranial EEG data with the aim of predicting the onset of epileptic seizures. Results show that the proposed methods are able to detect even small changes in the graphgenerating process, consistently outperforming approaches based on Euclidean embeddings.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
{
"docid": "c66e38f3be7760c8ca0b6ef2dfc5bec2",
"text": "Gesture recognition remains a very challenging task in the field of computer vision and human computer interaction (HCI). A decade ago the task seemed to be almost unsolvable with the data provided by a single RGB camera. Due to recent advances in sensing technologies, such as time-of-flight and structured light cameras, there are new data sources available, which make hand gesture recognition more feasible. In this work, we propose a highly precise method to recognize static gestures from a depth data, provided from one of the above mentioned devices. The depth images are used to derive rotation-, translation- and scale-invariant features. A multi-layered random forest (MLRF) is then trained to classify the feature vectors, which yields to the recognition of the hand signs. The training time and memory required by MLRF are much smaller, compared to a simple random forest with equivalent precision. This allows to repeat the training procedure of MLRF without significant effort. To show the advantages of our technique, we evaluate our algorithm on synthetic data, on publicly available dataset, containing 24 signs from American Sign Language(ASL) and on a new dataset, collected using recently appeared Intel Creative Gesture Camera.",
"title": ""
},
{
"docid": "8c6b7ef0da1b54b84f6e3912238bae04",
"text": "With rapid increasing text information, the need for a computer system to processing and analyzing this information are felt. One of the systems that exist in analyzing and processing of text is a text summarization in which large volume of text is summarized based on different algorithms. In this paper, by using BabelNet knowledge base and its concept graph, a system for summarizing text is offered. In proposed approach, concepts of words by using BabelNet knowledge base are extracted and concept graphs are produced and sentences, according to concepts and resulting graph are rated. Therefore, these rating concepts are utilized in final summarization. Also, a replication control approach is proposed in a way that selected concepts in each state are punished and this causes to produce summaries with less redundancy. To compare and evaluate the performance of the proposed method, DUC2004 is used and ROUGE used as evaluation metric. The proposed method by compared to other methods produces summaries with more quality and fewer redundancies.",
"title": ""
},
{
"docid": "2e2cffc777e534ad1ab7a5c638e0574e",
"text": "BACKGROUND\nPoly(ADP-ribose)polymerase-1 (PARP-1) is a highly promising novel target in breast cancer. However, the expression of PARP-1 protein in breast cancer and its associations with outcome are yet poorly characterized.\n\n\nPATIENTS AND METHODS\nQuantitative expression of PARP-1 protein was assayed by a specific immunohistochemical signal intensity scanning assay in a range of normal to malignant breast lesions, including a series of patients (N = 330) with operable breast cancer to correlate with clinicopathological factors and long-term outcome.\n\n\nRESULTS\nPARP-1 was overexpressed in about a third of ductal carcinoma in situ and infiltrating breast carcinomas. PARP-1 protein overexpression was associated to higher tumor grade (P = 0.01), estrogen-negative tumors (P < 0.001) and triple-negative phenotype (P < 0.001). The hazard ratio (HR) for death in patients with PARP-1 overexpressing tumors was 7.24 (95% CI; 3.56-14.75). In a multivariate analysis, PARP-1 overexpression was an independent prognostic factor for both disease-free (HR 10.05; 95% CI 5.42-10.66) and overall survival (HR 1.82; 95% CI 1.32-2.52).\n\n\nCONCLUSIONS\nNuclear PARP-1 is overexpressed during the malignant transformation of the breast, particularly in triple-negative tumors, and independently predicts poor prognosis in operable invasive breast cancer.",
"title": ""
}
] |
scidocsrr
|
b253bca97a902a20eeef2043d89e193a
|
A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms
|
[
{
"docid": "61953c398f2bcd4fd0ff4662689293a0",
"text": "Today's smartphones and mobile devices typically embed advanced motion sensors. Due to their increasing market penetration, there is a potential for the development of distributed sensing platforms. In particular, over the last few years there has been an increasing interest in monitoring vehicles and driving data, aiming to identify risky driving maneuvers and to improve driver efficiency. Such a driver profiling system can be useful in fleet management, insurance premium adjustment, fuel consumption optimization or CO2 emission reduction. In this paper, we analyze how smartphone sensors can be used to identify driving maneuvers and propose SenseFleet, a driver profile platform that is able to detect risky driving events independently from the mobile device and vehicle. A fuzzy system is used to compute a score for the different drivers using real-time context information like route topology or weather conditions. To validate our platform, we present an evaluation study considering multiple drivers along a predefined path. The results show that our platform is able to accurately detect risky driving events and provide a representative score for each individual driver.",
"title": ""
},
{
"docid": "c406d734f32cc4b88648c037d9d10e46",
"text": "In this paper, we review the state-of-the-art technologies for driver inattention monitoring, which can be classified into the following two main categories: 1) distraction and 2) fatigue. Driver inattention is a major factor in most traffic accidents. Research and development has actively been carried out for decades, with the goal of precisely determining the drivers' state of mind. In this paper, we summarize these approaches by dividing them into the following five different types of measures: 1) subjective report measures; 2) driver biological measures; 3) driver physical measures; 4) driving performance measures; and 5) hybrid measures. Among these approaches, subjective report measures and driver biological measures are not suitable under real driving conditions but could serve as some rough ground-truth indicators. The hybrid measures are believed to give more reliable solutions compared with single driver physical measures or driving performance measures, because the hybrid measures minimize the number of false alarms and maintain a high recognition rate, which promote the acceptance of the system. We also discuss some nonlinear modeling techniques commonly used in the literature.",
"title": ""
},
{
"docid": "b01bc5df28e670c82d274892a407b0aa",
"text": "We propose that many human behaviors can be accurately described as a set of dynamic models (e.g., Kalman filters) sequenced together by a Markov chain. We then use these dynamic Markov models to recognize human behaviors from sensory data and to predict human behaviors over a few seconds time. To test the power of this modeling approach, we report an experiment in which we were able to achieve 95 accuracy at predicting automobile drivers' subsequent actions from their initial preparatory movements.",
"title": ""
},
{
"docid": "54ba9715a8ef99ee7ca259dc60553999",
"text": "The proliferation of smartphones and mobile devices embedding different types of sensors sets up a prodigious and distributed sensing platform. In particular, in the last years there has been an increasing necessity to monitor drivers to identify bad driving habits in order to optimize fuel consumption, to reduce CO2 emissions or, indeed, to design new reliable and fair pricing schemes for the insurance market. In this paper, we analyze the driver sensing capacity of smartphones. We propose a mobile tool that makes use of the most common sensors embedded in current smartphones and implement a Fuzzy Inference System that scores the overall driving behavior by combining different fuzzy sensing data.",
"title": ""
}
] |
[
{
"docid": "9c9e1458740337c7b074710297a386a8",
"text": "Seed dormancy is an innate seed property that defines the environmental conditions in which the seed is able to germinate. It is determined by genetics with a substantial environmental influence which is mediated, at least in part, by the plant hormones abscisic acid and gibberellins. Not only is the dormancy status influenced by the seed maturation environment, it is also continuously changing with time following shedding in a manner determined by the ambient environment. As dormancy is present throughout the higher plants in all major climatic regions, adaptation has resulted in divergent responses to the environment. Through this adaptation, germination is timed to avoid unfavourable weather for subsequent plant establishment and reproductive growth. In this review, we present an integrated view of the evolution, molecular genetics, physiology, biochemistry, ecology and modelling of seed dormancy mechanisms and their control of germination. We argue that adaptation has taken place on a theme rather than via fundamentally different paths and identify similarities underlying the extensive diversity in the dormancy response to the environment that controls germination.",
"title": ""
},
{
"docid": "28720ce70b52adf92d8924143377ddd6",
"text": "This article describes an approach to building a cost-effective and research-grade visual-inertial (VI) odometry-aided vertical takeoff and landing (VTOL) platform. We utilize an off-the-shelf VI sensor, an onboard computer, and a quadrotor platform, all of which are factory calibrated and mass produced, thereby sharing similar hardware and sensor specifications [e.g., mass, dimensions, intrinsic and extrinsic of camera-inertial measurement unit (IMU) systems, and signal-to-noise ratio]. We then perform system calibration and identification, enabling the use of our VI odometry, multisensor fusion (MSF), and model predictive control (MPC) frameworks with off-the-shelf products. This approach partially circumvents the tedious parameter-tuning procedures required to build a full system. The complete system is extensively evaluated both indoors using a motioncapture system and outdoors using a laser tracker while performing hover and step responses and trajectory-following tasks in the presence of external wind disturbances. We achieve root-mean-square (rms) pose errors of 0.036 m with respect to reference hover trajectories. We also conduct relatively long distance (.180 m) experiments on a farm site, demonstrating a 0.82% drift error of the total flight distance. This article conveys the insights we acquired about the platform and sensor module and offers open-source code with tutorial documentation to the community.",
"title": ""
},
{
"docid": "78a1ebceb57a90a15357390127c443b7",
"text": "In this paper we present a clean, yet effective, model for word sense disambiguation. Our approach leverage a bidirectional long short-term memory network which is shared between all words. This enables the model to share statistical strength and to scale well with vocabulary size. The model is trained end-to-end, directly from the raw text to sense labels, and makes effective use of word order. We evaluate our approach on two standard datasets, using identical hyperparameter settings, which are in turn tuned on a third set of held out data. We employ no external resources (e.g. knowledge graphs, part-of-speech tagging, etc), language specific features, or hand crafted rules, but still achieve statistically equivalent results to the best state-of-the-art systems, that employ no such limitations.",
"title": ""
},
{
"docid": "da81734b6ade71bc8eee499af4003f85",
"text": "We propose a reinforcement learning approach to learning to teach. Following Torrey and Taylor’s framework [18], an agent (the “teacher”) advises another one (the “student”) by suggesting actions the latter should take while learning a specific task in a sequential decision problem; the teacher is limited by a “budget” (the number of times such advice can be given). Our approach assumes a principled decision-theoretic setting; both the student and the teacher are modeled as reinforcement learning agents. We provide experimental results with the Mountain car domain, showing how our approach outperforms the heuristics proposed by Torrey and Taylor [18]. Moreover, we propose a technique for a student to take into account advice more efficiently and we experimentally show that performances are improved in Torrey and Taylor’s setting.",
"title": ""
},
{
"docid": "7ac57f2d521a4db22e203c232a126ac4",
"text": ".................................................................................................................................. iii ACKNOWLEDGEMENTS ............................................................................................................ v TABLE OF CONTENTS .............................................................................................................. vii LIST OF TABLES ....................................................................................................................... viii LIST OF FIGURES ....................................................................................................................... ix CHAPTER 1: INTRODUCTION ................................................................................................... 1 CHAPTER 2: REVIEW OF RELATED LITERATURE ............................................................... 4 Flexibility Interventions .............................................................................................................. 4 Athletic Performance Interventions .......................................................................................... 18 Recovery Interventions ............................................................................................................. 29 Methodology & Supporting Arguments ................................................................................... 35 CHAPTER 3: METHODOLOGY ................................................................................................ 37 CHAPTER 4: RESULTS .............................................................................................................. 43 CHAPTER 5: DISCUSSION ........................................................................................................ 48 APPENDIX A: PRE-RESEARCH QUESTIONNAIRE .............................................................. 54 APPENDIX B: NUMERIC PRESSURE SCALE ........................................................................ 55 APPENDIX C: DATA COLLECTION FIGURES ...................................................................... 56 REFERENCES ............................................................................................................................. 58 CURRICULUM VITAE ............................................................................................................... 61",
"title": ""
},
{
"docid": "02209c1215a39c17b4099603ef700c97",
"text": "The goal of the Automated Evaluation of Scientific Writing (AESW) Shared Task is to analyze the linguistic characteristics of scientific writing to promote the development of automated writing evaluation tools that can assist authors in writing scientific papers. The proposed task is to predict whether a given sentence requires editing to ensure its “fit” with the scientific writing genre. We describe the proposed task, training, development, and test data sets, and evaluation metrics. Quality means doing it right when no one is looking. – Henry Ford",
"title": ""
},
{
"docid": "b11a161588bd1a3d4d7cd78ecce4aa64",
"text": "This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up a VE into a configuration task, and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) based upon GERAM and created in the IMS GLOBEMEN project.",
"title": ""
},
{
"docid": "4b26fb8de5384f888aee354104b3dbd2",
"text": "This paper presents a fully automated algorithm for segmentation of multiple sclerosis (MS) lesions from multispectral magnetic resonance (MR) images. The method performs intensity-based tissue classification using a stochastic model for normal brain images and simultaneously detects MS lesions as outliers that are not well explained by the model. It corrects for MR field inhomogeneities, estimates tissue-specific intensity models from the data itself, and incorporates contextual information in the classification using a Markov random field. The results of the automated method are compared with lesion delineations by human experts, showing a high total lesion load correlation. When the degree of spatial correspondence between segmentations is taken into account, considerable disagreement is found, both between expect segmentations, and between expert and automatic measurements.",
"title": ""
},
{
"docid": "71ff52158a45b1869500630cd5cb041b",
"text": "Heat shock proteins (HSPs) are a set of highly conserved proteins that can serve as intestinal gate keepers in gut homeostasis. Here, effects of a probiotic, Lactobacillus rhamnosus GG (LGG), and two novel porcine isolates, Lactobacillus johnsonii strain P47-HY and Lactobacillus reuteri strain P43-HUV, on cytoprotective HSP expression and gut barrier function, were investigated in a porcine IPEC-J2 intestinal epithelial cell line model. The IPEC-J2 cells polarized on a permeable filter exhibited villus-like cell phenotype with development of apical microvilli. Western blot analysis detected HSP expression in IPEC-J2 and revealed that L. johnsonii and L. reuteri strains were able to significantly induce HSP27, despite high basal expression in IPEC-J2, whereas LGG did not. For HSP72, only the supernatant of L. reuteri induced the expression, which was comparable to the heat shock treatment, which indicated that HSP72 expression was more stimulus specific. The protective effect of lactobacilli was further studied in IPEC-J2 under an enterotoxigenic Escherichia coli (ETEC) challenge. ETEC caused intestinal barrier destruction, as reflected by loss of cell-cell contact, reduced IPEC-J2 cell viability and transepithelial electrical resistance, and disruption of tight junction protein zonula occludens-1. In contrast, the L. reuteri treatment substantially counteracted these detrimental effects and preserved the barrier function. L. johnsonii and LGG also achieved barrier protection, partly by directly inhibiting ETEC attachment. Together, the results indicate that specific strains of Lactobacillus can enhance gut barrier function through cytoprotective HSP induction and fortify the cell protection against ETEC challenge through tight junction protein modulation and direct interaction with pathogens.",
"title": ""
},
{
"docid": "48842e5bf95700acf2b1bb18771aeb00",
"text": "We present a simple and natural greedy algorithm for the metric uncapacitated facility location problem achieving an approximation guarantee of 1.61. We use this algorithm to find better approximation algorithms for the capacitated facility location problem with soft capacities and for a common generalization of the k-median and facility location problems. We also prove a lower bound of 1+2/e on the approximability of the k-median problem. At the end, we present a discussion about the techniques we have used in the analysis of our algorithm, including a computer-aided method for proving bounds on the approximation factor.",
"title": ""
},
{
"docid": "d16a787399db6309ab4563f4265e91b9",
"text": "The real-time information on news sites, blogs and social networking sites changes dynamically and spreads rapidly through the Web. Developing methods for handling such information at a massive scale requires that we think about how information content varies over time, how it is transmitted, and how it mutates as it spreads.\n We describe the News Information Flow Tracking, Yay! (NIFTY) system for large scale real-time tracking of \"memes\" - short textual phrases that travel and mutate through the Web. NIFTY is based on a novel highly-scalable incremental meme-clustering algorithm that efficiently extracts and identifies mutational variants of a single meme. NIFTY runs orders of magnitude faster than our previous Memetracker system, while also maintaining better consistency and quality of extracted memes.\n We demonstrate the effectiveness of our approach by processing a 20 terabyte dataset of 6.1 billion blog posts and news articles that we have been continuously collecting for the last four years. NIFTY extracted 2.9 billion unique textual phrases and identified more than 9 million memes. Our meme-tracking algorithm was able to process the entire dataset in less than five days using a single machine. Furthermore, we also provide a live deployment of the NIFTY system that allows users to explore the dynamics of online news in near real-time.",
"title": ""
},
{
"docid": "3c9caac182d644d87236e51e34065aed",
"text": "This paper deals with automatic supervised classification of documents. The approach suggested is based on a vector representation of the documents centred not on the words but on the n-grams of characters for varying n. The effects of this method are examined in several experiments using the multivariate chi-square to reduce the dimensionality, the cosine and Kullback&Liebler distances, and two benchmark corpuses the reuters-21578 newswire articles and the 20 newsgroups data for evaluation. The evaluation was done, by using the macroaveraged F1 function. The results show the effectiveness of this approach compared to the Bag-OfWord and stem representations.",
"title": ""
},
{
"docid": "8a77882cfe06eaa88db529432ed31b0c",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
},
{
"docid": "13800973a4bc37f26319c0bb76fce731",
"text": "Light fields are a powerful concept in computational imaging and a mainstay in image-based rendering; however, so far their acquisition required either carefully designed and calibrated optical systems (micro-lens arrays), or multi-camera/multi-shot settings. Here, we show that fully calibrated light field data can be obtained from a single ordinary photograph taken through a partially wetted window. Each drop of water produces a distorted view on the scene, and the challenge of recovering the unknown mapping from pixel coordinates to refracted rays in space is a severely underconstrained problem. The key idea behind our solution is to combine ray tracing and low-level image analysis techniques (extraction of 2D drop contours and locations of scene features seen through drops) with state-of-the-art drop shape simulation and an iterative refinement scheme to enforce photo-consistency across features that are seen in multiple views. This novel approach not only recovers a dense pixel-to-ray mapping, but also the refractive geometry through which the scene is observed, to high accuracy. We therefore anticipate that our inherently self-calibrating scheme might also find applications in other fields, for instance in materials science where the wetting properties of liquids on surfaces are investigated.",
"title": ""
},
{
"docid": "fbfbb339657f2a0a97f8a65dfb99ffbc",
"text": "This work describes a novel technique of designing a high gain low noise CMOS instrumentation amplifier for biomedical applications like ECG signal processing. A three opamp instrumentation amplifier have been designed by using two simple op-amps at the two input stages and a folded cascode opamp at the output stage. Both op-amps at the input and output are 2-stage. Most of the previous or earlier designed op-amp in literature uses same type of op-amp at the input and output stages of instrumentation amplifier. By using folded cascode op-amp at the output, we had achieved significant improvement in gain and CMRR. Transistors sizing plays a major role in achieving high gain and CMRR. To achieve a desirable common mode rejection ratio (CMRR), Gain and other performance metrics, selection of most appropriable op-amp circuit topologies & optimum transistor sizing was the main criteria for designing of instrumentation amplifier for biomedical applications. The complete instrumentation amplifier design is simulated using Cadence Spectre tool and layout is designed and simulated in Cadence Layout editor at 0.18μm CMOS technology. Each of the input two stage op-amp provides a gain and CMRR of 45dB and 72dB respectively. The output two stage folded cascode amplifier provides a CMRR of 92dB and a gain of 82dB. The design achieves an overall CMRR and gain of 92dB and 67db respectively. The overall power consumed by instrumentation amplifier is 263μW which is suitable for biomedical signal processing applications.",
"title": ""
},
{
"docid": "7a85696a99cf329960bd07cd73d99cf7",
"text": "Recent quality scandals reveal the importance of quality management from a supply chain perspective. Although there has been many related studies focusing on supply chain quality management, the technologies used still have difficulties in resolving problems arising from the lack of trust in supply chains. The root reason lies in three challenges brought to the traditional centralized trust mechanism: self-interests of supply chain members, information asymmetry in production processes, costs and limitations of quality inspections. Blockchain is a promising technology to address these problems. In this paper, we discuss how to improve the supply chain quality management by adopting the blockchain technology, and propose a framework for blockchain-based supply chain quality management.",
"title": ""
},
{
"docid": "ffba00cedb97777174a418fbcfc2c687",
"text": "Quantum computing is moving rapidly to the point of deployment of technology. Functional quantum devices will require the ability to correct error in order to be scalable and effective. A leading choice of error correction, in particular for modular or distributed architectures, is the surface code with logical two-qubit operations realised via “lattice surgery”. These operations consist of “merges” and “splits” acting non-unitarily on the logical states and are not easily captured by standard circuit notation. This raises the question of how best to reason about lattice surgery in order efficiently to use quantum states and operations in architectures with complex resource management issues. In this paper we demonstrate that the operations of the ZX calculus, a form of quantum diagrammatic reasoning designed using category theory, match exactly the operations of lattice surgery. Red and green “spider” nodes match rough and smooth merges and splits, and follow the axioms of a dagger special associative Frobenius algebra. Some lattice surgery operations can require non-trivial correction operations, which are captured natively in the use of the ZX calculus in the form of ensembles of diagrams. We give a first taste of the power of the calculus as a language for surgery by considering two operations (magic state use and producing a CNOT ) and show how ZX diagram re-write rules give lattice surgery procedures for these operations that are novel, efficient, and highly configurable.",
"title": ""
},
{
"docid": "336d6407a2f8ec8506fe1b3a976f6c63",
"text": "Given a large collection of time series, such as web-click logs, electric medical records and motion capture sensors, how can we efficiently and effectively find typical patterns? How can we statistically summarize all the sequences, and achieve a meaningful segmentation? What are the major tools for forecasting and outlier detection? Time-series data analysis is becoming of increasingly high importance, thanks to the decreasing cost of hardware and the increasing on-line processing capability.\n The objective of this tutorial is to provide a concise and intuitive overview of the most important tools that can help us find patterns in large-scale time-series sequences. We review the state of the art in four related fields: (1) similarity search and pattern discovery, (2) linear modeling and summarization, (3) non-linear modeling and forecasting, and (4) the extension of time-series mining and tensor analysis. The emphasis of the tutorial is to provide the intuition behind these powerful tools, which is usually lost in the technical literature, as well as to introduce case studies that illustrate their practical use.",
"title": ""
},
{
"docid": "e73060d189e9a4f4fd7b93e1cab22955",
"text": "We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques that further improve performance of LSTM RNN acoustic models for large vocabulary speech recognition. We show that frame stacking and reduced frame rate lead to more accurate models and faster decoding. CD phone modeling leads to further improvements. We also present initial results for LSTM RNN models outputting words directly.",
"title": ""
},
{
"docid": "a40c00b1dc4a8d795072e0a8cec09d7a",
"text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.",
"title": ""
}
] |
scidocsrr
|
ecf7dc7bef907355b83130c39abffad3
|
Zozzle: Low-overhead Mostly Static JavaScript Malware Detection
|
[
{
"docid": "7b4f6382a7421fa08177c045eb9fdd66",
"text": "Cross-site scripting (XSS) vulnerabilities are among the most common and serious web application vulnerabilities. XSS vulnerabilities are difficult to prevent because it is difficult for web applications to anticipate client-side semantics. We present Noncespaces, a technique that enables web clients to distinguish between trusted and untrusted content to prevent exploitation of XSS vulnerabilities. Using Noncespaces, a web application randomizes the XML namespace tags in each document before delivering it to the client. As long as the attacker is unable to predict the randomized prefixes, the client can distinguish between trusted content created by the web application and untrusted content provided by the attacker. Noncespaces uses client-side policy enforcement to avoid semantic ambiguities between the client and server. To implement Noncespaces with minimal changes to web applications, we leverage a popular web application architecture to automatically apply Noncespaces to static content processed through a popular PHP template engine. We show that with simple policies Noncespaces thwarts popular XSS attack vectors. As an additional benefit, the client-side policy not only allows a web application to restrict security-relevant capabilities to untrusted content but also narrows the application’s remaining attack vectors, which deserve more scrutiny by security auditors.",
"title": ""
}
] |
[
{
"docid": "4d6d614795fa374a13a1a124bd3c50cd",
"text": "We automatically create enormous, free and multilingual silver-standard training annotations for named entity recognition (ner) by exploiting the text and structure of Wikipedia. Most ner systems rely on statistical models of annotated data to identify and classify names of people, locations and organisations in text. This dependence on expensive annotation is the knowledge bottleneck our work overcomes. We first classify each Wikipedia article into named entity (ne) types, training and evaluating on 7,200 manually-labelled Wikipedia articles across nine languages. Our cross-lingual approach achieves up to 95% accuracy. We transform the links between articles into ne annotations by projecting the target article’s classifications onto the anchor text. This approach yields reasonable annotations, but does not immediately compete with existing gold-standard data. By inferring additional links and heuristically tweaking the Wikipedia corpora, we better align our automatic annotations to gold standards. We annotate millions of words in nine languages, evaluating English, German, Spanish, Dutch and Russian Wikipedia-trained models against conll Shared Task data and other gold-standard corpora. Our approach outperforms other approaches to automatic ne annotation (Richman and Schone, 2008; Mika et al., 2008); competes with gold-standard training when tested on an evaluation corpus from a different source; and performs 10% better than newswire-trained models on manually-annotated Wikipedia text.",
"title": ""
},
{
"docid": "1f24bb842dacf71c9cde6ab66abd1de8",
"text": "An appropriate aging description from face image is the prime influential factor in human age recognition, but still there is an absence of a specially engineered aging descriptor, which can characterize discernible facial aging cues (e.g., craniofacial growth, skin aging) from a detailed and more finer point of view. To address this issue, we propose a local face descriptor, directional age-primitive pattern (DAPP), which inherits discernible aging cue information and is functionally more robust and discriminative than existing local descriptors. We introduce three attributes for coding the DAPP description. First, we introduce Age-Primitives encoding aging related to the most crucial texture primitives, yielding a reasonable and clear aging definition. Second, we introduce an encoding concept dubbed as Latent Secondary Direction, which preserves compact structural information in the code avoiding uncertain codes. Third, a globally adaptive thresholding mechanism is initiated to facilitate more discrimination in a flat and textured region. We apply DAPP on separate age group recognition and age estimation tasks. Applying the same approach to both of these tasks is seldom explored in the literature. Carefully conducted experiments show that the proposed DAPP description outperforms the existing approaches by an acceptable margin.",
"title": ""
},
{
"docid": "15884b99bf0f288377bd1fe01423bdfd",
"text": "This is an innovative work for the field of web usage mining. The main feature of our work a complete framework and findings in mining Web usage patterns from Web log files of a real Web site that has all the difficult aspects of real-life Web usage mining, including developing user profiles and external data describing an ontology of the Web content. We are presenting a method for discovering and tracking evolving user profiles. Profiles are also enriched with other domain-specific information facets that give a panoramic view of the discovered mass usage modes. An objective validation plan is also used to assess the quality of the mined profiles, in particular their adaptability in the face of evolving user behaviour. Keywords— Web mining, Cookies, Session.",
"title": ""
},
{
"docid": "853703c46af2dda7735e7783b56cba44",
"text": "PURPOSE\nWe compared the efficacy and safety of sodium hyaluronate (SH) and carboxymethylcellulose (CMC) in treating mild to moderate dry eye.\n\n\nMETHODS\nSixty-seven patients with mild to moderate dry eye were enrolled in this prospective, randomized, blinded study. They were treated 6 times a day with preservative-free unit dose formula eyedrops containing 0.1% SH or 0.5% CMC for 8 weeks. Corneal and conjunctival staining with fluorescein, tear film breakup time, subjective symptoms, and adverse reactions were assessed at baseline, 4 weeks, and 8 weeks after treatment initiation.\n\n\nRESULTS\nThirty-two patients were randomly assigned to the SH group and 33 were randomly assigned to the CMC group. Both the SH and CMC groups showed statistically significant improvements in corneal and conjunctival staining sum scores, tear film breakup time, and dry eye symptom score at 4 and 8 weeks after treatment initiation. However, there were no statistically significant differences in any of the indices between the 2 treatment groups. There were no significant adverse reactions observed during follow-up.\n\n\nCONCLUSIONS\nThe efficacies of SH and CMC were equivalent in treating mild to moderate dry eye. SH and CMC preservative-free artificial tear formulations appropriately manage dry eye sign and symptoms and show safety and efficacy when frequently administered in a unit dose formula.",
"title": ""
},
{
"docid": "1ce8e79e7fe4761858b3e83c49b80c80",
"text": "Taking the concept of thin clients to the limit, this paper proposes that desktop machines should just be simple, stateless I/O devices (display, keyboard, mouse, etc.) that access a shared pool of computational resources over a dedicated interconnection fabric --- much in the same way as a building's telephone services are accessed by a collection of handset devices. The stateless desktop design provides a useful mobility model in which users can transparently resume their work on any desktop console.This paper examines the fundamental premise in this system design that modern, off-the-shelf interconnection technology can support the quality-of-service required by today's graphical and multimedia applications. We devised a methodology for analyzing the interactive performance of modern systems, and we characterized the I/O properties of common, real-life applications (e.g. Netscape, streaming video, and Quake) executing in thin-client environments. We have conducted a series of experiments on the Sun Ray™ 1 implementation of this new system architecture, and our results indicate that it provides an effective means of delivering computational services to a workgroup.We have found that response times over a dedicated network are so low that interactive performance is indistinguishable from a dedicated workstation. A simple pixel encoding protocol requires only modest network resources (as little as a 1Mbps home connection) and is quite competitive with the X protocol. Tens of users running interactive applications can share a processor without any noticeable degradation, and many more can share the network. The simple protocol over a 100Mbps interconnection fabric can support streaming video and Quake at display rates and resolutions which provide a high-fidelity user experience.",
"title": ""
},
{
"docid": "be91ec9b4f017818f32af09cafbb2a9a",
"text": "Brainard et al. 2 INTRODUCTION Object recognition is difficult because there is no simple relation between an object's properties and the retinal image. Where the object is located, how it is oriented, and how it is illuminated also affect the image. Moreover, the relation is under-determined: multiple physical configurations can give rise to the same retinal image. In the case of object color, the spectral power distribution of the light reflected from an object depends not only on the object's intrinsic surface reflectance but also factors extrinsic to the object, such as the illumination. The relation between intrinsic reflectance, extrinsic illumination, and the color signal reflected to the eye is shown schematically in Figure 1. The light incident on a surface is characterized by its spectral power distribution E(λ). A small surface element reflects a fraction of the incident illuminant to the eye. The surface reflectance function S(λ) specifies this fraction as a function of wavelength. The spectrum of the light reaching the eye is called the color signal and is given by C(λ) = E(λ)S(λ). Information about C(λ) is encoded by three classes of cone photoreceptors, the L-, M-, and Scones. The top two patches rendered in Plate 1 illustrate the large effect that a typical change in natural illumination (see Wyszecki and Stiles, 1982) can have on the color signal. This effect might lead us to expect that the color appearance of objects should vary radically, depending as much on the current conditions of illumination as on the object's surface reflectance. Yet the very fact that we can sensibly refer to objects as having a color indicates otherwise. Somehow our visual system stabilizes the color appearance of objects against changes in illumination, a perceptual effect that is referred to as color constancy. Because the illumination is the most salient object-extrinsic factor that affects the color signal, it is natural that emphasis has been placed on understanding how changing the illumination affects object color appearance. In a typical color constancy experiment, the independent variable is the illumination and the dependent variable is a measure of color appearance experiments employ different stimulus configurations and psychophysical tasks, but taken as a whole they support the view that human vision exhibits a reasonable degree of color constancy. Recall that the top two patches of Plate 1 illustrate the limiting case where a single surface reflectance is seen under multiple illuminations. Although this …",
"title": ""
},
{
"docid": "072b842bb999a348ac2b6aa4a44f5ff2",
"text": "Eating disorders, such as anorexia nervosa are a major health concern affecting many young individuals. Given the extensive adoption of social media technologies in the anorexia affected demographic, we study behavioral characteristics of this population focusing on the social media Tumblr. Aligned with observations in prior literature, we find the presence of two prominent anorexia related communities on Tumblr -- pro-anorexia and pro-recovery. Empirical analyses on several thousand Tumblr posts show use of the site as a media-rich platform replete with triggering content for enacting anorexia as a lifestyle choice. Through use of common pro-anorexia tags, the pro-recovery community however attempts to \"permeate\" into the pro-anorexia community to educate them of the health risks of anorexia. Further, the communities exhibit distinctive affective, social, cognitive, and linguistic style markers. Compared with recover- ing anorexics, pro-anorexics express greater negative affect, higher cognitive impairment, and greater feelings of social isolation and self-harm. We also observe that these characteristics may be used in a predictive setting to detect anorexia content with 80% accuracy. Based on our findings, clinical implications of detecting anorexia related content on social media are discussed.",
"title": ""
},
{
"docid": "dcc9f54b92068b956c64307e800b66c4",
"text": "Abstract. We introduce an unsupervised feature learning approach that embeds 3D shape information into a single-view image representation. The main idea is a self-supervised training objective that, given only a single 2D image, requires all unseen views of the object to be predictable from learned features. We implement this idea as an encoderdecoder convolutional neural network. The network maps an input image of an unknown category and unknown viewpoint to a latent space, from which a deconvolutional decoder can best “lift” the image to its complete viewgrid showing the object from all viewing angles. Our class-agnostic training procedure encourages the representation to capture fundamental shape primitives and semantic regularities in a data-driven manner— without manual semantic labels. Our results on two widely-used shape datasets show 1) our approach successfully learns to perform “mental rotation” even for objects unseen during training, and 2) the learned latent space is a powerful representation for object recognition, outperforming several existing unsupervised feature learning methods.",
"title": ""
},
{
"docid": "86c0547368eb9003beed2ba7eefc75a4",
"text": "Electronic social media offers new opportunities for informal communication in written language, while at the same time, providing new datasets that allow researchers to document dialect variation from records of natural communication among millions of individuals. The unprecedented scale of this data enables the application of quantitative methods to automatically discover the lexical variables that distinguish the language of geographical areas such as cities. This can be paired with the segmentation of geographical space into dialect regions, within the context of a single joint statistical model — thus simultaneously identifying coherent dialect regions and the words that distinguish them. Finally, a diachronic analysis reveals rapid changes in the geographical distribution of these lexical features, suggesting that statistical analysis of social media may offer new insights on the diffusion of lexical change.",
"title": ""
},
{
"docid": "ec989c3afdfebd6fe50dcb2205ac3ea3",
"text": "Recently, result diversification has attracted a lot of attention as a means to improve the quality of results retrieved by user queries. In this article, we introduce a novel definition of diversity called DisC diversity. Given a tuning parameter r, which we call radius, we consider two items to be similar if their distance is smaller than or equal to r. A DisC diverse subset of a result contains items such that each item in the result is represented by a similar item in the diverse subset and the items in the diverse subset are dissimilar to each other. We show that locating a minimum DisC diverse subset is an NP-hard problem and provide algorithms for its approximation. We extend our definition to the multiple radii case, where each item is associated with a different radius based on its importance, relevance, or other factors. We also propose adapting DisC diverse subsets to a different degree of diversification by adjusting r, that is, increasing the radius (or zooming-out) and decreasing the radius (or zooming-in). We present efficient implementations of our algorithms based on the M-tree, a spatial index structure, and experimentally evaluate their performance.",
"title": ""
},
{
"docid": "015278103692164cde2dbfab823c4742",
"text": "Sparse code multiple access (SCMA) is a novel non-orthogonal multiple access scheme, in which multiple users access the same channel with user-specific sparse codewords. In this paper, we consider an uplink SCMA system employing channel coding, and develop an iterative multiuser receiver which fully utilizes the diversity gain and coding gain in the system. The simulation results demonstrate the superiority of the proposed iterative receiver over the non-iterative one, and the performance gain increases with the system load. It is also shown that SCMA can work well in highly overloaded scenario, and the link-level performance does not degrade even if the load is as high as 300%.",
"title": ""
},
{
"docid": "f20c0ace77f7b325d2ae4862d300d440",
"text": "http://dx.doi.org/10.1016/j.knosys.2014.02.003 0950-7051/ 2014 Elsevier B.V. All rights reserved. ⇑ Corresponding author. Address: Zhejiang University, Hangzhou 310027, China. Tel.: +86 571 87951453. E-mail addresses: [email protected] (X. Zheng), [email protected] (Z. Lin), [email protected] (X. Wang), [email protected] (K.-J. Lin), [email protected] (M. Song). 1 http://www.yelp.com/. Xiaolin Zheng a,b,⇑, Zhen Lin , Xiaowei Wang , Kwei-Jay Lin , Meina Song e",
"title": ""
},
{
"docid": "79ca2676dab5da0c9f39a0996fcdcfd8",
"text": "Estimation of human shape from images has numerous applications ranging from graphics to surveillance. A single image provides insufficient constraints (e.g. clothing), making human shape estimation more challenging. We propose a method to simultaneously estimate a person’s clothed and naked shapes from a single image of that person wearing clothing. The key component of our method is a deformable model of clothed human shape. We learn our deformable model, which spans variations in pose, body, and clothes, from a training dataset. These variations are derived by the non-rigid surface deformation, and encoded in various low-dimension parameters. Our deformable model can be used to produce clothed 3D meshes for different people in different poses, which neither appears in the training dataset. Afterward, given an input image, our deformable model is initialized with a few user-specified 2D joints and contours of the person. We optimize the parameters of the deformable model by pose fitting and body fitting in an iterative way. Then the clothed and naked 3D shapes of the person can be obtained simultaneously. We illustrate our method for texture mapping and animation. The experimental results on real images demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "1cd45a4f897ea6c473d00c4913440836",
"text": "What is the computational goal of auditory scene analysis? This is a key issue to address in the Marrian information-processing framework. It is also an important question for researchers in computational auditory scene analysis (CASA) because it bears directly on how a CASA system should be evaluated. In this chapter I discuss different objectives used in CASA. I suggest as a main CASA goal the use of the ideal time-frequency (T-F) binary mask whose value is one for a T-F unit where the target energy is greater than the interference energy and is zero otherwise. The notion of the ideal binary mask is motivated by the auditory masking phenomenon. Properties of the ideal binary mask are discussed, including their relationship to automatic speech recognition and human speech intelligibility. This CASA goal has led to algorithms that directly estimate the ideal binary mask in monaural and binaural conditions, and these algorithms have substantially advanced the state-of-the-art performance in speech separation.",
"title": ""
},
{
"docid": "047c36e2650b8abde75cccaeb0368c88",
"text": "Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-build 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture; one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 ± 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.",
"title": ""
},
{
"docid": "409d104fa3e992ac72c65b004beaa963",
"text": "The 19-item Body-Image Questionnaire, developed by our team and first published in this journal in 1987 by Bruchon-Schweitzer, was administered to 1,222 male and female French subjects. A principal component analysis of their responses yielded an axis we interpreted as a general Body Satisfaction dimension. The four-factor structure observed in 1987 was not replicated. Body Satisfaction was associated with sex, health, and with current and future emotional adjustment.",
"title": ""
},
{
"docid": "b141c5a1b7a92856b9dc3e3958a91579",
"text": "Field-programmable analog arrays (FPAAs) provide a method for rapidly prototyping analog systems. Currently available commercial and academic FPAAs are typically based on operational amplifiers (or other similar analog primitives) with only a few computational elements per chip. While their specific architectures vary, their small sizes and often restrictive interconnect designs leave current FPAAs limited in functionality and flexibility. For FPAAs to enter the realm of large-scale reconfigurable devices such as modern field-programmable gate arrays (FPGAs), new technologies must be explored to provide area-efficient accurately programmable analog circuitry that can be easily integrated into a larger digital/mixed-signal system. Recent advances in the area of floating-gate transistors have led to a core technology that exhibits many of these qualities, and current research promises a digitally controllable analog technology that can be directly mated to commercial FPGAs. By leveraging these advances, a new generation of FPAAs is introduced in this paper that will dramatically advance the current state of the art in terms of size, functionality, and flexibility. FPAAs have been fabricated using floating-gate transistors as the sole programmable element, and the results of characterization and system-level experiments on the most recent FPAA are shown.",
"title": ""
},
{
"docid": "f107ba1eef32a7d1c7b4c6f56470f05e",
"text": "Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce big amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publically available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on high-performance computing systems via coupling of workflow management systems. Integration of project and data management as well as workflow resources in one place present clear advantages over existing solutions.",
"title": ""
},
{
"docid": "59d6765507415b0365f3193843d01459",
"text": "Password typing is the most widely used identity verification method in World Wide Web based Electronic Commerce. Due to its simplicity, however, it is vulnerable to imposter attacks. Keystroke dynamics and password checking can be combined to result in a more secure verification system. We propose an autoassociator neural network that is trained with the timing vectors of the owner's keystroke dynamics and then used to discriminate between the owner and an imposter. An imposter typing the correct password can be detected with very high accuracy using the proposed approach. This approach can be effectively implemented by a Java applet and used in the World Wide Web.",
"title": ""
},
{
"docid": "e4a1f577cb232f6f76fba149a69db58f",
"text": "During software development, the activities of requirements analysis, functional specification, and architectural design all require a team of developers to converge on a common vision of what they are developing. There have been remarkably few studies of conceptual design during real projects. In this paper, we describe a detailed field study of a large industrial software project. We observed the development team's conceptual design activities for three months with follow-up observations and discussions over the following eight months. In this paper, we emphasize the organization of the project and how patterns of collaboration affected the team's convergence on a common vision. Three observations stand out: First, convergence on a common vision was not only painfully slow but was punctuated by several reorientations of direction; second, the design process seemed to be inherently forgetful, involving repeated resurfacing of previously discussed issues; finally, a conflict of values persisted between team members responsible for system development and those responsible for overseeing the development process. These findings have clear implications for collaborative support tools and process interventions.",
"title": ""
}
] |
scidocsrr
|
25b556bc9bc825572a48a2432d58b92a
|
Comparative evaluation of latency reducing and tolerating techniques
|
[
{
"docid": "100c29e6250afec55b8374806b794cbe",
"text": "Using simulation, we examine the efficiency of several distributed, hardware-based solutions to the cache coherence problem in shared-bus multiprocessors. For each of the approaches, the associated protocol is outlined. The simulation model is described, and results from that model are presented. The magnitude of the potential performance difference between the various approaches indicates that the choice of coherence solution is very important in the design of an efficient shared-bus multiprocessor, since it may limit the number of processors in the system.",
"title": ""
}
] |
[
{
"docid": "e6a5ff945613e3b4db9df925d4ff7d28",
"text": "Fear recognition, which aims at predicting whether a movie segment can induce fear or not, is a promising area in movie emotion recognition. Research in this area, however, has reached a bottleneck. Difficulties may partly result from the imbalanced database. In this paper, we propose an imbalance learning-based framework for movie fear recognition. A data rebalance module is adopted before classification. Several sampling methods, including the proposed softsampling and hardsampling which combine the merits of both undersampling and oversampling, are explored in this module. Experiments are conducted on the MediaEval 2017 Emotional Impact of Movies Task. Compared with the current state-of-the-art, we achieve an improvement of 8.94% on F1, proving the effectiveness of proposed framework.",
"title": ""
},
{
"docid": "e8ba260c18576f7f8b9f90afed0348e5",
"text": "This paper is aimed at recognition of offline handwritten characters in a given scanned text document with the help of neural networks. Image preprocessing, segmentation and feature extraction are various phases involved in character recognition. The first step is image acquisition followed by noise filtering, smoothing and image normalization of scanned image. Segmentation decomposes image into sub images and feature extraction extracts features from input image. Neural Network is created and trained to classify and recognize handwritten characters.",
"title": ""
},
{
"docid": "cf1332882cb6f68549d3c64029db3e9a",
"text": "In this paper, we look at the historical place that chickens have held in media depictions and as entertainment, analyse several types of representations of chickens in video games, and draw out reflections on society in the light of these representations. We also look at real-life, modern historical, and archaeological evidence of chicken treatment and the evolution of social attitudes with regard to animal rights, and deconstruct the depiction of chickens in video games in this light.",
"title": ""
},
{
"docid": "40ad6bf9f233b58e13cf6a709daba2ca",
"text": "While syntactic dependency annotations concentrate on the surface or functional structure of a sentence, semantic dependency annotations aim to capture betweenword relationships that are more closely related to the meaning of a sentence, using graph-structured representations. We extend the LSTM-based syntactic parser of Dozat and Manning (2017) to train on and generate these graph structures. The resulting system on its own achieves stateof-the-art performance, beating the previous, substantially more complex stateof-the-art system by 1.9% labeled F1. Adding linguistically richer input representations pushes the margin even higher, allowing us to beat it by 2.6% labeled F1.",
"title": ""
},
{
"docid": "c8948a93e138ca0ac8cae3247dc9c81a",
"text": "Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we understand the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to be applied to complex coefficients spread in three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms.",
"title": ""
},
{
"docid": "57b7c2ea44c397fc1e31beca533468c2",
"text": "A power electronic transformer converter made of cascaded multiple cells with a medium frequency link can replace the bulky traditional transformer in various applications with several advantages, including reduced size and weight, and energy savings. This paper describes the design of a 1.2MW power electronic transformer (PET) demonstrator for traction applications. Several design challenges, i.e. the selection of the IGBT modules, the design of the multiple-functional medium frequency transformer, and the mechanical arrangement to achieve the high voltage insulation between the grid voltage and ground potential, have been discussed. The experimental results, obtained from the developed PET demonstrator, are presented.",
"title": ""
},
{
"docid": "99b6873a9f3fd01ecfd4ba141df21f12",
"text": "This paper shows how a rational Bitcoin miner should select transactions from his node’s mempool, when creating a new block, in order to maximize his profit in the absence of a block size limit. To show this, the paper introduces the block space supply curve and the mempool demand curve. The former describes the cost for a miner to supply block space by accounting for orphaning risk. The latter represents the fees offered by the transactions in mempool, and is expressed versus the minimum block size required to claim a given portion of the fees. The paper explains how the supply and demand curves from classical economics are related to the derivatives of these two curves, and proves that producing the quantity of block space indicated by their intersection point maximizes the miner’s profit. The paper then shows that an unhealthy fee market—where miners are incentivized to produce arbitrarily large blocks—cannot exist since it requires communicating information at an arbitrarily fast rate. The paper concludes by considering the conditions under which a rational miner would produce big, small or empty blocks, and by estimating the cost of a spam attack.",
"title": ""
},
{
"docid": "705b72cc6b535f1745d75fb945e5925e",
"text": "An increasing number of military systems are being developed using Service Oriented Architecture (SOA). Some of the features that make SOA appealing, like loose coupling, dynamism and composition-oriented system construction, make securing service-based systems more complicated. We have been developing Advanced Protected Services (APS) technologies for improving the resilience and survival of SOA services under cyber attack. These technologies introduce a layer to absorb, contain, and adapt to cyber attacks prior to the attacks reaching critical services. This paper describes an evaluation of these advanced protection technologies using a set of cooperative red team exercises. In these exercises, an independent red team launched attacks on a protected enclave in order to evaluate the efficacy and efficiency of the prototype protection technologies. The red team was provided full knowledge of the system under test and its protections, was given escalating levels of access to the system, and operated within agreed upon rules of engagement designed to scope the testing on useful evaluation results. We also describe the evaluation results and the use of cooperative red teaming as an effective means of evaluating cyber security.",
"title": ""
},
{
"docid": "d505a0fe73296fe19f0f683773c9520d",
"text": "Abstractive text summarization is a complex task whose goal is to generate a concise version of a text without necessarily reusing the sentences from the original source, but still preserving the meaning and the key contents. In this position paper we address this issue by modeling the problem as a sequence to sequence learning and exploiting Recurrent Neural Networks (RNN). Moreover, we discuss the idea of combining RNNs and probabilistic models in a unified way in order to incorporate prior knowledge, such as linguistic features. We believe that this approach can obtain better performance than the state-of-the-art models for generating well-formed summaries.",
"title": ""
},
{
"docid": "ba0e3d6cc397adb6cc9fa901aff1ff22",
"text": "Though deep learning has pushed the boundaries of classification forward, in recent years hints of the limits of standard classification have begun to emerge. Problems such as fooling, adding new classes over time, and the need to retrain learning models only for small changes to the original problem all point to a potential shortcoming in the classic classification regime, where a comprehensive a priori knowledge of the possible classes or concepts is critical. Without such knowledge, classifiers misjudge the limits of their knowledge and overgeneralization therefore becomes a serious obstacle to consistent performance. In response to these challenges, this paper extends the classic regime by reframing classification instead with the assumption that concepts present in the training set are only a sample of the hypothetical final set of concepts. To bring learning models into this new paradigm, a novel elaboration of standard architectures called the competitive overcomplete output layer (COOL) neural network is introduced. Experiments demonstrate the effectiveness of COOL by applying it to fooling, separable concept learning, one-class neural networks, and standard classification benchmarks. The results suggest that, unlike conventional classifiers, the amount of generalization in COOL networks can be tuned to match the problem.",
"title": ""
},
{
"docid": "1b9e7f9abf5115cecbb337f962d679bf",
"text": "Literature on the use of machine learning (ML) algorithms for classifying IP traffic has relied on full-flows or the first few packets of flows. In contrast, many real-world scenarios require a classification decision well before a flow has finished even if the flow's beginning is lost. This implies classification must be achieved using statistics derived from the most recent N packets taken at any arbitrary point in a flow's lifetime. We propose training the classifier on a combination of short sub-flows (extracted from full-flow examples of the target application's traffic). We demonstrate this optimisation using the naive Bayes ML algorithm, and show that our approach results in excellent performance even when classification is initiated mid-way through a flow with windows as small as 25 packets long. We suggest future use of unsupervised ML algorithms to identify optimal sub-flows for training",
"title": ""
},
{
"docid": "23405156faf3cf650544887a85cad226",
"text": "A Wilkinson power divider operating not only at one frequency f/sub 0/, but also at its first harmonic 2f/sub 0/ is presented. This power divider consists of two branches of impedance transformer, each of which consists of two sections of 1/6-wave transmission-line with different characteristic impedance. The two outputs are connected through a resistor, an inductor, and a capacitor. All the features of a conventional Wilkinson power divider, such as an equal power split, impedance matching at all ports, and a good isolation between the two output ports, can be fulfilled at f/sub 0/ and 2f/sub 0/, simultaneously.",
"title": ""
},
{
"docid": "acbd639a034cf73f021be3ed78f849bb",
"text": "The paper proposes the integration of new cognitive capabilities within the well known OpenBTS architecture in order to make the system able to react in a smart way to the changes of the radio channel. In particular, the proposed spectrum sensing strategy allows the OpenBTS system to be aware of other active transmissions by forcing to choose a new radio channel, within the GSM frequency band, when a licensed primary user has to transmit on a busy channel. The implemented scheme, representing a solid step forward in the cognitive direction, has been validated throughout a detailed testbed pointing out strengths and limitations in realistic communication environments.",
"title": ""
},
{
"docid": "e58b15d705923a519fe52688c951ee99",
"text": "Automatic glasses detection on real face images is a challenging problem due to different appearance variations. Nevertheless, glasses detection on face images has not been thoroughly investigated. In this paper, an innovative algorithm for automatic glasses detection based on Robust Local Binary Pattern and robust alignment is proposed. Firstly, images are preprocessed and normalized in order to deal with scale and rotation. Secondly, eye glasses region is detected considering that the nosepiece of the glasses is usually placed at the same level as the center of the eyes in both height and width. Thirdly, Robust Local Binary Pattern is built to describe the eyes region, and finally, support vector machine is used to classify the LBP features. This algorithm can be applied as the first step of a glasses removal algorithm due to its robustness and speed. The proposed algorithm has been tested over the Labeled Faces in the Wild database showing a 98.65 % recognition rate. Influences of the resolution, the alignment of the normalized images and the number of divisions in the LBP operator are also investigated.",
"title": ""
},
{
"docid": "d757f4c2294092f0735a3d822c2b870c",
"text": "This paper is concerned with the Multi-Objective Next Release Problem (MONRP), a problem in search-based requirements engineering. Previous work has considered only single objective formulations. In the multi-objective formulation, there are at least two (possibly conflicting) objectives that the software engineer wishes to optimize. It is argued that the multi-objective formulation is more realistic, since requirements engineering is characterised by the presence of many complex and conflicting demands, for which the software engineer must find a suitable balance. The paper presents the results of an empirical study into the suitability of weighted and Pareto optimal genetic algorithms, together with the NSGA-II algorithm, presenting evidence to support the claim that NSGA-II is well suited to the MONRP. The paper also provides benchmark data to indicate the size above which the MONRP becomes non--trivial.",
"title": ""
},
{
"docid": "c3473e7fe7b46628d384cbbe10bfe74c",
"text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.",
"title": ""
},
{
"docid": "29a78035b59010fe091eee1a77bc9c3b",
"text": "AIM\nThis paper is a report of a study to develop and test the psychometric properties of the Perceived Maternal Parenting Self-Efficacy tool.\n\n\nBACKGROUND\nMothers' perceptions of their ability to parent (maternal parenting self-efficacy) is a critical mechanism guiding their interactions with their preterm newborns. A robust measure is needed which can measure mothers' perceptions of their ability to understand and care for their hospitalized preterm neonates as well as being sensitive to the various levels and tasks in parenting.\n\n\nMETHODS\nUsing a mixed sampling methodology (convenience or randomized cluster control trial) 165 relatively healthy and hospitalized mother-preterm infant dyads were recruited in 2003-2005 from two intensive care neonatal units in the United Kingdom (UK). Mothers were recruited within the first 28 days after giving birth to a preterm baby. The Perceived Maternal Parenting Self-Efficacy tool, which is made up of 20 items representing four theorized subscales, was tested for reliability and validity.\n\n\nRESULTS\nInternal consistency reliability of the Perceived Maternal Parenting Self-Efficacy tool was 0.91, external/test-retest reliability was 0.96, P<0.01. Divergent validity using the Maternal Self-Report Inventory was r(s)=0.4, P<0.05 and using the Maternal Postnatal Attachment Scale was r(s)=0.31, P<0.01.\n\n\nCONCLUSION\nThe Perceived Maternal Parenting Self-Efficacy tool is a psychometrically robust, reliable and valid measure of parenting self-efficacy in mothers of relatively healthy hospitalized preterm neonates. Although application outside the UK will require further cross-cultural validation, the tool has the potential to provide healthcare professionals with a reliable method of identifying mothers of preterm hospitalized babies who are in need of further support.",
"title": ""
},
{
"docid": "1c5591bec1b8bfab63309aa2eb488e83",
"text": "When performing visualization and classification, people often confront the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques. However, when Isomap is applied to real-world data, it shows some limitations, such as being sensitive to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction. Such a kind of procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. The dissimilarity has several good properties which help to discover the true neighborhood of the data and, thus, makes S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso. The results show that S-Isomap performs the best. In the classification experiments, S-Isomap is used as a preprocess of classification and compared with Isomap, WeightedIso, as well as some other well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap excels compared to Isomap and WeightedIso in classification, and it is highly competitive with those well-known classification methods.",
"title": ""
},
{
"docid": "1ab0308539bc6508b924316b39a963ca",
"text": "Daily wafer fabrication in semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. The metrology operations required a lot of metrology tools, which increase FAB's investment. Also, these metrology operations will increase cycle time of wafer process. Metrology operations do not bring any value added to wafer but only quality assurance. This article provides a new method denoted virtual metrology (VM) to utilize sensor data collected from 300 mm FAB's tools to forecast quality data of wafers and tools. This proposed method designs key steps to establish a VM control model based on neural networks and to develop and deploy applications following SEMI EDA (equipment data acquisition) standards.",
"title": ""
},
{
"docid": "dedf96c3e23dc7fd873c5fe27620a959",
"text": "This paper presents a monocular algorithm for front and rear vehicle detection, developed as part of the FP7 V-Charge project's perception system. The system is made of an AdaBoost classifier with Haar Features Decision Stump. It processes several virtual perspective images, obtained by un-warping 4 monocular fish-eye cameras mounted all-around an autonomous electric car. The target scenario is the automated valet parking, but the presented technique fits well in any general urban and highway environment. A great attention has been given to optimize the computational performance. The accuracy in the detection and a low computation costs are provided by combining a multiscale detection scheme with a Soft-Cascade classifier design. The algorithm runs in real time on the project's hardware platform. The system has been tested on a validation set, compared with several AdaBoost schemes, and the corresponding results and statistics are also reported.",
"title": ""
}
] |
scidocsrr
|
dffc068fc44ed963f45587de548e87aa
|
(Cross-)Browser Fingerprinting via OS and Hardware Level Features
|
[
{
"docid": "2b23a37f6047128e6c8a577e2f4343be",
"text": "Worldwide, the number of people and the time spent browsing the web keeps increasing. Accordingly, the technologies to enrich the user experience are evolving at an amazing pace. Many of these evolutions provide for a more interactive web (e.g., boom of JavaScript libraries, weekly innovations in HTML5), a more available web (e.g., explosion of mobile devices), a more secure web (e.g., Flash is disappearing, NPAPI plugins are being deprecated), and a more private web (e.g., increased legislation against cookies, huge success of extensions such as Ghostery and AdBlock). Nevertheless, modern browser technologies, which provide the beauty and power of the web, also provide a darker side, a rich ecosystem of exploitable data that can be used to build unique browser fingerprints. Our work explores the validity of browser fingerprinting in today's environment. Over the past year, we have collected 118,934 fingerprints composed of 17 attributes gathered thanks to the most recent web technologies. We show that innovations in HTML5 provide access to highly discriminating attributes, notably with the use of the Canvas API which relies on multiple layers of the user's system. In addition, we show that browser fingerprinting is as effective on mobile devices as it is on desktops and laptops, albeit for radically different reasons due to their more constrained hardware and software environments. We also evaluate how browser fingerprinting could stop being a threat to user privacy if some technological evolutions continue (e.g., disappearance of plugins) or are embraced by browser vendors (e.g., standard HTTP headers).",
"title": ""
}
] |
[
{
"docid": "16ff5b993508f962550b6de495c9d651",
"text": "Finding similar procedures in stripped binaries has various use cases in the domains of cyber security and intellectual property. Previous works have attended this problem and came up with approaches that either trade throughput for accuracy or address a more relaxed problem.\n In this paper, we present a cross-compiler-and-architecture approach for detecting similarity between binary procedures, which achieves both high accuracy and peerless throughput. For this purpose, we employ machine learning alongside similarity by composition: we decompose the code into smaller comparable fragments, transform these fragments to vectors, and build machine learning-based predictors for detecting similarity between vectors that originate from similar procedures.\n We implement our approach in a tool called Zeek and evaluate it by searching similarities in open source projects that we crawl from the world-wide-web. Our results show that we perform 250X faster than state-of-the-art tools without harming accuracy.",
"title": ""
},
{
"docid": "edab0c2cc3f04bd56fa76d8e6b339525",
"text": "In this letter, a compact ultrathin quad-band polarization-insensitive metamaterial absorber with a wide angle of absorption is proposed. The unit cell of the proposed structure comprises conductive cross dipoles loaded with split-ring resonators. The proposed absorber exhibits simulated peak absorption of <inline-formula> <tex-math notation=\"LaTeX\">$\\text{96.15}\\% $</tex-math></inline-formula>, <inline-formula><tex-math notation=\"LaTeX\"> $\\text{99.17}\\% $</tex-math></inline-formula>, <inline-formula><tex-math notation=\"LaTeX\">$\\text{99.75}\\%,$</tex-math> </inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$\\text{98.75}\\% $</tex-math></inline-formula> at 3.68, 8.58, 10.17, and 14.93 GHz, respectively. The proposed multiband absorber is ultrathin and compact in configuration with a thickness of <inline-formula><tex-math notation=\"LaTeX\">$\\text{0.0122}\\,\\lambda $</tex-math> </inline-formula> and a unit cell size of 0.122 <inline-formula><tex-math notation=\"LaTeX\">$\\lambda$</tex-math> </inline-formula> (corresponding to the lowest frequency). Moreover, by understanding the interaction of the unit cell with incident electromagnetic radiation, a conceptual equivalent circuit model is developed, which is used to understand the influence of coupling on the quad band of absorption. The simulated response of the proposed design demonstrates that it has quad-band polarization-insensitive absorption characteristics. In addition, the proposed absorber shows high absorption for an oblique incidence angle up to <inline-formula><tex-math notation=\"LaTeX\">$6{0^ \\circ }$</tex-math></inline-formula> for both transverse-electric and transverse-magnetic polarizations.",
"title": ""
},
{
"docid": "f9580093dcf61a9d6905265cfb3a0d32",
"text": "The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytic to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance.",
"title": ""
},
{
"docid": "77da7651b0e924d363c859d926e8c9da",
"text": "Manual feedback in basic robot-assisted minimally invasive surgery (RMIS) training can consume a significant amount of time from expert surgeons’ schedule and is prone to subjectivity. In this paper, we explore the usage of different holistic features for automated skill assessment using only robot kinematic data and propose a weighted feature fusion technique for improving score prediction performance. Moreover, we also propose a method for generating ‘task highlights’ which can give surgeons a more directed feedback regarding which segments had the most effect on the final skill score. We perform our experiments on the publicly available JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and evaluate four different types of holistic features from robot kinematic data—sequential motion texture (SMT), discrete Fourier transform (DFT), discrete cosine transform (DCT) and approximate entropy (ApEn). The features are then used for skill classification and exact skill score prediction. Along with using these features individually, we also evaluate the performance using our proposed weighted combination technique. The task highlights are produced using DCT features. Our results demonstrate that these holistic features outperform all previous Hidden Markov Model (HMM)-based state-of-the-art methods for skill classification on the JIGSAWS dataset. Also, our proposed feature fusion strategy significantly improves performance for skill score predictions achieving up to 0.61 average spearman correlation coefficient. Moreover, we provide an analysis on how the proposed task highlights can relate to different surgical gestures within a task. Holistic features capturing global information from robot kinematic data can successfully be used for evaluating surgeon skill in basic surgical tasks on the da Vinci robot. Using the framework presented can potentially allow for real-time score feedback in RMIS training and help surgical trainees have more focused training.",
"title": ""
},
{
"docid": "f27cf894faef9a475b011f44fbf57777",
"text": "Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNet’s feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model on unaugmented datasets.",
"title": ""
},
{
"docid": "807e008d5c7339706f8cfe71e9ced7ba",
"text": "Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations. The objectives of the research were: to investigate the extent of the usage of customerand market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers’ attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, the preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future",
"title": ""
},
{
"docid": "f71b1df36ee89cdb30a1dd29afc532ea",
"text": "Finite state machines are a standard tool to model event-based control logic, and dynamic programming is a staple of optimal decision-making. We combine these approaches in the context of radar resource management for Naval surface warfare. There is a friendly (Blue) force in the open sea, equipped with one multi-function radar and multiple ships. The enemy (Red) force consists of missiles that target the Blue force's radar. The mission of the Blue force is to foil the enemy's threat by careful allocation of radar resources. Dynamically composed finite state machines are used to formalize the model of the battle space and dynamic programming is applied to our dynamic state machine model to generate an optimal policy. To achieve this in near-real-time and a changing environment, we use approximate dynamic programming methods. Example scenario illustrating the model and simulation results are presented.",
"title": ""
},
{
"docid": "6fb23797eebcdcacf1805ef51af7557b",
"text": "Global Positioning System (GPS) is a satellite based navigation system developed and declared operational by the U.S department of defense in the year 1995. It provides position, velocity and time everywhere, on or near the surface of the earth. To achieve nation's security different countries are developing regional navigation satellite systems. In this context India also has developed its regional navigation satellite system called as Indian Regional Navigation Satellite System (IRNSS) with a constellation of seven satellites. The IRNSS is expected to provide positional accuracy of 10 m over Indian landmass and 20 m, over Indian Ocean. IRNSS is featured with highly accurate position, velocity and timing information for authorized users. Studying the satellite coverage area is very essential because it is an important parameter for the analysis of user positioning. In this paper, an algorithm to estimate coverage area of GPS and IRNSS is explained. Using this algorithm, earth's surface coverage of IRNSS 5 and 7 satellite vehicles (SVs) are investigated. The best and worst cases of IRNSS 5 and 7 SV's constellations are analyzed. It is observed that in the worst case the coverage is reduced to a large extent. Subsequently, IRNSS is augmented with GPS and the earth's coverage is estimated. Comparative analysis of IRNSS, GPS and IRNSS augmented with GPS is also performed in terms of surface coverage. The augmentation has caused improvement in the specified performance parameter.",
"title": ""
},
{
"docid": "16fa5c87b0877188b3b225458012df0f",
"text": "Segmentation is one of the essential tasks in image processing. Thresholding is one of the simplest techniques for performing image segmentation. Multilevel thresholding is a simple and effective technique. The primary objective of bi-level or multilevel thresholding for image segmentation is to determine a best thresholding value. To achieve multilevel thresholding various techniques has been proposed. A study of some nature inspired metaheuristic algorithms for multilevel thresholding for image segmentation is conducted. Here, we study about Particle swarm optimization (PSO) algorithm, artificial bee colony optimization (ABC), Ant colony optimization (ACO) algorithm and Cuckoo search (CS) algorithm. Keywords—Ant colony optimization, Artificial bee colony optimization, Cuckoo search algorithm, Image segmentation, Multilevel thresholding, Particle swarm optimization.",
"title": ""
},
{
"docid": "00dc409a1dea3d6fe773b0262afe2392",
"text": "In this paper, we present a study of a novel problem, i.e. topic-based citation recommendation, which involves recommending papers to be referred to. Traditionally, this problem is usually treated as an engineering issue and dealt with using heuristics. This paper gives a formalization of topic-based citation recommendation and proposes a discriminative approach to this problem. Specifically, it proposes a two-layer Restricted Boltzmann Machine model, called RBMCS, which can discover topic distributions of paper content and citation relationship simultaneously. Experimental results demonstrate that RBM-CS can significantly outperform baseline methods for citation recommendation.",
"title": ""
},
{
"docid": "eeff4d71a0af418828d5783a041b466f",
"text": "In recent years, advances in hardware technology have facilitated ne w ways of collecting data continuously. In many applications such as network monitorin g, the volume of such data is so large that it may be impossible to store the data on disk. Furthermore, even when the data can be stored, the volume of th incoming data may be so large that it may be impossible to process any partic ular record more than once. Therefore, many data mining and database op erati ns such as classification, clustering, frequent pattern mining and indexing b ecome significantly more challenging in this context. In many cases, the data patterns may evolve continuously, as a result of which it is necessary to design the mining algorithms effectively in order to accou nt f r changes in underlying structure of the data stream. This makes the solution s of the underlying problems even more difficult from an algorithmic and computa tion l point of view. This book contains a number of chapters which are caref ully chosen in order to discuss the broad research issues in data streams. The purp ose of this chapter is to provide an overview of the organization of the stream proces sing and mining techniques which are covered in this book.",
"title": ""
},
{
"docid": "b7c0864be28d70d49ae4a28fb7d78f04",
"text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.",
"title": ""
},
{
"docid": "8a905d0abdc1a6a8daeb44137fa980ee",
"text": "In the mobile game industry, Free-to-Play games are dominantly released, and therefore player retention and purchases have become important issues. In this paper, we propose a game player model for predicting when players will leave a game. Firstly, we define player churn in the game and extract features that contain the properties of the player churn from the player logs. And then we tackle the problem of imbalanced datasets. Finally, we exploit classification algorithms from machine learning and evaluate the performance of the proposed prediction model using cross-validation. Experimental results show that the proposed model has high accuracy enough to predict churn for real-world application.",
"title": ""
},
{
"docid": "0e68fa08edfc2dcb52585b13d0117bf1",
"text": "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE’s code is available on GitHub at https://github.com/Mehran-k/SimplE.",
"title": ""
},
{
"docid": "bb8cf42ab1b066e4647ce53a6666af35",
"text": "This paper presents a high energy efficient, parasitic free and low complex readout integrated circuit for capacitive sensors. A very low power consumption is achieved by replacing a power hungry operation amplifier by a subthreshold inverter instead in a switched capacitor amplifier(SC-amp) and reducing the supply voltage of all digital circuits in the system. A fast respond finite gain compensation method is utilized to reduce the gain error of the SC-amp and increase the energy efficiency of the readout circuit. A two-step auto calibration is applied to eliminate the offset from nonideal effects of the SC-amp and comparator delay. The readout system is implemented and simulated in TSMC 90 nm CMOS technology. With supply voltage of 1 V, simulation shows that the circuit can achieve 10.4 bit resolution while consuming only 3 μW during 640 μs conversion time. The digital output code has little sensitivity to temperature variation.",
"title": ""
},
{
"docid": "bc06e1fe5064a2b68d6b181b2953b4e2",
"text": "Now, we come to offer you the right catalogues of book to open. hackers heroes of the computer revolution is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
},
{
"docid": "d34759a882df6bc482b64530999bcda3",
"text": "The Static Single Assignment (SSA) form is a program representation used in many optimizing compilers. The key step in converting a program to SSA form is called φ-placement. Many algorithms for φ-placement have been proposed in the literature, but the relationships between these algorithms are not well understood.In this article, we propose a framework within which we systematically derive (i) properties of the SSA form and (ii) φ-placement algorithms. This framework is based on a new relation called merge which captures succinctly the structure of a program's control flow graph that is relevant to its SSA form. The φ-placement algorithms we derive include most of the ones described in the literature, as well as several new ones. We also evaluate experimentally the performance of some of these algorithms on the SPEC92 benchmarks.Some of the algorithms described here are optimal for a single variable. However, their repeated application is not necessarily optimal for multiple variables. We conclude the article by describing such an optimal algorithm, based on the transitive reduction of the merge relation, for multi-variable φ-placement in structured programs. The problem for general programs remains open.",
"title": ""
},
{
"docid": "12b1f774967739ea12a1ddcfe43f2faf",
"text": "Herbal drug authentication is an important task in traditional medicine; however, it is challenged by the limitations of traditional authentication methods and the lack of trained experts. DNA barcoding is conspicuous in almost all areas of the biological sciences and has already been added to the British pharmacopeia and Chinese pharmacopeia for routine herbal drug authentication. However, DNA barcoding for the Korean pharmacopeia still requires significant improvements. Here, we present a DNA barcode reference library for herbal drugs in the Korean pharmacopeia and developed a species identification engine named KP-IDE to facilitate the adoption of this DNA reference library for the herbal drug authentication. Using taxonomy records, specimen records, sequence records, and reference records, KP-IDE can identify an unknown specimen. Currently, there are 6,777 taxonomy records, 1,054 specimen records, 30,744 sequence records (ITS2 and psbA-trnH) and 285 reference records. Moreover, 27 herbal drug materials were collected from the Seoul Yangnyeongsi herbal medicine market to give an example for real herbal drugs authentications. Our study demonstrates the prospects of the DNA barcode reference library for the Korean pharmacopeia and provides future directions for the use of DNA barcoding for authenticating herbal drugs listed in other modern pharmacopeias.",
"title": ""
},
{
"docid": "99c99f927c3c416ba8c01c15c0c2f28c",
"text": "Online Social Rating Networks (SRNs) such as Epinions and Flixter, allow users to form several implicit social networks, through their daily interactions like co-commenting on the same products, or similarly co-rating products. The majority of earlier work in Rating Prediction and Recommendation of products (e.g. Collaborative Filtering) mainly takes into account ratings of users on products. However, in SRNs users can also built their explicit social network by adding each other as friends. In this paper, we propose Social-Union, a method which combines similarity matrices derived from heterogeneous (unipartite and bipartite) explicit or implicit SRNs. Moreover, we propose an effective weighting strategy of SRNs influence based on their structured density. We also generalize our model for combining multiple social networks. We perform an extensive experimental comparison of the proposed method against existing rating prediction and product recommendation algorithms, using synthetic and two real data sets (Epinions and Flixter). Our experimental results show that our Social-Union algorithm is more effective in predicting rating and recommending products in SRNs.",
"title": ""
},
{
"docid": "5d527ad4493860a8d96283a5c58c3979",
"text": "Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. More than four decades after it was first proposed, the seminal error reduction algorithm of Gerchberg and Saxton and Fienup is still the popular choice for solving many variants of this problem. The algorithm is based on alternating minimization; i.e., it alternates between estimating the missing phase information, and the candidate solution. Despite its wide usage in practice, no global convergence guarantees for this algorithm are known. In this paper, we show that a (resampling) variant of this approach converges geometrically to the solution of one such problem-finding a vector x from y, A, where y = |ATx| and |z| denotes a vector of element-wise magnitudes of z-under the assumption that A is Gaussian. Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on “lifting” to a convex matrix problem) in sample complexity and robustness to noise. However, it is much more efficient and can scale to large problems. Analytically, for a resampling version of alternating minimization, we show geometric convergence to the solution, and sample complexity that is off by log factors from obvious lower bounds. We also establish close to optimal scaling for the case when the unknown vector is sparse. Our work represents the first theoretical guarantee for alternating minimization (albeit with resampling) for any variant of phase retrieval problems in the non-convex setting.",
"title": ""
}
] |
scidocsrr
|
d618728fb63447b2dd7186bf1a075975
|
A plant-based diet for the prevention and treatment of type 2 diabetes
|
[
{
"docid": "cf6f0a6d53c3b615f27a20907e6eb93f",
"text": "OBJECTIVE\nWe sought to investigate whether a low-fat vegan diet improves glycemic control and cardiovascular risk factors in individuals with type 2 diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nIndividuals with type 2 diabetes (n = 99) were randomly assigned to a low-fat vegan diet (n = 49) or a diet following the American Diabetes Association (ADA) guidelines (n = 50). Participants were evaluated at baseline and 22 weeks.\n\n\nRESULTS\nForty-three percent (21 of 49) of the vegan group and 26% (13 of 50) of the ADA group participants reduced diabetes medications. Including all participants, HbA(1c) (A1C) decreased 0.96 percentage points in the vegan group and 0.56 points in the ADA group (P = 0.089). Excluding those who changed medications, A1C fell 1.23 points in the vegan group compared with 0.38 points in the ADA group (P = 0.01). Body weight decreased 6.5 kg in the vegan group and 3.1 kg in the ADA group (P < 0.001). Body weight change correlated with A1C change (r = 0.51, n = 57, P < 0.0001). Among those who did not change lipid-lowering medications, LDL cholesterol fell 21.2% in the vegan group and 10.7% in the ADA group (P = 0.02). After adjustment for baseline values, urinary albumin reductions were greater in the vegan group (15.9 mg/24 h) than in the ADA group (10.9 mg/24 h) (P = 0.013).\n\n\nCONCLUSIONS\nBoth a low-fat vegan diet and a diet based on ADA guidelines improved glycemic and lipid control in type 2 diabetic patients. These improvements were greater with a low-fat vegan diet.",
"title": ""
}
] |
[
{
"docid": "450808fb3512ffd3bac692523e785c73",
"text": "This paper focuses on approaches to building a text automatic summarization model for news articles, generating a one-sentence summarization that mimics the style of a news title given some paragraphs. We managed to build and train two relatively complex deep learning models that outperformed our baseline model, which is a simple feed forward neural network. We explored Recurrent Neural Network models with encoder-decoder using LSTM and GRU cells, and with/without attention. We obtained some results that we then measured by calculating their respective ROUGE scores with respect to the actual references. For future work, we believe abstractive method of text summarization is a power way of summarizing texts, and we will continue with this approach. We think that the deficiencies currently embedded in our language model can be improved by better fine-tuning the model, more deep-learning method exploration, as well as larger training dataset.",
"title": ""
},
{
"docid": "34457120b309211281ab3459f6da12b6",
"text": "Recent technological advances in wireless communications offer new opportunities and challenges for wireless ad hoc networking. In the absence of the fixed infrastructure that characterizes traditional wireless networks, control and management of wireless ad hoc networks must be distributed across the nodes, thus requiring carefully designed medium access control (MAC) layer protocols. In this article we survey, classify, and analyze 34 MAC layer protocols for wireless ad hoc networks, ranging from industry standards to research proposals. Through this analysis, six key features emerge: (1) channel separation and access; (2) topology; (3) power; (4) transmission initiation; (5) traffic load and scalability; and (6) range. These features allow us to characterize and classify the protocols, to analyze the tradeoffs produced by different design decisions, and to assess the suitability of various design combinations for ad hoc network applications. The classification and the tradeoff analysis yield design guidelines for future wireless ad hoc network MAC layer protocols.",
"title": ""
},
{
"docid": "38bc206d9caac1d2dbe767d7e39b7aa0",
"text": "We discuss the idea that addictions can be treated by changing the mechanisms involved in self-control with or without regard to intention. The core clinical symptoms of addiction include an enhanced incentive for drug taking (craving), impaired self-control (impulsivity and compulsivity), negative mood, and increased stress re-activity. Symptoms related to impaired self-control involve reduced activity in control networks including anterior cingulate (ACC), adjacent prefrontal cortex (mPFC), and striatum. Behavioral training such as mindfulness meditation can increase the function of control networks and may be a promising approach for the treatment of addiction, even among those without intention to quit.",
"title": ""
},
{
"docid": "c7f465088265f34fe798bca8994e98fe",
"text": "Purpose – The purpose of this paper is to foster a common understanding of business process management (BPM) by proposing a set of ten principles that characterize BPM as a research domain and guide its successful use in organizational practice. Design/methodology/approach – The identification and discussion of the principles reflects our viewpoint, which was informed by extant literature and focus groups, including 20 BPM experts from academia and practice. Findings – We identify ten principles which represent a set of capabilities essential for mastering contemporary and future challenges in BPM. Their antonyms signify potential roadblocks and bad practices in BPM. We also identify a set of open research questions that can guide future BPM research. Research limitation/implication – Our findings suggest several areas of research regarding each of the identified principles of good BPM. Also, the principles themselves should be systematically and empirically examined in future studies. Practical implications – Our findings allow practitioners to comprehensively scope their BPM initiatives and provide a general guidance for BPM implementation. Moreover, the principles may also serve to tackle contemporary issues in other management areas. Originality/value – This is the first paper that distills principles of BPM in the sense of both good and bad practice recommendations. The value of the principles lies in providing normative advice to practitioners as well as in identifying open research areas for academia, thereby extending the reach and richness of BPM beyond its traditional frontiers.",
"title": ""
},
{
"docid": "e2691019d3d102dfa3bcff764b0482b9",
"text": "Like many verb-final languages, German displays considerable word-order freedom: there is no syntactic constraint on the ordering of the nominal arguments of a verb, as long as the verb remains in final position. This effect is referred to as “scrambling”, and is interpreted in transformational frameworks as leftward movement of the arguments. Furthermore, arguments from an embedded clause may move out of their clause; this effect is referred to as “long-distance scrambling”. While scrambling has recently received considerable attention in the syntactic literature, the status of long-distance scrambling has only rarely been addressed. The reason for this is the problematic status of the data: not only is long-distance scrambling highly dependent on pragmatic context, it also is strongly subject to degradation due to processing constraints. As in the case of center-embedding, it is not immediately clear whether to assume that observed unacceptability of highly complex sentences is due to grammatical restrictions, or whether we should assume that the competence grammar does not place any restrictions on scrambling (and that, therefore, all such sentences are in fact grammatical), and the unacceptability of some (or most) of the grammatically possible word orders is due to processing limitations. In this paper, we will argue for the second view by presenting a processing model for German. Comments University of Pennsylvania Institute for Research in Cognitive Science Technical Report No. IRCS-95-13. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/ircs_reports/127",
"title": ""
},
{
"docid": "97de6efcdba528f801cbfa087498ab3f",
"text": "Abstract: Educational Data Mining refers to techniques, tools, and research designed for automatically extracting meaning from large repositories of data generated by or related to people' learning activities in educational settings.[1] It is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.[2]",
"title": ""
},
{
"docid": "6ab38099b989f1d9bdc504c9b50b6bbe",
"text": "Users' search tactics often appear naïve. Much research has endeavored to understand the rudimentary query typically seen in log analyses and user studies. Researchers have tested a number of approaches to supporting query development, including information literacy training and interaction design these have tried and often failed to induce users to use more complex search strategies. To further investigate this phenomenon, we combined established HCI methods with models from cultural studies, and observed customers' mediated searches for books in bookstores. Our results suggest that sophisticated search techniques demand mental models that many users lack.",
"title": ""
},
{
"docid": "9b0ddf08b06c625ea579d9cee6c8884b",
"text": "A frequency-reconfigurable bow-tie antenna for Bluetooth, WiMAX, and WLAN applications is proposed. The bow-tie radiator is printed on two sides of the substrate and is fed by a microstripline continued by a pair of parallel strips. By embedding p-i-n diodes over the bow-tie arms, the effective electrical length of the antenna can be changed, leading to an electrically tunable operating band. The simple biasing circuit used in this design eliminates the need for extra bias lines, and thus avoids distortion of the radiation patterns. Measured results are in good agreement with simulations, which shows that the proposed antenna can be tuned to operate in either 2.2-2.53, 2.97-3.71, or 4.51-6 GHz band with similar radiation patterns.",
"title": ""
},
{
"docid": "be96e232576fc736a2cba00f03a9c3fd",
"text": "Data collection is a major bottleneck in machine learning and an active research topic in multiple communities. There are largely two reasons data collection has recently become a critical issue. First, as machine learning is becoming more widely-used, we are seeing new applications that do not necessarily have enough labeled data. Second, unlike traditional machine learning where feature engineering is the bottleneck, deep learning techniques automatically generate features, but instead require large amounts of labeled data. Interestingly, recent research in data collection comes not only from the machine learning, natural language, and computer vision communities, but also from the data management community due to the importance of handling large amounts of data. In this survey, we perform a comprehensive study of data collection from a data management point of view. Data collection largely consists of data acquisition, data labeling, and improvement of existing data or models. We provide a research landscape of these operations, provide guidelines on which technique to use when, and identify interesting research challenges. The integration of machine learning and data management for data collection is part of a larger trend of Big data and Artificial Intelligence (AI) integration and opens many opportunities for new research.",
"title": ""
},
{
"docid": "47c04a0167166b666d69f8add35c6c3e",
"text": "This paper describes a trajectory planning algorithm for mobile robot navigation in crowded environments; the aim is to solve the problem of planning a valid path through moving people. The proposed solution relies on an algorithm based on the Informed Optimal Rapidly-exploring Random Tree (InformedRRT*), where the planner continuously computes a valid path to navigate in crowded environments. While the robot executes the trajectory of the current path, this re-planning method always allows a feasible and optimal solution to be obtained. Compared to other state-of-the-art algorithms, this solution does not compute the entire path each time an obstacle is detected, instead it evaluating the current solution validity, i.e., the presence of moving obstacles on the current path; in this case the algorithm tries to repair the current solution. Only if the current path is completely unacceptable is a new path computed from scratch. Thanks to its reactivity, our solution always guarantees a valid path that brings the robot to the desired goal position. This dynamic approach is validated in a real case scenario where a mobile robot moves through a human crowd in a safe and reliable way.",
"title": ""
},
{
"docid": "bb335297dae74b8c5f45666d8ccb1c6b",
"text": "The popularity of Twitter attracts more and more spammers. Spammers send unwanted tweets to Twitter users to promote websites or services, which are harmful to normal users. In order to stop spammers, researchers have proposed a number of mechanisms. The focus of recent works is on the application of machine learning techniques into Twitter spam detection. However, tweets are retrieved in a streaming way, and Twitter provides the Streaming API for developers and researchers to access public tweets in real time. There lacks a performance evaluation of existing machine learning-based streaming spam detection methods. In this paper, we bridged the gap by carrying out a performance evaluation, which was from three different aspects of data, feature, and model. A big ground-truth of over 600 million public tweets was created by using a commercial URL-based security tool. For real-time spam detection, we further extracted 12 lightweight features for tweet representation. Spam detection was then transformed to a binary classification problem in the feature space and can be solved by conventional machine learning algorithms. We evaluated the impact of different factors to the spam detection performance, which included spam to nonspam ratio, feature discretization, training data size, data sampling, time-related data, and machine learning algorithms. The results show the streaming spam tweet detection is still a big challenge and a robust detection technique should take into account the three aspects of data, feature, and model.",
"title": ""
},
{
"docid": "29360e31131f37830e0d6271bab63a6e",
"text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.",
"title": ""
},
{
"docid": "04fc127c1b6e915060c2f3035aa5067b",
"text": "Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing–emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user’s emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and pshysiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.",
"title": ""
},
{
"docid": "62edabfb877e280dfe69035dc7d0f1cb",
"text": "OBJECTIVES\nTo present the importance of Evidence-based Health Informatics (EBHI) and the ethical imperative of this approach; to highlight the work of the IMIA Working Group on Technology Assessment and Quality Improvement and the EFMI Working Group on Assessment of Health Information Systems; and to introduce the further important evaluation and evidence aspects being addressed.\n\n\nMETHODS\nReviews of IMIA, EFMA and other initiatives, together with literature reviews on evaluation methods and on published systematic reviews.\n\n\nRESULTS\nPresentation of the rationale for the health informatics domain to adopt a scientific approach by assessing impact, avoiding harm, and empirically demonstrating benefit and best use; reporting of the origins and rationale of the IMIA- and EQUATOR-endorsed Statement on Reporting of Evaluation Studies in Health Informatics (STARE-HI) and of the IMIA WG's Guideline for Good Evaluation Practice in Health Informatics (GEP-HI); presentation of other initiatives for objective evaluation; and outlining of further work in hand on usability and indicators; together with the case for development of relevant evaluation methods in newer applications such as telemedicine. The focus is on scientific evaluation as a reliable source of evidence, and on structured presentation of results to enable easy retrieval of evidence.\n\n\nCONCLUSIONS\nEBHI is feasible, necessary for efficiency and safety, and ethically essential. Given the significant impact of health informatics on health systems, care delivery and personal health, it is vital that cultures change to insist on evidence-based policies and investment, and that emergent global moves for this are supported.",
"title": ""
},
{
"docid": "4def0dc478dfb5ddb5a0ec59ec7433f5",
"text": "A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.",
"title": ""
},
{
"docid": "cb1c0c62269e96555119bd7f8cd666aa",
"text": "The complexity of the visual world creates significant challenges for comprehensive visual understanding. In spite of recent successes in visual recognition, today’s vision systems would still struggle to deal with visual queries that require a deeper reasoning. We propose a knowledge base (KB) framework to handle an assortment of visual queries, without the need to train new classifiers for new tasks. Building such a large-scale multimodal KB presents a major challenge of scalability. We cast a large-scale MRF into a KB representation, incorporating visual, textual and structured data, as well as their diverse relations. We introduce a scalable knowledge base construction system that is capable of building a KB with half billion variables and millions of parameters in a few hours. Our system achieves competitive results compared to purpose-built models on standard recognition and retrieval tasks, while exhibiting greater flexibility in answering richer visual queries.",
"title": ""
},
{
"docid": "c198849c03a98a720ddb156f80311408",
"text": "HIV-1 was isolated 31 years ago, yet models for studying HIV-1 pathogenesis in vivo are still lacking. Recent experiments using an HIV-1 strain engineered to replicate in macaques recapitulate several important features of human AIDS, and provide insight into the genetics of cross-species transmission and emergence of pathogenic retroviruses.",
"title": ""
},
{
"docid": "80b0106e0efd946258034d7c9d866ebe",
"text": "The marketing profession is being challenged to assess and communicate the value created by its actions on shareholder value. These demands create a need to translate marketing resource allocations and their performance consequences into financial and firm value effects. The objective of this paper is to integrate the existing knowledge on the impact of marketing on firm value. The authors first frame the important research questions on marketing and firm value and review the important investor response metrics and relevant analytical models, as they relate to marketing. The authors next summarize the empirical findings to date on how marketing creates shareholder value, including the impact of brand equity, customer equity, customer satisfaction, R&D, product quality and specific marketing-mix actions. In addition the authors review emerging findings on biases in investor response to marketing actions. The paper concludes by formulating an agenda for future research challenges in this emerging area.",
"title": ""
},
{
"docid": "613b014ea02019a78be488a302ff4794",
"text": "In this study, the robustness of approaches to the automatic classification of emotions in speech is addressed. Among the many types of emotions that exist, two groups of emotions are considered, adult-to-adult acted vocal expressions of common types of emotions like happiness, sadness, and anger and adult-to-infant vocal expressions of affective intents also known as ‘‘motherese’’. Specifically, we estimate the generalization capability of two feature extraction approaches, the approach developed for Sony’s robotic dog AIBO (AIBO) and the segment-based approach (SBA) of [Shami, M., Kamel, M., 2005. Segment-based approach to the recognition of emotions in speech. In: IEEE Conf. on Multimedia and Expo (ICME05), Amsterdam, The Netherlands]. Three machine learning approaches are considered, K-nearest neighbors (KNN), Support vector machines (SVM) and Ada-boosted decision trees and four emotional speech databases are employed, Kismet, BabyEars, Danish, and Berlin databases. Single corpus experiments show that the considered feature extraction approaches AIBO and SBA are competitive on the four databases considered and that their performance is comparable with previously published results on the same databases. The best choice of machine learning algorithm seems to depend on the feature extraction approach considered. Multi-corpus experiments are performed with the Kismet–BabyEars and the Danish–Berlin database pairs that contain parallel emotional classes. Automatic clustering of the emotional classes in the database pairs shows that the patterns behind the emotions in the Kismet–BabyEars pair are less database dependent than the patterns in the Danish–Berlin pair. In off-corpus testing the classifier is trained on one database of a pair and tested on the other. This provides little improvement over baseline classification. In integrated corpus testing, however, the classifier is machine learned on the merged databases and this gives promisingly robust classification results, which suggest that emotional corpora with parallel emotion classes recorded under different conditions can be used to construct a single classifier capable of distinguishing the emotions in the merged corpora. Such a classifier is more robust than a classifier learned on a single corpus as it can recognize more varied expressions of the same emotional classes. These findings suggest that the existing approaches for the classification of emotions in speech are efficient enough to handle larger amounts of training data without any reduction in classification accuracy. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "04d06629a3683536fb94228f6295a7d3",
"text": "User profiling is an important step for solving the problem of personalized news recommendation. Traditional user profiling techniques often construct profiles of users based on static historical data accessed by users. However, due to the frequent updating of news repository, it is possible that a user’s finegrained reading preference would evolve over time while his/her long-term interest remains stable. Therefore, it is imperative to reason on such preference evaluation for user profiling in news recommenders. Besides, in content-based news recommenders, a user’s preference tends to be stable due to the mechanism of selecting similar content-wise news articles with respect to the user’s profile. To activate users’ reading motivations, a successful recommender needs to introduce ‘‘somewhat novel’’ articles to",
"title": ""
}
] |
scidocsrr
|
019a6cc47a0a256790e1ad1313cb07e7
|
Predicting customers' future demand using data mining analysis: A case study of wireless communication customer
|
[
{
"docid": "4283869a4ffa8b4434c5c484a1f92369",
"text": "With the rapid development of mobile technology and large usage rates of mobile phones, mobile instant message (MIM) services have been widely adopted in China. Although previous studies on the adoption of mobile services are quite extensive, few focus on customer satisfaction and loyalty to MIM in China.",
"title": ""
},
{
"docid": "9b5224b94b448d5dabbd545aedd293f8",
"text": "the topic (a) has been dedicated to extolling its use as a decisionmaking criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In NADA I. NASR is a doctoral student in Marketing at the School addition, selected managerial applications of these general models of of Management, Boston University. customer lifetime value are offered. 1998 John Wiley & Sons, Inc. and Direct Marketing Educational Foundation, Inc. CCC 1094-9968/98/010017-14",
"title": ""
},
{
"docid": "352fdbd4fbb52fd16362349eb2d7aadd",
"text": "Data mining is a powerful new technique to help companies mining the patterns and trends in their customers data, then to drive improved customer relationships, and it is one of well-known tools given to customer relationship management (CRM). However, there are some drawbacks for data mining tool, such as neural networks has long training times and genetic algorithm is brute computing method. This study proposes a new procedure, joining quantitative value of RFM attributes and K-means algorithm into rough set theory (RS theory), to extract meaning rules, and it can effectively improve these drawbacks. Three purposes involved in this study in the following: (1) discretize continuous attributes to enhance the rough sets algorithm; (2) cluster customer value as output (customer loyalty) that is partitioned into 3, 5 and 7 classes based on subjective view, then see which class is the best in accuracy rate; and (3) find out the characteristic of customer in order to strengthen CRM. A practical collected C-company dataset in Taiwan’s electronic industry is employed in empirical case study to illustrate the proposed procedure. Referring to [Hughes, A. M. (1994). Strategic database marketing. Chicago: Probus Publishing Company], this study firstly utilizes RFM model to yield quantitative value as input attributes; next, uses K-means algorithm to cluster customer value; finally, employs rough sets (the LEM2 algorithm) to mine classification rules that help enterprises driving an excellent CRM. In analysis of the empirical results, the proposed procedure outperforms the methods listed in terms of accuracy rate regardless of 3, 5 and 7 classes on output, and generates understandable decision rules. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e9698e55abb8cee0f3a5663517bd0037",
"text": "0377-2217/$ see front matter 2008 Elsevier B.V. A doi:10.1016/j.ejor.2008.06.027 * Corresponding author. Tel.: +32 16326817. E-mail address: [email protected] The definition and modeling of customer loyalty have been central issues in customer relationship management since many years. Recent papers propose solutions to detect customers that are becoming less loyal, also called churners. The churner status is then defined as a function of the volume of commercial transactions. In the context of a Belgian retail financial service company, our first contribution is to redefine the notion of customer loyalty by considering it from a customer-centric viewpoint instead of a product-centric one. We hereby use the customer lifetime value (CLV) defined as the discounted value of future marginal earnings, based on the customer’s activity. Hence, a churner is defined as someone whose CLV, thus the related marginal profit, is decreasing. As a second contribution, the loss incurred by the CLV decrease is used to appraise the cost to misclassify a customer by introducing a new loss function. In the empirical study, we compare the accuracy of various classification techniques commonly used in the domain of churn prediction, including two cost-sensitive classifiers. Our final conclusion is that since profit is what really matters in a commercial environment, standard statistical accuracy measures for prediction need to be revised and a more profit oriented focus may be desirable. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3b886932b4b036ec4e9ceafc5066397b",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.11.083 E-mail address: [email protected] 1 PLN is the abbreviation of the Polish currency unit In this article, we test the usefulness of the popular data mining models to predict churn of the clients of the Polish cellular telecommunication company. When comparing to previous studies on this topic, our research is novel in the following areas: (1) we deal with prepaid clients (previous studies dealt with postpaid clients) who are far more likely to churn, are less stable and much less is known about them (no application, demographical or personal data), (2) we have 1381 potential variables derived from the clients’ usage (previous studies dealt with data with at least tens of variables) and (3) we test the stability of models across time for all the percentiles of the lift curve – our test sample is collected six months after the estimation of the model. The main finding from our research is that linear models, especially logistic regression, are a very good choice when modelling churn of the prepaid clients. Decision trees are unstable in high percentiles of the lift curve, and we do not recommend their usage. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "9c0985d157970a1eb0ee82311cdb8b93",
"text": "Many search engine users attempt to satisfy an information need by issuing multiple queries, with the expectation that each result will contribute some portion of the required information. Previous research has shown that structured or semi-structured descriptive knowledge bases (such as Wikipedia) can be used to improve search quality and experience for general or entity-centric queries. However, such resources do not have sufficient coverage of procedural knowledge, i.e. what actions should be performed and what factors should be considered to achieve some goal; such procedural knowledge is crucial when responding to task-oriented search queries. This paper provides a first attempt to bridge the gap between two evolving research areas: development of procedural knowledge bases (such as wikiHow) and task-oriented search. We investigate whether task-oriented search can benefit from existing procedural knowledge (search task suggestion) and whether automatic procedural knowledge construction can benefit from users' search activities (automatic procedural knowledge base construction). We propose to create a three-way parallel corpus of queries, query contexts, and task descriptions, and reduce both problems to sequence labeling tasks. We propose a set of textual features and structural features to identify key search phrases from task descriptions, and then adapt similar features to extract wikiHow-style procedural knowledge descriptions from search queries and relevant text snippets. We compare our proposed solution with baseline algorithms, commercial search engines, and the (manually-curated) wikiHow procedural knowledge; experimental results show an improvement of +0.28 to +0.41 in terms of Precision@8 and mean average precision (MAP).",
"title": ""
},
{
"docid": "de5331af1c27428379c16d6009eaa7c8",
"text": "The problem of computing good graph colorings arises in many diverse applications , such as in the estimation of sparse Jacobians and in the development of eecient, parallel iterative methods for solving sparse linear systems. In this paper we present an asynchronous graph coloring heuristic well suited to distributed memory parallel computers. We present experimental results obtained on an Intel iPSC/860 which demonstrate that, for graphs arising from nite element applications , the heuristic exhibits scalable performance and generates colorings usually within three or four colors of the best-known linear time sequential heuristics. For bounded degree graphs, we show that the expected running time of the heuristic under the PRAM computation model is bounded by EO(log(n)= log log(n)). This bound is an improvement over the previously known best upper bound for the expected running time of a random heuristic for the graph coloring problem.",
"title": ""
},
{
"docid": "29d9adfbc8cb0900fa6ebdf9aeede7dc",
"text": "Though it has been easier to build large face datasets by collecting images from the Internet in this Big Data era, the time-consuming manual annotation process prevents researchers from constructing larger ones, which makes the automatic cleaning of noisy labels highly desirable. However, identifying mislabeled faces by machine is quite challenging because the diversity of a person's face images that are captured wildly at all ages is extraordinarily rich. In view of this, we propose a graph-based cleaning method that mainly employs the community detection algorithm and deep CNN models to delete mislabeled images. As the diversity of faces is preserved in multiple large communities, our cleaning results have both high cleanness and rich data diversity. With our method, we clean the extremely large MS-Celeb-1M face dataset (approximately 10 million images with noisy labels) and obtain a clean version of it called C-MS-Celeb (6,464,018 images of 94,682 celebrities). By training a single-net model using our C-MS-Celeb dataset, without fine-tuning, we achieve 99.67% at Equal Error Rate on the LFW face recognition benchmark, which is comparable to other state-of-the-art results. This demonstrates the data cleaning positive effects on the model training. To the best of our knowledge, our C-MS-Celeb is the largest clean face dataset that is publicly available so far, which will benefit face recognition researchers.",
"title": ""
},
{
"docid": "876dd0a985f00bb8145e016cc8593a84",
"text": "This paper presents how to synthesize a texture in a procedural way that preserves the features of the input exemplar. The exemplar is analyzed in both spatial and frequency domains to be decomposed into feature and non-feature parts. Then, the non-feature parts are reproduced as a procedural noise, whereas the features are independently synthesized. They are combined to output a non-repetitive texture that also preserves the exemplar’s features. The proposed method allows the user to control the extent of extracted features and also enables a texture to edited quite effectively.",
"title": ""
},
{
"docid": "a3bba4e862319b73490d34f20cfa7cd6",
"text": "We present and discuss several novel applications of deep learning for the physical layer. By interpreting a communications system as an autoencoder, we develop a fundamental new way to think about communications system design as an end-to-end reconstruction task that seeks to jointly optimize transmitter and receiver components in a single process. We show how this idea can be extended to networks of multiple transmitters and receivers and present the concept of radio transformer networks as a means to incorporate expert domain knowledge in the machine learning model. Lastly, we demonstrate the application of convolutional neural networks on raw IQ samples for modulation classification which achieves competitive accuracy with respect to traditional schemes relying on expert features. This paper is concluded with a discussion of open challenges and areas for future investigation.",
"title": ""
},
{
"docid": "7d287e0451585f4d7600227bbbb93e9f",
"text": "We design a distributed platform with blockchain as a system service for supporting transaction execution in insurance processes. The insurance industry is heavily dependent on multiple processes between transacting parties for initiating, maintaining and closing diverse kind of policies. Transaction processing time, payment settlement time and security protection of the process execution are major concerns. Blockchain technology, originally conceived as an immutable distributed ledger for detecting double spending of cryptocurrencies, is now increasingly used in different FinTech systems to address productivity and security requirements. The application of blockchain in FinTech processing requires a deep understanding of the underlying business processes. It supports automated interactions between the blockchain and existing transaction systems through the notion of smart contracts. In this paper, we focus on the design of an efficient approach for processing insurance related transactions based on a blockchain-enabled platform. An experimental prototype is developed on Hyperledger fabric, an open source permissioned blockchain design framework. We discuss the main design requirements, corresponding design propositions, and encode various insurance processes as smart contracts. Extensive experiments were conducted to analyze performance of our framework and security of the proposed design.",
"title": ""
},
{
"docid": "a8bfa82740973038b08bb03df0ad55dd",
"text": "This study tested predictions from W. Ickes and J. A. Simpson's (1997, 2001) empathic accuracy model. Married couples were videotaped as they tried to resolve a problem in their marriage. Both spouses then viewed a videotape of the interaction, recorded the thoughts and feelings they had at specific time points, and tried to infer their partner's thoughts and feelings. Consistent with the model, when the partner's thoughts and feelings were relationship-threatening (as rated by both the partners and by trained observers), greater empathic accuracy on the part of the perceiver was associated with pre-to-posttest declines in the perceiver's feelings of subjective closeness. The reverse was true when the partner's thoughts and feelings were nonthreatening. Exploratory analyses revealed that these effects were partially mediated through observer ratings of the degree to which partners tried to avoid the discussion issue.",
"title": ""
},
{
"docid": "a13cb63cc71db1f2fa585d28005e9f57",
"text": "This study was carried out to evaluate the effectiveness of cabergoline in the treatment of nonfunctioning pituitary adenomas (NFPA), in a short-term follow-up period. Nineteen patients (10 men and 9 women) followed at the University Hospital of Brasilia and harboring nonfunctioning pituitary macroadenomas were enrolled in the study. Eleven patients were previously submitted to transsphenoidal surgery, and in 8 patients no previous treatment had been instituted. Their response to the use of cabergoline (2 mg/week) by 6 months was evaluated. Significant tumor shrinkage (above 25 % from baseline tumor volume) was observed in 6 (31.6 %) of the 19 patients, and no adverse effects were observed during treatment. In 9 patients (47.4 %), a reduction in tumor volume of at least 10 % was noted, whereas tumor growth was observed in four patients (increase above 25 % was only observed in one patient). Cabergoline (2 mg/week) can lead to significant tumor shrinkage in NFPA in a considerable number of patients, and this effect can be observed early (6 months after starting medication). Thus, this therapeutic strategy may be a low cost and safe alternative for treatment of NFPA in patients with remnant or recurrent tumor after transsphenoidal surgery or in those not operated by contraindications or refusal to surgical procedure.",
"title": ""
},
{
"docid": "4dd474679d26831f0885e2eb1d6fb52d",
"text": "The aim of this paper is to investigate the effectiveness and cost-effectiveness of three malaria preventive measures (use of treated bednets, spray of insecticides and a possible treatment of infective humans that blocks transmission to mosquitoes). For this, we consider a mathematical model for the transmission dynamics of the disease that includes these measures. We first consider the constant control parameters' case, we calculate the basic reproduction number and investigate the existence and stability of equilibria; the model is found to exhibit backward bifurcation. We then assess the relative impact of each of the constant control parameters measures by calculating the sensitivity index of the basic reproductive number to the model's parameters. In the time-dependent constant control case, we use Pontryagin's Maximum Principle to derive necessary conditions for the optimal control of the disease. We also calculate the Infection Averted Ratio (IAR) and the Incremental Cost-Effectiveness Ratio (ICER) to investigate the cost-effectiveness of all possible combinations of the three control measures. One of our findings is that the most cost-effective strategy for malaria control, is the combination of the spray of insecticides and treatment of infective individuals. This strategy requires a 100% effort in both treatment (for 20 days) and spray of insecticides (for 57 days). In practice, this will be extremely difficult, if not impossible to achieve. The second most cost-effective strategy which consists of a 100% use of treated bednets and 87% treatment of infective individuals for 42 and 100 days, respectively, is sustainable and therefore preferable.",
"title": ""
},
{
"docid": "0c4db26b5e0eaddc5f04fd499b483013",
"text": "AIMS\nDespite the lower patency of venous compared with arterial coronary artery bypass grafts, approximately 50% of grafts used are saphenous vein conduits because of their easier accessibility. In a search for ways to increase venous graft patency, we applied the results of a previous pharmacological study screening for non-toxic compounds that inhibit intimal hyperplasia of saphenous vein conduits in organ cultures. Here we analyse the effects and mechanism of action of leoligin [(2S,3R,4R)-4-(3,4-dimethoxybenzyl)-2-(3,4-dimethoxyphenyl)tetrahydrofuran-3-yl]methyl (2Z)-2-methylbut-2-enoat, the major lignan from Edelweiss (Leontopodium alpinum Cass.).\n\n\nMETHODS AND RESULTS\nWe found that leoligin potently inhibits vascular smooth muscle cell (SMC) proliferation by inducing cell cycle arrest in the G1-phase. Leoligin induced cell death neither in SMCs nor, more importantly, in endothelial cells. In a human saphenous vein organ culture model for graft disease, leoligin potently inhibited intimal hyperplasia, and even reversed graft disease in pre-damaged vessels. Furthermore, in an in vivo mouse model for venous bypass graft disease, leoligin potently inhibited intimal hyperplasia.\n\n\nCONCLUSION\nOur data suggest that leoligin might represent a novel non-toxic, non-thrombogenic, endothelial integrity preserving candidate drug for the treatment of vein graft disease.",
"title": ""
},
{
"docid": "9e0a712f598f37fd2a5f4a924707c103",
"text": "Presepsin is a soluble fragment of the cluster-of-differentiation marker protein 14 (CD14) involved in pathogen recognition by innate immunity. We evaluated the relation between its circulating concentration, host response, appropriateness of antibiotic therapy, and mortality in patients with severe sepsis. Plasma presepsin was measured 1, 2, and 7 days after enrollment of 997 patients with severe sepsis or septic shock in the multicenter Albumin Italian Outcome Sepsis (ALBIOS) trial. They were randomized to albumin or crystalloids. We tested with univariate and adjusted models the association of single measurements of presepsin or changes over time with clinical events, organ dysfunctions, appropriateness of antibiotic therapy, and ICU or 90-day mortality. Presepsin concentration at baseline (946 [492–1,887] ng/L) increased with the SOFA score, the number of prevalent organ dysfunctions or failures, and the incidence of new failures of the respiratory, coagulation, liver, and kidney systems. The concentration decreased in ICU over 7 days in patients with negative blood cultures, and in those with positive blood cultures and appropriate antibiotic therapy; it increased with inappropriate antibiotic therapy (p = 0.0009). Baseline presepsin was independently associated with, and correctly reclassified, the risk of ICU and 90-day mortality. Increasing concentrations of presepsin from day 1 to day 2 predicted higher ICU and 90-day mortality (adjusted p < 0.0001 and 0.01, respectively). Albumin had no effect on presepsin concentration. Presepsin is an early predictor of host response and mortality in septic patients. Changes in concentrations over time seem to reflect the appropriateness of antibiotic therapy.",
"title": ""
},
{
"docid": "8c067af7b61fae244340e784149a9c9b",
"text": "Based on EuroNCAP regulations the number of autonomous emergency braking systems for pedestrians (AEB-P) will increase over the next years. According to accident research a considerable amount of severe pedestrian accidents happen at artificial lighting, twilight or total darkness conditions. Because radar sensors are very robust in these situations, they will play an important role for future AEB-P systems. To assess and evaluate systems a pedestrian dummy with reflection characteristics as close as possible to real humans is indispensable. As an extension to existing measurements in literature this paper addresses open issues like the influence of different positions of the limbs or different clothing for both relevant automotive frequency bands. Additionally suggestions and requirements for specification of pedestrian dummies based on results of RCS measurements of humans and first experimental developed dummies are given.",
"title": ""
},
{
"docid": "ad96c93d4a27ec8a5a1a8168519977ff",
"text": "BACKGROUND\nMovement velocity is an acute resistance-training variable that can be manipulated to potentially optimize dynamic muscular strength development. However, it is unclear whether performing faster or slower repetitions actually influences dynamic muscular strength gains.\n\n\nOBJECTIVE\nWe conducted a systematic review and meta-analysis to examine the effect of movement velocity during resistance training on dynamic muscular strength.\n\n\nMETHODS\nFive electronic databases were searched using terms related to movement velocity and resistance training. Studies were deemed eligible for inclusion if they met the following criteria: randomized and non-randomized comparative studies; published in English; included healthy adults; used isotonic resistance-exercise interventions directly comparing fast or explosive training to slower movement velocity training; matched in prescribed intensity and volume; duration ≥4 weeks; and measured dynamic muscular strength changes.\n\n\nRESULTS\nA total of 15 studies were identified that investigated movement velocity in accordance with the criteria outlined. Fast and moderate-slow resistance training were found to produce similar increases in dynamic muscular strength when all studies were included. However, when intensity was accounted for, there was a trend for a small effect favoring fast compared with moderate-slow training when moderate intensities, defined as 60-79% one repetition maximum, were used (effect size 0.31; p = 0.06). Strength gains between conditions were not influenced by training status and age.\n\n\nCONCLUSIONS\nOverall, the results suggest that fast and moderate-slow resistance training improve dynamic muscular strength similarly in individuals within a wide range of training statuses and ages. Resistance training performed at fast movement velocities using moderate intensities showed a trend for superior muscular strength gains as compared to moderate-slow resistance training. Both training practices should be considered for novice to advanced, young and older resistance trainers targeting dynamic muscular strength.",
"title": ""
},
{
"docid": "39492127ee68a86b33a8a120c8c79f5d",
"text": "The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/ √ t) for convex functions and O(log t/t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named GraphGuided SVM is proposed to demonstrate the usefulness of our algorithm.",
"title": ""
},
{
"docid": "563e7bb577f53e7b6cda7f91950e1d2e",
"text": "This paper describes a novel backplane transceiver, which uses PAM-4 (pulse amplitude modulated four level) signalling and continuously adaptive transmit based equalization to move 5 Gcbh (channel bits per second) across typical FR-4 backplanes for total distances of up to 50 inches through two sets of backplane connectors. The paper focuses on the implementation of the equalizer and the adaptation algorithms, and includes measured results. The 17 mm2 device is implemented in a 0.25um CMOS process, operates on 2.5 V and 3.3 V supplies and consumes 1.2 W. 11. IMPLEMENTATION A. Transceiver Block Diagram A block diagram of the transceiver is illustrated in figure 1.",
"title": ""
},
{
"docid": "ca81d2df30f75485567c0dec62e6779e",
"text": "Content accessibility is a key feature in highly usable Web sites, but reports in the popular press typically report that 95% or more of all Web sites are inaccessible to users with disabilities. The present study is a content accessibility compliance audit of 50 of the Web's most popular sites, undertaken to determine if content accessibility can be conceived and reported in continuous, rather than dichotomous, terms. Preliminary results suggest that a meaningful ordinal ranking of content accessibility is not only possible, but also correlates significantly with the results of independent automated usability assessment procedures.",
"title": ""
},
{
"docid": "1943e91837f854a6e8e797a5297abed3",
"text": "Counterfactual Regret Minimization and variants (e.g. Public Chance Sampling CFR and Pure CFR) have been known as the best approaches for creating approximate Nash equilibrium solutions for imperfect information games such as poker. This paper introduces CFR, a new algorithm that typically outperforms the previously known algorithms by an order of magnitude or more in terms of computation time while also potentially requiring less memory.",
"title": ""
},
{
"docid": "9d071fe6dd9d773774a7d309e41a8948",
"text": "In this paper, a model-reference-based online identification method is proposed to estimate permanent-magnet synchronous machine (PMSM) parameters during transients and in steady state. It is shown that all parameters are not identifiable in steady state and a selection has to be made according to the user's objectives. Then, large signal convergence of the estimated parameters is analyzed using the second method of Lyapunov and the singular perturbations theory. It is illustrated that this method may be applied with a decoupling control technique that improves convergence dynamics and overall system stability. This method is compared with an extended Kalman filter (EKF)-based online identification approach, and it is shown that, in spite of its implementation complexity with respect to the proposed method, EKF does not give better results than the proposed method. It is also shown that the use of a simple PMSM model makes estimated parameters sensitive to those supposed to be known whatever the estimator is (both the proposed method and EKF). The simulation results as well as the experimental ones, implemented on a nonsalient pole PMSM, illustrate the validity of the analytic approach and confirm the same conclusions.",
"title": ""
},
{
"docid": "0b79fc06afe7782e7bdcdbd96cc1c1a0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/annals.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "44941e8f5b703bcacb51b6357cba7633",
"text": "Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and PASCAL VOC.",
"title": ""
}
] |
scidocsrr
|
7ee79c65beeda152f928271c3a9cefe0
|
Predicting User Satisfaction with Intelligent Assistants
|
[
{
"docid": "1b6a967402639dd6b3ca7138692fab54",
"text": "Web searchers often exhibit directed search behaviors such as navigating to a particular Website. However, in many circumstances they exhibit different behaviors that involve issuing many queries and visiting many results. In such cases, it is not clear whether the user's rationale is to intentionally explore the results or whether they are struggling to find the information they seek. Being able to disambiguate between these types of long search sessions is important for search engines both in performing retrospective analysis to understand search success, and in developing real-time support to assist searchers. The difficulty of this challenge is amplified since many of the characteristics of exploration (e.g., multiple queries, long duration) are also observed in sessions where people are struggling. In this paper, we analyze struggling and exploring behavior in Web search using log data from a commercial search engine. We first compare and contrast search behaviors along a number dimensions, including query dynamics during the session. We then build classifiers that can accurately distinguish between exploring and struggling sessions using behavioral and topical features. Finally, we show that by considering the struggling/exploring prediction we can more accurately predict search satisfaction.",
"title": ""
}
] |
[
{
"docid": "8dd540b33035904f63c67b57d4c97aa3",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "c7a542f144318fe1f81e81c923345b41",
"text": "In fifth-generation (5G) mobile networks, a major challenge is to effectively improve system capacity and meet dynamic service demands. One promising technology to solve this problem is heterogeneous networks (HetNets), which involve a large number of densified low power nodes (LPNs). This article proposes a software defined network (SDN) based intelligent model that can efficiently manage the heterogeneous infrastructure and resources. In particular, we first review the latest SDN standards and discuss the possible extensions. We then discuss the advantages of SDN in meeting the dynamic nature of services and requirements in 5G HetNets. Finally, we develop a variety of schemes to improve traffic control, subscriber management, and resource allocation. Performance analysis shows that our proposed system is reliable, scalable, and implementable.",
"title": ""
},
{
"docid": "71c4e6e63eaeec06b5e8690c1a915c81",
"text": "Measuring the similarity between words, sentences, paragraphs and documents is an important component in various tasks such as information retrieval, document clustering, word-sense disambiguation, automatic essay scoring, short answer grading, machine translation and text summarization. This survey discusses the existing works on text similarity through partitioning them into three approaches; String-based, Corpus-based and Knowledge-based similarities. Furthermore, samples of combination between these similarities are presented.",
"title": ""
},
{
"docid": "244b583ff4ac48127edfce77bc39e768",
"text": "We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users’ mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have extremely large number of devices in the network — as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.",
"title": ""
},
{
"docid": "653d4c3d5d3246c613983495224ece67",
"text": "The class of problems that involve finding where to place or how to move a solid object in the presence of obstacles is discussed. The solution to this class of problems is essential to the automatic planning of manipulator transfer movements, i.e., the motions to grasp a part and place it at some destination. For example, planning transfer movements requires the ability to plan paths for the manipulator that avoid collisions with objects in the workspace and the ability to choose safe grasp points on objects. The approach to these problems described here is based on a method of computing an explicit representation of the manipulator configurations that would bring about a collision.",
"title": ""
},
{
"docid": "2ad37a4f6f40025263c76a130aa3ed6c",
"text": "Jitter and shimmer are measures of the cycle-to-cycle variations of fundamental frequency and amplitude, respectively, which have been largely used for the description of pathological voice quality. Since they characterise some aspects concerning particular voices, it is a priori expected to find differences in the values of jitter and shimmer among speakers. In this paper, several types of jitter and shimmer measurements have been analysed. Experiments performed with the Switchboard-I conversational speech database show that jitter and shimmer measurements give excellent results in speaker verification as complementary features of spectral and prosodic parameters.",
"title": ""
},
{
"docid": "5ca29a94ac01f9ad20249021802b1746",
"text": "Big Data has become a very popular term. It refers to the enormous amount of structured, semi-structured and unstructured data that are exponentially generated by high-performance applications in many domains: biochemistry, genetics, molecular biology, physics, astronomy, business, to mention a few. Since the literature of Big Data has increased significantly in recent years, it becomes necessary to develop an overview of the state-of-the-art in Big Data. This paper aims to provide a comprehensive review of Big Data literature of the last 4 years, to identify the main challenges, areas of application, tools and emergent trends of Big Data. To meet this objective, we have analyzed and classified 457 papers concerning Big Data. This review gives relevant information to practitioners and researchers about the main trends in research and application of Big Data in different technical domains, as well as a reference overview of Big Data tools.",
"title": ""
},
{
"docid": "2090537c798654c335963afba0c45a5b",
"text": "This paper introduces a novel transductive support vector machine (TSVM) model and compares it with the traditional inductive SVM on a key problem in bioinformatics - promoter recognition. While inductive reasoning is concerned with the development of a model (a function) to approximate data from the whole problem space (induction), and consecutively using this model to predict output values for a new input vector (deduction), in the transductive inference systems a model is developed for every new input vector based on some closest to the new vector data from an existing database and this model is used to predict only the output for this vector. The TSVM outperforms by far the inductive SVM models applied on the same problems. Analysis is given on the advantages and disadvantages of the TSVM. Hybrid TSVM-evolving connections systems are discussed as directions for future research.",
"title": ""
},
{
"docid": "fe715d2094119291f5c13fb0d08cace5",
"text": "The Echinacea species are native to the Atlantic drainage area of the United States of America and Canada. They have been introduced as cultivated medicinal plants in Europe. Echinacea purpurea, E. angustifolia and E. pallida are the species most often used medicinally due to their immune-stimulating properties. This review is focused on morphological and anatomical characteristics of E. purpurea, E. angustifolia, E. pallida, because various species are often misidentified and specimens are often confused in the medicinal plant market.",
"title": ""
},
{
"docid": "18acdeb37257f2f7f10a5baa8957a257",
"text": "Time-memory trade-off methods provide means to invert one way functions. Such attacks offer a flexible trade-off between running time and memory cost in accordance to users' computational resources. In particular, they can be applied to hash values of passwords in order to recover the plaintext. They were introduced by Martin Hellman and later improved by Philippe Oechslin with the introduction of rainbow tables. The drawbacks of rainbow tables are that they do not always guarantee a successful inversion. We address this issue in this paper. In the context of passwords, it is pertinent that frequently used passwords are incorporated in the rainbow table. It has been known that up to 4 given passwords can be incorporated into a chain but it is an open problem if more than 4 passwords can be achieved. We solve this problem by showing that it is possible to incorporate more of such passwords along a chain. Furthermore, we prove that this results in faster recovery of such passwords during the online running phase as opposed to assigning them at the beginning of the chains. For large chain lengths, the average improvement translates to 3 times the speed increase during the online recovery time.",
"title": ""
},
{
"docid": "188a0cad004be51f62968c55f9551ba2",
"text": "This paper investigates the control of an uninterruptible power supply (UPS) using a combined measurement of capacitor and load currents in the same current sensor arrangement. The purpose of this combined measurement is, on one hand, to reach a similar performance as that obtained in the inductor current controller with load current feedforward and, on the other hand, to easily obtain an estimate of the inductor current for overcurrent protection capability. Based on this combined current measurement, a voltage controller based on resonant harmonic filters is investigated in order to compensate for unbalance and harmonic distortion on the load. Adaptation is included to cope with uncertainties in the system parameters. It is shown that after transformations the proposed controller gets a simple and practical form that includes a bank of resonant filters, which is in agreement with the internal model principle and corresponds to similar approaches proposed recently. The controller is based on a frequency-domain description of the periodic disturbances, which include both symmetric components, namely, the negative and positive sequence. Experimental results on the output stage of a three-phase three-wire UPS are presented to assess the performance of the proposed algorithm",
"title": ""
},
{
"docid": "e964d88be0270bc6ee7eb7748868dd3c",
"text": "The standard serial algorithm for strongly connected components is based on depth rst search, which is di cult to parallelize. We describe a divide-and-conquer algorithm for this problem which has signi cantly greater potential for parallelization. For a graph with n vertices in which degrees are bounded by a constant, we show the expected serial running time of our algorithm to be O(n log n).",
"title": ""
},
{
"docid": "95ca78f61a46f6e34edce6210d5e0939",
"text": "Wireless sensor networks (WSNs) have recently gained a lot of attention by scientific community. Small and inexpensive devices with low energy consumption and limited computing resources are increasingly being adopted in different application scenarios including environmental monitoring, target tracking and biomedical health monitoring. In many such applications, node localization is inherently one of the system parameters. Localization process is necessary to report the origin of events, routing and to answer questions on the network coverage ,assist group querying of sensors. In general, localization schemes are classified into two broad categories: range-based and range-free. However, it is difficult to classify hybrid solutions as range-based or range-free. In this paper we make this classification easy, where range-based schemes and range-free schemes are divided into two types: fully schemes and hybrid schemes. Moreover, we compare the most relevant localization algorithms and discuss the future research directions for wireless sensor networks localization schemes.",
"title": ""
},
{
"docid": "c1e9f5759c7dfb8db3dfda60360acaa3",
"text": "What we know is less important than our capacity to continue to learn more until e-learning appeared. While e-learning technology has matured considerably since its inception, there are still many problems that practitioners find when come to implementing e-learning. Today's knowledge society of the 21st century requires a flexible learning environment which is capable to adapt according to teaching and learning objectives, students' profiles and preferences for information and communication technologies and services. Advances in technology offer new opportunities in enhancing teaching and learning. Many advances in learning technologies are taking place throughout the world. The new technologies enable individuals to personalize the environment in which they work or learn, utilizing a range of tools to meet their interests and needs. Research community has believed that an e-learning ecosystem is the next generation e-learning but has faced challenges in optimizing resource allocations, dealing with dynamic demands on getting information and knowledge anywhere and anytime, handling rapid storage growth requirements, cost controlling and greater flexibility. Additionally, e-learning ecosystems need to improve its infrastructure, which can devote the required computation and storage resources for e-learning ecosystems. So, we need flourish, growing, up-to-date and strong infrastructure e-learning ecosystems in a productive and cost-effective way to be able to face rapidly-changing environments. In this paper, an e-learning ecosystem (ELES) which supports modern technologies is introduced and implemented. An integration between cloud computing and Web 2.0 technologies and services will be used to support the development of e-learning ecosystems; cloud computing as an adoptable technology for many of the organizations with its dynamic scalability and usage of virtualized resources as a service through the Internet. Web 2.0 brings new instruments help building dynamic e-learning ecosystem on the web.",
"title": ""
},
{
"docid": "e68da0df82ade1ef0ff2e0b26da4cb4e",
"text": "What service-quality attributes must Internet banks offer to induce consumers to switch to online transactions and keep using them?",
"title": ""
},
{
"docid": "1e8caa9f0a189bafebd65df092f918bc",
"text": "For several decades, the role of hormone-replacement therapy (HRT) has been debated. Early observational data on HRT showed many benefits, including a reduction in coronary heart disease (CHD) and mortality. More recently, randomized trials, including the Women's Health Initiative (WHI), studying mostly women many years after the the onset of menopause, showed no such benefit and, indeed, an increased risk of CHD and breast cancer, which led to an abrupt decrease in the use of HRT. Subsequent reanalyzes of data from the WHI with age stratification, newer randomized and observational data and several meta-analyses now consistently show reductions in CHD and mortality when HRT is initiated soon after menopause. HRT also significantly decreases the incidence of various symptoms of menopause and the risk of osteoporotic fractures, and improves quality of life. In younger healthy women (aged 50–60 years), the risk–benefit balance is positive for using HRT, with risks considered rare. As no validated primary prevention strategies are available for younger women (<60 years of age), other than lifestyle management, some consideration might be given to HRT as a prevention strategy as treatment can reduce CHD and all-cause mortality. Although HRT should be primarily oestrogen-based, no particular HRT regimen can be advocated.",
"title": ""
},
{
"docid": "f2ee604268522b7ba5ff53d068ca0272",
"text": "We study the social structure of Facebook “friendship” networks at one hundred American colleges and universities at a single point in time, and we examine the roles of user attributes—gender, class year, major, high school, and residence—at these institutions. We investigate the influence of common attributes at the dyad level in terms of assortativity coefficients and regression models. We then examine larger-scale groupings by detecting communities algorithmically and comparing them to network partitions based on the user characteristics. We thereby compare the relative importances of different characteristics at different institutions, finding for example that common high school is more important to the social organization of large institutions and that the importance of common major varies significantly between institutions. Our calculations illustrate how microscopic and macroscopic perspectives give complementary insights on the social organization at universities and suggest future studies to investigate such phenomena further. Preprint submitted to Social Networks February 11, 2011",
"title": ""
},
{
"docid": "59f0aead21fc5e0619893d5b5e161ebc",
"text": "The use of plastic materials in agriculture causes serious hazards to the environment. The introduction of biodegradable materials, which can be disposed directly into the soil can be one possible solution to this problem. In the present research results of experimental tests carried out on biodegradable film fabricated from natural waste (corn husk) are presented. The film was characterized by Fourier transform infrared spectroscopy (FTIR), differential scanning calorimeter (DSC), thermal gravimetric analysis (TGA) and atomic force microscope (AFM) observation. The film is shown to be readily degraded within 7-9 months under controlled soil conditions, indicating a high biodegradability rate. The film fabricated was use to produce biodegradable pot (BioPot) for seedlings plantation. The introduction and the expanding use of biodegradable materials represent a really promising alternative for enhancing sustainable and environmentally friendly agricultural activities. Keywords—Environment, waste, plastic, biodegradable.",
"title": ""
},
{
"docid": "691a24c16b926378d5c586c7f2b1ce22",
"text": "Isolated 7p22.3p22.2 deletions are rarely described with only two reports in the literature. Most other reported cases either involve a much larger region of the 7p arm or have an additional copy number variation. Here, we report five patients with overlapping microdeletions at 7p22.3p22.2. The patients presented with variable developmental delays, exhibiting relative weaknesses in expressive language skills and relative strengths in gross, and fine motor skills. The most consistent facial features seen in these patients included a broad nasal root, a prominent forehead a prominent glabella and arched eyebrows. Additional variable features amongst the patients included microcephaly, metopic ridging or craniosynostosis, cleft palate, cardiac defects, and mild hypotonia. Although the patients' deletions varied in size, there was a 0.47 Mb region of overlap which contained 7 OMIM genes: EIP3B, CHST12, LFNG, BRAT1, TTYH3, AMZ1, and GNA12. We propose that monosomy of this region represents a novel microdeletion syndrome. We recommend that individuals with 7p22.3p22.2 deletions should receive a developmental assessment and a thorough cardiac exam, with consideration of an echocardiogram, as part of their initial evaluation.",
"title": ""
},
{
"docid": "b9221d254083fe875c8e81bc8f442403",
"text": "On multi-core processors, applications are run sharing the cache. This paper presents optimization theory to co-locate applications to minimize cache interference and maximize performance. The theory precisely specifies MRC-based composition, optimization, and correctness conditions. The paper also presents a new technique called footprint symbiosis to obtain the best shared cache performance under fair CPU allocation as well as a new sampling technique which reduces the cost of locality analysis. When sampling and optimization are combined, the paper shows that it takes less than 0.1 second analysis per program to obtain a co-run that is within 1.5 percent of the best possible performance. In an exhaustive evaluation with 12,870 tests, the best prior work improves co-run performance by 56 percent on average. The new optimization improves it by another 29 percent. Without single co-run test, footprint symbiosis is able to choose co-run choices that are just 8 percent slower than the best co-run solutions found with exhaustive testing.",
"title": ""
}
] |
scidocsrr
|
f77205aa17db0103c0f08bb43343ac38
|
Global grasp planning using triangular meshes
|
[
{
"docid": "92abe28875dbe72fbc16bdf41b324126",
"text": "We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Further, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained via supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape-rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers. 1",
"title": ""
}
] |
[
{
"docid": "173fad08a1115cd95160590038be97c1",
"text": "We consider the problem of embedding one signal (e.g., a digital watermark), within another “host” signal to form a third, “composite” signal. The embedding is designed to achieve efficient trade-offs among the three conflicting goals of maximizing information-embedding rate, minimizing distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is “provably good” against arbitrary bounded and fully-informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate-distortion-robustness trade-offs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error constrained attack channels that model private-key watermarking applications.",
"title": ""
},
{
"docid": "531ac7d6500373005bae464c49715288",
"text": "We have used acceleration sensors to monitor the heart motion during surgery. A three-axis accelerometer was made from two commercially available two-axis sensors, and was used to measure the heart motion in anesthetized pigs. The heart moves due to both respiration and heart beating. The heart beating was isolated from respiration by high-pass filtering at 1.0 Hz, and heart wall velocity and position were calculated by numerically integrating the filtered acceleration traces. The resulting curves reproduced the heart motion in great detail, noise was hardly visible. Events that occurred during the measurements, e.g. arrhythmias and fibrillation, were recognized in the curves, and confirmed by comparison with synchronously recorded ECG data. We conclude that acceleration sensors are able to measure heart motion with good resolution, and that such measurements can reveal patterns that may be an indication of heart circulation failure.",
"title": ""
},
{
"docid": "c981432a8096be6a2ff2ed6e21c75e35",
"text": "Task allocation is a fundamental problem that any multirobot system must address. Numerous multi-robot task allocation schemes have been proposed over the past decade. A vast majority of these schemes address the problem of assigning a single robot to each task. However as the complexity of multi-robot tasks increases, often situations arise where multiple robot teams need to be assigned to a set of tasks. This problem, also known as the coalition formation problem has received relatively little attention in the multi-robot community. This paper provides a generic, task independent framework for solutions to this problem for a variety task environments. In particular, the paper introduces RACHNA, a novel auction based coalition formation system for dynamic task environments. This is an extension to our previous work which proposed a static multi-robot coalition formation algorithm based on a popular heuristic from the Distributed Artificial Intelligence",
"title": ""
},
{
"docid": "d5008ed5c6c41c55759bd87dacb82c08",
"text": "Attestation is a mechanism used by a trusted entity to validate the software integrity of an untrusted platform. Over the past few years, several attestation techniques have been proposed. While they all use variants of a challenge-response protocol, they make different assumptions about what an attacker can and cannot do. Thus, they propose intrinsically divergent validation approaches. We survey in this article the different approaches to attestation, focusing in particular on those aimed at Wireless Sensor Networks. We discuss the motivations, challenges, assumptions, and attacks of each approach. We then organise them into a taxonomy and discuss the state of the art, carefully analysing the advantages and disadvantages of each proposal. We also point towards the open research problems and give directions on how to address them.",
"title": ""
},
{
"docid": "fb71d22cad59ba7cf5b9806e37df3340",
"text": "Templates are effective tools for increasing the precision of natural language requirements and for avoiding ambiguities that may arise from the use of unrestricted natural language. When templates are applied, it is important to verify that the requirements are indeed written according to the templates. If done manually, checking conformance to templates is laborious, presenting a particular challenge when the task has to be repeated multiple times in response to changes in the requirements. In this article, using techniques from natural language processing (NLP), we develop an automated approach for checking conformance to templates. Specifically, we present a generalizable method for casting templates into NLP pattern matchers and reflect on our practical experience implementing automated checkers for two well-known templates in the requirements engineering community. We report on the application of our approach to four case studies. Our results indicate that: (1) our approach provides a robust and accurate basis for checking conformance to templates; and (2) the effectiveness of our approach is not compromised even when the requirements glossary terms are unknown. This makes our work particularly relevant to practice, as many industrial requirements documents have incomplete glossaries.",
"title": ""
},
{
"docid": "3688c89588041cd9023486dadd2b866e",
"text": "LEARNING OBJECTIVES\nAfter studying this article, the participant should be able to: 1. Describe the changing epidemiology of mandibular fractures in children and adolescents. 2. Discuss the appropriate use of internal fixation in the treatment of pediatric mandibular fractures. 3. Describe the difficulties posed by the deciduous dentition in the use of interdental wiring. 4. Understand reasons why techniques specific to adult fractures may not be applicable to the growing mandible. 5. Understand the etiology and epidemiology of pediatric mandibular fractures. 6. Understand the reasons for conservative (closed) versus aggressive (open) treatment of mandibular injury.\n\n\nBACKGROUND\nFractures of the pediatric mandible are complicated by the anatomic complexity of the developing mandible, particularly by the presence of tooth buds and the eruption of deciduous and permanent teeth. Traditional methods of fracture reduction and fixation employed in adults have little applicability in the pediatric population.\n\n\nMETHODS\nThe authors describe the surgical techniques that have been used at their institution and those that can be used safely in the pediatric setting.\n\n\nRESULTS\nIn most cases, \"conservative\" management is the preferred option, especially in the treatment of condylar fractures. In cases requiring surgical intervention, interdental wiring, drop wires in combination with circummandibular wires, and acrylic splints are suited well to specific phases of dental maturation.\n\n\nCONCLUSION\nOpen reduction and internal fixation using monocortical screws and microplates or resorbable plates and screws are acceptable techniques in the pediatric patient, but they require special safeguards. Algorithms are presented to simplify management of these complicated injuries.",
"title": ""
},
{
"docid": "d354444d185dab2336ed91b229006ab0",
"text": "Our motivation is to determine whether risks such as implementation error-proneness can be isolated into three types of containers at design time. This paper identifies several container candidates in other research that fit the risk container concept. Two industrial case studies were used to determine which of three container types tested is most effective at isolating and predicting at design time the risk of implementation error-proneness. We found that Design Rule Containers were more effective than Use Case and Resource Containers.",
"title": ""
},
{
"docid": "cebc36cd572740069ab22e8181c405c4",
"text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"title": ""
},
{
"docid": "e165cac5eb7ad77b43670e4558011210",
"text": "PURPOSE\nTo retrospectively review our experience in infants with glanular hypospadias or hooded prepuce without meatal anomaly, who underwent circumcision with the plastibell device. Although circumcision with the plastibell device is well described, there are no reported experiences pertaining to hooded prepuce or glanular hypospadias that have been operated on by this technique.\n\n\nMATERIALS AND METHODS\nBetween September 2002 and September 2008, 21 children with hooded prepuce (age 1 to 11 months, mean 4.6 months) were referred for hypospadias repair. Four of them did not have meatal anomaly. Their parents accepted this small anomaly and requested circumcision without glanuloplasty. In all cases, the circumcision was corrected by a plastibell device.\n\n\nRESULTS\nNo complications occurred in the circumcised patients, except delayed falling of bell in one case that was removed by a surgeon, after the tenth day.\n\n\nCONCLUSION\nCircumcision with the plastibell device is a suitable method for excision of hooded prepuce. It can also be used successfully in infants, who have miniglanular hypospadias, and whose parents accepted this small anomaly.",
"title": ""
},
{
"docid": "17b8bff80cf87fb7e3c6c729bb41c99e",
"text": "Off-policy reinforcement learning enables near-optimal policy from suboptimal experience, thereby provisions opportunity for artificial intelligence applications in healthcare. Previous works have mainly framed patient-clinician interactions as Markov decision processes, while true physiological states are not necessarily fully observable from clinical data. We capture this situation with partially observable Markov decision process, in which an agent optimises its actions in a belief represented as a distribution of patient states inferred from individual history trajectories. A Gaussian mixture model is fitted for the observed data. Moreover, we take into account the fact that nuance in pharmaceutical dosage could presumably result in significantly different effect by modelling a continuous policy through a Gaussian approximator directly in the policy space, i.e. the actor. To address the challenge of infinite number of possible belief states which renders exact value iteration intractable, we evaluate and plan for only every encountered belief, through heuristic search tree by tightly maintaining lower and upper bounds of the true value of belief. We further resort to function approximations to update value bounds estimation, i.e. the critic, so that the tree search can be improved through more compact bounds at the fringe nodes that will be back-propagated to the root. Both actor and critic parameters are learned via gradient-based approaches. Our proposed policy trained from real intensive care unit data is capable of dictating dosing on vasopressors and intravenous fluids for sepsis patients that lead to the best patient outcomes.",
"title": ""
},
{
"docid": "7920ac3492c7b3ef07e33857800ef66f",
"text": "Despite of processing elements which are thousands of times faster than the neurons in the brain, modern computers still cannot match quite a few processing capabilities of the brain, many of which we even consider trivial (such as recognizing faces or voices, or following a conversation). A common principle for those capabilities lies in the use of correlations between patterns in order to identify patterns which are similar. Looking at the brain as an information processing mechanism with { maybe among others { associative processing capabilities together with the converse view of associative memories as certain types of artiicial neural networks initiated a number of interesting results, ranging from theoretical considerations to insights in the functioning of neurons, as well as parallel hardware implementations of neural associative memories. This paper discusses three main aspects of neural associative memories: theoretical investigations, e.g. on the information storage capacity, local learning rules, eeective retrieval strategies, and encoding schemes implementation aspects, in particular for parallel hardware and applications One important outcome of our analytical considerations is that the combination of binary synaptic weights, sparsely encoded memory patterns, and local learning rules | in particular Hebbian learning | leads to favorable representation and access schemes. Based on these considerations, a series of parallel hardware architectures has been developed in the last decade; the current one is the Pan-IV (Parallel Associative Network), which uses the special purpose Bacchus{chips and standard memory for realizing 4096 neurons with 128 MBytes of storage capacity.",
"title": ""
},
{
"docid": "8d71cea3459c83a265b81cc37aa14b70",
"text": "BACKGROUND\nThe aim of this study was to determine the relevance of apelin and insulin resistance (IR) with polycystic ovary syndrome (PCOS) and to assess the possible therapeutic effect of the combined therapy of drospirenone-ethinylestradiol (DRSP-EE) combined with metformin.\n\n\nMATERIAL AND METHODS\nSixty-three PCOS patients and 40 non-PCOS infertile patients were recruited. The fasting serum levels of follicle stimulating hormone (FSH), luteinizing hormone (LH), testosterone (T), prolactin (PRL), estradiol (E2), glucose (FBG), insulin (FINS), and apelin at the early follicular phase were measured. To further investigate the relation between apelin and IR, we treated the PCOS patients with DRSP-EE (1 tablet daily, 21 d/month) plus metformin (500 mg tid) for 3 months. All of the above indices were measured again after treatment.\n\n\nRESULTS\n1) Levels of apelin, LH, LH/FSH, T, and FINS, as well as homeostatic model assessment of IR (HOMA-IR) in PCOS patients, were significantly higher than in the control group before treatment. 2) These indices significantly decreased after treatment with DRSP-EE plus metformin. 3) Correlation analysis showed that apelin level was positively correlated with body mass index (BMI), FINS level, and HOMA-IR.\n\n\nCONCLUSIONS\nApelin level significantly increased in PCOS patients. The combined therapy of DRSP-EE plus metformin not only decreases IR, but also improves apelin level. This combination is a superior approach for PCOS treatment.",
"title": ""
},
{
"docid": "e0597a2bc955598ca31209bd6eb82c88",
"text": "Lateral skin stretch is a promising technology for haptic display of information between an autonomous or semi-autonomous car and a driver. We present the design of a steering wheel with an embedded lateral skin stretch display and report on the results of tests (N=10) conducted in a driving vehicle in suburban traffic. Results are generally consistent with previous results utilizing skin stretch in stationary applications, but a slightly higher, and particularly a faster rate of stretch application is preferred for accurate detection of direction and approximate magnitude.",
"title": ""
},
{
"docid": "c676aaeca813e9636a91a30d1ba82f13",
"text": "BACKGROUND\nLateral ankle sprains may result in pain and disability in the short term, decreased sport activity and early retirement from sports in the mid term, and secondary injuries and development of early osteoarthritis to the ankle in the long term.\n\n\nHYPOTHESIS\nThis combined approach to chronic lateral instability and intra-articular lesions of the ankle is safe and in the long term maintains mechanical stability, functional ability, and a good level of sport activity.\n\n\nSTUDY DESIGN\nCase series; Level of evidence, 4.\n\n\nMETHODS\nWe present the long-term outcomes of 42 athletes who underwent ankle arthroscopy and anterior talofibular Broström repair for management of chronic lateral ankle instability. We assessed in all patients preoperative and postoperative anterior drawer test and side-to-side differences, American Orthopaedic Foot and Ankle Society (AOFAS) score, and Kaikkonen grading scales. Patients were asked about return to sport and level of activity. Patients were also assessed for development of degenerative changes to the ankle, and preoperative versus postoperative findings were compared.\n\n\nRESULTS\nThirty-eight patients were reviewed at an average of 8.7 years (range, 5-13 years) after surgery; 4 patients were lost to follow-up. At the last follow-up, patients were significantly improved for ankle laxity, AOFAS scores, and Kaikkonen scales. The mean AOFAS score improved from 51 (range, 32-71) to 90 (range, 67-100), and the mean Kaikkonen score improved from 45 (range, 30-70) to 90 (range, 65-100). According to outcome criteria set preoperatively, there were 8 failures by the AOFAS score and 9 by the Kaikkonen score. Twenty-two (58%) patients practiced sport at the preinjury level, 6 (16%) had changed to lower levels but were still active in less demanding sports (cycling and tennis), and 10 (26%) had abandoned active sport participation although they still were physically active. Six of these patients did not feel safe with their ankle because of the occurrence of new episodes of ankle instability. Of the 27 patients who had no evidence of degenerative changes preoperatively, 8 patients (30%) had radiographic signs of degenerative changes (5 grade I and 3 grade II) of the ankle; 4 of the 11 patients (11%) with preexisting grade I changes remained unchanged, and 7 patients (18%) had progressed to grade II. No correlation was found between osteoarthritis and status of sport activity (P = .72).\n\n\nCONCLUSION\nCombined Broström repair and ankle arthroscopy are safe and allow most patients to return to preinjury daily and sport activities.",
"title": ""
},
{
"docid": "4106a8cf90180e237fdbe847c13d0126",
"text": "The Internet has witnessed the proliferation of applications and services that rely on HTTP as application protocol. Users play games, read emails, watch videos, chat and access web pages using their PC, which in turn downloads tens or hundreds of URLs to fetch all the objects needed to display the requested content. As result, billions of URLs are observed in the network. When monitoring the traffic, thus, it is becoming more and more important to have methodologies and tools that allow one to dig into this data and extract useful information. In this paper, we present CLUE, Clustering for URL Exploration, a methodology that leverages clustering algorithms, i.e., unsupervised techniques developed in the data mining field to extract knowledge from passive observation of URLs carried by the network. This is a challenging problem given the unstructured format of URLs, which, being strings, call for specialized approaches. Inspired by text-mining algorithms, we introduce the concept of URL-distance and use it to compose clusters of URLs using the well-known DBSCAN algorithm. Experiments on actual datasets show encouraging results. Well-separated and consistent clusters emerge and allow us to identify, e.g., malicious traffic, advertising services, and thirdparty tracking systems. In a nutshell, our clustering algorithm offers the means to get insights on the data carried by the network, with applications in the security or privacy protection fields.",
"title": ""
},
{
"docid": "113c07908c1f22c7671553c7f28c0b3f",
"text": "Nearly 80% of children in the United States have at least 1 sibling, indicating that the birth of a baby sibling is a normative ecological transition for most children. Many clinicians and theoreticians believe the transition is stressful, constituting a developmental crisis for most children. Yet, a comprehensive review of the empirical literature on children's adjustment over the transition to siblinghood (TTS) has not been done for several decades. The current review summarizes research examining change in first borns' adjustment to determine whether there is evidence that the TTS is disruptive for most children. Thirty studies addressing the TTS were found, and of those studies, the evidence did not support a crisis model of developmental transitions, nor was there overwhelming evidence of consistent changes in firstborn adjustment. Although there were decreases in children's affection and responsiveness toward mothers, the results were more equivocal for many other behaviors (e.g., sleep problems, anxiety, aggression, regression). An inspection of the scientific literature indicated there are large individual differences in children's adjustment and that the TTS can be a time of disruption, an occasion for developmental advances, or a period of quiescence with no noticeable changes. The TTS may be a developmental turning point for some children that portends future psychopathology or growth depending on the transactions between children and the changes in the ecological context over time. A developmental ecological systems framework guided the discussion of how child, parent, and contextual factors may contribute to the prediction of firstborn children's successful adaptation to the birth of a sibling.",
"title": ""
},
{
"docid": "193cb03ebb59935ea33d23daaebbfb74",
"text": "We study semi-supervised learning when the data consists of multiple intersecting manifolds. We give a finite sample analysis to quantify the potential gain of using unlabeled data in this multi-manifold setting. We then propose a semi-supervised learning algorithm that separates different manifolds into decision sets, and performs supervised learning within each set. Our algorithm involves a novel application of Hellinger distance and size-constrained spectral clustering. Experiments demonstrate the benefit of our multimanifold semi-supervised learning approach.",
"title": ""
},
{
"docid": "0e1e5ab11e04789e00c99439384edc82",
"text": "Linking multiple accounts owned by the same user across different online social networks (OSNs) is an important issue in social networks, known as identity reconciliation. Graph matching is one of popular techniques to solve this problem by identifying a map that matches a set of vertices across different OSNs. Among them, percolation-based graph matching (PGM) has been explored to identify entities belonging to a same user across two different networks based on a set of initial pre-matched seed nodes and graph structural information. However, existing PGM algorithms have been applied in only undirected networks while many OSNs are represented by directional relationships (e.g., followers or followees in Twitter or Facebook). For PGM to be applicable in real world OSNs represented by directed networks with a small set of overlapping vertices, we propose a percolation-based directed graph matching algorithm, namely PDGM, by considering the following two key features: (1) similarity of two nodes based on directional relationships (i.e., outgoing edges vs. incoming edges); and (2) celebrity penalty such as penalty given for nodes with a high in-degree. Through the extensive simulation experiments, our results show that the proposed PDGM outperforms the baseline PGM counterpart that does not consider either directional relationships or celebrity penalty.",
"title": ""
},
{
"docid": "da2bc0813d4108606efef507e50100e3",
"text": "Prediction is one of the most attractive aspects in data mining. Link prediction has recently attracted the attention of many researchers as an effective technique to be used in graph based models in general and in particular for social network analysis due to the recent popularity of the field. Link prediction helps to understand associations between nodes in social communities. Existing link prediction-related approaches described in the literature are limited to predict links that are anticipated to exist in the future. To the best of our knowledge, none of the previous works in this area has explored the prediction of links that could disappear in the future. We argue that the latter set of links are important to know about; they are at least equally important as and do complement the positive link prediction process in order to plan better for the future. In this paper, we propose a link prediction model which is capable of predicting both links that might exist and links that may disappear in the future. The model has been successfully applied in two different though very related domains, namely health care and gene expression networks. The former application concentrates on physicians and their interactions while the second application covers genes and their interactions. We have tested our model using different classifiers and the reported results are encouraging. Finally, we compare our approach with the internal links approach and we reached the conclusion that our approach performs very well in both bipartite and non-bipartite graphs.",
"title": ""
},
{
"docid": "af254a16b14a3880c9b8fe5b13f1a695",
"text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis.",
"title": ""
}
] |
scidocsrr
|
d37b1936d83efd035c88bc5dcac8fe31
|
USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors
|
[
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "ac96b284847f58c7683df92e13157f40",
"text": "Falls are dangerous for the aged population as they can adversely affect health. Therefore, many fall detection systems have been developed. However, prevalent methods only use accelerometers to isolate falls from activities of daily living (ADL). This makes it difficult to distinguish real falls from certain fall-like activities such as sitting down quickly and jumping, resulting in many false positives. Body orientation is also used as a means of detecting falls, but it is not very useful when the ending position is not horizontal, e.g. falls happen on stairs. In this paper we present a novel fall detection system using both accelerometers and gyroscopes. We divide human activities into two categories: static postures and dynamic transitions. By using two tri-axial accelerometers at separate body locations, our system can recognize four kinds of static postures: standing, bending, sitting, and lying. Motions between these static postures are considered as dynamic transitions. Linear acceleration and angular velocity are measured to determine whether motion transitions are intentional. If the transition before a lying posture is not intentional, a fall event is detected. Our algorithm, coupled with accelerometers and gyroscopes, reduces both false positives and false negatives, while improving fall detection accuracy. In addition, our solution features low computational cost and real-time response.",
"title": ""
},
{
"docid": "13d8ce0c85befb38e6f2da583ac0295b",
"text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.",
"title": ""
}
] |
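The fall-detection record above combines accelerometer and gyroscope thresholds with posture information. Below is a minimal illustrative sketch of that idea, not the authors' algorithm; the threshold values and posture labels are assumptions chosen only for demonstration.

```python
# Minimal sketch (illustrative only, thresholds are invented): a threshold
# rule in the spirit of the fall-detection abstract above -- a fall is flagged
# when a high-energy transition (large acceleration and angular velocity)
# ends in a lying posture.

ACC_THRESHOLD_G = 2.5       # peak linear acceleration, in g (assumed value)
GYRO_THRESHOLD_DPS = 200.0  # peak angular velocity, in deg/s (assumed value)

def is_fall(peak_acc_g, peak_gyro_dps, end_posture):
    """Return True if the transition looks unintentional and ends lying down."""
    unintentional = peak_acc_g > ACC_THRESHOLD_G and peak_gyro_dps > GYRO_THRESHOLD_DPS
    return unintentional and end_posture == "lying"

# Example: a hard impact ending in a lying posture is classified as a fall,
# while deliberately lying down (low peaks) is not.
print(is_fall(3.1, 260.0, "lying"))   # True
print(is_fall(1.2, 40.0, "lying"))    # False
```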
[
{
"docid": "9415182c28d6c20768cfba247eb63bac",
"text": "The aim of this paper is to perform the main part of the restructuring processes with Business Process Reengineering (BPR) methodology. The first step was to choose the processes for analysis. Two business processes, which occur in most of the manufacturing companies, have been selected. Afterwards, current state of these processes was examined. The conclusions were used to propose own changes in accordance with assumptions of the BPR. This was possible through modelling and simulation of selected processes with iGrafx modeling software.",
"title": ""
},
{
"docid": "2a2f8ff4590eff0d930c5d7168ef5a58",
"text": "The six-step operation of surface-mounted permanent magnet machine drives in a flux weakening region has many advantages compared to the pulse width modulation mode, such as the reduced switching loss and fully utilized inverter output voltage. However, if the ratio of the sampling frequency to the fundamental frequency is low in fixed sampling system, the low-frequency oscillation in the current can be incurred in the six-step operation. The low-frequency current causes a system stability problem and reduces system efficiency due to an excessive heat and high power loss. Therefore, this paper proposes the variable time step controller for six-step operation. By updating an output voltage, sampling phase currents, and executing the digital controller synchronized with the variable sampling time, the turn on and off switch signals for six-step operation can be generated at the exact moment. As a result, the low-frequency oscillation in the phase current can be eliminated. In addition, the system transfer function of the proposed control method is discussed for the system stability and system dynamic analysis. The effectiveness of the proposed method is verified by the comparative simulation and experimental results.",
"title": ""
},
{
"docid": "6021388395ddd784422a22d30dac8797",
"text": "Introduction: The European Directive 2013/59/EURATOM requires patient radiation dose information to be included in the medical report of radiological procedures. To provide effective communication to the patient, it is necessary to first assess the patient's level of knowledge regarding medical exposure. The goal of this work is to survey patients’ current knowledge level of both medical exposure to ionizing radiation and professional disciplines and communication means used by patients to garner information. Material and Methods: A questionnaire was designed comprised of thirteen questions: 737 patients participated in the survey. The data were analysed based on population age, education, and number of radiological procedures received in the three years prior to survey. Results: A majority of respondents (56.4%) did not know which modality uses ionizing radiation. 74.7% had never discussed with healthcare professionals the risk concerning their medical radiological procedures. 70.1% were not aware of the professionals that have expertise to discuss the use of ionizing radiation for medical purposes, and 84.7% believe it is important to have the radiation dose information stated in the medical report. Conclusion: Patients agree with new regulations that it is important to know the radiation level related to the medical exposure, but there is little awareness in terms of which modalities use X-Rays and the professionals and channels that can help them to better understand the exposure information. To plan effective communication, it is essential to devise methods and adequate resources for key professionals (medical physicists, radiologists, referring physicians) to convey correct and effective information.",
"title": ""
},
{
"docid": "53eabc7cc5e4f3c6a354d88ea1251fbf",
"text": "An improved very wideband radial waveguide-based power divider/combiner is presented, which uses switching to compensate for cavity resonance. The combiner is implemented with broadband probes composed of cylindrical conductors and dielectric spacers, arranged on a rod for mechanical stability. The proposed switch-controlled radial power combiner provides low loss (<1 dB), broad bandwidth (400 MHz∼2000 MHz), and high power capability.",
"title": ""
},
{
"docid": "3f9e9ee1568e096707bda07cb959cec5",
"text": "Animal acoustic communication often takes the form of complex sequences, made up of multiple distinct acoustic units. Apart from the well-known example of birdsong, other animals such as insects, amphibians, and mammals (including bats, rodents, primates, and cetaceans) also generate complex acoustic sequences. Occasionally, such as with birdsong, the adaptive role of these sequences seems clear (e.g. mate attraction and territorial defence). More often however, researchers have only begun to characterise - let alone understand - the significance and meaning of acoustic sequences. Hypotheses abound, but there is little agreement as to how sequences should be defined and analysed. Our review aims to outline suitable methods for testing these hypotheses, and to describe the major limitations to our current and near-future knowledge on questions of acoustic sequences. This review and prospectus is the result of a collaborative effort between 43 scientists from the fields of animal behaviour, ecology and evolution, signal processing, machine learning, quantitative linguistics, and information theory, who gathered for a 2013 workshop entitled, 'Analysing vocal sequences in animals'. Our goal is to present not just a review of the state of the art, but to propose a methodological framework that summarises what we suggest are the best practices for research in this field, across taxa and across disciplines. We also provide a tutorial-style introduction to some of the most promising algorithmic approaches for analysing sequences. We divide our review into three sections: identifying the distinct units of an acoustic sequence, describing the different ways that information can be contained within a sequence, and analysing the structure of that sequence. Each of these sections is further subdivided to address the key questions and approaches in that area. We propose a uniform, systematic, and comprehensive approach to studying sequences, with the goal of clarifying research terms used in different fields, and facilitating collaboration and comparative studies. Allowing greater interdisciplinary collaboration will facilitate the investigation of many important questions in the evolution of communication and sociality.",
"title": ""
},
{
"docid": "05a5620c883117fd45de32f124b32cc6",
"text": "The powerful and democratic activity of social tagging allows the wide set of Web users to add free annotations on resources. Tags express user interests, preferences and needs, but also automatically generate folksonomies. They can be considered as gold mine, especially for e-commerce applications, in order to provide effective recommendations. Thus, several recommender systems exploit folksonomies in this context. Folksonomies have also been involved in many information retrieval approaches. In considering that information retrieval and recommender systems are siblings, we notice that few works deal with the integration of their approaches, concepts and techniques to improve recommendation. This paper is a first attempt in this direction. We propose a trail through recommender systems, social Web, e-commerce and social commerce, tags and information retrieval: an overview on the methodologies, and a survey on folksonomy-based information retrieval from recommender systems point of view, delineating a set of open and new perspectives.",
"title": ""
},
{
"docid": "2701f46ac9a473cb809773df5ae1d612",
"text": "Testing and measuring the security of software system architectures is a difficult task. An attempt is made in this paper to analyze the issues of architecture security of object-oriented software’s using common security concepts to evaluate the security of a system under design. Object oriented systems are based on various architectures like COM, DCOM, CORBA, MVC and Broker. In object oriented technology the basic system component is an object. Individual system component is posing it own risk in the system. Security policies and the associated risk in these software architectures can be calculated for the individual component. Overall risk can be calculated based on the context and risk factors in the architecture. Small risk factors get accumulated together and form a major risk in the systems and can damage the systems.",
"title": ""
},
{
"docid": "cebc36cd572740069ab22e8181c405c4",
"text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"title": ""
},
{
"docid": "5e64e36e76f4c0577ae3608b6e715a1f",
"text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.",
"title": ""
},
{
"docid": "98dcb6001d3b487493e911cc2737ce47",
"text": "The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow to discriminate between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420 which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.",
"title": ""
},
{
"docid": "64ddf475e5fcf7407e4dfd65f95a68a8",
"text": "Fuzzy PID controllers have been developed and applied to many fields for over a period of 30 years. However, there is no systematic method to design membership functions (MFs) for inputs and outputs of a fuzzy system. Then optimizing the MFs is considered as a system identification problem for a nonlinear dynamic system which makes control challenges. This paper presents a novel online method using a robust extended Kalman filter to optimize a Mamdani fuzzy PID controller. The robust extended Kalman filter (REKF) is used to adjust the controller parameters automatically during the operation process of any system applying the controller to minimize the control error. The fuzzy PID controller is tuned about the shape of MFs and rules to adapt with the working conditions and the control performance is improved significantly. The proposed method in this research is verified by its application to the force control problem of an electro-hydraulic actuator. Simulations and experimental results show that proposed method is effective for the online optimization of the fuzzy PID controller. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "41e3ec35f9ca27eef6e70c963628281e",
"text": "An emerging problem in computer vision is the reconstruction of 3D shape and pose of an object from a single image. Hitherto, the problem has been addressed through the application of canonical deep learning methods to regress from the image directly to the 3D shape and pose labels. These approaches, however, are problematic from two perspectives. First, they are minimizing the error between 3D shapes and pose labels - with little thought about the nature of this “label error” when reprojecting the shape back onto the image. Second, they rely on the onerous and ill-posed task of hand labeling natural images with respect to 3D shape and pose. In this paper we define the new task of pose-aware shape reconstruction from a single image, and we advocate that cheaper 2D annotations of objects silhouettes in natural images can be utilized. We design architectures of pose-aware shape reconstruction which reproject the predicted shape back on to the image using the predicted pose. Our evaluation on several object categories demonstrates the superiority of our method for predicting pose-aware 3D shapes from natural images.",
"title": ""
},
{
"docid": "869889e8be00663e994631b17061479b",
"text": "In this study we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes n-grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalization, achieving the best result of 80% accuracy for this 3-class classification task. Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text not always possible with surface n-grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed.",
"title": ""
},
{
"docid": "5f6b9fd58c633bf1de0158f0356bda80",
"text": "Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.",
"title": ""
},
{
"docid": "53e3b3c3bcc4fc3c4ddb2e3defcc78a2",
"text": "The Cognitive Tutor Authoring Tools (CTAT) support creation of a novel type of tutors called example-tracing tutors. Unlike other types of ITSs (e.g., model-tracing tutors, constraint-based tutors), exampletracing tutors evaluate student behavior by flexibly comparing it against generalized examples of problemsolving behavior. Example-tracing tutors are capable of sophisticated tutoring behaviors; they provide step-bystep guidance on complex problems while recognizing multiple student strategies and (where needed) maintaining multiple interpretations of student behavior. They therefore go well beyond VanLehn’s (2006) minimum criterion for ITS status, namely, that the system has an inner loop (i.e., provides within-problem guidance, not just end-of-problem feedback). Using CTAT, example-tracing tutors can be created without programming. An author creates a tutor interface through drag-and-drop techniques, and then demonstrates the problem-solving behaviors to be tutored. These behaviors are recorded in a “behavior graph,” which can be easily edited and generalized. Compared to other approaches to programming by demonstration for ITS development, CTAT implements a simpler method (no machine learning is used) that is currently more pragmatic and proven for widespread, real-world use by non-programmers. Development time estimates from a large number of real-world ITS projects that have used CTAT suggest that example-tracing tutors reduce development cost by a factor of 4 to 8, compared to “historical” estimates of ITS development time and cost. The main contributions of the work are a novel ITS technology, based on the use of generalized behavioral examples to guide students in problem-solving exercises, as well as a suite of mature and robust tools for efficiently building real-world ITSs without programming.",
"title": ""
},
{
"docid": "0481c35949653971b75a3a4c3051c590",
"text": "Handling appearance variations is a very challenging problem for visual tracking. Existing methods usually solve this problem by relying on an effective appearance model with two features: 1) being capable of discriminating the tracked target from its background 2) being robust to the target’s appearance variations during tracking. Instead of integrating the two requirements into the appearance model, in this paper, we propose a tracking method that deals with these problems separately based on sparse representation in a particle filter framework. Each target candidate defined by a particle is linearly represented by the target and background templates with an additive representation error. Discriminating the target from its background is achieved by activating the target templates or the background templates in the linear system in a competitive manner. The target’s appearance variations are directly modeled as the representation error. An online algorithm is used to learn the basis functions that sparsely span the representation error. The linear system is solved via l1 minimization. The candidate with the smallest reconstruction error using the target templates is selected as the tracking result. We test the proposed approach using four sequences with heavy occlusions, large pose variations, drastic illumination changes and low foreground-background contrast. The proposed approach shows excellent performance in comparison with two latest state-of-the-art trackers.",
"title": ""
},
{
"docid": "8207368588342eb6b114b23e14cd8349",
"text": "In this letter, we present an improved Vivaldi antenna by using ultrathin microwave-absorbing materials (MAMs). The reverse currents at the outer side edges of the Vivaldi antenna always distort its radiation patterns and worsen its VSWR in some frequencies. To solve this problem, the ultrathin MAMs are loaded at the side edges of the antenna to absorb the energy of the current, and also to reduce its radar cross section (RCS). Simulated and measured results show that the proposed antenna operates from 0.8 to 15.5 GHz with VSWR < 2.15, and its gains are more stable than those of the antenna without MAMs in a very wide frequency band. The average monostatic RCSs of the proposed antenna in 2–18-GHz band and $\\theta \\in [- 30^{\\circ}, + 30^{\\circ}],\\; \\phi = [- 45, + 45^{\\circ}]$ angle range are less than 0.001 m2, which make it very attractive for a variety of applications.",
"title": ""
},
{
"docid": "6c411f36e88a39684eb9779462117e6b",
"text": "Number of people who use internet and websites for various purposes is increasing at an astonishing rate. More and more people rely on online sites for purchasing songs, apparels, books, rented movies etc. The competition between the online sites forced the web site owners to provide personalized services to their customers. So the recommender systems came into existence. Recommender systems are active information filtering systems that attempt to present to the user, information items in which the user is interested in. The websites implement recommender system feature using collaborative filtering, content based or hybrid approaches. The recommender systems also suffer from issues like cold start, sparsity and over specialization. Cold start problem is that the recommenders cannot draw inferences for users or items for which it does not have sufficient information. This paper attempts to propose a solution to the cold start problem by combining association rules and clustering technique. Comparison is done between the performance of the recommender system when association rule technique is used and the performance when association rule and clustering is combined. The experiments with the implemented system proved that accuracy can be improved when association rules and clustering is combined. An accuracy improvement of 36% was achieved by using the combination technique over the association rule technique.",
"title": ""
},
{
"docid": "578130d8ef9d18041c84ed226af8c84a",
"text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.",
"title": ""
}
] |
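One record above proposes fairness measures for ranked outputs. The sketch below is a toy measure of my own devising (not the paper's metric): it compares the protected group's share in top-k prefixes against its overall share in the ranking, so larger values indicate less fair prefixes.

```python
# Minimal sketch (a toy measure, not the paper's definition): quantify
# unfairness of a ranking by the worst gap between the protected group's
# share in a top-k prefix and its share in the whole ranking.

def prefix_unfairness(ranking, protected, k_values=(3, 5, 10)):
    """Max absolute gap between the protected share in a top-k prefix
    and the protected share in the whole ranking."""
    overall = sum(1 for x in ranking if x in protected) / len(ranking)
    worst = 0.0
    for k in k_values:
        k = min(k, len(ranking))
        share = sum(1 for x in ranking[:k] if x in protected) / k
        worst = max(worst, abs(share - overall))
    return worst

# Toy example: group "B" is the protected group.
ranking = ["A1", "A2", "A3", "B1", "A4", "B2", "A5", "B3", "A6", "B4"]
protected = {"B1", "B2", "B3", "B4"}
print(prefix_unfairness(ranking, protected))  # 0.4 here: top-3 contains no B items
```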
scidocsrr
|
083c185e2cb0c777fd25956e47b97b1c
|
Online decision making in crowdsourcing markets: theoretical challenges
|
[
{
"docid": "526e6384b38b9254f0e755a13b3ab193",
"text": "In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of $n$ trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions.\n In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the \"Lipschitz MAB problem\". We present a complete solution for the multi-armed problem in this setting. That is, for every metric space (L,X) we define an isometry invariant Max Min COV(X) which bounds from below the performance of Lipschitz MAB algorithms for $X$, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.",
"title": ""
}
] |
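The record above concerns the Lipschitz multi-armed bandit problem over metric strategy spaces. As background, the sketch below implements plain UCB1 on a small finite arm set, the baseline setting that the Lipschitz MAB formulation generalizes; it is not the paper's zooming algorithm, and the Bernoulli arm means are invented.

```python
# Minimal sketch: standard UCB1 for a small finite arm set (not the
# Lipschitz/metric-space algorithm of the abstract above). Payoff
# distributions are invented for illustration.

import math
import random

random.seed(0)
true_means = [0.2, 0.5, 0.7]          # assumed Bernoulli arm means
counts = [0] * len(true_means)
sums = [0.0] * len(true_means)

def pull(arm):
    return 1.0 if random.random() < true_means[arm] else 0.0

total_reward = 0.0
for t in range(1, 1001):
    if t <= len(true_means):
        arm = t - 1                    # play each arm once first
    else:
        # UCB1 index: empirical mean plus an exploration bonus.
        arm = max(range(len(true_means)),
                  key=lambda a: sums[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    r = pull(arm)
    counts[arm] += 1
    sums[arm] += r
    total_reward += r

print(counts, round(total_reward, 1))  # most pulls should go to the best arm
```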
[
{
"docid": "1efe9405027ad67ccba8b18c3a28c6f0",
"text": "To encourage strong passwords, system administrators employ password-composition policies, such as a traditional policy requiring that passwords have at least 8 characters from 4 character classes and pass a dictionary check. Recent research has suggested, however, that policies requiring longer passwords with fewer additional requirements can be more usable and in some cases more secure than this traditional policy. To explore long passwords in more detail, we conducted an online experiment with 8,143 participants. Using a cracking algorithm modified for longer passwords, we evaluate eight policies across a variety of metrics for strength and usability. Among the longer policies, we discover new evidence for a security/usability tradeoff, with none being strictly better than another on both dimensions. However, several policies are both more usable and more secure that the traditional policy we tested. Our analyses additionally reveal common patterns and strings found in cracked passwords. We discuss how system administrators can use these results to improve password-composition policies.",
"title": ""
},
{
"docid": "13a9329bdd46ba243003090bf219a20a",
"text": "Visual art represents a powerful resource for mental and physical well-being. However, little is known about the underlying effects at a neural level. A critical question is whether visual art production and cognitive art evaluation may have different effects on the functional interplay of the brain's default mode network (DMN). We used fMRI to investigate the DMN of a non-clinical sample of 28 post-retirement adults (63.71 years ±3.52 SD) before (T0) and after (T1) weekly participation in two different 10-week-long art interventions. Participants were randomly assigned to groups stratified by gender and age. In the visual art production group 14 participants actively produced art in an art class. In the cognitive art evaluation group 14 participants cognitively evaluated artwork at a museum. The DMN of both groups was identified by using a seed voxel correlation analysis (SCA) in the posterior cingulated cortex (PCC/preCUN). An analysis of covariance (ANCOVA) was employed to relate fMRI data to psychological resilience which was measured with the brief German counterpart of the Resilience Scale (RS-11). We observed that the visual art production group showed greater spatial improvement in functional connectivity of PCC/preCUN to the frontal and parietal cortices from T0 to T1 than the cognitive art evaluation group. Moreover, the functional connectivity in the visual art production group was related to psychological resilience (i.e., stress resistance) at T1. Our findings are the first to demonstrate the neural effects of visual art production on psychological resilience in adulthood.",
"title": ""
},
{
"docid": "589c347dd860c238e1ee60bf81c08b1f",
"text": "OBJECTIVE\nEven though much progress has been made in defining primitive hematologic cell phenotypes by using flow cytometry and clonogenic methods, the direct method for study of marrow repopulating cells still remains to be elusive. Long Term Culture-Initiating Cells (LTC-IC) are known as the most primitive human hematopoietic cells detectable by in vitro functional assays.\n\n\nMETHODS\nIn this study, LTC-IC with limiting dilution assay was used to evaluate repopulating potential of cord blood stem cells.\n\n\nRESULTS\nCD34 selections from cord blood were completed succesfully with magnetic beads (73,64%±9,12). The average incidence of week 5 LTC-IC was 1: 1966 CD34+ cells (range 1261-2906).\n\n\nCONCLUSION\nWe found that number of LTC-IC obtained from CD34+ cord blood cells were relatively low in numbers when compared to previously reported bone marrow CD34+ cells. This may be due to the lack of some transcription and growth factors along with some cytokines and chemokines released by accessory cells which are necessary for proliferation of cord blood progenitor/stem cells and it presents an area of interest for further studies.",
"title": ""
},
{
"docid": "5d80c293595fc4fc9fd52218a3a639fa",
"text": "Recent works on image retrieval have proposed to index images by compact representations encoding powerful local descriptors, such as the closely related VLAD and Fisher vector. By combining such a representation with a suitable coding technique, it is possible to encode an image in a few dozen bytes while achieving excellent retrieval results. This paper revisits some assumptions proposed in this context regarding the handling of \"visual burstiness\", and shows that ad-hoc choices are implicitly done which are not desirable. Focusing on VLAD without loss of generality, we propose to modify several steps of the original design. Albeit simple, these modifications significantly improve VLAD and make it compare favorably against the state of the art.",
"title": ""
},
{
"docid": "a47d9d5ddcd605755eb60d5499ad7f7a",
"text": "This paper presents a 14MHz Class-E power amplifier to be used for wireless power transmission. The Class-E power amplifier was built to consider the VSWR and the frequency bandwidth. Tw o kinds of circuits were designed: the high and low quality factor amplifiers. The low quality factor amplifier is confirmed to have larger bandwidth than the high quality factor amplifier. It has also possessed less sensitive characteristics. Therefore, the low quality factor amplifier circuit was adopted and tested. The effect of gate driving input source is studied. The efficiency of the Class-E amplifier reaches 85.5% at 63W.",
"title": ""
},
{
"docid": "3c1c89aeeae6bde84e338c15c44b20ce",
"text": "Using statistical machine learning for making security decisions introduces new vulnerabilities in large scale systems. This paper shows how an adversary can exploit statistical machine learning, as used in the SpamBayes spam filter, to render it useless—even if the adversary’s access is limited to only 1% of the training messages. We further demonstrate a new class of focused attacks that successfully prevent victims from receiving specific email messages. Finally, we introduce two new types of defenses against these attacks.",
"title": ""
},
{
"docid": "e747b34292b95cd490b11ace7e7fdfec",
"text": "The present study used simulator sickness questionnaire data from nine different studies to validate and explore the work of the most widely used simulator sickness index. The ability to predict participant dropouts as a result of simulator sickness symptoms was also evaluated. Overall, participants experiencing nausea and nausea-related symptoms were the most likely to fail to complete simulations. Further, simulation specific factors that increase the discrepancy between visual and vestibular perceptions are also related to higher participant study dropout rates. As a result, it is suggested that simulations minimize turns, curves, stops, et cetera, if possible, in order to minimize participant simulation sickness symptoms. The present study highlights several factors to attend to in order to minimize elevated participant simulation sickness.",
"title": ""
},
{
"docid": "f5ce928373042e01a48496b104da28f6",
"text": "This paper explores the most common methods of data collection used in qualitative research: interviews and focus groups. The paper examines each method in detail, focusing on how they work in practice, when their use is appropriate and what they can offer dentistry. Examples of empirical studies that have used interviews or focus groups are also provided.",
"title": ""
},
{
"docid": "a324180129b78d853c035c2477f54a30",
"text": "A book aiming to build a bridge between two fields that share the subject of research but do not share the same views necessarily puts itself in a difficult position: The authors have either to strike a fair balance at peril of dissatisfying both sides or nail their colors to the mast and cater mainly to one of two communities. For semantic processing of natural language with either NLP methods or Semantic Web approaches, the authors clearly favor the latter and propose a strictly ontology-driven interpretation of natural language. The main contribution of the book, driving semantic processing from the ground up by a formal domain-specific ontology, is elaborated in ten well-structured chapters spanning 143 pages of content.",
"title": ""
},
{
"docid": "c19658ecdae085902d936f615092fbe5",
"text": "Predicting student attrition is an intriguing yet challenging problem for any academic institution. Classimbalanced data is a common in the field of student retention, mainly because a lot of students register but fewer students drop out. Classification techniques for imbalanced dataset can yield deceivingly high prediction accuracy where the overall predictive accuracy is usually driven by the majority class at the expense of having very poor performance on the crucial minority class. In this study, we compared different data balancing techniques to improve the predictive accuracy in minority class while maintaining satisfactory overall classification performance. Specifically, we tested three balancing techniques—oversampling, under-sampling and synthetic minority over-sampling (SMOTE)—along with four popular classification methods—logistic regression, decision trees, neuron networks and support vector machines. We used a large and feature rich institutional student data (between the years 2005 and 2011) to assess the efficacy of both balancing techniques as well as prediction methods. The results indicated that the support vector machine combined with SMOTE data-balancing technique achieved the best classification performance with a 90.24% overall accuracy on the 10-fold holdout sample. All three data-balancing techniques improved the prediction accuracy for the minority class. Applying sensitivity analyses on developed models, we also identified the most important variables for accurate prediction of student attrition. Application of these models has the potential to accurately predict at-risk students and help reduce student dropout rates. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "df4923225affcd0ad02db3719409d5f2",
"text": "Emotions have a high impact in productivity, task quality, creativity, group rapport and job satisfaction. In this work we use lexical sentiment analysis to study emotions expressed in commit comments of different open source projects and analyze their relationship with different factors such as used programming language, time and day of the week in which the commit was made, team distribution and project approval. Our results show that projects developed in Java tend to have more negative commit comments, and that projects that have more distributed teams tend to have a higher positive polarity in their emotional content. Additionally, we found that commit comments written on Mondays tend to a more negative emotion. While our results need to be confirmed by a more representative sample they are an initial step into the study of emotions and related factors in open source projects.",
"title": ""
},
{
"docid": "49e0aa9d6fa579b4217bdd7f61d1d0eb",
"text": "Big data analytics is firmly recognized as a strategic priority for modern enterprises. At the heart of big data analytics lies the data curation process, consists of tasks that transform raw data (unstructured, semi-structured and structured data sources) into curated data, i.e. contextualized data and knowledge that is maintained and made available for use by end-users and applications. To achieve this, the data curation process may involve techniques and algorithms for extracting, classifying, linking, merging, enriching, sampling, and the summarization of data and knowledge. To facilitate the data curation process and enhance the productivity of researchers and developers, we identify and implement a set of basic data curation APIs and make them available as services to researchers and developers to assist them in transforming their raw data into curated data. The curation APIs enable developers to easily add features such as extracting keyword, part of speech, and named entities such as Persons, Locations, Organizations, Companies, Products, Diseases, Drugs, etc.; providing synonyms and stems for extracted information items leveraging lexical knowledge bases for the English language such as WordNet; linking extracted entities to external knowledge bases such as Google Knowledge Graph and Wikidata; discovering similarity among the extracted information items, such as calculating similarity between string and numbers; classifying, sorting and categorizing data into various types, forms or any other distinct class; and indexing structured and unstructured data into their data applications. These services can be accessed via a REST API, and the data is returned as a JSON file that can be integrated into data applications. The curation APIs are available as an open source project on GitHub.",
"title": ""
},
{
"docid": "082517b83d9a9cdce3caef62a579bf2e",
"text": "To enable autonomous driving, a semantic knowledge of the environment is unavoidable. We therefore introduce a multiclass classifier to determine the classes of an object relying solely on radar data. This is a challenging problem as objects of the same category have often a diverse appearance in radar data. As classification methods a random forest classifier and a deep convolutional neural network are evaluated. To get good results despite the limited training data available, we introduce a hybrid approach using an ensemble consisting of the two classifiers. Further we show that the accuracy can be improved significantly by allowing a lower detection rate.",
"title": ""
},
{
"docid": "0ac679740e0e3911af04be9464f76a7d",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "06c3f32f07418575c700e2f0925f4398",
"text": "The spacing of a fixed amount of study time across multiple sessions usually increases subsequent test performance*a finding known as the spacing effect. In the spacing experiment reported here, subjects completed multiple learning trials, and each included a study phase and a test. Once a subject achieved a perfect test, the remaining learning trials within that session comprised what is known as overlearning. The number of these overlearning trials was reduced when learning trials were spaced across multiple sessions rather than massed in a single session. In addition, the degree to which spacing reduced overlearning predicted the size of the spacing effect, which is consistent with the possibility that spacing increases subsequent recall by reducing the occurrence of overlearning. By this account, overlearning is an inefficient use of study time, and the efficacy of spacing depends at least partly on the degree to which it reduces the occurrence of overlearning.",
"title": ""
},
{
"docid": "8f53f02a1bae81e5c06828b6147d2934",
"text": "E-Government, as a vehicle to deliver enhanced services to citizens, is now extending its reach to the elderly population through provision of targeted services. In doing so, the ideals of ubiquitous e-Government may be better achieved. However, there is a lack of studies on e-Government adoption among senior citizens, especially considering that this age group is growing in size and may be averse to new IT applications. This study aims to address this gap by investigating an innovative e- Government service specifically tailored for senior citizens, called CPF e-Withdrawal. Technology adoption model (TAM) is employed as the theoretical foundation, in which perceived usefulness is recognized as the most significant predictor of adoption intention. This study attempts to identify the antecedents of perceived usefulness by drawing from the innovation diffusion literature as well as age-related studies. Our findings agree with TAM and indicate that internet safety perception and perceived ease of use are significant predictors of perceived usefulness.",
"title": ""
},
{
"docid": "c1389acb62cca5cb3cfdec34bd647835",
"text": "A Chinese resume information extraction system (CRIES) based on semi-structured text is designed and implemented to obtain formatted information by extracting text content of every field from resumes in different formats and update information automatically based on the web. Firstly, ideas to classify resumes, some constraints obtained by analyzing resume features and overall extraction strategy is introduced. Then two extraction algorithms for parsing resumes in different text formats are given. Consequently, the system was implemented by java programming. Finally, use the system to resolve the resume samples, and the statistical analysis and system optimization analysis are carried out according to the accuracy rate and recall rate of the extracted results.",
"title": ""
},
{
"docid": "60f2baba7922543e453a3956eb503c05",
"text": "Pylearn2 is a machine learning research library. This does n t just mean that it is a collection of machine learning algorithms that share a comm n API; it means that it has been designed for flexibility and extensibility in ord e to facilitate research projects that involve new or unusual use cases. In this paper we give a brief history of the library, an overview of its basic philosophy, a summar y of the library’s architecture, and a description of how the Pylearn2 communi ty functions socially.",
"title": ""
},
{
"docid": "09f812cae6c8952d27ef86168906ece8",
"text": "Genetic algorithms provide an alternative to traditional optimization techniques by using directed random searches to locate optimal solutions in complex landscapes. We introduce the art and science of genetic algorithms and survey current issues in GA theory and practice. We do not present a detailed study, instead, we offer a quick guide into the labyrinth of GA research. First, we draw the analogy between genetic algorithms and the search processes in nature. Then we describe the genetic algorithm that Holland introduced in 1975 and the workings of GAs. After a survey of techniques proposed as improvements to Holland's GA and of some radically different approaches, we survey the advances in GA theory related to modeling, dynamics, and deception.<<ETX>>",
"title": ""
}
] |
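One record above surveys genetic algorithms. The following sketch shows the basic generational loop (tournament selection, one-point crossover, bit-flip mutation) on the toy OneMax objective; all hyperparameters are arbitrary illustrative choices, not values from the survey.

```python
# Minimal sketch: the basic generational loop of a genetic algorithm
# (selection, crossover, mutation), maximising the number of 1-bits in a
# bit string (the classic OneMax toy problem).

import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 30, 40, 60, 0.02

def fitness(genome):
    return sum(genome)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randint(1, GENOME_LEN - 1)
    return p1[:cut] + p2[cut:]

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # should approach GENOME_LEN
```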
scidocsrr
|
f395c8da9b7b7a4608af759b3e2548fc
|
Towards Robust and Privacy-preserving Text Representations
|
[
{
"docid": "214231e8bb6ccd31a0ea42ffe73c0ee6",
"text": "Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found in web corpora. In this work, we study data and models associated with multilabel object classification and visual semantic role labeling. We find that (a) datasets for these tasks contain significant gender bias and (b) models trained on these datasets further amplify existing bias. For example, the activity cooking is over 33% more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68% at test time. We propose to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference. Our method results in almost no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 47.5% and 40.5% for multilabel classification and visual semantic role labeling, respectively.",
"title": ""
},
{
"docid": "e49aa0d0f060247348f8b3ea0a28d3c6",
"text": "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.",
"title": ""
},
{
"docid": "7726aea214548f52e719c7b0f50d0d7e",
"text": "Part-of-speech (POS) taggers trained on newswire perform much worse on domains such as subtitles, lyrics, or tweets. In addition, these domains are also heterogeneous, e.g., with respect to registers and dialects. In this paper, we consider the problem of learning a POS tagger for subtitles, lyrics, and tweets associated with African-American Vernacular English (AAVE). We learn from a mixture of randomly sampled and manually annotated Twitter data and unlabeled data, which we automatically and partially label using mined tag dictionaries. Our POS tagger obtains a tagging accuracy of 89% on subtitles, 85% on lyrics, and 83% on tweets, with up to 55% error reductions over a state-of-the-art newswire POS tagger, and 15-25% error reductions over a state-of-the-art Twitter POS tagger.",
"title": ""
},
{
"docid": "f18a19159e71e4d2a92a465217f93366",
"text": "Extra-linguistic factors influence language use, and are accounted for by speakers and listeners. Most natural language processing (NLP) tasks to date, however, treat language as uniform. This assumption can harm performance. We investigate the effect of including demographic information on performance in a variety of text-classification tasks. We find that by including age or gender information, we consistently and significantly improve performance over demographic-agnostic models. These results hold across three text-classification tasks in five languages.",
"title": ""
}
] |
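One record above surveys differential privacy and its basic mechanisms. The sketch below illustrates one such basic technique, the Laplace mechanism applied to a counting query (which has sensitivity 1); the toy data and epsilon value are made up for illustration.

```python
# Minimal sketch: the Laplace mechanism for a counting query. A count has
# sensitivity 1, so adding Laplace(1/epsilon) noise to the true count gives
# an epsilon-differentially-private release. Data and epsilon are invented.

import math
import random

random.seed(2)

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Release a counting query with epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 62, 57, 33]           # toy "database"
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```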
[
{
"docid": "22992fe4908ebcf8ae9f22f3ea2d5a27",
"text": "This paper contains a comparison of common, simple thresholding methods. Basic thresholding, two-band thresholding, optimal thresholding (Calvard Riddler), adaptive thresholding, and p-tile thresholding is compared. The different thresholding methods has been implemented in the programming language c, using the image analysis library Xite. The program sources should accompany this paper. 1 Methods of thresholding Basic thresholding. Basic thresholding is done by visiting each pixel site in the image, and set the pixel to maximum value if its value is above or equal to a given threshold value and to the minimum value if the threshold value is below the pixels value. Basic thresholding is often used as a step in other thresholding algorithms. Implemented by the function threshold in thresholding.h Band thresholding. Band thresholding is similar to basic thresholding, but has two threshold values, and set the pixel site to maximum value if the pixels intensity value is between or at the threshold values, else it it set to minimum. Implemented by the function bandthresholding2 in thresholding.h P-tile thresholding. P-tile is a method for choosing the threshold value to input to the “basic thresholding” algorithm. P-tile means “Percentile”, and the threshold is chosen to be the intensity value where the cumulative sum of pixel intensities is closest to the percentile. Implemented by the function ptileThreshold in thresholding.h Optimal thresholding. Optimal thresholding selects a threshold value that is statistically optimal, based on the contents of the image. Algorithm, due to Calvard and Riddler: http://www.ifi.uio.no/forskning/grupper/dsb/Programvare/Xite/",
"title": ""
},
{
"docid": "f7e004c4e506681f2419878b59ad8b53",
"text": "We examine unsupervised machine learning techniques to learn features that best describe configurations of the two-dimensional Ising model and the three-dimensional XY model. The methods range from principal component analysis over manifold and clustering methods to artificial neural-network-based variational autoencoders. They are applied to Monte Carlo-sampled configurations and have, a priori, no knowledge about the Hamiltonian or the order parameter. We find that the most promising algorithms are principal component analysis and variational autoencoders. Their predicted latent parameters correspond to the known order parameters. The latent representations of the models in question are clustered, which makes it possible to identify phases without prior knowledge of their existence. Furthermore, we find that the reconstruction loss function can be used as a universal identifier for phase transitions.",
"title": ""
},
{
"docid": "298d3280deb3bb326314a7324d135911",
"text": "BACKGROUND\nUterine leiomyomas are rarely seen in adolescent and to date nine leiomyoma cases have been reported under age 17. Eight of these have been treated surgically via laparotomic myomectomy.\n\n\nCASE\nA 16-year-old girl presented with a painless, lobulated necrotic mass protruding through the introitus. The mass originated from posterior uterine wall resected using hysteroscopy. Final pathology report revealed a submucous uterine leiomyoma.\n\n\nSUMMARY AND CONCLUSION\nSubmucous uterine leiomyomas may present as a vaginal mass in adolescents and can be safely treated using hysteroscopy.",
"title": ""
},
{
"docid": "5c487fc01aa8a4e86ccef3a59055f9e4",
"text": "IM A G E : LA G U N A D E S IG N /G E TT Y IM A G E S The United States Pharmacopeial Convention (USP) Therapeutic Peptides Expert Panel was formed in 2013 to evaluate quality attributes for synthetic peptides based on currently available regulatory guidance and expectations. Public quality standards for drug products and drug substances are developed by USP and enforceable by FDA. This series of three articles by the Panel explores the current manufacturing and regulatory landscape and provides a comprehensive overview of quality attributes to be considered for successful synthetic peptide active pharmaceutical ingredient (API) development from manufacturing to lot release. Specifically, the first article covers analytical characterization methods, lot release tests, and points to consider for synthetic peptide API manufacturers entering the market. The second article will focus on quality control of raw materials and impurities resulting from the starting materials used for peptide synthesis. The last article will be devoted to manufacturing processes and impurity control of synthetic peptide APIs. In 2012, the number of peptide drugs approved by FDA surpassed the number of approved monoclonal antibodies and enzymes (1). These approvals serve to highlight the recent revival of interest in peptides, which have generally been considered to be poor drug candidates due to their low oral bioavailability and propensity to be rapidly metabolized. However, new formulation and conjugation strategies for alternative routes of administration and overcoming short half-lives have emerged, resulting in a larger number of marketed peptide-based drugs, some of which have reached blockbuster status (2, 3). Despite these successes, some challenges still remain. Due to varying sizes and amino acid sequences, synthetic peptides are not easily classified into either small molecule or biologic categories. Therein lie the regulatory challenges with peptides, especially with respect to impurities and bioassay requirements. Understanding these challenges can help shape and create consistency among USP’s quality standards for this growing class of drugs. Control Strategies for Synthetic Therapeutic Peptide APIs",
"title": ""
},
{
"docid": "6a01ccb9b2e0066340815752fd05588e",
"text": "The microRNA(miRNA)-34a is a key regulator of tumor suppression. It controls the expression of a plethora of target proteins involved in cell cycle, differentiation and apoptosis, and antagonizes processes that are necessary for basic cancer cell viability as well as cancer stemness, metastasis, and chemoresistance. In this review, we focus on the molecular mechanisms of miR-34a-mediated tumor suppression, giving emphasis on the main miR-34a targets, as well as on the principal regulators involved in the modulation of this miRNA. Moreover, we shed light on the miR-34a role in modulating responsiveness to chemotherapy and on the phytonutrients-mediated regulation of miR-34a expression and activity in cancer cells. Given the broad anti-oncogenic activity of miR-34a, we also discuss the substantial benefits of a new therapeutic concept based on nanotechnology delivery of miRNA mimics. In fact, the replacement of oncosuppressor miRNAs provides an effective strategy against tumor heterogeneity and the selective RNA-based delivery systems seems to be an excellent platform for a safe and effective targeting of the tumor.",
"title": ""
},
{
"docid": "04e269feb0402a54317bd09f72e77144",
"text": "Fourier ptychography microscopy (FPM) is a lately developed technique, which achieves wide field, high resolution, and phase imaging, simultaneously. FPM stitches together the captured low-resolution images corresponding to angular varying illuminations in Fourier domain utilizing the concept of synthetic aperture and phase retrieval algorithms, which can surpass the space-bandwidth product limit of the objective lens and reconstruct a high-resolution complex image. In general FPM system, the LED source is important for the reconstructed quality and it is sensitive to the positions of each LED element. We find that the random positional deviations of each LED element can bring errors in reconstructed results, which is relative to a feedback parameter. To improve the reconstruction rate and correct random deviations, we combine an initial phase guess and a feedback parameter based on differential phase contrast and extended ptychographical iterative engine to propose an optimized iteration process for FPM. The simulated and experimental results indicate that the proposed method shows the reliability and validity towards the random deviations yet accelerates the convergence. More importantly, it is verified that this method can accelerate the convergence, reduce the requirement of LED array accuracy, and improve the quality of the reconstructed results.",
"title": ""
},
{
"docid": "d84abd378e3756052ede68731d73ca45",
"text": "A major difficulty in applying word vector embeddings in information retrieval is in devising an effective and efficient strategy for obtaining representations of compound units of text, such as whole documents, (in comparison to the atomic words), for the purpose of indexing and scoring documents. Instead of striving for a suitable method to obtain a single vector representation of a large document of text, we aim to develop a similarity metric that makes use of the similarities between the individual embedded word vectors in a document and a query. More specifically, we represent a document and a query as sets of word vectors, and use a standard notion of similarity measure between these sets, computed as a function of the similarities between each constituent word pair from these sets. We then make use of this similarity measure in combination with standard information retrieval based similarities for document ranking. The results of our initial experimental investigations show that our proposed method improves MAP by up to 5.77%, in comparison to standard text-based language model similarity, on the TREC 6, 7, 8 and Robust ad-hoc test collections.",
"title": ""
},
{
"docid": "9cddaea30d7dda82537c273e97bff008",
"text": "A low-offset latched comparator using new dynamic offset cancellation technique is proposed. The new technique achieves low offset voltage without pre-amplifier and quiescent current. Furthermore the overdrive voltage of the input transistor can be optimized to reduce the offset voltage of the comparator independent of the input common mode voltage. A prototype comparator has been fabricated in 90 nm 9M1P CMOS technology with 152 µm2. Experimental results show that the comparator achieves 3.8 mV offset at 1 sigma at 500 MHz operating, while dissipating 39 μW from a 1.2 V supply.",
"title": ""
},
{
"docid": "69eceabd9967260cbdec56d02bcafd83",
"text": "A modified Vivaldi antenna is proposed in this paper especially for the millimeter-wave application. The metal support frame is used to fix the structured substrate and increased the front-to-back ratio as well as the radiation gain. Detailed design process are presented, following which one sample is designed with its working frequency band from 75GHz to 150 GHz. The sample is also fabricated and measured. Good agreements between simulated results and measured results are obtained.",
"title": ""
},
{
"docid": "b70a70896a3d904c25adb126b584a858",
"text": "A case of a fatal cardiac episode resulting from an unusual autoerotic practice involving the use of a vacuum cleaner, is presented. Scene investigation and autopsy findings are discussed.",
"title": ""
},
{
"docid": "061d64528fa05389b81f98f0ed224d35",
"text": "Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make it difficult for robust nodule segmentation. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Networks (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon by proposing a novel central pooling layer retaining much information on voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling to facilitate the model training, where training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset including 893 nodules and an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance with average dice scores of 82.15% and 80.02% for the two datasets respectively. Moreover, we compared our results with the inter-radiologists consistency on LIDC dataset, showing a difference in average dice score of only 1.98%.",
"title": ""
},
{
"docid": "40bc405aaec0fd8563de84e163091325",
"text": "The extremely tight binding between biotin and avidin or streptavidin makes labeling proteins with biotin a useful tool for many applications. BirA is the Escherichia coli biotin ligase that site-specifically biotinylates a lysine side chain within a 15-amino acid acceptor peptide (also known as Avi-tag). As a complementary approach to in vivo biotinylation of Avi-tag-bearing proteins, we developed a protocol for producing recombinant BirA ligase for in vitro biotinylation. The target protein was expressed as both thioredoxin and MBP fusions, and was released from the corresponding fusion by TEV protease. The liberated ligase was separated from its carrier using HisTrap HP column. We obtained 24.7 and 27.6 mg BirA ligase per liter of culture from thioredoxin and MBP fusion constructs, respectively. The recombinant enzyme was shown to be highly active in catalyzing in vitro biotinylation. The described protocol provides an effective means for making BirA ligase that can be used for biotinylation of different Avi-tag-bearing substrates.",
"title": ""
},
{
"docid": "df85e65b4647f355453cd660bb8a7ce3",
"text": "In this paper we give a unified asymptotic formula for the partial gcd-sum function. We also study the mean-square of the error in the asymptotic formula.",
"title": ""
},
{
"docid": "d609323505fc3e7babc85f9c5579ddde",
"text": "BACKGROUND\nA critical component that influences the measurement properties of a patient-reported outcome (PRO) instrument is the rating scale. Yet, there is a lack of general consensus regarding optimal rating scale format, including aspects of question structure, the number and the labels of response categories. This study aims to explore the characteristics of rating scales that function well and those that do not, and thereby develop guidelines for formulating rating scales.\n\n\nMETHODS\nSeventeen existing PROs designed to measure vision-related quality of life dimensions were mailed for self-administration, in sets of 10, to patients who were on a waiting list for cataract extraction. These PROs included questions with ratings of difficulty, frequency, severity, and global ratings. Using Rasch analysis, performance of rating scales were assessed by examining hierarchical ordering (indicating categories are distinct from each other and follow a logical transition from lower to higher value), evenness (indicating relative utilization of categories), and range (indicating coverage of the attribute by the rating scale).\n\n\nRESULTS\nThe rating scales with complicated question format, a large number of response categories, or unlabelled categories, tended to be dysfunctional. Rating scales with five or fewer response categories tended to be functional. Most of the rating scales measuring difficulty performed well. The rating scales measuring frequency and severity demonstrated hierarchical ordering but the categories lacked even utilization.\n\n\nCONCLUSION\nDevelopers of PRO instruments should use a simple question format, fewer (four to five) and labelled response categories.",
"title": ""
},
{
"docid": "490dc6ee9efd084ecf2496b72893a39a",
"text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.",
"title": ""
},
{
"docid": "b563c69fc65fa8fd8d560aab9d4c20a0",
"text": "Individuals who are given a preventive exam by a primary care provider are more likely to agree to cancer screening. The provider recommendation has been identified as the strongest factor associated with screening utilization. This article provides a framework for breast cancer risk assessment for an advanced practice registered nurse working in primary care practice.",
"title": ""
},
{
"docid": "63cbc307e146bafb59338dbdc5b56313",
"text": "Many techniques have been described for the surgical correction of protruding ears. A novel modification of a cartilage-sparing otoplastic technique is provided herein. In this modification, a diamond-coated file is used to abrade the anterior surface of the antihelical cartilage to create biomechanical remodeling with resultant formation of a new antihelix. A case series of 302 ears, operated on over a 3 1/2-year period, is presented in support of this technique. This procedure is appropriate for patients having firm or soft auricular cartilage, an underdeveloped antihelical ridge, and a prominent or moderate hypertrophic conchal wall.",
"title": ""
},
{
"docid": "8c55e20ae3d116811dba74ee5da3679f",
"text": "In this paper we present a Neural Network (NN) architecture for detecting grammatical errors in Statistical Machine Translation (SMT) using monolingual morpho-syntactic word representations in combination with surface and syntactic context windows. We test our approach on two language pairs and two tasks, namely detecting grammatical errors and predicting overall post-editing effort. Our results show that this approach is not only able to accurately detect grammatical errors but it also performs well as a quality estimation system for predicting overall post-editing effort, which is characterised by all types of MT errors. Furthermore, we show that this approach is portable to other languages.",
"title": ""
},
{
"docid": "01ba4d36dd05cb533e5ff1ea462888d6",
"text": "Against a backdrop of serious corporate and mutual fund scandals, governmental bodies, institutional and private investors have demanded more effective corporate governance structures procedures and systems. The compliance function is now an integral part of corporate policy and practice. This paper presents the findings from a longitudinal qualitative research study on the introduction of an IT-based investment management system at four client sites. Using institutional theory to analyze our data, we find the process of institutionalization follows a non-linear pathway where regulative, normative and cultural forces within the investment management industry produce conflicting organizational behaviours and outcomes.",
"title": ""
}
] |
scidocsrr
|
22a5a8023914565ad55f8da332b03bf7
|
Vicinity-Driven Paragraph and Sentence Alignment for Comparable Corpora
|
[
{
"docid": "17cc2f4ae2286d36748b203492d406e6",
"text": "In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from the Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.",
"title": ""
},
{
"docid": "63db56ab2192ed6fd244e48bd746234d",
"text": "In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification. The data contains the full range of simplification operations including rewording, reordering, insertion and deletion. We provide an analysis of this corpus as well as preliminary results using a phrase-based translation approach for simplification.",
"title": ""
}
] |
[
{
"docid": "259647f0899bebc4ad67fb30a8c6f69b",
"text": "Internet of Things (IoT) communication is vital for the developing of smart communities. The rapid growth of IoT depends on reliable wireless networks. The evolving 5G cellular system addresses this challenge by adopting cloud computing technology in Radio Access Network (RAN); namely Cloud RAN or CRAN. CRAN enables better scalability, flexibility, and performance that allows 5G to provide connectivity for the vast volume of IoT devices envisioned for smart cities. This work investigates the load balance (LB) problem in CRAN, with the goal of reducing latencies experienced by IoT communications. Eight practical LB algorithms are studied and evaluated in CRAN environment, based on real cellular network traffic characteristics provided by Nokia Research. Experiment results on queue-length analysis show that the simple, light-weight queue-based LB is almost as effectively as the much more complex waiting-time-based LB. We believe that this study is significant in enabling 5G networks for providing IoT communication backbone in the emerging smart communities; it also has wide applications in other distributed systems.",
"title": ""
},
{
"docid": "3bf45acc1894d67e9575aa17ed4029ea",
"text": "In recent years, sequence-to-sequence (seq2seq) models are used in a variety of tasks from machine translation, headline generation, text summarization, speech to text, to image caption generation. The underlying framework of all these models are usually a deep neural network which contains an encoder and decoder. The encoder processes the input data and a decoder receives the output of the encoder and generates the final output. Although simply using an encoder/decoder model would, most of the time, produce better result than traditional methods on the above-mentioned tasks, researchers proposed additional improvements over these sequence to sequence models, like using an attention-based model over the input, pointer-generation models, and self-attention models. However, all these seq2seq models suffer from two common problems: 1) exposure bias and 2) inconsistency between train/test measurement. Recently a completely fresh point of view emerged in solving these two problems in seq2seq models by using methods in Reinforcement Learning (RL). In these new researches, we try to look at the seq2seq problems from the RL point of view and we try to come up with a formulation that could combine the power of RL methods in decision-making and sequence to sequence models in remembering long memories. In this paper, we will summarize some of the most recent frameworks that combines concepts from RL world to the deep neural network area and explain how these two areas could benefit from each other in solving complex seq2seq tasks. In the end, we will provide insights on some of the problems of the current existing models and how we can improve them with better RL models. We also provide the source code for implementing most of the models that will be discussed in this paper on the complex task of abstractive text summarization.",
"title": ""
},
{
"docid": "065ca3deb8cb266f741feb67e404acb5",
"text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet",
"title": ""
},
{
"docid": "72af6905a39d9f25dab5d226b11d2a66",
"text": "Scoping reviews of the biomedical literature are very commonly used in health technology assessments to inform the planning of more detailed and resource-intensive evaluations. A typical task is to ‘map’ the literature addressing a specific clinical question, i.e., (i) identify as many relevant articles of interest as feasible under a constrained budget, and (ii) estimate how many such articles likely exist. These are competing objectives. Using active retrieval strategies (e.g., active learning) to realize the former aim immediately hinders our ability to achieve the latter: ‘naive’ estimates of the amount of relevant articles taken over an enriched sampled acquired through selective sampling will be inflated. We propose a novel method for correcting such estimates. We demonstrate the efficacy of our approach on three systematic review datasets, showing that we can achieve both aims: rapid evidence discovery and acceptably accurate estimation of the number of relevant articles.",
"title": ""
},
{
"docid": "5f1474036533a4583520ea2526d35daf",
"text": "We motivate the integration of programming by example and natural language programming by developing a system for specifying programs for simple text editing operations based on regular expressions. The programs are described with unconstrained natural language instructions, and providing one or more examples of input/output. We show that natural language allows the system to deduce the correct program much more often and much faster than is possible with the input/output example(s) alone, showing that natural language programming and programming by example can be combined in a way that overcomes the ambiguities that both methods suffer from individually and, at the same time, provides a more natural interface to the user.",
"title": ""
},
{
"docid": "fba2a59e74e7288cbdb1970e4a52d454",
"text": "Suppose that, for a learning task, we have to select one hypothesis out of a set of hypotheses (that may, for example, have been generated by multiple applications of a randomized learning algorithm). A common approach is to evaluate each hypothesis in the set on some previously unseen cross-validation data, and then to select the hypothesis that had the lowest cross-validation error. But when the cross-validation data is partially corrupted such as by noise, and if the set of hypotheses we are selecting from is large, then \\folklore\" also warns about \\overrtting\" the cross-In this paper, we explain how this \\overrtting\" really occurs, and show the surprising result that it can be overcome by selecting a hypothesis with a higher cross-validation error, over others with lower cross-validation errors. We give reasons for not selecting the hypothesis with the lowest cross-validation error, and propose a new algorithm, LOOCVCV, that uses a computa-tionally eecient form of leave{one{out cross-validation to select such a hypothesis. Finally , we present experimental results for one domain, that show LOOCVCV consistently beating picking the hypothesis with the lowest cross-validation error, even when using reasonably large cross-validation sets.",
"title": ""
},
{
"docid": "8d2aeee4064a2d6e65afeaf5330b2c49",
"text": "In this paper we discuss verification and validation of simulation models. Four different approaches to deciding model validity are described; two different paradigms that relate verification and validation to the model development process are presented; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are discussed; a way to document results is given; a recommended procedure for model validation is presented; and accreditation is briefly discussed.",
"title": ""
},
{
"docid": "819d077913e6a956fc57241f81a73df3",
"text": "Humans are avid consumers of visual content. Every day, people watch videos, play games, and share photos on social media. However, there is an asymmetry—while everybody is able to consume visual data, only a chosen few are talented enough to express themselves visually. For the rest of us, most attempts at creating realistic visual content end up quickly “falling off” what we could consider to be natural images. In this thesis, we investigate several machine learning approaches for preserving visual realism while creating and manipulating photographs. We use these methods as training wheels for visual content creation. These methods not only help users easily synthesize realistic photos but also enable previously not possible visual effects.",
"title": ""
},
{
"docid": "ac078f78fcf0f675c21a337f8e3b6f5f",
"text": "bstract. Plenoptic cameras, constructed with internal microlens rrays, capture both spatial and angular information, i.e., the full 4-D adiance, of a scene. The design of traditional plenoptic cameras ssumes that each microlens image is completely defocused with espect to the image created by the main camera lens. As a result, nly a single pixel in the final image is rendered from each microlens mage, resulting in disappointingly low resolution. A recently develped alternative approach based on the focused plenoptic camera ses the microlens array as an imaging system focused on the imge plane of the main camera lens. The flexible spatioangular tradeff that becomes available with this design enables rendering of final mages with significantly higher resolution than those from traditional lenoptic cameras. We analyze the focused plenoptic camera in ptical phase space and present basic, blended, and depth-based endering algorithms for producing high-quality, high-resolution imges. We also present our graphics-processing-unit-based impleentations of these algorithms, which are able to render full screen efocused images in real time. © 2010 SPIE and IS&T. DOI: 10.1117/1.3442712",
"title": ""
},
{
"docid": "02d254abf79e779cf6ec827c0826c2be",
"text": "Hosts used for the production of recombinant proteins are typically high-protein secreting mutant strains that have been selected for a specific purpose, such as efficient production of cellulose-degrading enzymes. Somewhat surprisingly, sequencing of the genomes of a series of mutant strains of the cellulolytic Trichoderma reesei, widely used as an expression host for recombinant gene products, has shed very little light on the nature of changes that boost high-level protein secretion. While it is generally agreed and shown that protein secretion in filamentous fungi occurs mainly through the hyphal tip, there is growing evidence that secretion of proteins also takes place in sub-apical regions. Attempts to increase correct folding and thereby the yields of heterologous proteins in fungal hosts by co-expression of cellular chaperones and foldases have resulted in variable success; underlying reasons have been explored mainly at the transcriptional level. The observed physiological changes in fungal strains experiencing increasing stress through protein overexpression under strong gene promoters also reflect the challenge the host organisms are experiencing. It is evident, that as with other eukaryotes, fungal endoplasmic reticulum is a highly dynamic structure. Considering the above, there is an emerging body of work exploring the use of weaker expression promoters to avoid undue stress. Filamentous fungi have been hailed as candidates for the production of pharmaceutically relevant proteins for therapeutic use. One of the biggest challenges in terms of fungally produced heterologous gene products is their mode of glycosylation; fungi lack the functionally important terminal sialylation of the glycans that occurs in mammalian cells. Finally, exploration of the metabolic pathways and fluxes together with the development of sophisticated fermentation protocols may result in new strategies to produce recombinant proteins in filamentous fungi.",
"title": ""
},
{
"docid": "a0c15895a455c07b477d4486d32582ef",
"text": "PURPOSE\nTo evaluate the efficacy of α-lipoic acid (ALA) in reducing scarring after trabeculectomy.\n\n\nMATERIALS AND METHODS\nEighteen adult New Zealand white rabbits underwent trabeculectomy. During trabeculectomy, thin sponges were placed between the sclera and Tenon's capsule for 3 minutes, saline solution, mitomycin-C (MMC) and ALA was applied to the control group (CG) (n=6 eyes), MMC group (MMCG) (n=6 eyes), and ALA group (ALAG) (n=6 eyes), respectively. After surgery, topical saline and ALA was applied for 28 days to the control and ALAGs, respectively. Filtrating bleb patency was evaluated by using 0.1% trepan blue. Hematoxylin and eosin and Masson trichrome staining for toxicity, total cellularity, and collagen organization; α-smooth muscle actin immunohistochemistry staining performed for myofibroblast phenotype identification.\n\n\nRESULTS\nClinical evaluation showed that all 6 blebs (100%) of the CG had failed, whereas there were only 2 failures (33%) in the ALAG and no failures in the MMCG on day 28. Histologic evaluation showed significantly lower inflammatory cell infiltration in the ALAGs and CGs than the MMCG. Toxicity change was more significant in the MMCG than the control and ALAGs. Collagen was better organized in the ALAG than control and MMCGs. In immunohistochemistry evaluation, ALA significantly reduced the population of cells expressing α-smooth muscle action.\n\n\nCONCLUSIONS\nΑLA prevents and/or reduces fibrosis by inhibition of inflammation pathways, revascularization, and accumulation of extracellular matrix. It can be used as an agent for delaying tissue regeneration and for providing a more functional-permanent fistula.",
"title": ""
},
{
"docid": "7b28877bcda4c0fa0f89eadd7146e173",
"text": "REST architectural style gains increasing popularity in the networking protocol design, and it has become a prevalent choice for northbound API of Software-Defined Networking (SDN). This paper addresses many critical issues in RESTful networking protocol design, and presents a framework on how a networking protocol can be designed in a truly RESTful manner, making it towards a service oriented data networking. In particular, we introduce the HTTP content negotiation mechanism which allows clients to select different representation formats from the same resource URI. Most importantly, we present a hypertext-driven approach, so that hypertext links are defined between REST resources for the networking protocol to guide clients to identify the right resources rather than relying on fixed resource URIs. The advantages of our approach are verified in two folds. First, we show how to apply our approach to fix REST design problems in some existing northbound networking APIs, and then we show how to design a RESTful northbound API of SDN in the context of OpenStack. We implemented our proposed approach in the northbound REST API of SOX, a generalized SDN controller, and the benefits of the proposed approach are experimentally verified.",
"title": ""
},
{
"docid": "06db3ede44c48a09f8d280cf13bd8fd2",
"text": "An increasing number of distributed applications requires processing continuously flowing data from geographically distributed sources at unpredictable rate to obtain timely responses to complex queries. Examples of such applications come from the most disparate fields: from wireless sensor networks to financial tickers, from traffic management to click stream inspection.\n These requirements led to the development of a number of systems specifically designed to process information as a flow according to a set of pre-deployed processing rules. We collectively call them Information Flow Processing (IFP) Systems. Despite having a common goal, IFP systems differ in a wide range of aspects, including architectures, data models, rule languages, and processing mechanisms.\n In this tutorial we draw a general framework to analyze and compare the results achieved so far in the area of IFP systems. This allows us to offer a systematic overview of the topic, favoring the communication between different communities, and highlighting a number of open issue that still need to be addressed in research.",
"title": ""
},
{
"docid": "adaab9f6e0355af12f4058a350076f87",
"text": "Recently, the fusion of hyperspectral and light detection and ranging (LiDAR) data has obtained a great attention in the remote sensing community. In this paper, we propose a new feature fusion framework using deep neural network (DNN). The proposed framework employs a novel 3D convolutional neural network (CNN) to extract the spectral-spatial features of hyperspectral data, a deep 2D CNN to extract the elevation features of LiDAR data, and then a fully connected deep neural network to fuse the extracted features in the previous CNNs. Through the aforementioned three deep networks, one can extract the discriminant and invariant features of hyperspectral and LiDAR data. At last, logistic regression is used to produce the final classification results. The experimental results reveal that the proposed deep fusion model provides competitive results. Furthermore, the proposed deep fusion idea opens a new window for future research.",
"title": ""
},
{
"docid": "b1101de8c110fc0475fbba738d1553c5",
"text": "In document stores, schema is a soft concept and the documents in a collection can have different schemata; this gives designers and implementers augmented flexibility but requires an extra effort to understand the rules that drove the use of alternative schemata when heterogeneous documents are to be analyzed or integrated. In this paper we outline a technique, called schema profiling, to explain the schema variants within a collection in document stores by capturing the hidden rules explaining the use of these variants; we express these rules in the form of a decision tree, called schema profile, whose main feature is the coexistence of value-based and schema-based conditions. Consistently with the requirements we elicited from real users, we aim at creating explicative, precise, and concise schema profiles; to quantitatively assess these qualities we introduce a novel measure of entropy. Keywords: NoSQL, Schema Discovery, Decision Trees 1 Motivation and Outline Recent years have witnessed an erosion of the relational DBMS predominance to the benefit of DBMSs based on alternative representation models (e.g., documentoriented and graph-based) which adopt a schemaless representation for data. Schemaless databases are preferred to relational ones for storing heterogeneous data with variable schemata and structural forms; typical schema variants within a collection consist in missing or additional attributes, in different names or types for an attribute, and in different structures for instances. The absence of a unique schema grants flexibility to operational applications but adds complexity to analytical applications, in which a single analysis often involves large sets of data with different schemata. Dealing with this complexity requires a notable effort to understand the rules that drove the use of alternative schemata, plus an integration activity to identify a common schema to be adopted for analysis —which is quite hard when no documentation is available. In this paper we outline a technique to explain the schema variants within a collection in document stores by capturing the hidden rules explaining the use of these variants. We call this activity schema profiling. Schema profiling can be used for instance when trying to decode the behavior of an undocumented application that manages a document-base, or to support analytical applications ? This work was partly supported by the EU-funded project TOREADOR (contract n. H2020-688797). {\t\r \t\r \"Ac&vityType\"\t\r :\t\r \"Walk\", \"User\"\t\r : {\t\r \t\r \"UserID\"\t\r :\t\r 23, \"Age\"\t\r :\t\r 42 } } Ac&vityType",
"title": ""
},
{
"docid": "990fb61d1135b05f88ae02eb71a6983f",
"text": "Previous efforts in recommendation of candidates for talent search followed the general pattern of receiving an initial search criteria and generating a set of candidates utilizing a pre-trained model. Traditionally, the generated recommendations are final, that is, the list of potential candidates is not modified unless the user explicitly changes his/her search criteria. In this paper, we are proposing a candidate recommendation model which takes into account the immediate feedback of the user, and updates the candidate recommendations at each step. This setting also allows for very uninformative initial search queries, since we pinpoint the user's intent due to the feedback during the search session. To achieve our goal, we employ an intent clustering method based on topic modeling which separates the candidate space into meaningful, possibly overlapping, subsets (which we call intent clusters) for each position. On top of the candidate segments, we apply a multi-armed bandit approach to choose which intent cluster is more appropriate for the current session. We also present an online learning scheme which updates the intent clusters within the session, due to user feedback, to achieve further personalization. Our offline experiments as well as the results from the online deployment of our solution demonstrate the benefits of our proposed methodology.",
"title": ""
},
{
"docid": "a71bfbdbb8c78578d186caaef55d593b",
"text": "[Excerpt] Entrepreneurship is the process by which \"opportunities to create future goods and services are discovered, evaluated, and exploited\" (Shane and Venkataraman, 2000: 218). In other words, it is the process by which organizations and individuals convert new knowledge into new opportunities in the form of new products and services. Strategic human resource management (SHRM) has been defined as the system of organizational practices and policies used to manage employees in a manner that leads to higher organizational performance (Wright and McMahan, 1992). Further, one perspective suggests that sets of HR practices do not themselves create competitive advantage; instead, they foster the development of organizational capabilities which in turn create such advantages (Lado and Wilson, 1994; Wright, Dunford, and Snell, 2001). Specifically, this body of literature suggests that HR practices lead to firm performance when they are aligned to work together to create and support the employee-based capabilities that lead to competitive advantage (Wright and Snell, 2000; Wright, Dunford, and Snell, 2001). Thus, entrepreneurial human resource strategy is best defined as the set or sets of human resources practices that will increase the likelihood that new knowledge will be converted to new products or services.",
"title": ""
},
{
"docid": "1f7d0ccae4e9f0078eabb9d75d1a8984",
"text": "A social network is composed by communities of individuals or organizations that are connected by a common interest. Online social networking sites like Twitter, Facebook and Orkut are among the most visited sites in the Internet. Presently, there is a great interest in trying to understand the complexities of this type of network from both theoretical and applied point of view. The understanding of these social network graphs is important to improve the current social network systems, and also to develop new applications. Here, we propose a friend recommendation system for social network based on the topology of the network graphs. The topology of network that connects a user to his friends is examined and a local social network called Oro-Aro is used in the experiments. We developed an algorithm that analyses the sub-graph composed by a user and all the others connected people separately by three degree of separation. However, only users separated by two degree of separation are candidates to be suggested as a friend. The algorithm uses the patterns defined by their connections to find those users who have similar behavior as the root user. The recommendation mechanism was developed based on the characterization and analyses of the network formed by the user's friends and friends-of-friends (FOF).",
"title": ""
},
{
"docid": "925709dfe0d0946ca06d05b290f2b9bd",
"text": "Mentalization, operationalized as reflective functioning (RF), can play a crucial role in the psychological mechanisms underlying personality functioning. This study aimed to: (a) study the association between RF, personality disorders (cluster level) and functioning; (b) investigate whether RF and personality functioning are influenced by (secure vs. insecure) attachment; and (c) explore the potential mediating effect of RF on the relationship between attachment and personality functioning. The Shedler-Westen Assessment Procedure (SWAP-200) was used to assess personality disorders and levels of psychological functioning in a clinical sample (N = 88). Attachment and RF were evaluated with the Adult Attachment Interview (AAI) and Reflective Functioning Scale (RFS). Findings showed that RF had significant negative associations with cluster A and B personality disorders, and a significant positive association with psychological functioning. Moreover, levels of RF and personality functioning were influenced by attachment patterns. Finally, RF completely mediated the relationship between (secure/insecure) attachment and adaptive psychological features, and thus accounted for differences in overall personality functioning. Lack of mentalization seemed strongly associated with vulnerabilities in personality functioning, especially in patients with cluster A and B personality disorders. These findings provide support for the development of therapeutic interventions to improve patients' RF.",
"title": ""
},
{
"docid": "3679fbedadd1541ba8c1f94ea9b3b85d",
"text": "Concrete is very sensitive to crack formation. As wide cracks endanger the durability, repair may be required. However, these repair works raise the life-cycle cost of concrete as they are labor intensive and because the structure becomes in disuse during repair. In 1994, C. Dry was the first who proposed the intentional introduction of self-healing properties in concrete. In the following years, several researchers started to investigate this topic. The goal of this review is to provide an in-depth comparison of the different self-healing approaches which are available today. Among these approaches, some are aimed at improving the natural mechanism of autogenous crack healing, while others are aimed at modifying concrete by embedding capsules with suitable healing agents so that cracks heal in a completely autonomous way after they appear. In this review, special attention is paid to the types of healing agents and capsules used. In addition, the various methodologies have been evaluated based on the trigger mechanism used and attention has been paid to the properties regained due to self-healing.",
"title": ""
}
] |
scidocsrr
|